Mar  1 04:00:45 np0005634532 kernel: Linux version 5.14.0-686.el9.x86_64 (mockbuild@x86-05.stream.rdu2.redhat.com) (gcc (GCC) 11.5.0 20240719 (Red Hat 11.5.0-14), GNU ld version 2.35.2-69.el9) #1 SMP PREEMPT_DYNAMIC Thu Feb 19 10:49:27 UTC 2026
Mar  1 04:00:45 np0005634532 kernel: The list of certified hardware and cloud instances for Red Hat Enterprise Linux 9 can be viewed at the Red Hat Ecosystem Catalog, https://catalog.redhat.com.
Mar  1 04:00:45 np0005634532 kernel: Command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-686.el9.x86_64 root=UUID=37391a25-080d-4723-8b0c-cb88a559875b ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Mar  1 04:00:45 np0005634532 kernel: BIOS-provided physical RAM map:
Mar  1 04:00:45 np0005634532 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Mar  1 04:00:45 np0005634532 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Mar  1 04:00:45 np0005634532 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Mar  1 04:00:45 np0005634532 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdafff] usable
Mar  1 04:00:45 np0005634532 kernel: BIOS-e820: [mem 0x00000000bffdb000-0x00000000bfffffff] reserved
Mar  1 04:00:45 np0005634532 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Mar  1 04:00:45 np0005634532 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Mar  1 04:00:45 np0005634532 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000023fffffff] usable
Mar  1 04:00:45 np0005634532 kernel: NX (Execute Disable) protection: active
Mar  1 04:00:45 np0005634532 kernel: APIC: Static calls initialized
Mar  1 04:00:45 np0005634532 kernel: SMBIOS 2.8 present.
Mar  1 04:00:45 np0005634532 kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.15.0-1 04/01/2014
Mar  1 04:00:45 np0005634532 kernel: Hypervisor detected: KVM
Mar  1 04:00:45 np0005634532 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Mar  1 04:00:45 np0005634532 kernel: kvm-clock: using sched offset of 18526595117 cycles
Mar  1 04:00:45 np0005634532 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Mar  1 04:00:45 np0005634532 kernel: tsc: Detected 2800.000 MHz processor
Mar  1 04:00:45 np0005634532 kernel: last_pfn = 0x240000 max_arch_pfn = 0x400000000
Mar  1 04:00:45 np0005634532 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Mar  1 04:00:45 np0005634532 kernel: x86/PAT: Configuration [0-7]: WB  WC  UC- UC  WB  WP  UC- WT
Mar  1 04:00:45 np0005634532 kernel: last_pfn = 0xbffdb max_arch_pfn = 0x400000000
Mar  1 04:00:45 np0005634532 kernel: found SMP MP-table at [mem 0x000f5ae0-0x000f5aef]
Mar  1 04:00:45 np0005634532 kernel: Using GB pages for direct mapping
Mar  1 04:00:45 np0005634532 kernel: RAMDISK: [mem 0x1b6ca000-0x29b5cfff]
Mar  1 04:00:45 np0005634532 kernel: ACPI: Early table checksum verification disabled
Mar  1 04:00:45 np0005634532 kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Mar  1 04:00:45 np0005634532 kernel: ACPI: RSDT 0x00000000BFFE16BD 000030 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Mar  1 04:00:45 np0005634532 kernel: ACPI: FACP 0x00000000BFFE1571 000074 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Mar  1 04:00:45 np0005634532 kernel: ACPI: DSDT 0x00000000BFFDFC80 0018F1 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Mar  1 04:00:45 np0005634532 kernel: ACPI: FACS 0x00000000BFFDFC40 000040
Mar  1 04:00:45 np0005634532 kernel: ACPI: APIC 0x00000000BFFE15E5 0000B0 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Mar  1 04:00:45 np0005634532 kernel: ACPI: WAET 0x00000000BFFE1695 000028 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Mar  1 04:00:45 np0005634532 kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1571-0xbffe15e4]
Mar  1 04:00:45 np0005634532 kernel: ACPI: Reserving DSDT table memory at [mem 0xbffdfc80-0xbffe1570]
Mar  1 04:00:45 np0005634532 kernel: ACPI: Reserving FACS table memory at [mem 0xbffdfc40-0xbffdfc7f]
Mar  1 04:00:45 np0005634532 kernel: ACPI: Reserving APIC table memory at [mem 0xbffe15e5-0xbffe1694]
Mar  1 04:00:45 np0005634532 kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1695-0xbffe16bc]
Mar  1 04:00:45 np0005634532 kernel: No NUMA configuration found
Mar  1 04:00:45 np0005634532 kernel: Faking a node at [mem 0x0000000000000000-0x000000023fffffff]
Mar  1 04:00:45 np0005634532 kernel: NODE_DATA(0) allocated [mem 0x23ffd5000-0x23fffffff]
Mar  1 04:00:45 np0005634532 kernel: crashkernel reserved: 0x00000000af000000 - 0x00000000bf000000 (256 MB)
Mar  1 04:00:45 np0005634532 kernel: Zone ranges:
Mar  1 04:00:45 np0005634532 kernel:  DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Mar  1 04:00:45 np0005634532 kernel:  DMA32    [mem 0x0000000001000000-0x00000000ffffffff]
Mar  1 04:00:45 np0005634532 kernel:  Normal   [mem 0x0000000100000000-0x000000023fffffff]
Mar  1 04:00:45 np0005634532 kernel:  Device   empty
Mar  1 04:00:45 np0005634532 kernel: Movable zone start for each node
Mar  1 04:00:45 np0005634532 kernel: Early memory node ranges
Mar  1 04:00:45 np0005634532 kernel:  node   0: [mem 0x0000000000001000-0x000000000009efff]
Mar  1 04:00:45 np0005634532 kernel:  node   0: [mem 0x0000000000100000-0x00000000bffdafff]
Mar  1 04:00:45 np0005634532 kernel:  node   0: [mem 0x0000000100000000-0x000000023fffffff]
Mar  1 04:00:45 np0005634532 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000023fffffff]
Mar  1 04:00:45 np0005634532 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Mar  1 04:00:45 np0005634532 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Mar  1 04:00:45 np0005634532 kernel: On node 0, zone Normal: 37 pages in unavailable ranges
Mar  1 04:00:45 np0005634532 kernel: ACPI: PM-Timer IO Port: 0x608
Mar  1 04:00:45 np0005634532 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Mar  1 04:00:45 np0005634532 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Mar  1 04:00:45 np0005634532 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Mar  1 04:00:45 np0005634532 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Mar  1 04:00:45 np0005634532 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Mar  1 04:00:45 np0005634532 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Mar  1 04:00:45 np0005634532 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Mar  1 04:00:45 np0005634532 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Mar  1 04:00:45 np0005634532 kernel: TSC deadline timer available
Mar  1 04:00:45 np0005634532 kernel: CPU topo: Max. logical packages:   8
Mar  1 04:00:45 np0005634532 kernel: CPU topo: Max. logical dies:       8
Mar  1 04:00:45 np0005634532 kernel: CPU topo: Max. dies per package:   1
Mar  1 04:00:45 np0005634532 kernel: CPU topo: Max. threads per core:   1
Mar  1 04:00:45 np0005634532 kernel: CPU topo: Num. cores per package:     1
Mar  1 04:00:45 np0005634532 kernel: CPU topo: Num. threads per package:   1
Mar  1 04:00:45 np0005634532 kernel: CPU topo: Allowing 8 present CPUs plus 0 hotplug CPUs
Mar  1 04:00:45 np0005634532 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Mar  1 04:00:45 np0005634532 kernel: PM: hibernation: Registered nosave memory: [mem 0x00000000-0x00000fff]
Mar  1 04:00:45 np0005634532 kernel: PM: hibernation: Registered nosave memory: [mem 0x0009f000-0x0009ffff]
Mar  1 04:00:45 np0005634532 kernel: PM: hibernation: Registered nosave memory: [mem 0x000a0000-0x000effff]
Mar  1 04:00:45 np0005634532 kernel: PM: hibernation: Registered nosave memory: [mem 0x000f0000-0x000fffff]
Mar  1 04:00:45 np0005634532 kernel: PM: hibernation: Registered nosave memory: [mem 0xbffdb000-0xbfffffff]
Mar  1 04:00:45 np0005634532 kernel: PM: hibernation: Registered nosave memory: [mem 0xc0000000-0xfeffbfff]
Mar  1 04:00:45 np0005634532 kernel: PM: hibernation: Registered nosave memory: [mem 0xfeffc000-0xfeffffff]
Mar  1 04:00:45 np0005634532 kernel: PM: hibernation: Registered nosave memory: [mem 0xff000000-0xfffbffff]
Mar  1 04:00:45 np0005634532 kernel: PM: hibernation: Registered nosave memory: [mem 0xfffc0000-0xffffffff]
Mar  1 04:00:45 np0005634532 kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices
Mar  1 04:00:45 np0005634532 kernel: Booting paravirtualized kernel on KVM
Mar  1 04:00:45 np0005634532 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Mar  1 04:00:45 np0005634532 kernel: setup_percpu: NR_CPUS:8192 nr_cpumask_bits:8 nr_cpu_ids:8 nr_node_ids:1
Mar  1 04:00:45 np0005634532 kernel: percpu: Embedded 64 pages/cpu s225280 r8192 d28672 u262144
Mar  1 04:00:45 np0005634532 kernel: kvm-guest: PV spinlocks disabled, no host support
Mar  1 04:00:45 np0005634532 kernel: Kernel command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-686.el9.x86_64 root=UUID=37391a25-080d-4723-8b0c-cb88a559875b ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Mar  1 04:00:45 np0005634532 kernel: Unknown kernel command line parameters "BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-686.el9.x86_64", will be passed to user space.
Mar  1 04:00:45 np0005634532 kernel: random: crng init done
Mar  1 04:00:45 np0005634532 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Mar  1 04:00:45 np0005634532 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Mar  1 04:00:45 np0005634532 kernel: Fallback order for Node 0: 0 
Mar  1 04:00:45 np0005634532 kernel: Built 1 zonelists, mobility grouping on.  Total pages: 2064091
Mar  1 04:00:45 np0005634532 kernel: Policy zone: Normal
Mar  1 04:00:45 np0005634532 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Mar  1 04:00:45 np0005634532 kernel: software IO TLB: area num 8.
Mar  1 04:00:45 np0005634532 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=8, Nodes=1
Mar  1 04:00:45 np0005634532 kernel: ftrace: allocating 49605 entries in 194 pages
Mar  1 04:00:45 np0005634532 kernel: ftrace: allocated 194 pages with 3 groups
Mar  1 04:00:45 np0005634532 kernel: Dynamic Preempt: voluntary
Mar  1 04:00:45 np0005634532 kernel: rcu: Preemptible hierarchical RCU implementation.
Mar  1 04:00:45 np0005634532 kernel: rcu: 	RCU event tracing is enabled.
Mar  1 04:00:45 np0005634532 kernel: rcu: 	RCU restricting CPUs from NR_CPUS=8192 to nr_cpu_ids=8.
Mar  1 04:00:45 np0005634532 kernel: 	Trampoline variant of Tasks RCU enabled.
Mar  1 04:00:45 np0005634532 kernel: 	Rude variant of Tasks RCU enabled.
Mar  1 04:00:45 np0005634532 kernel: 	Tracing variant of Tasks RCU enabled.
Mar  1 04:00:45 np0005634532 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Mar  1 04:00:45 np0005634532 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=8
Mar  1 04:00:45 np0005634532 kernel: RCU Tasks: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Mar  1 04:00:45 np0005634532 kernel: RCU Tasks Rude: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Mar  1 04:00:45 np0005634532 kernel: RCU Tasks Trace: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Mar  1 04:00:45 np0005634532 kernel: NR_IRQS: 524544, nr_irqs: 488, preallocated irqs: 16
Mar  1 04:00:45 np0005634532 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Mar  1 04:00:45 np0005634532 kernel: kfence: initialized - using 2097152 bytes for 255 objects at 0x(____ptrval____)-0x(____ptrval____)
Mar  1 04:00:45 np0005634532 kernel: Console: colour VGA+ 80x25
Mar  1 04:00:45 np0005634532 kernel: printk: console [ttyS0] enabled
Mar  1 04:00:45 np0005634532 kernel: ACPI: Core revision 20230331
Mar  1 04:00:45 np0005634532 kernel: APIC: Switch to symmetric I/O mode setup
Mar  1 04:00:45 np0005634532 kernel: x2apic enabled
Mar  1 04:00:45 np0005634532 kernel: APIC: Switched APIC routing to: physical x2apic
Mar  1 04:00:45 np0005634532 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Mar  1 04:00:45 np0005634532 kernel: Calibrating delay loop (skipped) preset value.. 5600.00 BogoMIPS (lpj=2800000)
Mar  1 04:00:45 np0005634532 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Mar  1 04:00:45 np0005634532 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Mar  1 04:00:45 np0005634532 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Mar  1 04:00:45 np0005634532 kernel: mitigations: Enabled attack vectors: user_kernel, user_user, guest_host, guest_guest, SMT mitigations: auto
Mar  1 04:00:45 np0005634532 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Mar  1 04:00:45 np0005634532 kernel: Spectre V2 : Mitigation: Retpolines
Mar  1 04:00:45 np0005634532 kernel: RETBleed: Mitigation: untrained return thunk
Mar  1 04:00:45 np0005634532 kernel: Speculative Return Stack Overflow: Mitigation: SMT disabled
Mar  1 04:00:45 np0005634532 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Mar  1 04:00:45 np0005634532 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Mar  1 04:00:45 np0005634532 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Mar  1 04:00:45 np0005634532 kernel: active return thunk: retbleed_return_thunk
Mar  1 04:00:45 np0005634532 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Mar  1 04:00:45 np0005634532 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Mar  1 04:00:45 np0005634532 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Mar  1 04:00:45 np0005634532 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Mar  1 04:00:45 np0005634532 kernel: x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
Mar  1 04:00:45 np0005634532 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Mar  1 04:00:45 np0005634532 kernel: Freeing SMP alternatives memory: 40K
Mar  1 04:00:45 np0005634532 kernel: pid_max: default: 32768 minimum: 301
Mar  1 04:00:45 np0005634532 kernel: LSM: initializing lsm=lockdown,capability,landlock,yama,integrity,selinux,bpf
Mar  1 04:00:45 np0005634532 kernel: landlock: Up and running.
Mar  1 04:00:45 np0005634532 kernel: Yama: becoming mindful.
Mar  1 04:00:45 np0005634532 kernel: SELinux:  Initializing.
Mar  1 04:00:45 np0005634532 kernel: LSM support for eBPF active
Mar  1 04:00:45 np0005634532 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Mar  1 04:00:45 np0005634532 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Mar  1 04:00:45 np0005634532 kernel: smpboot: CPU0: AMD EPYC-Rome Processor (family: 0x17, model: 0x31, stepping: 0x0)
Mar  1 04:00:45 np0005634532 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Mar  1 04:00:45 np0005634532 kernel: ... version:                0
Mar  1 04:00:45 np0005634532 kernel: ... bit width:              48
Mar  1 04:00:45 np0005634532 kernel: ... generic registers:      6
Mar  1 04:00:45 np0005634532 kernel: ... value mask:             0000ffffffffffff
Mar  1 04:00:45 np0005634532 kernel: ... max period:             00007fffffffffff
Mar  1 04:00:45 np0005634532 kernel: ... fixed-purpose events:   0
Mar  1 04:00:45 np0005634532 kernel: ... event mask:             000000000000003f
Mar  1 04:00:45 np0005634532 kernel: signal: max sigframe size: 1776
Mar  1 04:00:45 np0005634532 kernel: rcu: Hierarchical SRCU implementation.
Mar  1 04:00:45 np0005634532 kernel: rcu: 	Max phase no-delay instances is 400.
Mar  1 04:00:45 np0005634532 kernel: smp: Bringing up secondary CPUs ...
Mar  1 04:00:45 np0005634532 kernel: smpboot: x86: Booting SMP configuration:
Mar  1 04:00:45 np0005634532 kernel: .... node  #0, CPUs:      #1 #2 #3 #4 #5 #6 #7
Mar  1 04:00:45 np0005634532 kernel: smp: Brought up 1 node, 8 CPUs
Mar  1 04:00:45 np0005634532 kernel: smpboot: Total of 8 processors activated (44800.00 BogoMIPS)
Mar  1 04:00:45 np0005634532 kernel: node 0 deferred pages initialised in 11ms
Mar  1 04:00:45 np0005634532 kernel: Memory: 7617664K/8388068K available (16384K kernel code, 5797K rwdata, 13956K rodata, 4204K init, 7172K bss, 764460K reserved, 0K cma-reserved)
Mar  1 04:00:45 np0005634532 kernel: devtmpfs: initialized
Mar  1 04:00:45 np0005634532 kernel: x86/mm: Memory block size: 128MB
Mar  1 04:00:45 np0005634532 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Mar  1 04:00:45 np0005634532 kernel: futex hash table entries: 2048 (131072 bytes on 1 NUMA nodes, total 128 KiB, linear).
Mar  1 04:00:45 np0005634532 kernel: pinctrl core: initialized pinctrl subsystem
Mar  1 04:00:45 np0005634532 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Mar  1 04:00:45 np0005634532 kernel: DMA: preallocated 1024 KiB GFP_KERNEL pool for atomic allocations
Mar  1 04:00:45 np0005634532 kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Mar  1 04:00:45 np0005634532 kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Mar  1 04:00:45 np0005634532 kernel: audit: initializing netlink subsys (disabled)
Mar  1 04:00:45 np0005634532 kernel: audit: type=2000 audit(1772355643.794:1): state=initialized audit_enabled=0 res=1
Mar  1 04:00:45 np0005634532 kernel: thermal_sys: Registered thermal governor 'fair_share'
Mar  1 04:00:45 np0005634532 kernel: thermal_sys: Registered thermal governor 'step_wise'
Mar  1 04:00:45 np0005634532 kernel: thermal_sys: Registered thermal governor 'user_space'
Mar  1 04:00:45 np0005634532 kernel: cpuidle: using governor menu
Mar  1 04:00:45 np0005634532 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Mar  1 04:00:45 np0005634532 kernel: PCI: Using configuration type 1 for base access
Mar  1 04:00:45 np0005634532 kernel: PCI: Using configuration type 1 for extended access
Mar  1 04:00:45 np0005634532 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Mar  1 04:00:45 np0005634532 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Mar  1 04:00:45 np0005634532 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Mar  1 04:00:45 np0005634532 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Mar  1 04:00:45 np0005634532 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Mar  1 04:00:45 np0005634532 kernel: Demotion targets for Node 0: null
Mar  1 04:00:45 np0005634532 kernel: cryptd: max_cpu_qlen set to 1000
Mar  1 04:00:45 np0005634532 kernel: ACPI: Added _OSI(Module Device)
Mar  1 04:00:45 np0005634532 kernel: ACPI: Added _OSI(Processor Device)
Mar  1 04:00:45 np0005634532 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Mar  1 04:00:45 np0005634532 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Mar  1 04:00:45 np0005634532 kernel: ACPI: Interpreter enabled
Mar  1 04:00:45 np0005634532 kernel: ACPI: PM: (supports S0 S3 S4 S5)
Mar  1 04:00:45 np0005634532 kernel: ACPI: Using IOAPIC for interrupt routing
Mar  1 04:00:45 np0005634532 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Mar  1 04:00:45 np0005634532 kernel: PCI: Using E820 reservations for host bridge windows
Mar  1 04:00:45 np0005634532 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Mar  1 04:00:45 np0005634532 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Mar  1 04:00:45 np0005634532 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI EDR HPX-Type3]
Mar  1 04:00:45 np0005634532 kernel: acpiphp: Slot [3] registered
Mar  1 04:00:45 np0005634532 kernel: acpiphp: Slot [4] registered
Mar  1 04:00:45 np0005634532 kernel: acpiphp: Slot [5] registered
Mar  1 04:00:45 np0005634532 kernel: acpiphp: Slot [6] registered
Mar  1 04:00:45 np0005634532 kernel: acpiphp: Slot [7] registered
Mar  1 04:00:45 np0005634532 kernel: acpiphp: Slot [8] registered
Mar  1 04:00:45 np0005634532 kernel: acpiphp: Slot [9] registered
Mar  1 04:00:45 np0005634532 kernel: acpiphp: Slot [10] registered
Mar  1 04:00:45 np0005634532 kernel: acpiphp: Slot [11] registered
Mar  1 04:00:45 np0005634532 kernel: acpiphp: Slot [12] registered
Mar  1 04:00:45 np0005634532 kernel: acpiphp: Slot [13] registered
Mar  1 04:00:45 np0005634532 kernel: acpiphp: Slot [14] registered
Mar  1 04:00:45 np0005634532 kernel: acpiphp: Slot [15] registered
Mar  1 04:00:45 np0005634532 kernel: acpiphp: Slot [16] registered
Mar  1 04:00:45 np0005634532 kernel: acpiphp: Slot [17] registered
Mar  1 04:00:45 np0005634532 kernel: acpiphp: Slot [18] registered
Mar  1 04:00:45 np0005634532 kernel: acpiphp: Slot [19] registered
Mar  1 04:00:45 np0005634532 kernel: acpiphp: Slot [20] registered
Mar  1 04:00:45 np0005634532 kernel: acpiphp: Slot [21] registered
Mar  1 04:00:45 np0005634532 kernel: acpiphp: Slot [22] registered
Mar  1 04:00:45 np0005634532 kernel: acpiphp: Slot [23] registered
Mar  1 04:00:45 np0005634532 kernel: acpiphp: Slot [24] registered
Mar  1 04:00:45 np0005634532 kernel: acpiphp: Slot [25] registered
Mar  1 04:00:45 np0005634532 kernel: acpiphp: Slot [26] registered
Mar  1 04:00:45 np0005634532 kernel: acpiphp: Slot [27] registered
Mar  1 04:00:45 np0005634532 kernel: acpiphp: Slot [28] registered
Mar  1 04:00:45 np0005634532 kernel: acpiphp: Slot [29] registered
Mar  1 04:00:45 np0005634532 kernel: acpiphp: Slot [30] registered
Mar  1 04:00:45 np0005634532 kernel: acpiphp: Slot [31] registered
Mar  1 04:00:45 np0005634532 kernel: PCI host bridge to bus 0000:00
Mar  1 04:00:45 np0005634532 kernel: pci_bus 0000:00: root bus resource [io  0x0000-0x0cf7 window]
Mar  1 04:00:45 np0005634532 kernel: pci_bus 0000:00: root bus resource [io  0x0d00-0xffff window]
Mar  1 04:00:45 np0005634532 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Mar  1 04:00:45 np0005634532 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Mar  1 04:00:45 np0005634532 kernel: pci_bus 0000:00: root bus resource [mem 0x240000000-0x2bfffffff window]
Mar  1 04:00:45 np0005634532 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Mar  1 04:00:45 np0005634532 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint
Mar  1 04:00:45 np0005634532 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint
Mar  1 04:00:45 np0005634532 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 conventional PCI endpoint
Mar  1 04:00:45 np0005634532 kernel: pci 0000:00:01.1: BAR 4 [io  0xc140-0xc14f]
Mar  1 04:00:45 np0005634532 kernel: pci 0000:00:01.1: BAR 0 [io  0x01f0-0x01f7]: legacy IDE quirk
Mar  1 04:00:45 np0005634532 kernel: pci 0000:00:01.1: BAR 1 [io  0x03f6]: legacy IDE quirk
Mar  1 04:00:45 np0005634532 kernel: pci 0000:00:01.1: BAR 2 [io  0x0170-0x0177]: legacy IDE quirk
Mar  1 04:00:45 np0005634532 kernel: pci 0000:00:01.1: BAR 3 [io  0x0376]: legacy IDE quirk
Mar  1 04:00:45 np0005634532 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 conventional PCI endpoint
Mar  1 04:00:45 np0005634532 kernel: pci 0000:00:01.2: BAR 4 [io  0xc100-0xc11f]
Mar  1 04:00:45 np0005634532 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint
Mar  1 04:00:45 np0005634532 kernel: pci 0000:00:01.3: quirk: [io  0x0600-0x063f] claimed by PIIX4 ACPI
Mar  1 04:00:45 np0005634532 kernel: pci 0000:00:01.3: quirk: [io  0x0700-0x070f] claimed by PIIX4 SMB
Mar  1 04:00:45 np0005634532 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint
Mar  1 04:00:45 np0005634532 kernel: pci 0000:00:02.0: BAR 0 [mem 0xfe000000-0xfe7fffff pref]
Mar  1 04:00:45 np0005634532 kernel: pci 0000:00:02.0: BAR 2 [mem 0xfe800000-0xfe803fff 64bit pref]
Mar  1 04:00:45 np0005634532 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfeb90000-0xfeb90fff]
Mar  1 04:00:45 np0005634532 kernel: pci 0000:00:02.0: ROM [mem 0xfeb80000-0xfeb8ffff pref]
Mar  1 04:00:45 np0005634532 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Mar  1 04:00:45 np0005634532 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Mar  1 04:00:45 np0005634532 kernel: pci 0000:00:03.0: BAR 0 [io  0xc080-0xc0bf]
Mar  1 04:00:45 np0005634532 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfeb91000-0xfeb91fff]
Mar  1 04:00:45 np0005634532 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe804000-0xfe807fff 64bit pref]
Mar  1 04:00:45 np0005634532 kernel: pci 0000:00:03.0: ROM [mem 0xfeb00000-0xfeb7ffff pref]
Mar  1 04:00:45 np0005634532 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Mar  1 04:00:45 np0005634532 kernel: pci 0000:00:04.0: BAR 0 [io  0xc000-0xc07f]
Mar  1 04:00:45 np0005634532 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfeb92000-0xfeb92fff]
Mar  1 04:00:45 np0005634532 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe808000-0xfe80bfff 64bit pref]
Mar  1 04:00:45 np0005634532 kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00 conventional PCI endpoint
Mar  1 04:00:45 np0005634532 kernel: pci 0000:00:05.0: BAR 0 [io  0xc0c0-0xc0ff]
Mar  1 04:00:45 np0005634532 kernel: pci 0000:00:05.0: BAR 4 [mem 0xfe80c000-0xfe80ffff 64bit pref]
Mar  1 04:00:45 np0005634532 kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Mar  1 04:00:45 np0005634532 kernel: pci 0000:00:06.0: BAR 0 [io  0xc120-0xc13f]
Mar  1 04:00:45 np0005634532 kernel: pci 0000:00:06.0: BAR 4 [mem 0xfe810000-0xfe813fff 64bit pref]
Mar  1 04:00:45 np0005634532 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Mar  1 04:00:45 np0005634532 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Mar  1 04:00:45 np0005634532 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Mar  1 04:00:45 np0005634532 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Mar  1 04:00:45 np0005634532 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Mar  1 04:00:45 np0005634532 kernel: iommu: Default domain type: Translated
Mar  1 04:00:45 np0005634532 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Mar  1 04:00:45 np0005634532 kernel: SCSI subsystem initialized
Mar  1 04:00:45 np0005634532 kernel: ACPI: bus type USB registered
Mar  1 04:00:45 np0005634532 kernel: usbcore: registered new interface driver usbfs
Mar  1 04:00:45 np0005634532 kernel: usbcore: registered new interface driver hub
Mar  1 04:00:45 np0005634532 kernel: usbcore: registered new device driver usb
Mar  1 04:00:45 np0005634532 kernel: pps_core: LinuxPPS API ver. 1 registered
Mar  1 04:00:45 np0005634532 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Mar  1 04:00:45 np0005634532 kernel: PTP clock support registered
Mar  1 04:00:45 np0005634532 kernel: EDAC MC: Ver: 3.0.0
Mar  1 04:00:45 np0005634532 kernel: NetLabel: Initializing
Mar  1 04:00:45 np0005634532 kernel: NetLabel:  domain hash size = 128
Mar  1 04:00:45 np0005634532 kernel: NetLabel:  protocols = UNLABELED CIPSOv4 CALIPSO
Mar  1 04:00:45 np0005634532 kernel: NetLabel:  unlabeled traffic allowed by default
Mar  1 04:00:45 np0005634532 kernel: PCI: Using ACPI for IRQ routing
Mar  1 04:00:45 np0005634532 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Mar  1 04:00:45 np0005634532 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Mar  1 04:00:45 np0005634532 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Mar  1 04:00:45 np0005634532 kernel: vgaarb: loaded
Mar  1 04:00:45 np0005634532 kernel: clocksource: Switched to clocksource kvm-clock
Mar  1 04:00:45 np0005634532 kernel: VFS: Disk quotas dquot_6.6.0
Mar  1 04:00:45 np0005634532 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Mar  1 04:00:45 np0005634532 kernel: pnp: PnP ACPI init
Mar  1 04:00:45 np0005634532 kernel: pnp: PnP ACPI: found 5 devices
Mar  1 04:00:45 np0005634532 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Mar  1 04:00:45 np0005634532 kernel: NET: Registered PF_INET protocol family
Mar  1 04:00:45 np0005634532 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Mar  1 04:00:45 np0005634532 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Mar  1 04:00:45 np0005634532 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Mar  1 04:00:45 np0005634532 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Mar  1 04:00:45 np0005634532 kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Mar  1 04:00:45 np0005634532 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Mar  1 04:00:45 np0005634532 kernel: MPTCP token hash table entries: 8192 (order: 5, 196608 bytes, linear)
Mar  1 04:00:45 np0005634532 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Mar  1 04:00:45 np0005634532 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Mar  1 04:00:45 np0005634532 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Mar  1 04:00:45 np0005634532 kernel: NET: Registered PF_XDP protocol family
Mar  1 04:00:45 np0005634532 kernel: pci_bus 0000:00: resource 4 [io  0x0000-0x0cf7 window]
Mar  1 04:00:45 np0005634532 kernel: pci_bus 0000:00: resource 5 [io  0x0d00-0xffff window]
Mar  1 04:00:45 np0005634532 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Mar  1 04:00:45 np0005634532 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window]
Mar  1 04:00:45 np0005634532 kernel: pci_bus 0000:00: resource 8 [mem 0x240000000-0x2bfffffff window]
Mar  1 04:00:45 np0005634532 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Mar  1 04:00:45 np0005634532 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Mar  1 04:00:45 np0005634532 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Mar  1 04:00:45 np0005634532 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x160 took 52697 usecs
Mar  1 04:00:45 np0005634532 kernel: PCI: CLS 0 bytes, default 64
Mar  1 04:00:45 np0005634532 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Mar  1 04:00:45 np0005634532 kernel: software IO TLB: mapped [mem 0x00000000ab000000-0x00000000af000000] (64MB)
Mar  1 04:00:45 np0005634532 kernel: ACPI: bus type thunderbolt registered
Mar  1 04:00:45 np0005634532 kernel: Trying to unpack rootfs image as initramfs...
Mar  1 04:00:45 np0005634532 kernel: Initialise system trusted keyrings
Mar  1 04:00:45 np0005634532 kernel: Key type blacklist registered
Mar  1 04:00:45 np0005634532 kernel: workingset: timestamp_bits=36 max_order=21 bucket_order=0
Mar  1 04:00:45 np0005634532 kernel: zbud: loaded
Mar  1 04:00:45 np0005634532 kernel: integrity: Platform Keyring initialized
Mar  1 04:00:45 np0005634532 kernel: integrity: Machine keyring initialized
Mar  1 04:00:45 np0005634532 kernel: Freeing initrd memory: 234060K
Mar  1 04:00:45 np0005634532 kernel: NET: Registered PF_ALG protocol family
Mar  1 04:00:45 np0005634532 kernel: xor: automatically using best checksumming function   avx
Mar  1 04:00:45 np0005634532 kernel: Key type asymmetric registered
Mar  1 04:00:45 np0005634532 kernel: Asymmetric key parser 'x509' registered
Mar  1 04:00:45 np0005634532 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 246)
Mar  1 04:00:45 np0005634532 kernel: io scheduler mq-deadline registered
Mar  1 04:00:45 np0005634532 kernel: io scheduler kyber registered
Mar  1 04:00:45 np0005634532 kernel: io scheduler bfq registered
Mar  1 04:00:45 np0005634532 kernel: atomic64_test: passed for x86-64 platform with CX8 and with SSE
Mar  1 04:00:45 np0005634532 kernel: shpchp: Standard Hot Plug PCI Controller Driver version: 0.4
Mar  1 04:00:45 np0005634532 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0
Mar  1 04:00:45 np0005634532 kernel: ACPI: button: Power Button [PWRF]
Mar  1 04:00:45 np0005634532 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Mar  1 04:00:45 np0005634532 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Mar  1 04:00:45 np0005634532 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Mar  1 04:00:45 np0005634532 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Mar  1 04:00:45 np0005634532 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Mar  1 04:00:45 np0005634532 kernel: Non-volatile memory driver v1.3
Mar  1 04:00:45 np0005634532 kernel: rdac: device handler registered
Mar  1 04:00:45 np0005634532 kernel: hp_sw: device handler registered
Mar  1 04:00:45 np0005634532 kernel: emc: device handler registered
Mar  1 04:00:45 np0005634532 kernel: alua: device handler registered
Mar  1 04:00:45 np0005634532 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Mar  1 04:00:45 np0005634532 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Mar  1 04:00:45 np0005634532 kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Mar  1 04:00:45 np0005634532 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c100
Mar  1 04:00:45 np0005634532 kernel: usb usb1: New USB device found, idVendor=1d6b, idProduct=0001, bcdDevice= 5.14
Mar  1 04:00:45 np0005634532 kernel: usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Mar  1 04:00:45 np0005634532 kernel: usb usb1: Product: UHCI Host Controller
Mar  1 04:00:45 np0005634532 kernel: usb usb1: Manufacturer: Linux 5.14.0-686.el9.x86_64 uhci_hcd
Mar  1 04:00:45 np0005634532 kernel: usb usb1: SerialNumber: 0000:00:01.2
Mar  1 04:00:45 np0005634532 kernel: hub 1-0:1.0: USB hub found
Mar  1 04:00:45 np0005634532 kernel: hub 1-0:1.0: 2 ports detected
Mar  1 04:00:45 np0005634532 kernel: usbcore: registered new interface driver usbserial_generic
Mar  1 04:00:45 np0005634532 kernel: usbserial: USB Serial support registered for generic
Mar  1 04:00:45 np0005634532 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Mar  1 04:00:45 np0005634532 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Mar  1 04:00:45 np0005634532 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Mar  1 04:00:45 np0005634532 kernel: mousedev: PS/2 mouse device common for all mice
Mar  1 04:00:45 np0005634532 kernel: rtc_cmos 00:04: RTC can wake from S4
Mar  1 04:00:45 np0005634532 kernel: rtc_cmos 00:04: registered as rtc0
Mar  1 04:00:45 np0005634532 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Mar  1 04:00:45 np0005634532 kernel: rtc_cmos 00:04: setting system clock to 2026-03-01T09:00:44 UTC (1772355644)
Mar  1 04:00:45 np0005634532 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Mar  1 04:00:45 np0005634532 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Mar  1 04:00:45 np0005634532 kernel: hid: raw HID events driver (C) Jiri Kosina
Mar  1 04:00:45 np0005634532 kernel: usbcore: registered new interface driver usbhid
Mar  1 04:00:45 np0005634532 kernel: usbhid: USB HID core driver
Mar  1 04:00:45 np0005634532 kernel: drop_monitor: Initializing network drop monitor service
Mar  1 04:00:45 np0005634532 kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input4
Mar  1 04:00:45 np0005634532 kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input3
Mar  1 04:00:45 np0005634532 kernel: Initializing XFRM netlink socket
Mar  1 04:00:45 np0005634532 kernel: NET: Registered PF_INET6 protocol family
Mar  1 04:00:45 np0005634532 kernel: Segment Routing with IPv6
Mar  1 04:00:45 np0005634532 kernel: NET: Registered PF_PACKET protocol family
Mar  1 04:00:45 np0005634532 kernel: mpls_gso: MPLS GSO support
Mar  1 04:00:45 np0005634532 kernel: IPI shorthand broadcast: enabled
Mar  1 04:00:45 np0005634532 kernel: AVX2 version of gcm_enc/dec engaged.
Mar  1 04:00:45 np0005634532 kernel: AES CTR mode by8 optimization enabled
Mar  1 04:00:45 np0005634532 kernel: sched_clock: Marking stable (1200009240, 143233330)->(1460690300, -117447730)
Mar  1 04:00:45 np0005634532 kernel: registered taskstats version 1
Mar  1 04:00:45 np0005634532 kernel: Loading compiled-in X.509 certificates
Mar  1 04:00:45 np0005634532 kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: d9d4cefd3ca2c4957ef0b2e7c6e39a7e4ae16390'
Mar  1 04:00:45 np0005634532 kernel: Loaded X.509 cert 'Red Hat Enterprise Linux Driver Update Program (key 3): bf57f3e87362bc7229d9f465321773dfd1f77a80'
Mar  1 04:00:45 np0005634532 kernel: Loaded X.509 cert 'Red Hat Enterprise Linux kpatch signing key: 4d38fd864ebe18c5f0b72e3852e2014c3a676fc8'
Mar  1 04:00:45 np0005634532 kernel: Loaded X.509 cert 'RH-IMA-CA: Red Hat IMA CA: fb31825dd0e073685b264e3038963673f753959a'
Mar  1 04:00:45 np0005634532 kernel: Loaded X.509 cert 'Nvidia GPU OOT signing 001: 55e1cef88193e60419f0b0ec379c49f77545acf0'
Mar  1 04:00:45 np0005634532 kernel: Demotion targets for Node 0: null
Mar  1 04:00:45 np0005634532 kernel: page_owner is disabled
Mar  1 04:00:45 np0005634532 kernel: Key type .fscrypt registered
Mar  1 04:00:45 np0005634532 kernel: Key type fscrypt-provisioning registered
Mar  1 04:00:45 np0005634532 kernel: Key type big_key registered
Mar  1 04:00:45 np0005634532 kernel: Key type encrypted registered
Mar  1 04:00:45 np0005634532 kernel: ima: No TPM chip found, activating TPM-bypass!
Mar  1 04:00:45 np0005634532 kernel: Loading compiled-in module X.509 certificates
Mar  1 04:00:45 np0005634532 kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: d9d4cefd3ca2c4957ef0b2e7c6e39a7e4ae16390'
Mar  1 04:00:45 np0005634532 kernel: ima: Allocated hash algorithm: sha256
Mar  1 04:00:45 np0005634532 kernel: ima: No architecture policies found
Mar  1 04:00:45 np0005634532 kernel: evm: Initialising EVM extended attributes:
Mar  1 04:00:45 np0005634532 kernel: evm: security.selinux
Mar  1 04:00:45 np0005634532 kernel: evm: security.SMACK64 (disabled)
Mar  1 04:00:45 np0005634532 kernel: evm: security.SMACK64EXEC (disabled)
Mar  1 04:00:45 np0005634532 kernel: evm: security.SMACK64TRANSMUTE (disabled)
Mar  1 04:00:45 np0005634532 kernel: evm: security.SMACK64MMAP (disabled)
Mar  1 04:00:45 np0005634532 kernel: evm: security.apparmor (disabled)
Mar  1 04:00:45 np0005634532 kernel: evm: security.ima
Mar  1 04:00:45 np0005634532 kernel: evm: security.capability
Mar  1 04:00:45 np0005634532 kernel: evm: HMAC attrs: 0x1
Mar  1 04:00:45 np0005634532 kernel: usb 1-1: new full-speed USB device number 2 using uhci_hcd
Mar  1 04:00:45 np0005634532 kernel: Running certificate verification RSA selftest
Mar  1 04:00:45 np0005634532 kernel: Loaded X.509 cert 'Certificate verification self-testing key: f58703bb33ce1b73ee02eccdee5b8817518fe3db'
Mar  1 04:00:45 np0005634532 kernel: Running certificate verification ECDSA selftest
Mar  1 04:00:45 np0005634532 kernel: Loaded X.509 cert 'Certificate verification ECDSA self-testing key: 2900bcea1deb7bc8479a84a23d758efdfdd2b2d3'
Mar  1 04:00:45 np0005634532 kernel: clk: Disabling unused clocks
Mar  1 04:00:45 np0005634532 kernel: Freeing unused decrypted memory: 2028K
Mar  1 04:00:45 np0005634532 kernel: Freeing unused kernel image (initmem) memory: 4204K
Mar  1 04:00:45 np0005634532 kernel: Write protecting the kernel read-only data: 30720k
Mar  1 04:00:45 np0005634532 kernel: Freeing unused kernel image (rodata/data gap) memory: 380K
Mar  1 04:00:45 np0005634532 kernel: usb 1-1: New USB device found, idVendor=0627, idProduct=0001, bcdDevice= 0.00
Mar  1 04:00:45 np0005634532 kernel: usb 1-1: New USB device strings: Mfr=1, Product=3, SerialNumber=10
Mar  1 04:00:45 np0005634532 kernel: usb 1-1: Product: QEMU USB Tablet
Mar  1 04:00:45 np0005634532 kernel: usb 1-1: Manufacturer: QEMU
Mar  1 04:00:45 np0005634532 kernel: usb 1-1: SerialNumber: 28754-0000:00:01.2-1
Mar  1 04:00:45 np0005634532 kernel: x86/mm: Checked W+X mappings: passed, no W+X pages found.
Mar  1 04:00:45 np0005634532 kernel: Run /init as init process
Mar  1 04:00:45 np0005634532 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:01.2/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input5
Mar  1 04:00:45 np0005634532 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:00:01.2-1/input0
Mar  1 04:00:45 np0005634532 systemd: systemd 252-64.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Mar  1 04:00:45 np0005634532 systemd: Detected virtualization kvm.
Mar  1 04:00:45 np0005634532 systemd: Detected architecture x86-64.
Mar  1 04:00:45 np0005634532 systemd: Running in initrd.
Mar  1 04:00:45 np0005634532 systemd: No hostname configured, using default hostname.
Mar  1 04:00:45 np0005634532 systemd: Hostname set to <localhost>.
Mar  1 04:00:45 np0005634532 systemd: Initializing machine ID from VM UUID.
Mar  1 04:00:45 np0005634532 systemd: Queued start job for default target Initrd Default Target.
Mar  1 04:00:45 np0005634532 systemd: Started Dispatch Password Requests to Console Directory Watch.
Mar  1 04:00:45 np0005634532 systemd: Reached target Local Encrypted Volumes.
Mar  1 04:00:45 np0005634532 systemd: Reached target Initrd /usr File System.
Mar  1 04:00:45 np0005634532 systemd: Reached target Local File Systems.
Mar  1 04:00:45 np0005634532 systemd: Reached target Path Units.
Mar  1 04:00:45 np0005634532 systemd: Reached target Slice Units.
Mar  1 04:00:45 np0005634532 systemd: Reached target Swaps.
Mar  1 04:00:45 np0005634532 systemd: Reached target Timer Units.
Mar  1 04:00:45 np0005634532 systemd: Listening on D-Bus System Message Bus Socket.
Mar  1 04:00:45 np0005634532 systemd: Listening on Journal Socket (/dev/log).
Mar  1 04:00:45 np0005634532 systemd: Listening on Journal Socket.
Mar  1 04:00:45 np0005634532 systemd: Listening on udev Control Socket.
Mar  1 04:00:45 np0005634532 systemd: Listening on udev Kernel Socket.
Mar  1 04:00:45 np0005634532 systemd: Reached target Socket Units.
Mar  1 04:00:45 np0005634532 systemd: Starting Create List of Static Device Nodes...
Mar  1 04:00:45 np0005634532 systemd: Starting Journal Service...
Mar  1 04:00:45 np0005634532 systemd: Load Kernel Modules was skipped because no trigger condition checks were met.
Mar  1 04:00:45 np0005634532 systemd: Starting Apply Kernel Variables...
Mar  1 04:00:45 np0005634532 systemd: Starting Create System Users...
Mar  1 04:00:45 np0005634532 systemd: Starting Setup Virtual Console...
Mar  1 04:00:45 np0005634532 systemd: Finished Create List of Static Device Nodes.
Mar  1 04:00:45 np0005634532 systemd: Finished Apply Kernel Variables.
Mar  1 04:00:45 np0005634532 systemd: Finished Create System Users.
Mar  1 04:00:45 np0005634532 systemd-journald[306]: Journal started
Mar  1 04:00:45 np0005634532 systemd-journald[306]: Runtime Journal (/run/log/journal/6160888c43c94b54beddc53838a90ca3) is 8.0M, max 153.6M, 145.6M free.
Mar  1 04:00:45 np0005634532 systemd-sysusers[311]: Creating group 'users' with GID 100.
Mar  1 04:00:45 np0005634532 systemd-sysusers[311]: Creating group 'dbus' with GID 81.
Mar  1 04:00:45 np0005634532 systemd-sysusers[311]: Creating user 'dbus' (System Message Bus) with UID 81 and GID 81.
Mar  1 04:00:45 np0005634532 systemd: Started Journal Service.
Mar  1 04:00:45 np0005634532 systemd[1]: Starting Create Static Device Nodes in /dev...
Mar  1 04:00:45 np0005634532 systemd[1]: Starting Create Volatile Files and Directories...
Mar  1 04:00:45 np0005634532 systemd[1]: Finished Create Static Device Nodes in /dev.
Mar  1 04:00:45 np0005634532 systemd[1]: Finished Create Volatile Files and Directories.
Mar  1 04:00:45 np0005634532 systemd[1]: Finished Setup Virtual Console.
Mar  1 04:00:45 np0005634532 systemd[1]: dracut ask for additional cmdline parameters was skipped because no trigger condition checks were met.
Mar  1 04:00:45 np0005634532 systemd[1]: Starting dracut cmdline hook...
Mar  1 04:00:45 np0005634532 dracut-cmdline[326]: dracut-9 dracut-057-110.git20260130.el9
Mar  1 04:00:45 np0005634532 dracut-cmdline[326]: Using kernel command line parameters:    BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-686.el9.x86_64 root=UUID=37391a25-080d-4723-8b0c-cb88a559875b ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Mar  1 04:00:45 np0005634532 systemd[1]: Finished dracut cmdline hook.
Mar  1 04:00:45 np0005634532 systemd[1]: Starting dracut pre-udev hook...
Mar  1 04:00:45 np0005634532 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Mar  1 04:00:45 np0005634532 kernel: device-mapper: uevent: version 1.0.3
Mar  1 04:00:45 np0005634532 kernel: device-mapper: ioctl: 4.50.0-ioctl (2025-04-28) initialised: dm-devel@lists.linux.dev
Mar  1 04:00:45 np0005634532 kernel: RPC: Registered named UNIX socket transport module.
Mar  1 04:00:45 np0005634532 kernel: RPC: Registered udp transport module.
Mar  1 04:00:45 np0005634532 kernel: RPC: Registered tcp transport module.
Mar  1 04:00:45 np0005634532 kernel: RPC: Registered tcp-with-tls transport module.
Mar  1 04:00:45 np0005634532 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Mar  1 04:00:45 np0005634532 rpc.statd[444]: Version 2.5.4 starting
Mar  1 04:00:45 np0005634532 rpc.statd[444]: Initializing NSM state
Mar  1 04:00:45 np0005634532 rpc.idmapd[449]: Setting log level to 0
Mar  1 04:00:45 np0005634532 systemd[1]: Finished dracut pre-udev hook.
Mar  1 04:00:45 np0005634532 systemd[1]: Starting Rule-based Manager for Device Events and Files...
Mar  1 04:00:45 np0005634532 systemd-udevd[462]: Using default interface naming scheme 'rhel-9.0'.
Mar  1 04:00:45 np0005634532 systemd[1]: Started Rule-based Manager for Device Events and Files.
Mar  1 04:00:45 np0005634532 systemd[1]: Starting dracut pre-trigger hook...
Mar  1 04:00:45 np0005634532 systemd[1]: Finished dracut pre-trigger hook.
Mar  1 04:00:45 np0005634532 systemd[1]: Starting Coldplug All udev Devices...
Mar  1 04:00:46 np0005634532 systemd[1]: Created slice Slice /system/modprobe.
Mar  1 04:00:46 np0005634532 systemd[1]: Starting Load Kernel Module configfs...
Mar  1 04:00:46 np0005634532 systemd[1]: Finished Coldplug All udev Devices.
Mar  1 04:00:46 np0005634532 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Mar  1 04:00:46 np0005634532 systemd[1]: Finished Load Kernel Module configfs.
Mar  1 04:00:46 np0005634532 systemd[1]: nm-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Mar  1 04:00:46 np0005634532 systemd[1]: Reached target Network.
Mar  1 04:00:46 np0005634532 systemd[1]: nm-wait-online-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Mar  1 04:00:46 np0005634532 systemd[1]: Starting dracut initqueue hook...
Mar  1 04:00:46 np0005634532 kernel: virtio_blk virtio2: 8/0/0 default/read/poll queues
Mar  1 04:00:46 np0005634532 kernel: virtio_blk virtio2: [vda] 167772160 512-byte logical blocks (85.9 GB/80.0 GiB)
Mar  1 04:00:46 np0005634532 kernel: vda: vda1
Mar  1 04:00:46 np0005634532 kernel: scsi host0: ata_piix
Mar  1 04:00:46 np0005634532 kernel: scsi host1: ata_piix
Mar  1 04:00:46 np0005634532 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc140 irq 14 lpm-pol 0
Mar  1 04:00:46 np0005634532 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc148 irq 15 lpm-pol 0
Mar  1 04:00:46 np0005634532 systemd[1]: Mounting Kernel Configuration File System...
Mar  1 04:00:46 np0005634532 systemd[1]: Mounted Kernel Configuration File System.
Mar  1 04:00:46 np0005634532 systemd[1]: Reached target System Initialization.
Mar  1 04:00:46 np0005634532 systemd[1]: Reached target Basic System.
Mar  1 04:00:46 np0005634532 systemd-udevd[465]: Network interface NamePolicy= disabled on kernel command line.
Mar  1 04:00:46 np0005634532 kernel: ACPI: bus type drm_connector registered
Mar  1 04:00:46 np0005634532 systemd[1]: Found device /dev/disk/by-uuid/37391a25-080d-4723-8b0c-cb88a559875b.
Mar  1 04:00:46 np0005634532 systemd[1]: Reached target Initrd Root Device.
Mar  1 04:00:46 np0005634532 kernel: ata1: found unknown device (class 0)
Mar  1 04:00:46 np0005634532 kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Mar  1 04:00:46 np0005634532 kernel: scsi 0:0:0:0: CD-ROM            QEMU     QEMU DVD-ROM     2.5+ PQ: 0 ANSI: 5
Mar  1 04:00:46 np0005634532 kernel: scsi 0:0:0:0: Attached scsi generic sg0 type 5
Mar  1 04:00:46 np0005634532 kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Mar  1 04:00:46 np0005634532 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Mar  1 04:00:46 np0005634532 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Mar  1 04:00:46 np0005634532 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Mar  1 04:00:46 np0005634532 kernel: Console: switching to colour dummy device 80x25
Mar  1 04:00:46 np0005634532 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Mar  1 04:00:46 np0005634532 kernel: [drm] features: -context_init
Mar  1 04:00:46 np0005634532 kernel: [drm] number of scanouts: 1
Mar  1 04:00:46 np0005634532 kernel: [drm] number of cap sets: 0
Mar  1 04:00:46 np0005634532 kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:02.0 on minor 0
Mar  1 04:00:46 np0005634532 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Mar  1 04:00:46 np0005634532 kernel: Console: switching to colour frame buffer device 128x48
Mar  1 04:00:46 np0005634532 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Mar  1 04:00:46 np0005634532 systemd[1]: Finished dracut initqueue hook.
Mar  1 04:00:46 np0005634532 systemd[1]: Reached target Preparation for Remote File Systems.
Mar  1 04:00:46 np0005634532 systemd[1]: Reached target Remote Encrypted Volumes.
Mar  1 04:00:46 np0005634532 systemd[1]: Reached target Remote File Systems.
Mar  1 04:00:46 np0005634532 systemd[1]: Starting dracut pre-mount hook...
Mar  1 04:00:46 np0005634532 systemd[1]: Finished dracut pre-mount hook.
Mar  1 04:00:46 np0005634532 systemd[1]: Starting File System Check on /dev/disk/by-uuid/37391a25-080d-4723-8b0c-cb88a559875b...
Mar  1 04:00:46 np0005634532 systemd-fsck[567]: /usr/sbin/fsck.xfs: XFS file system.
Mar  1 04:00:46 np0005634532 systemd[1]: Finished File System Check on /dev/disk/by-uuid/37391a25-080d-4723-8b0c-cb88a559875b.
Mar  1 04:00:46 np0005634532 systemd[1]: Mounting /sysroot...
Mar  1 04:00:47 np0005634532 kernel: SGI XFS with ACLs, security attributes, scrub, quota, no debug enabled
Mar  1 04:00:47 np0005634532 kernel: XFS (vda1): Mounting V5 Filesystem 37391a25-080d-4723-8b0c-cb88a559875b
Mar  1 04:00:47 np0005634532 kernel: XFS (vda1): Ending clean mount
Mar  1 04:00:47 np0005634532 systemd[1]: Mounted /sysroot.
Mar  1 04:00:47 np0005634532 systemd[1]: Reached target Initrd Root File System.
Mar  1 04:00:47 np0005634532 systemd[1]: Starting Mountpoints Configured in the Real Root...
Mar  1 04:00:47 np0005634532 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Mar  1 04:00:47 np0005634532 systemd[1]: Finished Mountpoints Configured in the Real Root.
Mar  1 04:00:47 np0005634532 systemd[1]: Reached target Initrd File Systems.
Mar  1 04:00:47 np0005634532 systemd[1]: Reached target Initrd Default Target.
Mar  1 04:00:47 np0005634532 systemd[1]: Starting dracut mount hook...
Mar  1 04:00:47 np0005634532 systemd[1]: Finished dracut mount hook.
Mar  1 04:00:47 np0005634532 systemd[1]: Starting dracut pre-pivot and cleanup hook...
Mar  1 04:00:47 np0005634532 rpc.idmapd[449]: exiting on signal 15
Mar  1 04:00:47 np0005634532 systemd[1]: var-lib-nfs-rpc_pipefs.mount: Deactivated successfully.
Mar  1 04:00:47 np0005634532 systemd[1]: Finished dracut pre-pivot and cleanup hook.
Mar  1 04:00:47 np0005634532 systemd[1]: Starting Cleaning Up and Shutting Down Daemons...
Mar  1 04:00:47 np0005634532 systemd[1]: Stopped target Network.
Mar  1 04:00:47 np0005634532 systemd[1]: Stopped target Remote Encrypted Volumes.
Mar  1 04:00:47 np0005634532 systemd[1]: Stopped target Timer Units.
Mar  1 04:00:47 np0005634532 systemd[1]: dbus.socket: Deactivated successfully.
Mar  1 04:00:47 np0005634532 systemd[1]: Closed D-Bus System Message Bus Socket.
Mar  1 04:00:47 np0005634532 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Mar  1 04:00:47 np0005634532 systemd[1]: Stopped dracut pre-pivot and cleanup hook.
Mar  1 04:00:47 np0005634532 systemd[1]: Stopped target Initrd Default Target.
Mar  1 04:00:47 np0005634532 systemd[1]: Stopped target Basic System.
Mar  1 04:00:47 np0005634532 systemd[1]: Stopped target Initrd Root Device.
Mar  1 04:00:47 np0005634532 systemd[1]: Stopped target Initrd /usr File System.
Mar  1 04:00:47 np0005634532 systemd[1]: Stopped target Path Units.
Mar  1 04:00:47 np0005634532 systemd[1]: Stopped target Remote File Systems.
Mar  1 04:00:47 np0005634532 systemd[1]: Stopped target Preparation for Remote File Systems.
Mar  1 04:00:47 np0005634532 systemd[1]: Stopped target Slice Units.
Mar  1 04:00:47 np0005634532 systemd[1]: Stopped target Socket Units.
Mar  1 04:00:47 np0005634532 systemd[1]: Stopped target System Initialization.
Mar  1 04:00:47 np0005634532 systemd[1]: Stopped target Local File Systems.
Mar  1 04:00:47 np0005634532 systemd[1]: Stopped target Swaps.
Mar  1 04:00:47 np0005634532 systemd[1]: dracut-mount.service: Deactivated successfully.
Mar  1 04:00:47 np0005634532 systemd[1]: Stopped dracut mount hook.
Mar  1 04:00:47 np0005634532 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Mar  1 04:00:47 np0005634532 systemd[1]: Stopped dracut pre-mount hook.
Mar  1 04:00:47 np0005634532 systemd[1]: Stopped target Local Encrypted Volumes.
Mar  1 04:00:47 np0005634532 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Mar  1 04:00:47 np0005634532 systemd[1]: Stopped Dispatch Password Requests to Console Directory Watch.
Mar  1 04:00:47 np0005634532 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Mar  1 04:00:47 np0005634532 systemd[1]: Stopped dracut initqueue hook.
Mar  1 04:00:47 np0005634532 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Mar  1 04:00:47 np0005634532 systemd[1]: Stopped Apply Kernel Variables.
Mar  1 04:00:47 np0005634532 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Mar  1 04:00:47 np0005634532 systemd[1]: Stopped Create Volatile Files and Directories.
Mar  1 04:00:47 np0005634532 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Mar  1 04:00:47 np0005634532 systemd[1]: Stopped Coldplug All udev Devices.
Mar  1 04:00:47 np0005634532 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Mar  1 04:00:47 np0005634532 systemd[1]: Stopped dracut pre-trigger hook.
Mar  1 04:00:47 np0005634532 systemd[1]: Stopping Rule-based Manager for Device Events and Files...
Mar  1 04:00:47 np0005634532 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar  1 04:00:47 np0005634532 systemd[1]: Stopped Setup Virtual Console.
Mar  1 04:00:47 np0005634532 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Mar  1 04:00:47 np0005634532 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Mar  1 04:00:47 np0005634532 systemd[1]: systemd-udevd.service: Deactivated successfully.
Mar  1 04:00:47 np0005634532 systemd[1]: Stopped Rule-based Manager for Device Events and Files.
Mar  1 04:00:47 np0005634532 systemd[1]: systemd-udevd.service: Consumed 1.163s CPU time.
Mar  1 04:00:47 np0005634532 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Mar  1 04:00:47 np0005634532 systemd[1]: Finished Cleaning Up and Shutting Down Daemons.
Mar  1 04:00:47 np0005634532 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Mar  1 04:00:47 np0005634532 systemd[1]: Closed udev Control Socket.
Mar  1 04:00:47 np0005634532 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Mar  1 04:00:47 np0005634532 systemd[1]: Closed udev Kernel Socket.
Mar  1 04:00:47 np0005634532 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Mar  1 04:00:47 np0005634532 systemd[1]: Stopped dracut pre-udev hook.
Mar  1 04:00:47 np0005634532 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Mar  1 04:00:47 np0005634532 systemd[1]: Stopped dracut cmdline hook.
Mar  1 04:00:47 np0005634532 systemd[1]: Starting Cleanup udev Database...
Mar  1 04:00:47 np0005634532 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Mar  1 04:00:47 np0005634532 systemd[1]: Stopped Create Static Device Nodes in /dev.
Mar  1 04:00:47 np0005634532 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Mar  1 04:00:47 np0005634532 systemd[1]: Stopped Create List of Static Device Nodes.
Mar  1 04:00:47 np0005634532 systemd[1]: systemd-sysusers.service: Deactivated successfully.
Mar  1 04:00:47 np0005634532 systemd[1]: Stopped Create System Users.
Mar  1 04:00:47 np0005634532 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Mar  1 04:00:47 np0005634532 systemd[1]: run-credentials-systemd\x2dsysusers.service.mount: Deactivated successfully.
Mar  1 04:00:47 np0005634532 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Mar  1 04:00:47 np0005634532 systemd[1]: Finished Cleanup udev Database.
Mar  1 04:00:47 np0005634532 systemd[1]: Reached target Switch Root.
Mar  1 04:00:47 np0005634532 systemd[1]: Starting Switch Root...
Mar  1 04:00:47 np0005634532 systemd[1]: Switching root.
Mar  1 04:00:47 np0005634532 systemd-journald[306]: Journal stopped
Mar  1 04:00:49 np0005634532 systemd-journald: Received SIGTERM from PID 1 (systemd).
Mar  1 04:00:49 np0005634532 kernel: audit: type=1404 audit(1772355648.382:2): enforcing=1 old_enforcing=0 auid=4294967295 ses=4294967295 enabled=1 old-enabled=1 lsm=selinux res=1
Mar  1 04:00:49 np0005634532 kernel: SELinux:  policy capability network_peer_controls=1
Mar  1 04:00:49 np0005634532 kernel: SELinux:  policy capability open_perms=1
Mar  1 04:00:49 np0005634532 kernel: SELinux:  policy capability extended_socket_class=1
Mar  1 04:00:49 np0005634532 kernel: SELinux:  policy capability always_check_network=0
Mar  1 04:00:49 np0005634532 kernel: SELinux:  policy capability cgroup_seclabel=1
Mar  1 04:00:49 np0005634532 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Mar  1 04:00:49 np0005634532 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Mar  1 04:00:49 np0005634532 kernel: audit: type=1403 audit(1772355648.563:3): auid=4294967295 ses=4294967295 lsm=selinux res=1
Mar  1 04:00:49 np0005634532 systemd: Successfully loaded SELinux policy in 192.509ms.
Mar  1 04:00:49 np0005634532 systemd: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 38.370ms.
Mar  1 04:00:49 np0005634532 systemd: systemd 252-64.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Mar  1 04:00:49 np0005634532 systemd: Detected virtualization kvm.
Mar  1 04:00:49 np0005634532 systemd: Detected architecture x86-64.
Mar  1 04:00:49 np0005634532 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Mar  1 04:00:49 np0005634532 systemd: initrd-switch-root.service: Deactivated successfully.
Mar  1 04:00:49 np0005634532 systemd: Stopped Switch Root.
Mar  1 04:00:49 np0005634532 systemd: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Mar  1 04:00:49 np0005634532 systemd: Created slice Slice /system/getty.
Mar  1 04:00:49 np0005634532 systemd: Created slice Slice /system/serial-getty.
Mar  1 04:00:49 np0005634532 systemd: Created slice Slice /system/sshd-keygen.
Mar  1 04:00:49 np0005634532 systemd: Created slice User and Session Slice.
Mar  1 04:00:49 np0005634532 systemd: Started Dispatch Password Requests to Console Directory Watch.
Mar  1 04:00:49 np0005634532 systemd: Started Forward Password Requests to Wall Directory Watch.
Mar  1 04:00:49 np0005634532 systemd: Set up automount Arbitrary Executable File Formats File System Automount Point.
Mar  1 04:00:49 np0005634532 systemd: Reached target Local Encrypted Volumes.
Mar  1 04:00:49 np0005634532 systemd: Stopped target Switch Root.
Mar  1 04:00:49 np0005634532 systemd: Stopped target Initrd File Systems.
Mar  1 04:00:49 np0005634532 systemd: Stopped target Initrd Root File System.
Mar  1 04:00:49 np0005634532 systemd: Reached target Local Integrity Protected Volumes.
Mar  1 04:00:49 np0005634532 systemd: Reached target Path Units.
Mar  1 04:00:49 np0005634532 systemd: Reached target rpc_pipefs.target.
Mar  1 04:00:49 np0005634532 systemd: Reached target Slice Units.
Mar  1 04:00:49 np0005634532 systemd: Reached target Swaps.
Mar  1 04:00:49 np0005634532 systemd: Reached target Local Verity Protected Volumes.
Mar  1 04:00:49 np0005634532 systemd: Listening on RPCbind Server Activation Socket.
Mar  1 04:00:49 np0005634532 systemd: Reached target RPC Port Mapper.
Mar  1 04:00:49 np0005634532 systemd: Listening on Process Core Dump Socket.
Mar  1 04:00:49 np0005634532 systemd: Listening on initctl Compatibility Named Pipe.
Mar  1 04:00:49 np0005634532 systemd: Listening on udev Control Socket.
Mar  1 04:00:49 np0005634532 systemd: Listening on udev Kernel Socket.
Mar  1 04:00:49 np0005634532 systemd: Mounting Huge Pages File System...
Mar  1 04:00:49 np0005634532 systemd: Mounting POSIX Message Queue File System...
Mar  1 04:00:49 np0005634532 systemd: Mounting Kernel Debug File System...
Mar  1 04:00:49 np0005634532 systemd: Mounting Kernel Trace File System...
Mar  1 04:00:49 np0005634532 systemd: Kernel Module supporting RPCSEC_GSS was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Mar  1 04:00:49 np0005634532 systemd: Starting Create List of Static Device Nodes...
Mar  1 04:00:49 np0005634532 systemd: Starting Load Kernel Module configfs...
Mar  1 04:00:49 np0005634532 systemd: Starting Load Kernel Module drm...
Mar  1 04:00:49 np0005634532 systemd: Starting Load Kernel Module efi_pstore...
Mar  1 04:00:49 np0005634532 systemd: Starting Load Kernel Module fuse...
Mar  1 04:00:49 np0005634532 systemd: Starting Read and set NIS domainname from /etc/sysconfig/network...
Mar  1 04:00:49 np0005634532 systemd: systemd-fsck-root.service: Deactivated successfully.
Mar  1 04:00:49 np0005634532 systemd: Stopped File System Check on Root Device.
Mar  1 04:00:49 np0005634532 systemd: Stopped Journal Service.
Mar  1 04:00:49 np0005634532 systemd: Starting Journal Service...
Mar  1 04:00:49 np0005634532 systemd: Load Kernel Modules was skipped because no trigger condition checks were met.
Mar  1 04:00:49 np0005634532 systemd: Starting Generate network units from Kernel command line...
Mar  1 04:00:49 np0005634532 systemd: TPM2 PCR Machine ID Measurement was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Mar  1 04:00:49 np0005634532 systemd: Starting Remount Root and Kernel File Systems...
Mar  1 04:00:49 np0005634532 systemd: Repartition Root Disk was skipped because no trigger condition checks were met.
Mar  1 04:00:49 np0005634532 systemd: Starting Apply Kernel Variables...
Mar  1 04:00:49 np0005634532 kernel: fuse: init (API version 7.37)
Mar  1 04:00:49 np0005634532 systemd: Starting Coldplug All udev Devices...
Mar  1 04:00:49 np0005634532 systemd-journald[697]: Journal started
Mar  1 04:00:49 np0005634532 systemd-journald[697]: Runtime Journal (/run/log/journal/45af4031c1bdc072f1f045c25038675f) is 8.0M, max 153.6M, 145.6M free.
Mar  1 04:00:49 np0005634532 systemd[1]: Queued start job for default target Multi-User System.
Mar  1 04:00:49 np0005634532 systemd[1]: systemd-journald.service: Deactivated successfully.
Mar  1 04:00:49 np0005634532 systemd: Mounted Huge Pages File System.
Mar  1 04:00:49 np0005634532 systemd: Started Journal Service.
Mar  1 04:00:49 np0005634532 systemd[1]: Mounted POSIX Message Queue File System.
Mar  1 04:00:49 np0005634532 systemd[1]: Mounted Kernel Debug File System.
Mar  1 04:00:49 np0005634532 systemd[1]: Mounted Kernel Trace File System.
Mar  1 04:00:49 np0005634532 systemd[1]: Finished Create List of Static Device Nodes.
Mar  1 04:00:49 np0005634532 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Mar  1 04:00:49 np0005634532 systemd[1]: Finished Load Kernel Module configfs.
Mar  1 04:00:49 np0005634532 kernel: xfs filesystem being remounted at / supports timestamps until 2038 (0x7fffffff)
Mar  1 04:00:49 np0005634532 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar  1 04:00:49 np0005634532 systemd[1]: Finished Load Kernel Module drm.
Mar  1 04:00:49 np0005634532 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar  1 04:00:49 np0005634532 systemd[1]: Finished Load Kernel Module efi_pstore.
Mar  1 04:00:49 np0005634532 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Mar  1 04:00:49 np0005634532 systemd[1]: Finished Load Kernel Module fuse.
Mar  1 04:00:49 np0005634532 systemd[1]: Finished Read and set NIS domainname from /etc/sysconfig/network.
Mar  1 04:00:49 np0005634532 systemd[1]: Finished Generate network units from Kernel command line.
Mar  1 04:00:49 np0005634532 systemd[1]: Finished Remount Root and Kernel File Systems.
Mar  1 04:00:49 np0005634532 systemd[1]: Finished Apply Kernel Variables.
Mar  1 04:00:49 np0005634532 systemd[1]: Mounting FUSE Control File System...
Mar  1 04:00:49 np0005634532 systemd[1]: First Boot Wizard was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Mar  1 04:00:49 np0005634532 systemd[1]: Starting Rebuild Hardware Database...
Mar  1 04:00:49 np0005634532 systemd[1]: Starting Flush Journal to Persistent Storage...
Mar  1 04:00:49 np0005634532 systemd[1]: Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar  1 04:00:49 np0005634532 systemd[1]: Starting Load/Save OS Random Seed...
Mar  1 04:00:49 np0005634532 systemd[1]: Starting Create System Users...
Mar  1 04:00:49 np0005634532 systemd[1]: Mounted FUSE Control File System.
Mar  1 04:00:49 np0005634532 systemd-journald[697]: Runtime Journal (/run/log/journal/45af4031c1bdc072f1f045c25038675f) is 8.0M, max 153.6M, 145.6M free.
Mar  1 04:00:49 np0005634532 systemd-journald[697]: Received client request to flush runtime journal.
Mar  1 04:00:49 np0005634532 systemd[1]: Finished Coldplug All udev Devices.
Mar  1 04:00:49 np0005634532 systemd[1]: Finished Flush Journal to Persistent Storage.
Mar  1 04:00:49 np0005634532 systemd[1]: Finished Load/Save OS Random Seed.
Mar  1 04:00:49 np0005634532 systemd[1]: First Boot Complete was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Mar  1 04:00:49 np0005634532 systemd[1]: Finished Create System Users.
Mar  1 04:00:49 np0005634532 systemd[1]: Starting Create Static Device Nodes in /dev...
Mar  1 04:00:50 np0005634532 systemd[1]: Finished Create Static Device Nodes in /dev.
Mar  1 04:00:50 np0005634532 systemd[1]: Reached target Preparation for Local File Systems.
Mar  1 04:00:50 np0005634532 systemd[1]: Reached target Local File Systems.
Mar  1 04:00:50 np0005634532 systemd[1]: Starting Rebuild Dynamic Linker Cache...
Mar  1 04:00:50 np0005634532 systemd[1]: Mark the need to relabel after reboot was skipped because of an unmet condition check (ConditionSecurity=!selinux).
Mar  1 04:00:50 np0005634532 systemd[1]: Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar  1 04:00:50 np0005634532 systemd[1]: Update Boot Loader Random Seed was skipped because no trigger condition checks were met.
Mar  1 04:00:50 np0005634532 systemd[1]: Starting Automatic Boot Loader Update...
Mar  1 04:00:50 np0005634532 systemd[1]: Commit a transient machine-id on disk was skipped because of an unmet condition check (ConditionPathIsMountPoint=/etc/machine-id).
Mar  1 04:00:50 np0005634532 systemd[1]: Starting Create Volatile Files and Directories...
Mar  1 04:00:50 np0005634532 bootctl[713]: Couldn't find EFI system partition, skipping.
Mar  1 04:00:50 np0005634532 systemd[1]: Finished Automatic Boot Loader Update.
Mar  1 04:00:50 np0005634532 systemd[1]: Finished Create Volatile Files and Directories.
Mar  1 04:00:50 np0005634532 systemd[1]: Starting Security Auditing Service...
Mar  1 04:00:50 np0005634532 systemd[1]: Starting RPC Bind...
Mar  1 04:00:50 np0005634532 systemd[1]: Starting Rebuild Journal Catalog...
Mar  1 04:00:50 np0005634532 auditd[719]: audit dispatcher initialized with q_depth=2000 and 1 active plugins
Mar  1 04:00:50 np0005634532 auditd[719]: Init complete, auditd 3.1.5 listening for events (startup state enable)
Mar  1 04:00:50 np0005634532 systemd[1]: Finished Rebuild Journal Catalog.
Mar  1 04:00:50 np0005634532 systemd[1]: Started RPC Bind.
Mar  1 04:00:50 np0005634532 augenrules[724]: /sbin/augenrules: No change
Mar  1 04:00:50 np0005634532 augenrules[739]: No rules
Mar  1 04:00:50 np0005634532 augenrules[739]: enabled 1
Mar  1 04:00:50 np0005634532 augenrules[739]: failure 1
Mar  1 04:00:50 np0005634532 augenrules[739]: pid 719
Mar  1 04:00:50 np0005634532 augenrules[739]: rate_limit 0
Mar  1 04:00:50 np0005634532 augenrules[739]: backlog_limit 8192
Mar  1 04:00:50 np0005634532 augenrules[739]: lost 0
Mar  1 04:00:50 np0005634532 augenrules[739]: backlog 3
Mar  1 04:00:50 np0005634532 augenrules[739]: backlog_wait_time 60000
Mar  1 04:00:50 np0005634532 augenrules[739]: backlog_wait_time_actual 0
Mar  1 04:00:50 np0005634532 augenrules[739]: enabled 1
Mar  1 04:00:50 np0005634532 augenrules[739]: failure 1
Mar  1 04:00:50 np0005634532 augenrules[739]: pid 719
Mar  1 04:00:50 np0005634532 augenrules[739]: rate_limit 0
Mar  1 04:00:50 np0005634532 augenrules[739]: backlog_limit 8192
Mar  1 04:00:50 np0005634532 augenrules[739]: lost 0
Mar  1 04:00:50 np0005634532 augenrules[739]: backlog 1
Mar  1 04:00:50 np0005634532 augenrules[739]: backlog_wait_time 60000
Mar  1 04:00:50 np0005634532 augenrules[739]: backlog_wait_time_actual 0
Mar  1 04:00:50 np0005634532 augenrules[739]: enabled 1
Mar  1 04:00:50 np0005634532 augenrules[739]: failure 1
Mar  1 04:00:50 np0005634532 augenrules[739]: pid 719
Mar  1 04:00:50 np0005634532 augenrules[739]: rate_limit 0
Mar  1 04:00:50 np0005634532 augenrules[739]: backlog_limit 8192
Mar  1 04:00:50 np0005634532 augenrules[739]: lost 0
Mar  1 04:00:50 np0005634532 augenrules[739]: backlog 0
Mar  1 04:00:50 np0005634532 augenrules[739]: backlog_wait_time 60000
Mar  1 04:00:50 np0005634532 augenrules[739]: backlog_wait_time_actual 0
Mar  1 04:00:50 np0005634532 systemd[1]: Started Security Auditing Service.
Mar  1 04:00:50 np0005634532 systemd[1]: Starting Record System Boot/Shutdown in UTMP...
Mar  1 04:00:50 np0005634532 systemd[1]: Finished Record System Boot/Shutdown in UTMP.
Mar  1 04:00:50 np0005634532 systemd[1]: Finished Rebuild Hardware Database.
Mar  1 04:00:50 np0005634532 systemd[1]: Starting Rule-based Manager for Device Events and Files...
Mar  1 04:00:50 np0005634532 systemd-udevd[747]: Using default interface naming scheme 'rhel-9.0'.
Mar  1 04:00:50 np0005634532 systemd[1]: Started Rule-based Manager for Device Events and Files.
Mar  1 04:00:50 np0005634532 systemd[1]: Starting Load Kernel Module configfs...
Mar  1 04:00:50 np0005634532 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Mar  1 04:00:50 np0005634532 systemd[1]: Finished Load Kernel Module configfs.
Mar  1 04:00:50 np0005634532 systemd[1]: Condition check resulted in /dev/ttyS0 being skipped.
Mar  1 04:00:50 np0005634532 kernel: input: PC Speaker as /devices/platform/pcspkr/input/input6
Mar  1 04:00:50 np0005634532 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Mar  1 04:00:50 np0005634532 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Mar  1 04:00:50 np0005634532 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Mar  1 04:00:50 np0005634532 systemd-udevd[762]: Network interface NamePolicy= disabled on kernel command line.
Mar  1 04:00:50 np0005634532 kernel: kvm_amd: TSC scaling supported
Mar  1 04:00:50 np0005634532 kernel: kvm_amd: Nested Virtualization enabled
Mar  1 04:00:50 np0005634532 kernel: kvm_amd: Nested Paging enabled
Mar  1 04:00:50 np0005634532 kernel: kvm_amd: LBR virtualization supported
Mar  1 04:00:51 np0005634532 systemd[1]: Finished Rebuild Dynamic Linker Cache.
Mar  1 04:00:51 np0005634532 systemd[1]: Starting Update is Completed...
Mar  1 04:00:51 np0005634532 systemd[1]: Finished Update is Completed.
Mar  1 04:00:51 np0005634532 systemd[1]: Reached target System Initialization.
Mar  1 04:00:51 np0005634532 systemd[1]: Started dnf makecache --timer.
Mar  1 04:00:51 np0005634532 systemd[1]: Started Daily rotation of log files.
Mar  1 04:00:51 np0005634532 systemd[1]: Started Daily Cleanup of Temporary Directories.
Mar  1 04:00:51 np0005634532 systemd[1]: Reached target Timer Units.
Mar  1 04:00:51 np0005634532 systemd[1]: Listening on D-Bus System Message Bus Socket.
Mar  1 04:00:51 np0005634532 systemd[1]: Listening on SSSD Kerberos Cache Manager responder socket.
Mar  1 04:00:51 np0005634532 systemd[1]: Reached target Socket Units.
Mar  1 04:00:51 np0005634532 systemd[1]: Starting D-Bus System Message Bus...
Mar  1 04:00:51 np0005634532 systemd[1]: TPM2 PCR Barrier (Initialization) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Mar  1 04:00:51 np0005634532 systemd[1]: Started D-Bus System Message Bus.
Mar  1 04:00:51 np0005634532 systemd[1]: Reached target Basic System.
Mar  1 04:00:51 np0005634532 dbus-broker-lau[822]: Ready
Mar  1 04:00:51 np0005634532 systemd[1]: Starting NTP client/server...
Mar  1 04:00:51 np0005634532 systemd[1]: Starting Cloud-init: Local Stage (pre-network)...
Mar  1 04:00:51 np0005634532 systemd[1]: Starting Restore /run/initramfs on shutdown...
Mar  1 04:00:51 np0005634532 systemd[1]: Starting IPv4 firewall with iptables...
Mar  1 04:00:51 np0005634532 systemd[1]: Started irqbalance daemon.
Mar  1 04:00:51 np0005634532 systemd[1]: Load CPU microcode update was skipped because of an unmet condition check (ConditionPathExists=/sys/devices/system/cpu/microcode/reload).
Mar  1 04:00:51 np0005634532 systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Mar  1 04:00:51 np0005634532 systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Mar  1 04:00:51 np0005634532 systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Mar  1 04:00:51 np0005634532 systemd[1]: Reached target sshd-keygen.target.
Mar  1 04:00:51 np0005634532 systemd[1]: System Security Services Daemon was skipped because no trigger condition checks were met.
Mar  1 04:00:51 np0005634532 systemd[1]: Reached target User and Group Name Lookups.
Mar  1 04:00:51 np0005634532 systemd[1]: Starting User Login Management...
Mar  1 04:00:51 np0005634532 systemd[1]: Finished Restore /run/initramfs on shutdown.
Mar  1 04:00:51 np0005634532 chronyd[841]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Mar  1 04:00:51 np0005634532 chronyd[841]: Loaded 0 symmetric keys
Mar  1 04:00:51 np0005634532 systemd-logind[832]: New seat seat0.
Mar  1 04:00:51 np0005634532 systemd-logind[832]: Watching system buttons on /dev/input/event0 (Power Button)
Mar  1 04:00:51 np0005634532 systemd-logind[832]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Mar  1 04:00:51 np0005634532 systemd[1]: Started User Login Management.
Mar  1 04:00:51 np0005634532 chronyd[841]: Using right/UTC timezone to obtain leap second data
Mar  1 04:00:51 np0005634532 kernel: Warning: Deprecated Driver is detected: nft_compat will not be maintained in a future major release and may be disabled
Mar  1 04:00:51 np0005634532 chronyd[841]: Loaded seccomp filter (level 2)
Mar  1 04:00:51 np0005634532 kernel: Warning: Deprecated Driver is detected: nft_compat_module_init will not be maintained in a future major release and may be disabled
Mar  1 04:00:51 np0005634532 systemd[1]: Started NTP client/server.
Mar  1 04:00:51 np0005634532 iptables.init[827]: iptables: Applying firewall rules: [  OK  ]
Mar  1 04:00:51 np0005634532 systemd[1]: Finished IPv4 firewall with iptables.
Mar  1 04:00:53 np0005634532 cloud-init[850]: Cloud-init v. 24.4-8.el9 running 'init-local' at Sun, 01 Mar 2026 09:00:53 +0000. Up 9.65 seconds.
Mar  1 04:00:53 np0005634532 systemd[1]: run-cloud\x2dinit-tmp-tmp1bjfie9w.mount: Deactivated successfully.
Mar  1 04:00:53 np0005634532 systemd[1]: Starting Hostname Service...
Mar  1 04:00:53 np0005634532 systemd[1]: Started Hostname Service.
Mar  1 04:00:53 np0005634532 systemd-hostnamed[864]: Hostname set to <np0005634532.novalocal> (static)
Mar  1 04:00:53 np0005634532 systemd[1]: Finished Cloud-init: Local Stage (pre-network).
Mar  1 04:00:53 np0005634532 systemd[1]: Reached target Preparation for Network.
Mar  1 04:00:53 np0005634532 systemd[1]: Starting Network Manager...
Mar  1 04:00:53 np0005634532 NetworkManager[868]: <info>  [1772355653.8593] NetworkManager (version 1.54.3-2.el9) is starting... (boot:67233403-8d31-4a6b-a6aa-c5d04326d053)
Mar  1 04:00:53 np0005634532 NetworkManager[868]: <info>  [1772355653.8598] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Mar  1 04:00:53 np0005634532 NetworkManager[868]: <info>  [1772355653.9059] manager[0x561d986c2000]: monitoring kernel firmware directory '/lib/firmware'.
Mar  1 04:00:53 np0005634532 NetworkManager[868]: <info>  [1772355653.9124] hostname: hostname: using hostnamed
Mar  1 04:00:53 np0005634532 NetworkManager[868]: <info>  [1772355653.9125] hostname: static hostname changed from (none) to "np0005634532.novalocal"
Mar  1 04:00:53 np0005634532 NetworkManager[868]: <info>  [1772355653.9137] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Mar  1 04:00:53 np0005634532 NetworkManager[868]: <info>  [1772355653.9349] manager[0x561d986c2000]: rfkill: Wi-Fi hardware radio set enabled
Mar  1 04:00:53 np0005634532 NetworkManager[868]: <info>  [1772355653.9350] manager[0x561d986c2000]: rfkill: WWAN hardware radio set enabled
Mar  1 04:00:53 np0005634532 systemd[1]: Listening on Load/Save RF Kill Switch Status /dev/rfkill Watch.
Mar  1 04:00:53 np0005634532 NetworkManager[868]: <info>  [1772355653.9651] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-device-plugin-team.so)
Mar  1 04:00:53 np0005634532 NetworkManager[868]: <info>  [1772355653.9654] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Mar  1 04:00:53 np0005634532 NetworkManager[868]: <info>  [1772355653.9655] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Mar  1 04:00:53 np0005634532 NetworkManager[868]: <info>  [1772355653.9656] manager: Networking is enabled by state file
Mar  1 04:00:53 np0005634532 NetworkManager[868]: <info>  [1772355653.9659] settings: Loaded settings plugin: keyfile (internal)
Mar  1 04:00:53 np0005634532 NetworkManager[868]: <info>  [1772355653.9701] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-settings-plugin-ifcfg-rh.so")
Mar  1 04:00:53 np0005634532 NetworkManager[868]: <info>  [1772355653.9739] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Mar  1 04:00:53 np0005634532 NetworkManager[868]: <info>  [1772355653.9760] dhcp: init: Using DHCP client 'internal'
Mar  1 04:00:53 np0005634532 NetworkManager[868]: <info>  [1772355653.9765] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Mar  1 04:00:53 np0005634532 NetworkManager[868]: <info>  [1772355653.9789] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Mar  1 04:00:53 np0005634532 NetworkManager[868]: <info>  [1772355653.9835] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Mar  1 04:00:53 np0005634532 systemd[1]: Starting Network Manager Script Dispatcher Service...
Mar  1 04:00:53 np0005634532 NetworkManager[868]: <info>  [1772355653.9953] device (lo): Activation: starting connection 'lo' (c3703ce3-f4b8-446d-9fc7-2e82b0ccaf00)
Mar  1 04:00:53 np0005634532 NetworkManager[868]: <info>  [1772355653.9966] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Mar  1 04:00:53 np0005634532 NetworkManager[868]: <info>  [1772355653.9972] device (eth0): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Mar  1 04:00:54 np0005634532 systemd[1]: Started Network Manager.
Mar  1 04:00:54 np0005634532 NetworkManager[868]: <info>  [1772355654.0011] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Mar  1 04:00:54 np0005634532 NetworkManager[868]: <info>  [1772355654.0017] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Mar  1 04:00:54 np0005634532 NetworkManager[868]: <info>  [1772355654.0022] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Mar  1 04:00:54 np0005634532 NetworkManager[868]: <info>  [1772355654.0025] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Mar  1 04:00:54 np0005634532 NetworkManager[868]: <info>  [1772355654.0028] device (eth0): carrier: link connected
Mar  1 04:00:54 np0005634532 NetworkManager[868]: <info>  [1772355654.0034] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Mar  1 04:00:54 np0005634532 systemd[1]: Reached target Network.
Mar  1 04:00:54 np0005634532 NetworkManager[868]: <info>  [1772355654.0045] device (eth0): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Mar  1 04:00:54 np0005634532 NetworkManager[868]: <info>  [1772355654.0056] policy: auto-activating connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Mar  1 04:00:54 np0005634532 NetworkManager[868]: <info>  [1772355654.0064] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Mar  1 04:00:54 np0005634532 NetworkManager[868]: <info>  [1772355654.0066] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Mar  1 04:00:54 np0005634532 NetworkManager[868]: <info>  [1772355654.0071] manager: NetworkManager state is now CONNECTING
Mar  1 04:00:54 np0005634532 NetworkManager[868]: <info>  [1772355654.0073] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'full')
Mar  1 04:00:54 np0005634532 NetworkManager[868]: <info>  [1772355654.0086] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'full')
Mar  1 04:00:54 np0005634532 NetworkManager[868]: <info>  [1772355654.0092] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Mar  1 04:00:54 np0005634532 NetworkManager[868]: <info>  [1772355654.0160] dhcp4 (eth0): state changed new lease, address=38.102.83.94
Mar  1 04:00:54 np0005634532 NetworkManager[868]: <info>  [1772355654.0171] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Mar  1 04:00:54 np0005634532 systemd[1]: Starting Network Manager Wait Online...
Mar  1 04:00:54 np0005634532 NetworkManager[868]: <info>  [1772355654.0199] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Mar  1 04:00:54 np0005634532 systemd[1]: Starting GSSAPI Proxy Daemon...
Mar  1 04:00:54 np0005634532 systemd[1]: Started Network Manager Script Dispatcher Service.
Mar  1 04:00:54 np0005634532 NetworkManager[868]: <info>  [1772355654.0457] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Mar  1 04:00:54 np0005634532 NetworkManager[868]: <info>  [1772355654.0462] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Mar  1 04:00:54 np0005634532 NetworkManager[868]: <info>  [1772355654.0464] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Mar  1 04:00:54 np0005634532 NetworkManager[868]: <info>  [1772355654.0474] device (lo): Activation: successful, device activated.
Mar  1 04:00:54 np0005634532 NetworkManager[868]: <info>  [1772355654.0483] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Mar  1 04:00:54 np0005634532 NetworkManager[868]: <info>  [1772355654.0487] manager: NetworkManager state is now CONNECTED_SITE
Mar  1 04:00:54 np0005634532 NetworkManager[868]: <info>  [1772355654.0492] device (eth0): Activation: successful, device activated.
Mar  1 04:00:54 np0005634532 NetworkManager[868]: <info>  [1772355654.0499] manager: NetworkManager state is now CONNECTED_GLOBAL
Mar  1 04:00:54 np0005634532 NetworkManager[868]: <info>  [1772355654.0503] manager: startup complete
Mar  1 04:00:54 np0005634532 systemd[1]: Finished Network Manager Wait Online.
Mar  1 04:00:54 np0005634532 systemd[1]: Starting Cloud-init: Network Stage...
Mar  1 04:00:54 np0005634532 systemd[1]: Started GSSAPI Proxy Daemon.
Mar  1 04:00:54 np0005634532 systemd[1]: RPC security service for NFS client and server was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Mar  1 04:00:54 np0005634532 systemd[1]: Reached target NFS client services.
Mar  1 04:00:54 np0005634532 systemd[1]: Reached target Preparation for Remote File Systems.
Mar  1 04:00:54 np0005634532 systemd[1]: Reached target Remote File Systems.
Mar  1 04:00:54 np0005634532 systemd[1]: TPM2 PCR Barrier (User) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Mar  1 04:00:54 np0005634532 cloud-init[935]: Cloud-init v. 24.4-8.el9 running 'init' at Sun, 01 Mar 2026 09:00:54 +0000. Up 10.98 seconds.
Mar  1 04:00:54 np0005634532 cloud-init[935]: ci-info: ++++++++++++++++++++++++++++++++++++++Net device info+++++++++++++++++++++++++++++++++++++++
Mar  1 04:00:54 np0005634532 cloud-init[935]: ci-info: +--------+------+-----------------------------+---------------+--------+-------------------+
Mar  1 04:00:54 np0005634532 cloud-init[935]: ci-info: | Device |  Up  |           Address           |      Mask     | Scope  |     Hw-Address    |
Mar  1 04:00:54 np0005634532 cloud-init[935]: ci-info: +--------+------+-----------------------------+---------------+--------+-------------------+
Mar  1 04:00:54 np0005634532 cloud-init[935]: ci-info: |  eth0  | True |         38.102.83.94        | 255.255.255.0 | global | fa:16:3e:41:01:df |
Mar  1 04:00:54 np0005634532 cloud-init[935]: ci-info: |  eth0  | True | fe80::f816:3eff:fe41:1df/64 |       .       |  link  | fa:16:3e:41:01:df |
Mar  1 04:00:54 np0005634532 cloud-init[935]: ci-info: |   lo   | True |          127.0.0.1          |   255.0.0.0   |  host  |         .         |
Mar  1 04:00:54 np0005634532 cloud-init[935]: ci-info: |   lo   | True |           ::1/128           |       .       |  host  |         .         |
Mar  1 04:00:54 np0005634532 cloud-init[935]: ci-info: +--------+------+-----------------------------+---------------+--------+-------------------+
Mar  1 04:00:54 np0005634532 cloud-init[935]: ci-info: +++++++++++++++++++++++++++++++++Route IPv4 info+++++++++++++++++++++++++++++++++
Mar  1 04:00:54 np0005634532 cloud-init[935]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Mar  1 04:00:54 np0005634532 cloud-init[935]: ci-info: | Route |   Destination   |    Gateway    |     Genmask     | Interface | Flags |
Mar  1 04:00:54 np0005634532 cloud-init[935]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Mar  1 04:00:54 np0005634532 cloud-init[935]: ci-info: |   0   |     0.0.0.0     |  38.102.83.1  |     0.0.0.0     |    eth0   |   UG  |
Mar  1 04:00:54 np0005634532 cloud-init[935]: ci-info: |   1   |   38.102.83.0   |    0.0.0.0    |  255.255.255.0  |    eth0   |   U   |
Mar  1 04:00:54 np0005634532 cloud-init[935]: ci-info: |   2   | 169.254.169.254 | 38.102.83.126 | 255.255.255.255 |    eth0   |  UGH  |
Mar  1 04:00:54 np0005634532 cloud-init[935]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Mar  1 04:00:54 np0005634532 cloud-init[935]: ci-info: +++++++++++++++++++Route IPv6 info+++++++++++++++++++
Mar  1 04:00:54 np0005634532 cloud-init[935]: ci-info: +-------+-------------+---------+-----------+-------+
Mar  1 04:00:54 np0005634532 cloud-init[935]: ci-info: | Route | Destination | Gateway | Interface | Flags |
Mar  1 04:00:54 np0005634532 cloud-init[935]: ci-info: +-------+-------------+---------+-----------+-------+
Mar  1 04:00:54 np0005634532 cloud-init[935]: ci-info: |   1   |  fe80::/64  |    ::   |    eth0   |   U   |
Mar  1 04:00:54 np0005634532 cloud-init[935]: ci-info: |   3   |  multicast  |    ::   |    eth0   |   U   |
Mar  1 04:00:54 np0005634532 cloud-init[935]: ci-info: +-------+-------------+---------+-----------+-------+
Mar  1 04:00:58 np0005634532 cloud-init[935]: Generating public/private rsa key pair.
Mar  1 04:00:58 np0005634532 cloud-init[935]: Your identification has been saved in /etc/ssh/ssh_host_rsa_key
Mar  1 04:00:58 np0005634532 cloud-init[935]: Your public key has been saved in /etc/ssh/ssh_host_rsa_key.pub
Mar  1 04:00:58 np0005634532 cloud-init[935]: The key fingerprint is:
Mar  1 04:00:58 np0005634532 cloud-init[935]: SHA256:fTVDkTfNyOoN+wd+ngEk5755eOsGSnDf6eCkgv5XIiE root@np0005634532.novalocal
Mar  1 04:00:58 np0005634532 cloud-init[935]: The key's randomart image is:
Mar  1 04:00:58 np0005634532 cloud-init[935]: +---[RSA 3072]----+
Mar  1 04:00:58 np0005634532 cloud-init[935]: |             .o=.|
Mar  1 04:00:58 np0005634532 cloud-init[935]: |             .+.+|
Mar  1 04:00:58 np0005634532 cloud-init[935]: |           . ++..|
Mar  1 04:00:58 np0005634532 cloud-init[935]: |       E + .B. o |
Mar  1 04:00:58 np0005634532 cloud-init[935]: |        S =.o*. .|
Mar  1 04:00:58 np0005634532 cloud-init[935]: |         . ++*+o |
Mar  1 04:00:58 np0005634532 cloud-init[935]: |        . o B+=o |
Mar  1 04:00:58 np0005634532 cloud-init[935]: |       . . + o===|
Mar  1 04:00:58 np0005634532 cloud-init[935]: |      ....o  o=Bo|
Mar  1 04:00:58 np0005634532 cloud-init[935]: +----[SHA256]-----+
Mar  1 04:00:58 np0005634532 cloud-init[935]: Generating public/private ecdsa key pair.
Mar  1 04:00:58 np0005634532 cloud-init[935]: Your identification has been saved in /etc/ssh/ssh_host_ecdsa_key
Mar  1 04:00:58 np0005634532 cloud-init[935]: Your public key has been saved in /etc/ssh/ssh_host_ecdsa_key.pub
Mar  1 04:00:58 np0005634532 cloud-init[935]: The key fingerprint is:
Mar  1 04:00:58 np0005634532 cloud-init[935]: SHA256:n15RwWM7OIA9L/P50vm649/wY8qh4pE+gH9VycXkJn4 root@np0005634532.novalocal
Mar  1 04:00:58 np0005634532 cloud-init[935]: The key's randomart image is:
Mar  1 04:00:58 np0005634532 cloud-init[935]: +---[ECDSA 256]---+
Mar  1 04:00:58 np0005634532 cloud-init[935]: |         o   .+. |
Mar  1 04:00:58 np0005634532 cloud-init[935]: |        . +   =+ |
Mar  1 04:00:58 np0005634532 cloud-init[935]: |           +.++= |
Mar  1 04:00:58 np0005634532 cloud-init[935]: |          o ===  |
Mar  1 04:00:58 np0005634532 cloud-init[935]: |      . S  +o+ E |
Mar  1 04:00:58 np0005634532 cloud-init[935]: |     . . ..oo..  |
Mar  1 04:00:58 np0005634532 cloud-init[935]: |      . .o+ .+.. |
Mar  1 04:00:58 np0005634532 cloud-init[935]: |       ..=..+ *=.|
Mar  1 04:00:58 np0005634532 cloud-init[935]: |        oo+. =***|
Mar  1 04:00:58 np0005634532 cloud-init[935]: +----[SHA256]-----+
Mar  1 04:00:58 np0005634532 cloud-init[935]: Generating public/private ed25519 key pair.
Mar  1 04:00:58 np0005634532 cloud-init[935]: Your identification has been saved in /etc/ssh/ssh_host_ed25519_key
Mar  1 04:00:58 np0005634532 cloud-init[935]: Your public key has been saved in /etc/ssh/ssh_host_ed25519_key.pub
Mar  1 04:00:58 np0005634532 cloud-init[935]: The key fingerprint is:
Mar  1 04:00:58 np0005634532 cloud-init[935]: SHA256:EzNdZu8ZfeTyRHEWENqrU88bx5ZCXs0/iZYuO0GVPDs root@np0005634532.novalocal
Mar  1 04:00:58 np0005634532 cloud-init[935]: The key's randomart image is:
Mar  1 04:00:58 np0005634532 cloud-init[935]: +--[ED25519 256]--+
Mar  1 04:00:58 np0005634532 cloud-init[935]: |            =o+oB|
Mar  1 04:00:58 np0005634532 cloud-init[935]: |         . +o* *.|
Mar  1 04:00:58 np0005634532 cloud-init[935]: |        + ....* =|
Mar  1 04:00:58 np0005634532 cloud-init[935]: |         + . E.Bo|
Mar  1 04:00:58 np0005634532 cloud-init[935]: |        S .  ++.+|
Mar  1 04:00:58 np0005634532 cloud-init[935]: |         . .= *.+|
Mar  1 04:00:58 np0005634532 cloud-init[935]: |           o.* B=|
Mar  1 04:00:58 np0005634532 cloud-init[935]: |           o+ ..=|
Mar  1 04:00:58 np0005634532 cloud-init[935]: |           .+. . |
Mar  1 04:00:58 np0005634532 cloud-init[935]: +----[SHA256]-----+
Mar  1 04:00:58 np0005634532 sm-notify[1018]: Version 2.5.4 starting
Mar  1 04:00:58 np0005634532 systemd[1]: Finished Cloud-init: Network Stage.
Mar  1 04:00:58 np0005634532 systemd[1]: Reached target Cloud-config availability.
Mar  1 04:00:58 np0005634532 systemd[1]: Reached target Network is Online.
Mar  1 04:00:58 np0005634532 systemd[1]: Starting Cloud-init: Config Stage...
Mar  1 04:00:58 np0005634532 systemd[1]: Starting Crash recovery kernel arming...
Mar  1 04:00:58 np0005634532 systemd[1]: Starting Notify NFS peers of a restart...
Mar  1 04:00:58 np0005634532 systemd[1]: Starting System Logging Service...
Mar  1 04:00:58 np0005634532 systemd[1]: Starting OpenSSH server daemon...
Mar  1 04:00:58 np0005634532 systemd[1]: Starting Permit User Sessions...
Mar  1 04:00:58 np0005634532 systemd[1]: Started Notify NFS peers of a restart.
Mar  1 04:00:58 np0005634532 systemd[1]: Finished Permit User Sessions.
Mar  1 04:00:58 np0005634532 systemd[1]: Started Command Scheduler.
Mar  1 04:00:58 np0005634532 systemd[1]: Started Getty on tty1.
Mar  1 04:00:58 np0005634532 systemd[1]: Started Serial Getty on ttyS0.
Mar  1 04:00:58 np0005634532 systemd[1]: Reached target Login Prompts.
Mar  1 04:00:58 np0005634532 systemd[1]: Started OpenSSH server daemon.
Mar  1 04:00:58 np0005634532 rsyslogd[1019]: [origin software="rsyslogd" swVersion="8.2510.0-2.el9" x-pid="1019" x-info="https://www.rsyslog.com"] start
Mar  1 04:00:58 np0005634532 rsyslogd[1019]: imjournal: No statefile exists, /var/lib/rsyslog/imjournal.state will be created (ignore if this is first run): No such file or directory [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2040 ]
Mar  1 04:00:58 np0005634532 systemd[1]: Started System Logging Service.
Mar  1 04:00:58 np0005634532 systemd[1]: Reached target Multi-User System.
Mar  1 04:00:58 np0005634532 systemd[1]: Starting Record Runlevel Change in UTMP...
Mar  1 04:00:58 np0005634532 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Mar  1 04:00:58 np0005634532 systemd[1]: Finished Record Runlevel Change in UTMP.
Mar  1 04:00:58 np0005634532 rsyslogd[1019]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Mar  1 04:00:58 np0005634532 cloud-init[1165]: Cloud-init v. 24.4-8.el9 running 'modules:config' at Sun, 01 Mar 2026 09:00:58 +0000. Up 15.49 seconds.
Mar  1 04:00:58 np0005634532 chronyd[841]: Selected source 206.108.0.132 (2.centos.pool.ntp.org)
Mar  1 04:00:58 np0005634532 chronyd[841]: System clock TAI offset set to 37 seconds
Mar  1 04:00:58 np0005634532 systemd[1]: Finished Cloud-init: Config Stage.
Mar  1 04:00:58 np0005634532 kdumpctl[1035]: kdump: No kdump initial ramdisk found.
Mar  1 04:00:58 np0005634532 kdumpctl[1035]: kdump: Rebuilding /boot/initramfs-5.14.0-686.el9.x86_64kdump.img
Mar  1 04:00:59 np0005634532 systemd[1]: Starting Cloud-init: Final Stage...
Mar  1 04:00:59 np0005634532 cloud-init[1377]: Cloud-init v. 24.4-8.el9 running 'modules:final' at Sun, 01 Mar 2026 09:00:59 +0000. Up 15.96 seconds.
Mar  1 04:00:59 np0005634532 cloud-init[1429]: #############################################################
Mar  1 04:00:59 np0005634532 cloud-init[1449]: -----BEGIN SSH HOST KEY FINGERPRINTS-----
Mar  1 04:00:59 np0005634532 cloud-init[1456]: 256 SHA256:n15RwWM7OIA9L/P50vm649/wY8qh4pE+gH9VycXkJn4 root@np0005634532.novalocal (ECDSA)
Mar  1 04:00:59 np0005634532 cloud-init[1467]: 256 SHA256:EzNdZu8ZfeTyRHEWENqrU88bx5ZCXs0/iZYuO0GVPDs root@np0005634532.novalocal (ED25519)
Mar  1 04:00:59 np0005634532 cloud-init[1474]: 3072 SHA256:fTVDkTfNyOoN+wd+ngEk5755eOsGSnDf6eCkgv5XIiE root@np0005634532.novalocal (RSA)
Mar  1 04:00:59 np0005634532 cloud-init[1477]: -----END SSH HOST KEY FINGERPRINTS-----
Mar  1 04:00:59 np0005634532 cloud-init[1480]: #############################################################
Mar  1 04:00:59 np0005634532 cloud-init[1377]: Cloud-init v. 24.4-8.el9 finished at Sun, 01 Mar 2026 09:00:59 +0000. Datasource DataSourceConfigDrive [net,ver=2][source=/dev/sr0].  Up 16.22 seconds
Mar  1 04:00:59 np0005634532 systemd[1]: Finished Cloud-init: Final Stage.
Mar  1 04:00:59 np0005634532 systemd[1]: Reached target Cloud-init target.
Mar  1 04:00:59 np0005634532 dracut[1543]: dracut-057-110.git20260130.el9
Mar  1 04:00:59 np0005634532 dracut[1545]: Executing: /usr/bin/dracut --quiet --hostonly --hostonly-cmdline --hostonly-i18n --hostonly-mode strict --hostonly-nics  --mount "/dev/disk/by-uuid/37391a25-080d-4723-8b0c-cb88a559875b /sysroot xfs rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,noquota" --squash-compressor zstd --no-hostonly-default-device --add-confdir /lib/kdump/dracut.conf.d -f /boot/initramfs-5.14.0-686.el9.x86_64kdump.img 5.14.0-686.el9.x86_64
Mar  1 04:01:00 np0005634532 dracut[1545]: dracut module 'systemd-networkd' will not be installed, because command 'networkctl' could not be found!
Mar  1 04:01:00 np0005634532 dracut[1545]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd' could not be found!
Mar  1 04:01:00 np0005634532 dracut[1545]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd-wait-online' could not be found!
Mar  1 04:01:00 np0005634532 dracut[1545]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Mar  1 04:01:00 np0005634532 dracut[1545]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Mar  1 04:01:00 np0005634532 dracut[1545]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Mar  1 04:01:00 np0005634532 dracut[1545]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Mar  1 04:01:00 np0005634532 dracut[1545]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Mar  1 04:01:00 np0005634532 dracut[1545]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Mar  1 04:01:00 np0005634532 dracut[1545]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Mar  1 04:01:00 np0005634532 dracut[1545]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Mar  1 04:01:01 np0005634532 dracut[1545]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Mar  1 04:01:01 np0005634532 dracut[1545]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Mar  1 04:01:01 np0005634532 dracut[1545]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Mar  1 04:01:01 np0005634532 dracut[1545]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Mar  1 04:01:01 np0005634532 dracut[1545]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Mar  1 04:01:01 np0005634532 dracut[1545]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Mar  1 04:01:01 np0005634532 dracut[1545]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Mar  1 04:01:01 np0005634532 dracut[1545]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Mar  1 04:01:01 np0005634532 dracut[1545]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Mar  1 04:01:01 np0005634532 dracut[1545]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Mar  1 04:01:01 np0005634532 dracut[1545]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Mar  1 04:01:01 np0005634532 dracut[1545]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Mar  1 04:01:01 np0005634532 dracut[1545]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Mar  1 04:01:01 np0005634532 irqbalance[828]: Cannot change IRQ 25 affinity: Operation not permitted
Mar  1 04:01:01 np0005634532 irqbalance[828]: IRQ 25 affinity is now unmanaged
Mar  1 04:01:01 np0005634532 irqbalance[828]: Cannot change IRQ 31 affinity: Operation not permitted
Mar  1 04:01:01 np0005634532 irqbalance[828]: IRQ 31 affinity is now unmanaged
Mar  1 04:01:01 np0005634532 irqbalance[828]: Cannot change IRQ 28 affinity: Operation not permitted
Mar  1 04:01:01 np0005634532 irqbalance[828]: IRQ 28 affinity is now unmanaged
Mar  1 04:01:01 np0005634532 irqbalance[828]: Cannot change IRQ 32 affinity: Operation not permitted
Mar  1 04:01:01 np0005634532 irqbalance[828]: IRQ 32 affinity is now unmanaged
Mar  1 04:01:01 np0005634532 irqbalance[828]: Cannot change IRQ 30 affinity: Operation not permitted
Mar  1 04:01:01 np0005634532 irqbalance[828]: IRQ 30 affinity is now unmanaged
Mar  1 04:01:01 np0005634532 irqbalance[828]: Cannot change IRQ 29 affinity: Operation not permitted
Mar  1 04:01:01 np0005634532 irqbalance[828]: IRQ 29 affinity is now unmanaged
Mar  1 04:01:01 np0005634532 dracut[1545]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Mar  1 04:01:01 np0005634532 dracut[1545]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Mar  1 04:01:01 np0005634532 dracut[1545]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Mar  1 04:01:01 np0005634532 dracut[1545]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Mar  1 04:01:01 np0005634532 dracut[1545]: dracut module 'biosdevname' will not be installed, because command 'biosdevname' could not be found!
Mar  1 04:01:01 np0005634532 dracut[1545]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Mar  1 04:01:01 np0005634532 dracut[1545]: memstrack is not available
Mar  1 04:01:01 np0005634532 dracut[1545]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Mar  1 04:01:01 np0005634532 dracut[1545]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Mar  1 04:01:01 np0005634532 dracut[1545]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Mar  1 04:01:01 np0005634532 dracut[1545]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Mar  1 04:01:01 np0005634532 dracut[1545]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Mar  1 04:01:01 np0005634532 dracut[1545]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Mar  1 04:01:01 np0005634532 dracut[1545]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Mar  1 04:01:01 np0005634532 dracut[1545]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Mar  1 04:01:01 np0005634532 dracut[1545]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Mar  1 04:01:01 np0005634532 dracut[1545]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Mar  1 04:01:01 np0005634532 dracut[1545]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Mar  1 04:01:01 np0005634532 dracut[1545]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Mar  1 04:01:02 np0005634532 dracut[1545]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Mar  1 04:01:02 np0005634532 dracut[1545]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Mar  1 04:01:02 np0005634532 dracut[1545]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Mar  1 04:01:02 np0005634532 dracut[1545]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Mar  1 04:01:02 np0005634532 dracut[1545]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Mar  1 04:01:02 np0005634532 dracut[1545]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Mar  1 04:01:02 np0005634532 dracut[1545]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Mar  1 04:01:02 np0005634532 dracut[1545]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Mar  1 04:01:02 np0005634532 dracut[1545]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Mar  1 04:01:02 np0005634532 dracut[1545]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Mar  1 04:01:02 np0005634532 dracut[1545]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Mar  1 04:01:02 np0005634532 dracut[1545]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Mar  1 04:01:02 np0005634532 dracut[1545]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Mar  1 04:01:02 np0005634532 dracut[1545]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Mar  1 04:01:02 np0005634532 dracut[1545]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Mar  1 04:01:02 np0005634532 dracut[1545]: memstrack is not available
Mar  1 04:01:02 np0005634532 dracut[1545]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Mar  1 04:01:02 np0005634532 dracut[1545]: *** Including module: systemd ***
Mar  1 04:01:02 np0005634532 dracut[1545]: *** Including module: fips ***
Mar  1 04:01:03 np0005634532 dracut[1545]: *** Including module: systemd-initrd ***
Mar  1 04:01:03 np0005634532 dracut[1545]: *** Including module: i18n ***
Mar  1 04:01:03 np0005634532 dracut[1545]: *** Including module: drm ***
Mar  1 04:01:03 np0005634532 dracut[1545]: *** Including module: prefixdevname ***
Mar  1 04:01:03 np0005634532 dracut[1545]: *** Including module: kernel-modules ***
Mar  1 04:01:03 np0005634532 kernel: block vda: the capability attribute has been deprecated.
Mar  1 04:01:04 np0005634532 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Mar  1 04:01:04 np0005634532 dracut[1545]: *** Including module: kernel-modules-extra ***
Mar  1 04:01:04 np0005634532 dracut[1545]: *** Including module: qemu ***
Mar  1 04:01:04 np0005634532 dracut[1545]: *** Including module: fstab-sys ***
Mar  1 04:01:04 np0005634532 dracut[1545]: *** Including module: rootfs-block ***
Mar  1 04:01:04 np0005634532 dracut[1545]: *** Including module: terminfo ***
Mar  1 04:01:04 np0005634532 dracut[1545]: *** Including module: udev-rules ***
Mar  1 04:01:05 np0005634532 dracut[1545]: Skipping udev rule: 91-permissions.rules
Mar  1 04:01:05 np0005634532 dracut[1545]: Skipping udev rule: 80-drivers-modprobe.rules
Mar  1 04:01:05 np0005634532 dracut[1545]: *** Including module: virtiofs ***
Mar  1 04:01:05 np0005634532 dracut[1545]: *** Including module: dracut-systemd ***
Mar  1 04:01:05 np0005634532 dracut[1545]: *** Including module: usrmount ***
Mar  1 04:01:05 np0005634532 dracut[1545]: *** Including module: base ***
Mar  1 04:01:05 np0005634532 dracut[1545]: *** Including module: fs-lib ***
Mar  1 04:01:05 np0005634532 dracut[1545]: *** Including module: kdumpbase ***
Mar  1 04:01:06 np0005634532 dracut[1545]: *** Including module: microcode_ctl-fw_dir_override ***
Mar  1 04:01:06 np0005634532 dracut[1545]:  microcode_ctl module: mangling fw_dir
Mar  1 04:01:06 np0005634532 dracut[1545]:    microcode_ctl: reset fw_dir to "/lib/firmware/updates /lib/firmware"
Mar  1 04:01:06 np0005634532 dracut[1545]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel"...
Mar  1 04:01:06 np0005634532 dracut[1545]:    microcode_ctl: configuration "intel" is ignored
Mar  1 04:01:06 np0005634532 dracut[1545]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-2d-07"...
Mar  1 04:01:06 np0005634532 dracut[1545]:    microcode_ctl: configuration "intel-06-2d-07" is ignored
Mar  1 04:01:06 np0005634532 dracut[1545]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4e-03"...
Mar  1 04:01:06 np0005634532 dracut[1545]:    microcode_ctl: configuration "intel-06-4e-03" is ignored
Mar  1 04:01:06 np0005634532 dracut[1545]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4f-01"...
Mar  1 04:01:06 np0005634532 dracut[1545]:    microcode_ctl: configuration "intel-06-4f-01" is ignored
Mar  1 04:01:06 np0005634532 dracut[1545]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-55-04"...
Mar  1 04:01:06 np0005634532 dracut[1545]:    microcode_ctl: configuration "intel-06-55-04" is ignored
Mar  1 04:01:06 np0005634532 dracut[1545]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-5e-03"...
Mar  1 04:01:06 np0005634532 dracut[1545]:    microcode_ctl: configuration "intel-06-5e-03" is ignored
Mar  1 04:01:06 np0005634532 dracut[1545]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8c-01"...
Mar  1 04:01:06 np0005634532 dracut[1545]:    microcode_ctl: configuration "intel-06-8c-01" is ignored
Mar  1 04:01:06 np0005634532 dracut[1545]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-0xca"...
Mar  1 04:01:06 np0005634532 dracut[1545]:    microcode_ctl: configuration "intel-06-8e-9e-0x-0xca" is ignored
Mar  1 04:01:06 np0005634532 dracut[1545]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-dell"...
Mar  1 04:01:06 np0005634532 dracut[1545]:    microcode_ctl: configuration "intel-06-8e-9e-0x-dell" is ignored
Mar  1 04:01:06 np0005634532 dracut[1545]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8f-08"...
Mar  1 04:01:06 np0005634532 dracut[1545]:    microcode_ctl: configuration "intel-06-8f-08" is ignored
Mar  1 04:01:06 np0005634532 dracut[1545]:    microcode_ctl: final fw_dir: "/lib/firmware/updates /lib/firmware"
Mar  1 04:01:06 np0005634532 dracut[1545]: *** Including module: openssl ***
Mar  1 04:01:06 np0005634532 dracut[1545]: *** Including module: shutdown ***
Mar  1 04:01:06 np0005634532 dracut[1545]: *** Including module: squash ***
Mar  1 04:01:06 np0005634532 dracut[1545]: *** Including modules done ***
Mar  1 04:01:06 np0005634532 dracut[1545]: *** Installing kernel module dependencies ***
Mar  1 04:01:07 np0005634532 dracut[1545]: *** Installing kernel module dependencies done ***
Mar  1 04:01:07 np0005634532 dracut[1545]: *** Resolving executable dependencies ***
Mar  1 04:01:09 np0005634532 dracut[1545]: *** Resolving executable dependencies done ***
Mar  1 04:01:09 np0005634532 dracut[1545]: *** Generating early-microcode cpio image ***
Mar  1 04:01:09 np0005634532 dracut[1545]: *** Store current command line parameters ***
Mar  1 04:01:09 np0005634532 dracut[1545]: Stored kernel commandline:
Mar  1 04:01:09 np0005634532 dracut[1545]: No dracut internal kernel commandline stored in the initramfs
Mar  1 04:01:10 np0005634532 dracut[1545]: *** Install squash loader ***
Mar  1 04:01:11 np0005634532 dracut[1545]: *** Squashing the files inside the initramfs ***
Mar  1 04:01:12 np0005634532 dracut[1545]: *** Squashing the files inside the initramfs done ***
Mar  1 04:01:12 np0005634532 dracut[1545]: *** Creating image file '/boot/initramfs-5.14.0-686.el9.x86_64kdump.img' ***
Mar  1 04:01:12 np0005634532 dracut[1545]: *** Hardlinking files ***
Mar  1 04:01:12 np0005634532 dracut[1545]: *** Hardlinking files done ***
Mar  1 04:01:13 np0005634532 dracut[1545]: *** Creating initramfs image file '/boot/initramfs-5.14.0-686.el9.x86_64kdump.img' done ***
Mar  1 04:01:13 np0005634532 kdumpctl[1035]: kdump: kexec: loaded kdump kernel
Mar  1 04:01:13 np0005634532 kdumpctl[1035]: kdump: Starting kdump: [OK]
Mar  1 04:01:13 np0005634532 systemd[1]: Finished Crash recovery kernel arming.
Mar  1 04:01:13 np0005634532 systemd[1]: Startup finished in 1.534s (kernel) + 3.446s (initrd) + 25.498s (userspace) = 30.480s.
Mar  1 04:01:23 np0005634532 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Mar  1 04:02:45 np0005634532 systemd[1]: Created slice User Slice of UID 1000.
Mar  1 04:02:45 np0005634532 systemd[1]: Starting User Runtime Directory /run/user/1000...
Mar  1 04:02:45 np0005634532 systemd-logind[832]: New session 1 of user zuul.
Mar  1 04:02:45 np0005634532 systemd[1]: Finished User Runtime Directory /run/user/1000.
Mar  1 04:02:45 np0005634532 systemd[1]: Starting User Manager for UID 1000...
Mar  1 04:02:45 np0005634532 systemd[4818]: Queued start job for default target Main User Target.
Mar  1 04:02:45 np0005634532 systemd[4818]: Created slice User Application Slice.
Mar  1 04:02:45 np0005634532 systemd[4818]: Started Mark boot as successful after the user session has run 2 minutes.
Mar  1 04:02:45 np0005634532 systemd[4818]: Started Daily Cleanup of User's Temporary Directories.
Mar  1 04:02:45 np0005634532 systemd[4818]: Reached target Paths.
Mar  1 04:02:45 np0005634532 systemd[4818]: Reached target Timers.
Mar  1 04:02:45 np0005634532 systemd[4818]: Starting D-Bus User Message Bus Socket...
Mar  1 04:02:45 np0005634532 systemd[4818]: Starting Create User's Volatile Files and Directories...
Mar  1 04:02:45 np0005634532 systemd[4818]: Listening on D-Bus User Message Bus Socket.
Mar  1 04:02:45 np0005634532 systemd[4818]: Reached target Sockets.
Mar  1 04:02:45 np0005634532 systemd[4818]: Finished Create User's Volatile Files and Directories.
Mar  1 04:02:45 np0005634532 systemd[4818]: Reached target Basic System.
Mar  1 04:02:45 np0005634532 systemd[4818]: Reached target Main User Target.
Mar  1 04:02:45 np0005634532 systemd[4818]: Startup finished in 152ms.
Mar  1 04:02:45 np0005634532 systemd[1]: Started User Manager for UID 1000.
Mar  1 04:02:45 np0005634532 systemd[1]: Started Session 1 of User zuul.
Mar  1 04:02:46 np0005634532 python3[4900]: ansible-setup Invoked with gather_subset=['!all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Mar  1 04:02:51 np0005634532 irqbalance[828]: Cannot change IRQ 27 affinity: Operation not permitted
Mar  1 04:02:51 np0005634532 irqbalance[828]: IRQ 27 affinity is now unmanaged
Mar  1 04:02:55 np0005634532 python3[4928]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Mar  1 04:03:03 np0005634532 python3[4986]: ansible-setup Invoked with gather_subset=['network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Mar  1 04:03:04 np0005634532 python3[5026]: ansible-zuul_console Invoked with path=/tmp/console-{log_uuid}.log port=19885 state=present
Mar  1 04:03:07 np0005634532 python3[5052]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDAbVp5OjSoi5chsTgVhYXHX+CtQLS4bHOnHtPuN2myiZd5qcK6I3gYxU3Vl6o5gjWjBhskjFTkfON37u4quVAbHZJjn2HAlgpyy5xtF62gwVffdc7vRsXQU2rY0WB0z8FpbAR8SGTMRqCKJMrP8fr5f8Zg3+FFtYEr8+DOozgxBZ6U+AZ6AivG4kaZ/mNs7PehaKpUzveBMWV2JJFSD28EVPTLXRy+9a4q3OkoQEBKTqxV1F+yWckFXpwI1BLvb2Hdg/ytY2loz1YadT1d9SWVJe68cLE54sLyBgZ+AVmDczPJ6yqysm3Lv3YbdSiE6iTGzYZXL85RHhtwApnkewH2k62HQ0SAPIVVVWyxsYWJKQ7h8ShTKhlo7TZrzOWRy3MTcMZUIPzTrJ8fEsS3pyJr/jYQECf21rogeYIjFrl4nzhc2GKVnMklVKvJGmoVzH35VSHQGeMaqghynyez1F3/l/wMPucC8ZyMEYI6h1Pp2afjfsY4bKmBxFHG0uJxgNs= zuul-build-sshkey manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Mar  1 04:03:08 np0005634532 python3[5076]: ansible-file Invoked with state=directory path=/home/zuul/.ssh mode=448 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:03:08 np0005634532 python3[5175]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Mar  1 04:03:09 np0005634532 python3[5246]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1772355788.3202598-251-24195632218592/source dest=/home/zuul/.ssh/id_rsa mode=384 force=False _original_basename=0ca03a014d3844828e33ae8e22392c09_id_rsa follow=False checksum=6c6d2f6c0f9b44220202f8b1af5b45a9056e891a backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:03:09 np0005634532 python3[5369]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa.pub follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Mar  1 04:03:10 np0005634532 python3[5440]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1772355789.5580585-306-209003715774069/source dest=/home/zuul/.ssh/id_rsa.pub mode=420 force=False _original_basename=0ca03a014d3844828e33ae8e22392c09_id_rsa.pub follow=False checksum=1c632bcd21ecdc2cd6d48e1a3a68dceae15f9472 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:03:11 np0005634532 python3[5488]: ansible-ping Invoked with data=pong
Mar  1 04:03:12 np0005634532 python3[5512]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Mar  1 04:03:15 np0005634532 python3[5570]: ansible-zuul_debug_info Invoked with ipv4_route_required=False ipv6_route_required=False image_manifest_files=['/etc/dib-builddate.txt', '/etc/image-hostname.txt'] image_manifest=None traceroute_host=None
Mar  1 04:03:16 np0005634532 python3[5602]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:03:16 np0005634532 python3[5626]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:03:16 np0005634532 python3[5650]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:03:17 np0005634532 python3[5674]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:03:17 np0005634532 python3[5698]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:03:17 np0005634532 python3[5722]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:03:19 np0005634532 python3[5748]: ansible-file Invoked with path=/etc/ci state=directory owner=root group=root mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:03:20 np0005634532 python3[5826]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/mirror_info.sh follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Mar  1 04:03:21 np0005634532 python3[5899]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/mirror_info.sh owner=root group=root mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1772355799.9833648-31-30095610853090/source follow=False _original_basename=mirror_info.sh.j2 checksum=92d92a03afdddee82732741071f662c729080c35 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:03:22 np0005634532 python3[5947]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA4Z/c9osaGGtU6X8fgELwfj/yayRurfcKA0HMFfdpPxev2dbwljysMuzoVp4OZmW1gvGtyYPSNRvnzgsaabPNKNo2ym5NToCP6UM+KSe93aln4BcM/24mXChYAbXJQ5Bqq/pIzsGs/pKetQN+vwvMxLOwTvpcsCJBXaa981RKML6xj9l/UZ7IIq1HSEKMvPLxZMWdu0Ut8DkCd5F4nOw9Wgml2uYpDCj5LLCrQQ9ChdOMz8hz6SighhNlRpPkvPaet3OXxr/ytFMu7j7vv06CaEnuMMiY2aTWN1Imin9eHAylIqFHta/3gFfQSWt9jXM7owkBLKL7ATzhaAn+fjNupw== arxcruz@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Mar  1 04:03:22 np0005634532 python3[5971]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDS4Fn6k4deCnIlOtLWqZJyksbepjQt04j8Ed8CGx9EKkj0fKiAxiI4TadXQYPuNHMixZy4Nevjb6aDhL5Z906TfvNHKUrjrG7G26a0k8vdc61NEQ7FmcGMWRLwwc6ReDO7lFpzYKBMk4YqfWgBuGU/K6WLKiVW2cVvwIuGIaYrE1OiiX0iVUUk7KApXlDJMXn7qjSYynfO4mF629NIp8FJal38+Kv+HA+0QkE5Y2xXnzD4Lar5+keymiCHRntPppXHeLIRzbt0gxC7v3L72hpQ3BTBEzwHpeS8KY+SX1y5lRMN45thCHfJqGmARJREDjBvWG8JXOPmVIKQtZmVcD5b mandreou@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Mar  1 04:03:23 np0005634532 python3[5995]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC9MiLfy30deHA7xPOAlew5qUq3UP2gmRMYJi8PtkjFB20/DKeWwWNnkZPqP9AayruRoo51SIiVg870gbZE2jYl+Ncx/FYDe56JeC3ySZsXoAVkC9bP7gkOGqOmJjirvAgPMI7bogVz8i+66Q4Ar7OKTp3762G4IuWPPEg4ce4Y7lx9qWocZapHYq4cYKMxrOZ7SEbFSATBbe2bPZAPKTw8do/Eny+Hq/LkHFhIeyra6cqTFQYShr+zPln0Cr+ro/pDX3bB+1ubFgTpjpkkkQsLhDfR6cCdCWM2lgnS3BTtYj5Ct9/JRPR5YOphqZz+uB+OEu2IL68hmU9vNTth1KeX rlandy@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Mar  1 04:03:23 np0005634532 python3[6019]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFCbgz8gdERiJlk2IKOtkjQxEXejrio6ZYMJAVJYpOIp raukadah@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Mar  1 04:03:23 np0005634532 python3[6043]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBqb3Q/9uDf4LmihQ7xeJ9gA/STIQUFPSfyyV0m8AoQi bshewale@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Mar  1 04:03:24 np0005634532 python3[6067]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC0I8QqQx0Az2ysJt2JuffucLijhBqnsXKEIx5GyHwxVULROa8VtNFXUDH6ZKZavhiMcmfHB2+TBTda+lDP4FldYj06dGmzCY+IYGa+uDRdxHNGYjvCfLFcmLlzRK6fNbTcui+KlUFUdKe0fb9CRoGKyhlJD5GRkM1Dv+Yb6Bj+RNnmm1fVGYxzmrD2utvffYEb0SZGWxq2R9gefx1q/3wCGjeqvufEV+AskPhVGc5T7t9eyZ4qmslkLh1/nMuaIBFcr9AUACRajsvk6mXrAN1g3HlBf2gQlhi1UEyfbqIQvzzFtsbLDlSum/KmKjy818GzvWjERfQ0VkGzCd9bSLVL dviroel@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Mar  1 04:03:24 np0005634532 python3[6091]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDLOQd4ZLtkZXQGY6UwAr/06ppWQK4fDO3HaqxPk98csyOCBXsliSKK39Bso828+5srIXiW7aI6aC9P5mwi4mUZlGPfJlQbfrcGvY+b/SocuvaGK+1RrHLoJCT52LBhwgrzlXio2jeksZeein8iaTrhsPrOAs7KggIL/rB9hEiB3NaOPWhhoCP4vlW6MEMExGcqB/1FVxXFBPnLkEyW0Lk7ycVflZl2ocRxbfjZi0+tI1Wlinp8PvSQSc/WVrAcDgKjc/mB4ODPOyYy3G8FHgfMsrXSDEyjBKgLKMsdCrAUcqJQWjkqXleXSYOV4q3pzL+9umK+q/e3P/bIoSFQzmJKTU1eDfuvPXmow9F5H54fii/Da7ezlMJ+wPGHJrRAkmzvMbALy7xwswLhZMkOGNtRcPqaKYRmIBKpw3o6bCTtcNUHOtOQnzwY8JzrM2eBWJBXAANYw+9/ho80JIiwhg29CFNpVBuHbql2YxJQNrnl90guN65rYNpDxdIluweyUf8= anbanerj@kaermorhen manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Mar  1 04:03:24 np0005634532 python3[6115]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC3VwV8Im9kRm49lt3tM36hj4Zv27FxGo4C1Q/0jqhzFmHY7RHbmeRr8ObhwWoHjXSozKWg8FL5ER0z3hTwL0W6lez3sL7hUaCmSuZmG5Hnl3x4vTSxDI9JZ/Y65rtYiiWQo2fC5xJhU/4+0e5e/pseCm8cKRSu+SaxhO+sd6FDojA2x1BzOzKiQRDy/1zWGp/cZkxcEuB1wHI5LMzN03c67vmbu+fhZRAUO4dQkvcnj2LrhQtpa+ytvnSjr8icMDosf1OsbSffwZFyHB/hfWGAfe0eIeSA2XPraxiPknXxiPKx2MJsaUTYbsZcm3EjFdHBBMumw5rBI74zLrMRvCO9GwBEmGT4rFng1nP+yw5DB8sn2zqpOsPg1LYRwCPOUveC13P6pgsZZPh812e8v5EKnETct+5XI3dVpdw6CnNiLwAyVAF15DJvBGT/u1k0Myg/bQn+Gv9k2MSj6LvQmf6WbZu2Wgjm30z3FyCneBqTL7mLF19YXzeC0ufHz5pnO1E= dasm@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Mar  1 04:03:25 np0005634532 python3[6139]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHUnwjB20UKmsSed9X73eGNV5AOEFccQ3NYrRW776pEk cjeanner manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Mar  1 04:03:25 np0005634532 python3[6163]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDercCMGn8rW1C4P67tHgtflPdTeXlpyUJYH+6XDd2lR jgilaber@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Mar  1 04:03:25 np0005634532 python3[6187]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAMI6kkg9Wg0sG7jIJmyZemEBwUn1yzNpQQd3gnulOmZ adrianfuscoarnejo@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Mar  1 04:03:26 np0005634532 python3[6211]: ansible-authorized_key Invoked with user=zuul state=present key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPijwpQu/3jhhhBZInXNOLEH57DrknPc3PLbsRvYyJIFzwYjX+WD4a7+nGnMYS42MuZk6TJcVqgnqofVx4isoD4= ramishra@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Mar  1 04:03:26 np0005634532 python3[6235]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICWBreHW95Wz2Toz5YwCGQwFcUG8oFYkienDh9tntmDc ralfieri@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Mar  1 04:03:26 np0005634532 python3[6259]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDK0iKdi8jQTpQrDdLVH/AAgLVYyTXF7AQ1gjc/5uT3t ykarel@yatinkarel manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Mar  1 04:03:27 np0005634532 python3[6283]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIF/V/cLotA6LZeO32VL45Hd78skuA2lJA425Sm2LlQeZ fmount@horcrux manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Mar  1 04:03:27 np0005634532 python3[6307]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDa7QCjuDMVmRPo1rREbGwzYeBCYVN+Ou/3WKXZEC6Sr manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Mar  1 04:03:27 np0005634532 python3[6331]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQCfNtF7NvKl915TGsGGoseUb06Hj8L/S4toWf0hExeY+F00woL6NvBlJD0nDct+P5a22I4EhvoQCRQ8reaPCm1lybR3uiRIJsj+8zkVvLwby9LXzfZorlNG9ofjd00FEmB09uW/YvTl6Q9XwwwX6tInzIOv3TMqTHHGOL74ibbj8J/FJR0cFEyj0z4WQRvtkh32xAHl83gbuINryMt0sqRI+clj2381NKL55DRLQrVw0gsfqqxiHAnXg21qWmc4J+b9e9kiuAFQjcjwTVkwJCcg3xbPwC/qokYRby/Y5S40UUd7/jEARGXT7RZgpzTuDd1oZiCVrnrqJNPaMNdVv5MLeFdf1B7iIe5aa/fGouX7AO4SdKhZUdnJmCFAGvjC6S3JMZ2wAcUl+OHnssfmdj7XL50cLo27vjuzMtLAgSqi6N99m92WCF2s8J9aVzszX7Xz9OKZCeGsiVJp3/NdABKzSEAyM9xBD/5Vho894Sav+otpySHe3p6RUTgbB5Zu8VyZRZ/UtB3ueXxyo764yrc6qWIDqrehm84Xm9g+/jpIBzGPl07NUNJpdt/6Sgf9RIKXw/7XypO5yZfUcuFNGTxLfqjTNrtgLZNcjfav6sSdVXVcMPL//XNuRdKmVFaO76eV/oGMQGr1fGcCD+N+CpI7+Q+fCNB6VFWG4nZFuI/Iuw== averdagu@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Mar  1 04:03:27 np0005634532 python3[6355]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDq8l27xI+QlQVdS4djp9ogSoyrNE2+Ox6vKPdhSNL1J3PE5w+WCSvMz9A5gnNuH810zwbekEApbxTze/gLQJwBHA52CChfURpXrFaxY7ePXRElwKAL3mJfzBWY/c5jnNL9TCVmFJTGZkFZP3Nh+BMgZvL6xBkt3WKm6Uq18qzd9XeKcZusrA+O+uLv1fVeQnadY9RIqOCyeFYCzLWrUfTyE8x/XG0hAWIM7qpnF2cALQS2h9n4hW5ybiUN790H08wf9hFwEf5nxY9Z9dVkPFQiTSGKNBzmnCXU9skxS/xhpFjJ5duGSZdtAHe9O+nGZm9c67hxgtf8e5PDuqAdXEv2cf6e3VBAt+Bz8EKI3yosTj0oZHfwr42Yzb1l/SKy14Rggsrc9KAQlrGXan6+u2jcQqqx7l+SWmnpFiWTV9u5cWj2IgOhApOitmRBPYqk9rE2usfO0hLn/Pj/R/Nau4803e1/EikdLE7Ps95s9mX5jRDjAoUa2JwFF5RsVFyL910= ashigupt@ashigupt.remote.csb manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Mar  1 04:03:28 np0005634532 python3[6379]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOKLl0NYKwoZ/JY5KeZU8VwRAggeOxqQJeoqp3dsAaY9 manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Mar  1 04:03:28 np0005634532 python3[6403]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIASASQOH2BcOyLKuuDOdWZlPi2orcjcA8q4400T73DLH evallesp@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Mar  1 04:03:28 np0005634532 python3[6427]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILeBWlamUph+jRKV2qrx1PGU7vWuGIt5+z9k96I8WehW amsinha@amsinha-mac manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Mar  1 04:03:29 np0005634532 python3[6451]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIANvVgvJBlK3gb1yz5uef/JqIGq4HLEmY2dYA8e37swb morenod@redhat-laptop manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Mar  1 04:03:29 np0005634532 python3[6475]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDZdI7t1cxYx65heVI24HTV4F7oQLW1zyfxHreL2TIJKxjyrUUKIFEUmTutcBlJRLNT2Eoix6x1sOw9YrchloCLcn//SGfTElr9mSc5jbjb7QXEU+zJMhtxyEJ1Po3CUGnj7ckiIXw7wcawZtrEOAQ9pH3ExYCJcEMiyNjRQZCxT3tPK+S4B95EWh5Fsrz9CkwpjNRPPH7LigCeQTM3Wc7r97utAslBUUvYceDSLA7rMgkitJE38b7rZBeYzsGQ8YYUBjTCtehqQXxCRjizbHWaaZkBU+N3zkKB6n/iCNGIO690NK7A/qb6msTijiz1PeuM8ThOsi9qXnbX5v0PoTpcFSojV7NHAQ71f0XXuS43FhZctT+Dcx44dT8Fb5vJu2cJGrk+qF8ZgJYNpRS7gPg0EG2EqjK7JMf9ULdjSu0r+KlqIAyLvtzT4eOnQipoKlb/WG5D/0ohKv7OMQ352ggfkBFIQsRXyyTCT98Ft9juqPuahi3CAQmP4H9dyE+7+Kz437PEtsxLmfm6naNmWi7Ee1DqWPwS8rEajsm4sNM4wW9gdBboJQtc0uZw0DfLj1I9r3Mc8Ol0jYtz0yNQDSzVLrGCaJlC311trU70tZ+ZkAVV6Mn8lOhSbj1cK0lvSr6ZK4dgqGl3I1eTZJJhbLNdg7UOVaiRx9543+C/p/As7w== brjackma@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Mar  1 04:03:29 np0005634532 python3[6499]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKwedoZ0TWPJX/z/4TAbO/kKcDZOQVgRH0hAqrL5UCI1 vcastell@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Mar  1 04:03:30 np0005634532 python3[6523]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEmv8sE8GCk6ZTPIqF0FQrttBdL3mq7rCm/IJy0xDFh7 michburk@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Mar  1 04:03:30 np0005634532 python3[6547]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICy6GpGEtwevXEEn4mmLR5lmSLe23dGgAvzkB9DMNbkf rsafrono@rsafrono manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Mar  1 04:03:31 np0005634532 python3[6573]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Mar  1 04:03:32 np0005634532 systemd[1]: Starting Time & Date Service...
Mar  1 04:03:32 np0005634532 systemd[1]: Started Time & Date Service.
Mar  1 04:03:32 np0005634532 systemd-timedated[6575]: Changed time zone to 'UTC' (UTC).
Mar  1 04:03:34 np0005634532 python3[6604]: ansible-file Invoked with path=/etc/nodepool state=directory mode=511 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:03:34 np0005634532 python3[6680]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Mar  1 04:03:34 np0005634532 python3[6751]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes src=/home/zuul/.ansible/tmp/ansible-tmp-1772355814.13461-251-178782278161248/source _original_basename=tmpryhf87aj follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:03:35 np0005634532 python3[6851]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Mar  1 04:03:35 np0005634532 python3[6922]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes_private src=/home/zuul/.ansible/tmp/ansible-tmp-1772355815.1636708-301-142611087459954/source _original_basename=tmpr9cyfbd0 follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:03:37 np0005634532 python3[7024]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/node_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Mar  1 04:03:37 np0005634532 python3[7097]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/node_private src=/home/zuul/.ansible/tmp/ansible-tmp-1772355816.7326236-381-48713482603020/source _original_basename=tmp5dqb9_3y follow=False checksum=de28d19618025176a7a65eba0e40c742fe7af9f4 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:03:38 np0005634532 python3[7145]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa /etc/nodepool/id_rsa zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Mar  1 04:03:38 np0005634532 python3[7171]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa.pub /etc/nodepool/id_rsa.pub zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Mar  1 04:03:38 np0005634532 python3[7251]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/zuul-sudo-grep follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Mar  1 04:03:39 np0005634532 python3[7324]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/zuul-sudo-grep mode=288 src=/home/zuul/.ansible/tmp/ansible-tmp-1772355818.6657696-451-230079074677313/source _original_basename=tmpr4vcy0ze follow=False checksum=bdca1a77493d00fb51567671791f4aa30f66c2f0 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:03:40 np0005634532 python3[7375]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/visudo -c zuul_log_id=fa163efc-24cc-d36f-dba2-00000000001f-1-compute0 zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Mar  1 04:03:41 np0005634532 irqbalance[828]: Cannot change IRQ 26 affinity: Operation not permitted
Mar  1 04:03:41 np0005634532 irqbalance[828]: IRQ 26 affinity is now unmanaged
Mar  1 04:03:41 np0005634532 python3[7403]: ansible-ansible.legacy.command Invoked with executable=/bin/bash _raw_params=env#012 _uses_shell=True zuul_log_id=fa163efc-24cc-d36f-dba2-000000000020-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None creates=None removes=None stdin=None
Mar  1 04:03:42 np0005634532 python3[7431]: ansible-file Invoked with path=/home/zuul/workspace state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:03:59 np0005634532 python3[7457]: ansible-ansible.builtin.file Invoked with path=/etc/ci/env state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:04:02 np0005634532 systemd[1]: systemd-timedated.service: Deactivated successfully.
Mar  1 04:04:37 np0005634532 kernel: pci 0000:00:07.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Mar  1 04:04:37 np0005634532 kernel: pci 0000:00:07.0: BAR 0 [io  0x0000-0x003f]
Mar  1 04:04:37 np0005634532 kernel: pci 0000:00:07.0: BAR 1 [mem 0x00000000-0x00000fff]
Mar  1 04:04:37 np0005634532 kernel: pci 0000:00:07.0: BAR 4 [mem 0x00000000-0x00003fff 64bit pref]
Mar  1 04:04:37 np0005634532 kernel: pci 0000:00:07.0: ROM [mem 0x00000000-0x0007ffff pref]
Mar  1 04:04:37 np0005634532 kernel: pci 0000:00:07.0: ROM [mem 0xc0000000-0xc007ffff pref]: assigned
Mar  1 04:04:37 np0005634532 kernel: pci 0000:00:07.0: BAR 4 [mem 0x240000000-0x240003fff 64bit pref]: assigned
Mar  1 04:04:37 np0005634532 kernel: pci 0000:00:07.0: BAR 1 [mem 0xc0080000-0xc0080fff]: assigned
Mar  1 04:04:37 np0005634532 kernel: pci 0000:00:07.0: BAR 0 [io  0x1000-0x103f]: assigned
Mar  1 04:04:37 np0005634532 kernel: virtio-pci 0000:00:07.0: enabling device (0000 -> 0003)
Mar  1 04:04:37 np0005634532 NetworkManager[868]: <info>  [1772355877.4024] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Mar  1 04:04:37 np0005634532 systemd-udevd[7462]: Network interface NamePolicy= disabled on kernel command line.
Mar  1 04:04:37 np0005634532 NetworkManager[868]: <info>  [1772355877.4168] device (eth1): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Mar  1 04:04:37 np0005634532 NetworkManager[868]: <info>  [1772355877.4206] settings: (eth1): created default wired connection 'Wired connection 1'
Mar  1 04:04:37 np0005634532 NetworkManager[868]: <info>  [1772355877.4211] device (eth1): carrier: link connected
Mar  1 04:04:37 np0005634532 NetworkManager[868]: <info>  [1772355877.4216] device (eth1): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Mar  1 04:04:37 np0005634532 NetworkManager[868]: <info>  [1772355877.4225] policy: auto-activating connection 'Wired connection 1' (fcd1fd55-dd6d-3098-b03a-e2e1ca621882)
Mar  1 04:04:37 np0005634532 NetworkManager[868]: <info>  [1772355877.4233] device (eth1): Activation: starting connection 'Wired connection 1' (fcd1fd55-dd6d-3098-b03a-e2e1ca621882)
Mar  1 04:04:37 np0005634532 NetworkManager[868]: <info>  [1772355877.4236] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Mar  1 04:04:37 np0005634532 NetworkManager[868]: <info>  [1772355877.4241] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Mar  1 04:04:37 np0005634532 NetworkManager[868]: <info>  [1772355877.4248] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Mar  1 04:04:37 np0005634532 NetworkManager[868]: <info>  [1772355877.4256] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Mar  1 04:04:38 np0005634532 python3[7489]: ansible-ansible.legacy.command Invoked with _raw_params=ip -j link zuul_log_id=fa163efc-24cc-21cb-324b-000000000128-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Mar  1 04:04:48 np0005634532 python3[7569]: ansible-ansible.legacy.stat Invoked with path=/etc/NetworkManager/system-connections/ci-private-network.nmconnection follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Mar  1 04:04:49 np0005634532 python3[7642]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1772355888.3724525-104-150100776506312/source dest=/etc/NetworkManager/system-connections/ci-private-network.nmconnection mode=0600 owner=root group=root follow=False _original_basename=bootstrap-ci-network-nm-connection.nmconnection.j2 checksum=d5ff63e75bc41ee81ef959fc80f50eefcf3a2afe backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:04:49 np0005634532 python3[7692]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Mar  1 04:04:49 np0005634532 systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Mar  1 04:04:49 np0005634532 systemd[1]: Stopped Network Manager Wait Online.
Mar  1 04:04:49 np0005634532 systemd[1]: Stopping Network Manager Wait Online...
Mar  1 04:04:49 np0005634532 systemd[1]: Stopping Network Manager...
Mar  1 04:04:49 np0005634532 NetworkManager[868]: <info>  [1772355889.9934] caught SIGTERM, shutting down normally.
Mar  1 04:04:49 np0005634532 NetworkManager[868]: <info>  [1772355889.9947] dhcp4 (eth0): canceled DHCP transaction
Mar  1 04:04:49 np0005634532 NetworkManager[868]: <info>  [1772355889.9948] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Mar  1 04:04:49 np0005634532 NetworkManager[868]: <info>  [1772355889.9948] dhcp4 (eth0): state changed no lease
Mar  1 04:04:49 np0005634532 NetworkManager[868]: <info>  [1772355889.9953] manager: NetworkManager state is now CONNECTING
Mar  1 04:04:50 np0005634532 NetworkManager[868]: <info>  [1772355890.0050] dhcp4 (eth1): canceled DHCP transaction
Mar  1 04:04:50 np0005634532 NetworkManager[868]: <info>  [1772355890.0050] dhcp4 (eth1): state changed no lease
Mar  1 04:04:50 np0005634532 systemd[1]: Starting Network Manager Script Dispatcher Service...
Mar  1 04:04:50 np0005634532 systemd[1]: Started Network Manager Script Dispatcher Service.
Mar  1 04:04:50 np0005634532 NetworkManager[868]: <info>  [1772355890.8131] exiting (success)
Mar  1 04:04:50 np0005634532 systemd[1]: NetworkManager.service: Deactivated successfully.
Mar  1 04:04:50 np0005634532 systemd[1]: Stopped Network Manager.
Mar  1 04:04:50 np0005634532 systemd[1]: NetworkManager.service: Consumed 1.508s CPU time, 10.1M memory peak.
Mar  1 04:04:50 np0005634532 systemd[1]: Starting Network Manager...
Mar  1 04:04:50 np0005634532 NetworkManager[7709]: <info>  [1772355890.8622] NetworkManager (version 1.54.3-2.el9) is starting... (after a restart, boot:67233403-8d31-4a6b-a6aa-c5d04326d053)
Mar  1 04:04:50 np0005634532 NetworkManager[7709]: <info>  [1772355890.8623] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Mar  1 04:04:50 np0005634532 NetworkManager[7709]: <info>  [1772355890.8673] manager[0x55ae3d778000]: monitoring kernel firmware directory '/lib/firmware'.
Mar  1 04:04:50 np0005634532 systemd[1]: Starting Hostname Service...
Mar  1 04:04:50 np0005634532 systemd[1]: Started Hostname Service.
Mar  1 04:04:50 np0005634532 NetworkManager[7709]: <info>  [1772355890.9555] hostname: hostname: using hostnamed
Mar  1 04:04:50 np0005634532 NetworkManager[7709]: <info>  [1772355890.9558] hostname: static hostname changed from (none) to "np0005634532.novalocal"
Mar  1 04:04:50 np0005634532 NetworkManager[7709]: <info>  [1772355890.9567] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Mar  1 04:04:50 np0005634532 NetworkManager[7709]: <info>  [1772355890.9573] manager[0x55ae3d778000]: rfkill: Wi-Fi hardware radio set enabled
Mar  1 04:04:50 np0005634532 NetworkManager[7709]: <info>  [1772355890.9573] manager[0x55ae3d778000]: rfkill: WWAN hardware radio set enabled
Mar  1 04:04:50 np0005634532 NetworkManager[7709]: <info>  [1772355890.9614] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-device-plugin-team.so)
Mar  1 04:04:50 np0005634532 NetworkManager[7709]: <info>  [1772355890.9614] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Mar  1 04:04:50 np0005634532 NetworkManager[7709]: <info>  [1772355890.9615] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Mar  1 04:04:50 np0005634532 NetworkManager[7709]: <info>  [1772355890.9616] manager: Networking is enabled by state file
Mar  1 04:04:50 np0005634532 NetworkManager[7709]: <info>  [1772355890.9618] settings: Loaded settings plugin: keyfile (internal)
Mar  1 04:04:50 np0005634532 NetworkManager[7709]: <info>  [1772355890.9629] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-settings-plugin-ifcfg-rh.so")
Mar  1 04:04:50 np0005634532 NetworkManager[7709]: <info>  [1772355890.9664] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Mar  1 04:04:50 np0005634532 NetworkManager[7709]: <info>  [1772355890.9674] dhcp: init: Using DHCP client 'internal'
Mar  1 04:04:50 np0005634532 NetworkManager[7709]: <info>  [1772355890.9677] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Mar  1 04:04:50 np0005634532 NetworkManager[7709]: <info>  [1772355890.9683] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Mar  1 04:04:50 np0005634532 NetworkManager[7709]: <info>  [1772355890.9689] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Mar  1 04:04:50 np0005634532 NetworkManager[7709]: <info>  [1772355890.9697] device (lo): Activation: starting connection 'lo' (c3703ce3-f4b8-446d-9fc7-2e82b0ccaf00)
Mar  1 04:04:50 np0005634532 NetworkManager[7709]: <info>  [1772355890.9703] device (eth0): carrier: link connected
Mar  1 04:04:50 np0005634532 NetworkManager[7709]: <info>  [1772355890.9707] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Mar  1 04:04:50 np0005634532 NetworkManager[7709]: <info>  [1772355890.9713] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Mar  1 04:04:50 np0005634532 NetworkManager[7709]: <info>  [1772355890.9713] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Mar  1 04:04:50 np0005634532 NetworkManager[7709]: <info>  [1772355890.9724] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Mar  1 04:04:50 np0005634532 NetworkManager[7709]: <info>  [1772355890.9732] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Mar  1 04:04:50 np0005634532 NetworkManager[7709]: <info>  [1772355890.9737] device (eth1): carrier: link connected
Mar  1 04:04:50 np0005634532 NetworkManager[7709]: <info>  [1772355890.9741] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Mar  1 04:04:50 np0005634532 NetworkManager[7709]: <info>  [1772355890.9747] manager: (eth1): assume: will attempt to assume matching connection 'Wired connection 1' (fcd1fd55-dd6d-3098-b03a-e2e1ca621882) (indicated)
Mar  1 04:04:50 np0005634532 NetworkManager[7709]: <info>  [1772355890.9747] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Mar  1 04:04:50 np0005634532 NetworkManager[7709]: <info>  [1772355890.9753] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Mar  1 04:04:50 np0005634532 NetworkManager[7709]: <info>  [1772355890.9761] device (eth1): Activation: starting connection 'Wired connection 1' (fcd1fd55-dd6d-3098-b03a-e2e1ca621882)
Mar  1 04:04:50 np0005634532 NetworkManager[7709]: <info>  [1772355890.9768] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Mar  1 04:04:50 np0005634532 systemd[1]: Started Network Manager.
Mar  1 04:04:50 np0005634532 NetworkManager[7709]: <info>  [1772355890.9773] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Mar  1 04:04:50 np0005634532 NetworkManager[7709]: <info>  [1772355890.9776] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Mar  1 04:04:50 np0005634532 NetworkManager[7709]: <info>  [1772355890.9779] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Mar  1 04:04:50 np0005634532 NetworkManager[7709]: <info>  [1772355890.9781] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Mar  1 04:04:50 np0005634532 NetworkManager[7709]: <info>  [1772355890.9785] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Mar  1 04:04:50 np0005634532 NetworkManager[7709]: <info>  [1772355890.9788] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Mar  1 04:04:50 np0005634532 NetworkManager[7709]: <info>  [1772355890.9792] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Mar  1 04:04:50 np0005634532 NetworkManager[7709]: <info>  [1772355890.9794] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Mar  1 04:04:50 np0005634532 NetworkManager[7709]: <info>  [1772355890.9803] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Mar  1 04:04:50 np0005634532 NetworkManager[7709]: <info>  [1772355890.9806] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Mar  1 04:04:50 np0005634532 NetworkManager[7709]: <info>  [1772355890.9813] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Mar  1 04:04:50 np0005634532 NetworkManager[7709]: <info>  [1772355890.9815] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Mar  1 04:04:50 np0005634532 NetworkManager[7709]: <info>  [1772355890.9837] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Mar  1 04:04:50 np0005634532 NetworkManager[7709]: <info>  [1772355890.9839] dhcp4 (eth0): state changed new lease, address=38.102.83.94
Mar  1 04:04:50 np0005634532 NetworkManager[7709]: <info>  [1772355890.9843] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Mar  1 04:04:50 np0005634532 NetworkManager[7709]: <info>  [1772355890.9848] device (lo): Activation: successful, device activated.
Mar  1 04:04:50 np0005634532 NetworkManager[7709]: <info>  [1772355890.9858] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Mar  1 04:04:50 np0005634532 systemd[1]: Starting Network Manager Wait Online...
Mar  1 04:04:51 np0005634532 python3[7757]: ansible-ansible.legacy.command Invoked with _raw_params=ip route zuul_log_id=fa163efc-24cc-21cb-324b-0000000000bd-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Mar  1 04:04:51 np0005634532 NetworkManager[7709]: <info>  [1772355891.6216] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Mar  1 04:04:51 np0005634532 NetworkManager[7709]: <info>  [1772355891.7616] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Mar  1 04:04:51 np0005634532 NetworkManager[7709]: <info>  [1772355891.7618] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Mar  1 04:04:51 np0005634532 NetworkManager[7709]: <info>  [1772355891.7622] manager: NetworkManager state is now CONNECTED_SITE
Mar  1 04:04:51 np0005634532 NetworkManager[7709]: <info>  [1772355891.7628] device (eth0): Activation: successful, device activated.
Mar  1 04:04:51 np0005634532 NetworkManager[7709]: <info>  [1772355891.7633] manager: NetworkManager state is now CONNECTED_GLOBAL
Mar  1 04:05:01 np0005634532 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Mar  1 04:05:07 np0005634532 systemd[4818]: Starting Mark boot as successful...
Mar  1 04:05:07 np0005634532 systemd[4818]: Finished Mark boot as successful.
Mar  1 04:05:20 np0005634532 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Mar  1 04:05:36 np0005634532 NetworkManager[7709]: <info>  [1772355936.3567] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Mar  1 04:05:36 np0005634532 systemd[1]: Starting Network Manager Script Dispatcher Service...
Mar  1 04:05:36 np0005634532 systemd[1]: Started Network Manager Script Dispatcher Service.
Mar  1 04:05:36 np0005634532 NetworkManager[7709]: <info>  [1772355936.3947] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Mar  1 04:05:36 np0005634532 NetworkManager[7709]: <info>  [1772355936.3951] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Mar  1 04:05:36 np0005634532 NetworkManager[7709]: <info>  [1772355936.3955] device (eth1): Activation: successful, device activated.
Mar  1 04:05:36 np0005634532 NetworkManager[7709]: <info>  [1772355936.3961] manager: startup complete
Mar  1 04:05:36 np0005634532 NetworkManager[7709]: <info>  [1772355936.3963] device (eth1): state change: activated -> failed (reason 'ip-config-unavailable', managed-type: 'full')
Mar  1 04:05:36 np0005634532 NetworkManager[7709]: <warn>  [1772355936.3966] device (eth1): Activation: failed for connection 'Wired connection 1'
Mar  1 04:05:36 np0005634532 NetworkManager[7709]: <info>  [1772355936.3973] device (eth1): state change: failed -> disconnected (reason 'none', managed-type: 'full')
Mar  1 04:05:36 np0005634532 systemd[1]: Finished Network Manager Wait Online.
Mar  1 04:05:36 np0005634532 NetworkManager[7709]: <info>  [1772355936.4060] dhcp4 (eth1): canceled DHCP transaction
Mar  1 04:05:36 np0005634532 NetworkManager[7709]: <info>  [1772355936.4061] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Mar  1 04:05:36 np0005634532 NetworkManager[7709]: <info>  [1772355936.4061] dhcp4 (eth1): state changed no lease
Mar  1 04:05:36 np0005634532 NetworkManager[7709]: <info>  [1772355936.4072] policy: auto-activating connection 'ci-private-network' (df70c8b1-de1e-586c-a971-ac86ce783505)
Mar  1 04:05:36 np0005634532 NetworkManager[7709]: <info>  [1772355936.4077] device (eth1): Activation: starting connection 'ci-private-network' (df70c8b1-de1e-586c-a971-ac86ce783505)
Mar  1 04:05:36 np0005634532 NetworkManager[7709]: <info>  [1772355936.4077] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Mar  1 04:05:36 np0005634532 NetworkManager[7709]: <info>  [1772355936.4080] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Mar  1 04:05:36 np0005634532 NetworkManager[7709]: <info>  [1772355936.4086] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Mar  1 04:05:36 np0005634532 NetworkManager[7709]: <info>  [1772355936.4093] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Mar  1 04:05:36 np0005634532 NetworkManager[7709]: <info>  [1772355936.6573] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Mar  1 04:05:36 np0005634532 NetworkManager[7709]: <info>  [1772355936.6581] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Mar  1 04:05:36 np0005634532 NetworkManager[7709]: <info>  [1772355936.6592] device (eth1): Activation: successful, device activated.
Mar  1 04:05:46 np0005634532 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Mar  1 04:05:51 np0005634532 systemd-logind[832]: Session 1 logged out. Waiting for processes to exit.
Mar  1 04:06:48 np0005634532 systemd-logind[832]: New session 3 of user zuul.
Mar  1 04:06:48 np0005634532 systemd[1]: Started Session 3 of User zuul.
Mar  1 04:06:49 np0005634532 python3[7888]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/env/networking-info.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Mar  1 04:06:49 np0005634532 python3[7961]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/env/networking-info.yml owner=root group=root mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1772356008.9220946-373-142340199846437/source _original_basename=tmpwxgsqji_ follow=False checksum=03a2a2a0225f79d858e128bee1cb61495e528c70 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:06:53 np0005634532 systemd[1]: session-3.scope: Deactivated successfully.
Mar  1 04:06:53 np0005634532 systemd-logind[832]: Session 3 logged out. Waiting for processes to exit.
Mar  1 04:06:53 np0005634532 systemd-logind[832]: Removed session 3.
Mar  1 04:08:07 np0005634532 systemd[4818]: Created slice User Background Tasks Slice.
Mar  1 04:08:07 np0005634532 systemd[4818]: Starting Cleanup of User's Temporary Files and Directories...
Mar  1 04:08:07 np0005634532 systemd[4818]: Finished Cleanup of User's Temporary Files and Directories.
Mar  1 04:15:50 np0005634532 systemd[1]: Starting Cleanup of Temporary Directories...
Mar  1 04:15:50 np0005634532 systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully.
Mar  1 04:15:50 np0005634532 systemd[1]: Finished Cleanup of Temporary Directories.
Mar  1 04:15:50 np0005634532 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dclean.service.mount: Deactivated successfully.
Mar  1 04:16:11 np0005634532 systemd-logind[832]: New session 4 of user zuul.
Mar  1 04:16:11 np0005634532 systemd[1]: Started Session 4 of User zuul.
Mar  1 04:16:11 np0005634532 python3[8046]: ansible-ansible.legacy.command Invoked with _raw_params=lsblk -nd -o MAJ:MIN /dev/vda#012 _uses_shell=True zuul_log_id=fa163efc-24cc-25ea-46d9-0000000021d2-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Mar  1 04:16:11 np0005634532 python3[8074]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/init.scope state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:16:12 np0005634532 python3[8101]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/machine.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:16:12 np0005634532 python3[8127]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/system.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:16:12 np0005634532 python3[8153]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/user.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:16:13 np0005634532 python3[8179]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system.conf.d state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:16:13 np0005634532 python3[8257]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system.conf.d/override.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Mar  1 04:16:14 np0005634532 python3[8330]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system.conf.d/override.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1772356573.3783953-562-40676956709880/source _original_basename=tmpvpo43qz6 follow=False checksum=a05098bd3d2321238ea1169d0e6f135b35b392d4 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:16:14 np0005634532 python3[8380]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Mar  1 04:16:14 np0005634532 systemd[1]: Reloading.
Mar  1 04:16:15 np0005634532 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Mar  1 04:16:16 np0005634532 python3[8443]: ansible-ansible.builtin.wait_for Invoked with path=/sys/fs/cgroup/system.slice/io.max state=present timeout=30 host=127.0.0.1 connect_timeout=5 delay=0 active_connection_states=['ESTABLISHED', 'FIN_WAIT1', 'FIN_WAIT2', 'SYN_RECV', 'SYN_SENT', 'TIME_WAIT'] sleep=1 port=None search_regex=None exclude_hosts=None msg=None
Mar  1 04:16:17 np0005634532 python3[8469]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/init.scope/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Mar  1 04:16:17 np0005634532 python3[8497]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/machine.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Mar  1 04:16:17 np0005634532 python3[8525]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/system.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Mar  1 04:16:17 np0005634532 python3[8553]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/user.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Mar  1 04:16:18 np0005634532 python3[8580]: ansible-ansible.legacy.command Invoked with _raw_params=echo "init";    cat /sys/fs/cgroup/init.scope/io.max; echo "machine"; cat /sys/fs/cgroup/machine.slice/io.max; echo "system";  cat /sys/fs/cgroup/system.slice/io.max; echo "user";    cat /sys/fs/cgroup/user.slice/io.max;#012 _uses_shell=True zuul_log_id=fa163efc-24cc-25ea-46d9-0000000021d9-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Mar  1 04:16:18 np0005634532 python3[8610]: ansible-ansible.builtin.stat Invoked with path=/sys/fs/cgroup/kubepods.slice/io.max follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
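
The four shell tasks above write the same cgroup v2 I/O throttle into every top-level slice for device 252:0 (likely the node's root disk); 262144000 bytes/s is 250 MiB/s. Condensed into one loop, with the verification read the last command task performs:

    # as executed above: per-slice io.max throttles for device 252:0
    for grp in init.scope machine.slice system.slice user.slice; do
      echo "252:0 riops=18000 wiops=18000 rbps=262144000 wbps=262144000" \
        > /sys/fs/cgroup/$grp/io.max
    done
    cat /sys/fs/cgroup/system.slice/io.max   # confirm the limits took effect

The preceding wait_for task blocks until /sys/fs/cgroup/system.slice/io.max exists, i.e. until the io controller is active on that slice; the final stat probes for a kubepods.slice that is absent on this host.
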
Mar  1 04:16:21 np0005634532 systemd[1]: session-4.scope: Deactivated successfully.
Mar  1 04:16:21 np0005634532 systemd[1]: session-4.scope: Consumed 4.179s CPU time.
Mar  1 04:16:21 np0005634532 systemd-logind[832]: Session 4 logged out. Waiting for processes to exit.
Mar  1 04:16:21 np0005634532 systemd-logind[832]: Removed session 4.
Mar  1 04:16:23 np0005634532 systemd-logind[832]: New session 5 of user zuul.
Mar  1 04:16:23 np0005634532 systemd[1]: Started Session 5 of User zuul.
Mar  1 04:16:23 np0005634532 python3[8645]: ansible-ansible.legacy.dnf Invoked with name=['podman', 'buildah'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Mar  1 04:16:30 np0005634532 setsebool[8686]: The virt_use_nfs policy boolean was changed to 1 by root
Mar  1 04:16:30 np0005634532 setsebool[8686]: The virt_sandbox_use_all_caps policy boolean was changed to 1 by root
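
setsebool logs only that the booleans changed; whether -P (persist across reboots) was passed is not recorded here. The runtime-only form would be:

    # runtime toggle; add -P to persist the booleans across reboots
    setsebool virt_use_nfs 1
    setsebool virt_sandbox_use_all_caps 1
    getsebool virt_use_nfs   # verify: virt_use_nfs --> on

The SELinux "Converting N SID table entries" blocks that follow are the kernel reloading policy, triggered either by a persistent boolean change or by the podman/buildah package install in this session.
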
Mar  1 04:16:40 np0005634532 kernel: SELinux:  Converting 386 SID table entries...
Mar  1 04:16:40 np0005634532 kernel: SELinux:  policy capability network_peer_controls=1
Mar  1 04:16:40 np0005634532 kernel: SELinux:  policy capability open_perms=1
Mar  1 04:16:40 np0005634532 kernel: SELinux:  policy capability extended_socket_class=1
Mar  1 04:16:40 np0005634532 kernel: SELinux:  policy capability always_check_network=0
Mar  1 04:16:40 np0005634532 kernel: SELinux:  policy capability cgroup_seclabel=1
Mar  1 04:16:40 np0005634532 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Mar  1 04:16:40 np0005634532 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Mar  1 04:16:49 np0005634532 kernel: SELinux:  Converting 389 SID table entries...
Mar  1 04:16:49 np0005634532 kernel: SELinux:  policy capability network_peer_controls=1
Mar  1 04:16:49 np0005634532 kernel: SELinux:  policy capability open_perms=1
Mar  1 04:16:49 np0005634532 kernel: SELinux:  policy capability extended_socket_class=1
Mar  1 04:16:49 np0005634532 kernel: SELinux:  policy capability always_check_network=0
Mar  1 04:16:49 np0005634532 kernel: SELinux:  policy capability cgroup_seclabel=1
Mar  1 04:16:49 np0005634532 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Mar  1 04:16:49 np0005634532 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Mar  1 04:17:07 np0005634532 dbus-broker-launch[823]: avc:  op=load_policy lsm=selinux seqno=4 res=1
Mar  1 04:17:07 np0005634532 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Mar  1 04:17:07 np0005634532 systemd[1]: Starting man-db-cache-update.service...
Mar  1 04:17:07 np0005634532 systemd[1]: Reloading.
Mar  1 04:17:07 np0005634532 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Mar  1 04:17:07 np0005634532 systemd[1]: Queuing reload/restart jobs for marked units…
Mar  1 04:17:11 np0005634532 python3[12567]: ansible-ansible.legacy.command Invoked with _raw_params=echo "openstack-k8s-operators+cirobot"#012 _uses_shell=True zuul_log_id=fa163efc-24cc-9b21-b4f5-00000000000c-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Mar  1 04:17:11 np0005634532 kernel: evm: overlay not supported
Mar  1 04:17:11 np0005634532 systemd[4818]: Starting D-Bus User Message Bus...
Mar  1 04:17:11 np0005634532 dbus-broker-launch[13391]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +31: Eavesdropping is deprecated and ignored
Mar  1 04:17:11 np0005634532 dbus-broker-launch[13391]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +33: Eavesdropping is deprecated and ignored
Mar  1 04:17:11 np0005634532 systemd[4818]: Started D-Bus User Message Bus.
Mar  1 04:17:11 np0005634532 dbus-broker-launch[13391]: Ready
Mar  1 04:17:11 np0005634532 systemd[4818]: selinux: avc:  op=load_policy lsm=selinux seqno=4 res=1
Mar  1 04:17:11 np0005634532 systemd[4818]: Created slice Slice /user.
Mar  1 04:17:11 np0005634532 systemd[4818]: podman-13279.scope: unit configures an IP firewall, but not running as root.
Mar  1 04:17:11 np0005634532 systemd[4818]: (This warning is only shown for the first unit using IP firewalling.)
Mar  1 04:17:11 np0005634532 systemd[4818]: Started podman-13279.scope.
Mar  1 04:17:12 np0005634532 systemd[4818]: Started podman-pause-e0338109.scope.
Mar  1 04:17:12 np0005634532 systemd[1]: session-5.scope: Deactivated successfully.
Mar  1 04:17:12 np0005634532 systemd[1]: session-5.scope: Consumed 40.676s CPU time.
Mar  1 04:17:12 np0005634532 systemd-logind[832]: Session 5 logged out. Waiting for processes to exit.
Mar  1 04:17:12 np0005634532 systemd-logind[832]: Removed session 5.
Mar  1 04:17:39 np0005634532 systemd-logind[832]: New session 6 of user zuul.
Mar  1 04:17:39 np0005634532 systemd[1]: Started Session 6 of User zuul.
Mar  1 04:17:39 np0005634532 python3[26460]: ansible-ansible.posix.authorized_key Invoked with user=zuul key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBN0OEP0U8Tn4ljun03rIRSN+Psy7QQ2UcUBBWf+li6xNjRJYr0inpPoOPLHiEHomwno8QyyKKGcywBewJmn7pM8= zuul@np0005634531.novalocal#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Mar  1 04:17:40 np0005634532 python3[26656]: ansible-ansible.posix.authorized_key Invoked with user=root key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBN0OEP0U8Tn4ljun03rIRSN+Psy7QQ2UcUBBWf+li6xNjRJYr0inpPoOPLHiEHomwno8QyyKKGcywBewJmn7pM8= zuul@np0005634531.novalocal#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Mar  1 04:17:40 np0005634532 python3[27091]: ansible-ansible.builtin.user Invoked with name=cloud-admin shell=/bin/bash state=present non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on np0005634532.novalocal update_password=always uid=None group=None groups=None comment=None home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None
Mar  1 04:17:41 np0005634532 python3[27337]: ansible-ansible.posix.authorized_key Invoked with user=cloud-admin key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBN0OEP0U8Tn4ljun03rIRSN+Psy7QQ2UcUBBWf+li6xNjRJYr0inpPoOPLHiEHomwno8QyyKKGcywBewJmn7pM8= zuul@np0005634531.novalocal#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
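
All three authorized_key tasks install the same ECDSA key from zuul@np0005634531 for zuul, root, and the newly created cloud-admin user. A manual sketch for one user (the module additionally handles directory creation and idempotent appends):

    # sketch of what ansible.posix.authorized_key does for cloud-admin
    install -d -m 0700 -o cloud-admin -g cloud-admin ~cloud-admin/.ssh
    echo 'ecdsa-sha2-nistp256 AAAAE2Vj... zuul@np0005634531.novalocal' \
      >> ~cloud-admin/.ssh/authorized_keys   # key elided here; the full key is in the log
    chmod 0600 ~cloud-admin/.ssh/authorized_keys
    chown cloud-admin: ~cloud-admin/.ssh/authorized_keys
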
Mar  1 04:17:42 np0005634532 python3[27625]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/cloud-admin follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Mar  1 04:17:42 np0005634532 python3[27930]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/cloud-admin mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1772356661.7283168-150-38831318690522/source _original_basename=tmpspb1tv98 follow=False checksum=e7614e5ad3ab06eaae55b8efaa2ed81b63ea5634 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:17:43 np0005634532 python3[28281]: ansible-ansible.builtin.hostname Invoked with name=compute-0 use=systemd
Mar  1 04:17:43 np0005634532 systemd[1]: Starting Hostname Service...
Mar  1 04:17:43 np0005634532 systemd[1]: Started Hostname Service.
Mar  1 04:17:43 np0005634532 systemd-hostnamed[28401]: Changed pretty hostname to 'compute-0'
Mar  1 04:17:43 np0005634532 systemd-hostnamed[28401]: Hostname set to <compute-0> (static)
Mar  1 04:17:43 np0005634532 NetworkManager[7709]: <info>  [1772356663.6856] hostname: static hostname changed from "np0005634532.novalocal" to "compute-0"
Mar  1 04:17:43 np0005634532 systemd[1]: Starting Network Manager Script Dispatcher Service...
Mar  1 04:17:43 np0005634532 systemd[1]: Started Network Manager Script Dispatcher Service.
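
The hostname task (use=systemd) goes through systemd-hostnamed, which is why hostnamed, NetworkManager, and the dispatcher service all react above. By hand this is simply:

    # equivalent of ansible.builtin.hostname with use=systemd
    hostnamectl set-hostname compute-0
    hostnamectl status   # static hostname: compute-0
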
Mar  1 04:17:44 np0005634532 systemd[1]: session-6.scope: Deactivated successfully.
Mar  1 04:17:44 np0005634532 systemd[1]: session-6.scope: Consumed 2.429s CPU time.
Mar  1 04:17:44 np0005634532 systemd-logind[832]: Session 6 logged out. Waiting for processes to exit.
Mar  1 04:17:44 np0005634532 systemd-logind[832]: Removed session 6.
Mar  1 04:17:48 np0005634532 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Mar  1 04:17:48 np0005634532 systemd[1]: Finished man-db-cache-update.service.
Mar  1 04:17:48 np0005634532 systemd[1]: man-db-cache-update.service: Consumed 48.688s CPU time.
Mar  1 04:17:48 np0005634532 systemd[1]: run-r7c315a2a84b14af5a94ecef936b86628.service: Deactivated successfully.
Mar  1 04:17:53 np0005634532 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Mar  1 04:18:13 np0005634532 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Mar  1 04:21:05 np0005634532 systemd-logind[832]: New session 7 of user zuul.
Mar  1 04:21:05 np0005634532 systemd[1]: Started Session 7 of User zuul.
Mar  1 04:21:06 np0005634532 python3[30666]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Mar  1 04:21:07 np0005634532 python3[30782]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Mar  1 04:21:08 np0005634532 python3[30855]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1772356867.3778462-34643-149495207561684/source mode=0755 _original_basename=delorean.repo follow=False checksum=c7624fe5e858d4139de1ac159778eb6fd097c2ca backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:21:08 np0005634532 python3[30881]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean-antelope-testing.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Mar  1 04:21:08 np0005634532 python3[30954]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1772356867.3778462-34643-149495207561684/source mode=0755 _original_basename=delorean-antelope-testing.repo follow=False checksum=4ebc56dead962b5d40b8d420dad43b948b84d3fc backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:21:08 np0005634532 python3[30980]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-highavailability.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Mar  1 04:21:09 np0005634532 python3[31053]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1772356867.3778462-34643-149495207561684/source mode=0755 _original_basename=repo-setup-centos-highavailability.repo follow=False checksum=55d0f695fd0d8f47cbc3044ce0dcf5f88862490f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:21:09 np0005634532 python3[31079]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-powertools.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Mar  1 04:21:09 np0005634532 python3[31152]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1772356867.3778462-34643-149495207561684/source mode=0755 _original_basename=repo-setup-centos-powertools.repo follow=False checksum=4b0cf99aa89c5c5be0151545863a7a7568f67568 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:21:10 np0005634532 python3[31178]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-appstream.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Mar  1 04:21:10 np0005634532 python3[31251]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1772356867.3778462-34643-149495207561684/source mode=0755 _original_basename=repo-setup-centos-appstream.repo follow=False checksum=e89244d2503b2996429dda1857290c1e91e393a1 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:21:10 np0005634532 python3[31277]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-baseos.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Mar  1 04:21:11 np0005634532 python3[31350]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1772356867.3778462-34643-149495207561684/source mode=0755 _original_basename=repo-setup-centos-baseos.repo follow=False checksum=36d926db23a40dbfa5c84b5e4d43eac6fa2301d6 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:21:11 np0005634532 python3[31376]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo.md5 follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Mar  1 04:21:11 np0005634532 python3[31449]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1772356867.3778462-34643-149495207561684/source mode=0755 _original_basename=delorean.repo.md5 follow=False checksum=06a0a916cb7cbc51b08d6616a672f1322305cccf backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
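
Each repo file is stat'ed and then copied from a per-task temp dir under /home/zuul/.ansible/tmp. Ignoring the Ansible transport, the net effect is the loop below ($STAGE standing in for those temp dirs, a hypothetical placeholder):

    # net effect of the copy tasks above
    for f in delorean.repo delorean-antelope-testing.repo \
             repo-setup-centos-{highavailability,powertools,appstream,baseos}.repo \
             delorean.repo.md5; do
      install -m 0755 "$STAGE/$f" /etc/yum.repos.d/
    done

mode=0755 on plain .repo files is unusual (0644 is conventional) but is what the tasks request.
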
Mar  1 04:21:21 np0005634532 python3[31507]: ansible-ansible.legacy.command Invoked with _raw_params=hostname _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Mar  1 04:26:21 np0005634532 systemd[1]: session-7.scope: Deactivated successfully.
Mar  1 04:26:21 np0005634532 systemd[1]: session-7.scope: Consumed 4.785s CPU time.
Mar  1 04:26:21 np0005634532 systemd-logind[832]: Session 7 logged out. Waiting for processes to exit.
Mar  1 04:26:21 np0005634532 systemd-logind[832]: Removed session 7.
Mar  1 04:32:41 np0005634532 systemd-logind[832]: New session 8 of user zuul.
Mar  1 04:32:42 np0005634532 systemd[1]: Started Session 8 of User zuul.
Mar  1 04:32:42 np0005634532 python3.9[31783]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Mar  1 04:32:44 np0005634532 python3.9[31969]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail#012pushd /var/tmp#012curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz#012pushd repo-setup-main#012python3 -m venv ./venv#012PBR_VERSION=0.0.0 ./venv/bin/pip install ./#012./venv/bin/repo-setup current-podified -b antelope#012popd#012rm -rf repo-setup-main#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
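
The #012 sequences in the _raw_params above are syslog-escaped newlines. Decoded, the task ran this script verbatim:

    set -euxo pipefail
    pushd /var/tmp
    curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz
    pushd repo-setup-main
    python3 -m venv ./venv
    PBR_VERSION=0.0.0 ./venv/bin/pip install ./
    ./venv/bin/repo-setup current-podified -b antelope
    popd
    rm -rf repo-setup-main
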
Mar  1 04:32:51 np0005634532 systemd[1]: session-8.scope: Deactivated successfully.
Mar  1 04:32:51 np0005634532 systemd[1]: session-8.scope: Consumed 7.636s CPU time.
Mar  1 04:32:51 np0005634532 systemd-logind[832]: Session 8 logged out. Waiting for processes to exit.
Mar  1 04:32:51 np0005634532 systemd-logind[832]: Removed session 8.
Mar  1 04:33:07 np0005634532 systemd-logind[832]: New session 9 of user zuul.
Mar  1 04:33:07 np0005634532 systemd[1]: Started Session 9 of User zuul.
Mar  1 04:33:08 np0005634532 python3.9[32190]: ansible-ansible.legacy.ping Invoked with data=pong
Mar  1 04:33:09 np0005634532 python3.9[32364]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Mar  1 04:33:10 np0005634532 python3.9[32517]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Mar  1 04:33:11 np0005634532 python3.9[32671]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Mar  1 04:33:12 np0005634532 python3.9[32824]: ansible-ansible.builtin.file Invoked with mode=755 path=/etc/ansible/facts.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:33:13 np0005634532 python3.9[32979]: ansible-ansible.legacy.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Mar  1 04:33:13 np0005634532 python3.9[33103]: ansible-ansible.legacy.copy Invoked with dest=/etc/ansible/facts.d/bootc.fact mode=755 src=/home/zuul/.ansible/tmp/ansible-tmp-1772357592.76168-172-160305521203743/.source.fact _original_basename=bootc.fact follow=False checksum=eb4122ce7fc50a38407beb511c4ff8c178005b12 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:33:14 np0005634532 python3.9[33256]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Mar  1 04:33:15 np0005634532 python3.9[33413]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Mar  1 04:33:16 np0005634532 python3.9[33566]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Mar  1 04:33:17 np0005634532 python3.9[33716]: ansible-ansible.builtin.service_facts Invoked
Mar  1 04:33:21 np0005634532 python3.9[33972]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:33:21 np0005634532 python3.9[34122]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Mar  1 04:33:22 np0005634532 python3.9[34276]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Mar  1 04:33:23 np0005634532 python3.9[34435]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Mar  1 04:33:24 np0005634532 python3.9[34520]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Mar  1 04:34:10 np0005634532 systemd[1]: Reloading.
Mar  1 04:34:10 np0005634532 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Mar  1 04:34:10 np0005634532 systemd[1]: Listening on Device-mapper event daemon FIFOs.
Mar  1 04:34:10 np0005634532 systemd[1]: Reloading.
Mar  1 04:34:10 np0005634532 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Mar  1 04:34:10 np0005634532 systemd[1]: Starting dnf makecache...
Mar  1 04:34:10 np0005634532 systemd[1]: Starting Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling...
Mar  1 04:34:10 np0005634532 systemd[1]: Finished Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling.
Mar  1 04:34:10 np0005634532 systemd[1]: Reloading.
Mar  1 04:34:10 np0005634532 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Mar  1 04:34:11 np0005634532 dnf[34799]: Failed determining last makecache time.
Mar  1 04:34:11 np0005634532 dnf[34799]: delorean-openstack-barbican-42b4c41831408a8e323 149 kB/s | 3.0 kB     00:00
Mar  1 04:34:11 np0005634532 systemd[1]: Listening on LVM2 poll daemon socket.
Mar  1 04:34:11 np0005634532 dnf[34799]: delorean-python-glean-642fffe0203a8ffcc2443db52 175 kB/s | 3.0 kB     00:00
Mar  1 04:34:11 np0005634532 dnf[34799]: delorean-openstack-cinder-e95a374f4f00ef02d562d 158 kB/s | 3.0 kB     00:00
Mar  1 04:34:11 np0005634532 dnf[34799]: delorean-python-stevedore-c4acc5639fd2329372142 190 kB/s | 3.0 kB     00:00
Mar  1 04:34:11 np0005634532 dnf[34799]: delorean-python-cloudkitty-tests-tempest-ef9563 192 kB/s | 3.0 kB     00:00
Mar  1 04:34:11 np0005634532 dnf[34799]: delorean-diskimage-builder-cbb4478c143869181ba9 198 kB/s | 3.0 kB     00:00
Mar  1 04:34:11 np0005634532 dnf[34799]: delorean-openstack-nova-5cfeecbf22fca58822607dd 191 kB/s | 3.0 kB     00:00
Mar  1 04:34:11 np0005634532 dnf[34799]: delorean-python-designate-tests-tempest-347fdbc 191 kB/s | 3.0 kB     00:00
Mar  1 04:34:11 np0005634532 dbus-broker-launch[822]: Noticed file-system modification, trigger reload.
Mar  1 04:34:11 np0005634532 dbus-broker-launch[822]: Noticed file-system modification, trigger reload.
Mar  1 04:34:11 np0005634532 dbus-broker-launch[822]: Noticed file-system modification, trigger reload.
Mar  1 04:34:11 np0005634532 dnf[34799]: delorean-openstack-glance-1fd12c29b339f30fe823e 159 kB/s | 3.0 kB     00:00
Mar  1 04:34:11 np0005634532 dnf[34799]: delorean-openstack-keystone-e4b40af0ae3698fbbbb 182 kB/s | 3.0 kB     00:00
Mar  1 04:34:11 np0005634532 dnf[34799]: delorean-openstack-manila-8fa2b5793100022b4d0f6 177 kB/s | 3.0 kB     00:00
Mar  1 04:34:11 np0005634532 dnf[34799]: delorean-python-whitebox-neutron-tests-tempest- 181 kB/s | 3.0 kB     00:00
Mar  1 04:34:11 np0005634532 dnf[34799]: delorean-openstack-octavia-76dfc1e35cf7f4dd6102 163 kB/s | 3.0 kB     00:00
Mar  1 04:34:11 np0005634532 dnf[34799]: delorean-openstack-watcher-c014f81a8647287f6dcc 161 kB/s | 3.0 kB     00:00
Mar  1 04:34:11 np0005634532 dnf[34799]: delorean-python-tcib-b403f1051724db0286e1418f59 154 kB/s | 3.0 kB     00:00
Mar  1 04:34:11 np0005634532 dnf[34799]: delorean-puppet-ceph-7352068d7b8c84ded636ab3158 182 kB/s | 3.0 kB     00:00
Mar  1 04:34:11 np0005634532 dnf[34799]: delorean-openstack-swift-dc98a8463506ac520c469a 184 kB/s | 3.0 kB     00:00
Mar  1 04:34:11 np0005634532 dnf[34799]: delorean-python-tempestconf-8e33668cda707818ee1 164 kB/s | 3.0 kB     00:00
Mar  1 04:34:11 np0005634532 dnf[34799]: delorean-openstack-heat-ui-013accbfd179753bc3f0 166 kB/s | 3.0 kB     00:00
Mar  1 04:34:11 np0005634532 dnf[34799]: CentOS Stream 9 - BaseOS                         57 kB/s | 7.0 kB     00:00
Mar  1 04:34:11 np0005634532 dnf[34799]: CentOS Stream 9 - AppStream                      54 kB/s | 7.1 kB     00:00
Mar  1 04:34:11 np0005634532 dnf[34799]: CentOS Stream 9 - CRB                            74 kB/s | 6.9 kB     00:00
Mar  1 04:34:12 np0005634532 dnf[34799]: CentOS Stream 9 - Extras packages                59 kB/s | 7.6 kB     00:00
Mar  1 04:34:12 np0005634532 dnf[34799]: dlrn-antelope-testing                           137 kB/s | 3.0 kB     00:00
Mar  1 04:34:12 np0005634532 dnf[34799]: dlrn-antelope-build-deps                        144 kB/s | 3.0 kB     00:00
Mar  1 04:34:12 np0005634532 dnf[34799]: centos9-rabbitmq                                120 kB/s | 3.0 kB     00:00
Mar  1 04:34:12 np0005634532 dnf[34799]: centos9-storage                                 121 kB/s | 3.0 kB     00:00
Mar  1 04:34:12 np0005634532 dnf[34799]: centos9-opstools                                154 kB/s | 3.0 kB     00:00
Mar  1 04:34:12 np0005634532 dnf[34799]: NFV SIG OpenvSwitch                             134 kB/s | 3.0 kB     00:00
Mar  1 04:34:12 np0005634532 dnf[34799]: repo-setup-centos-appstream                     170 kB/s | 4.4 kB     00:00
Mar  1 04:34:12 np0005634532 dnf[34799]: repo-setup-centos-baseos                        150 kB/s | 3.9 kB     00:00
Mar  1 04:34:12 np0005634532 dnf[34799]: repo-setup-centos-highavailability              159 kB/s | 3.9 kB     00:00
Mar  1 04:34:12 np0005634532 dnf[34799]: repo-setup-centos-powertools                    203 kB/s | 4.3 kB     00:00
Mar  1 04:34:12 np0005634532 dnf[34799]: Extra Packages for Enterprise Linux 9 - x86_64   98 kB/s |  31 kB     00:00
Mar  1 04:34:13 np0005634532 dnf[34799]: Metadata cache created.
Mar  1 04:34:13 np0005634532 systemd[1]: dnf-makecache.service: Deactivated successfully.
Mar  1 04:34:13 np0005634532 systemd[1]: Finished dnf makecache.
Mar  1 04:34:13 np0005634532 systemd[1]: dnf-makecache.service: Consumed 1.779s CPU time.
Mar  1 04:35:13 np0005634532 kernel: SELinux:  Converting 2726 SID table entries...
Mar  1 04:35:13 np0005634532 kernel: SELinux:  policy capability network_peer_controls=1
Mar  1 04:35:13 np0005634532 kernel: SELinux:  policy capability open_perms=1
Mar  1 04:35:13 np0005634532 kernel: SELinux:  policy capability extended_socket_class=1
Mar  1 04:35:13 np0005634532 kernel: SELinux:  policy capability always_check_network=0
Mar  1 04:35:13 np0005634532 kernel: SELinux:  policy capability cgroup_seclabel=1
Mar  1 04:35:13 np0005634532 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Mar  1 04:35:13 np0005634532 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Mar  1 04:35:13 np0005634532 dbus-broker-launch[823]: avc:  op=load_policy lsm=selinux seqno=6 res=1
Mar  1 04:35:13 np0005634532 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Mar  1 04:35:13 np0005634532 systemd[1]: Starting man-db-cache-update.service...
Mar  1 04:35:13 np0005634532 systemd[1]: Reloading.
Mar  1 04:35:13 np0005634532 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Mar  1 04:35:13 np0005634532 systemd[1]: Queuing reload/restart jobs for marked units…
Mar  1 04:35:14 np0005634532 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Mar  1 04:35:14 np0005634532 systemd[1]: Finished man-db-cache-update.service.
Mar  1 04:35:14 np0005634532 systemd[1]: man-db-cache-update.service: Consumed 1.086s CPU time.
Mar  1 04:35:14 np0005634532 systemd[1]: run-rca0b6dd1d66344b0bb0d719cb180e5b9.service: Deactivated successfully.
Mar  1 04:35:14 np0005634532 python3.9[36141]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Mar  1 04:35:16 np0005634532 python3.9[36424]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Mar  1 04:35:17 np0005634532 python3.9[36577]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Mar  1 04:35:19 np0005634532 python3.9[36733]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:35:20 np0005634532 python3.9[36886]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Mar  1 04:35:21 np0005634532 python3.9[37039]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Mar  1 04:35:22 np0005634532 python3.9[37192]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Mar  1 04:35:23 np0005634532 python3.9[37316]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1772357722.3503652-661-122457868845669/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=e7087dcbd00c474c0b71f894339b789f0dd6e51a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:35:24 np0005634532 python3.9[37469]: ansible-ansible.builtin.stat Invoked with path=/etc/lvm/devices/system.devices follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Mar  1 04:35:29 np0005634532 python3.9[37622]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/vgimportdevices --all _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Mar  1 04:35:29 np0005634532 python3.9[37776]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/lvm/devices/system.devices state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
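
vgimportdevices --all populates /etc/lvm/devices/system.devices with the PVs of any volume groups it discovers; the follow-up touch guarantees the file exists even when nothing was found, which makes LVM restrict itself to the devices listed there (the default behaviour on RHEL 9 / CentOS Stream 9 once the file exists). Roughly:

    # sketch of the two tasks above
    /usr/sbin/vgimportdevices --all
    [ -e /etc/lvm/devices/system.devices ] || touch /etc/lvm/devices/system.devices
    chmod 0600 /etc/lvm/devices/system.devices
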
Mar  1 04:35:31 np0005634532 python3.9[37929]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Mar  1 04:35:31 np0005634532 python3.9[38083]: ansible-ansible.builtin.group Invoked with gid=107 name=qemu state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Mar  1 04:35:31 np0005634532 rsyslogd[1019]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Mar  1 04:35:32 np0005634532 python3.9[38243]: ansible-ansible.builtin.user Invoked with comment=qemu user group=qemu groups=[''] name=qemu shell=/sbin/nologin state=present uid=107 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Mar  1 04:35:33 np0005634532 python3.9[38404]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Mar  1 04:35:34 np0005634532 python3.9[38558]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Mar  1 04:35:35 np0005634532 python3.9[38717]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Mar  1 04:35:35 np0005634532 python3.9[38870]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Mar  1 04:35:38 np0005634532 python3.9[39029]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Mar  1 04:35:38 np0005634532 python3.9[39182]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Mar  1 04:35:39 np0005634532 python3.9[39306]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1772357738.4752746-1018-128299837984261/.source.conf follow=False _original_basename=edpm-modprobe.conf.j2 checksum=8021efe01721d8fa8cab46b95c00ec1be6dbb9d0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Mar  1 04:35:40 np0005634532 python3.9[39459]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Mar  1 04:35:40 np0005634532 systemd[1]: Starting Load Kernel Modules...
Mar  1 04:35:40 np0005634532 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Mar  1 04:35:40 np0005634532 kernel: Bridge firewalling registered
Mar  1 04:35:40 np0005634532 systemd-modules-load[39463]: Inserted module 'br_netfilter'
Mar  1 04:35:40 np0005634532 systemd[1]: Finished Load Kernel Modules.
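
Restarting systemd-modules-load picks up the new 99-edpm.conf drop-in; the only module the log shows being inserted is br_netfilter, which registers bridge firewalling (the drop-in's full contents are not logged). Manual equivalent with a quick check:

    modprobe br_netfilter
    sysctl net.bridge.bridge-nf-call-iptables   # key only resolvable once br_netfilter is loaded
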
Mar  1 04:35:41 np0005634532 python3.9[39621]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Mar  1 04:35:41 np0005634532 python3.9[39745]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysctl.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1772357740.803723-1087-117433995292078/.source.conf follow=False _original_basename=edpm-sysctl.conf.j2 checksum=2a366439721b855adcfe4d7f152babb68596a007 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Mar  1 04:35:42 np0005634532 python3.9[39898]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Mar  1 04:35:45 np0005634532 dbus-broker-launch[822]: Noticed file-system modification, trigger reload.
Mar  1 04:35:46 np0005634532 dbus-broker-launch[822]: Noticed file-system modification, trigger reload.
Mar  1 04:35:46 np0005634532 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Mar  1 04:35:46 np0005634532 systemd[1]: Starting man-db-cache-update.service...
Mar  1 04:35:46 np0005634532 systemd[1]: Reloading.
Mar  1 04:35:46 np0005634532 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Mar  1 04:35:46 np0005634532 systemd[1]: Queuing reload/restart jobs for marked units…
Mar  1 04:35:49 np0005634532 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Mar  1 04:35:49 np0005634532 systemd[1]: Finished man-db-cache-update.service.
Mar  1 04:35:49 np0005634532 systemd[1]: man-db-cache-update.service: Consumed 4.479s CPU time.
Mar  1 04:35:49 np0005634532 systemd[1]: run-r58a05b2178244a71a02e9faaf61e79cb.service: Deactivated successfully.
Mar  1 04:35:50 np0005634532 python3.9[43712]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Mar  1 04:35:50 np0005634532 python3.9[43866]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Mar  1 04:35:51 np0005634532 python3.9[44016]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Mar  1 04:35:52 np0005634532 python3.9[44169]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/tuned-adm profile throughput-performance _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Mar  1 04:35:52 np0005634532 systemd[1]: Starting Dynamic System Tuning Daemon...
Mar  1 04:35:52 np0005634532 systemd[1]: Starting Authorization Manager...
Mar  1 04:35:52 np0005634532 systemd[1]: Started Dynamic System Tuning Daemon.
Mar  1 04:35:52 np0005634532 polkitd[44386]: Started polkitd version 0.117
Mar  1 04:35:52 np0005634532 systemd[1]: Started Authorization Manager.
Mar  1 04:35:53 np0005634532 python3.9[44557]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Mar  1 04:35:53 np0005634532 systemd[1]: Stopping Dynamic System Tuning Daemon...
Mar  1 04:35:53 np0005634532 systemd[1]: tuned.service: Deactivated successfully.
Mar  1 04:35:53 np0005634532 systemd[1]: Stopped Dynamic System Tuning Daemon.
Mar  1 04:35:53 np0005634532 systemd[1]: Starting Dynamic System Tuning Daemon...
Mar  1 04:35:54 np0005634532 systemd[1]: Started Dynamic System Tuning Daemon.
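
Net effect of the tuned tasks: the throughput-performance profile is activated, then tuned is enabled and restarted. Equivalent by hand:

    /usr/sbin/tuned-adm profile throughput-performance
    systemctl enable --now tuned
    tuned-adm active                  # Current active profile: throughput-performance
    cat /etc/tuned/active_profile     # the file the earlier stat/slurp tasks inspected
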
Mar  1 04:35:54 np0005634532 python3.9[44719]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Mar  1 04:35:58 np0005634532 python3.9[44872]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Mar  1 04:35:58 np0005634532 systemd[1]: Reloading.
Mar  1 04:35:58 np0005634532 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Mar  1 04:35:59 np0005634532 python3.9[45069]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Mar  1 04:35:59 np0005634532 systemd[1]: Reloading.
Mar  1 04:35:59 np0005634532 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Mar  1 04:36:00 np0005634532 python3.9[45266]: ansible-ansible.legacy.command Invoked with _raw_params=mkswap "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Mar  1 04:36:00 np0005634532 python3.9[45420]: ansible-ansible.legacy.command Invoked with _raw_params=swapon "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Mar  1 04:36:00 np0005634532 kernel: Adding 1048572k swap on /swap.  Priority:-2 extents:1 across:1048572k 
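
This completes the file-backed swap prepared at 04:35:17: a 1 GiB zero-filled file, 0600 root:root, an fstab entry recorded by ansible.posix.mount, then mkswap and swapon. End to end:

    dd if=/dev/zero of=/swap count=1024 bs=1M    # 1 GiB
    chmod 0600 /swap && chown root: /swap
    echo '/swap none swap sw 0 0' >> /etc/fstab  # what state=present records
    mkswap /swap
    swapon /swap
    swapon --show                                # verify: /swap file 1024M
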
Mar  1 04:36:01 np0005634532 python3.9[45574]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/bin/update-ca-trust _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Mar  1 04:36:03 np0005634532 python3.9[45737]: ansible-ansible.legacy.command Invoked with _raw_params=echo 2 >/sys/kernel/mm/ksm/run _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
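
Writing 2 to /sys/kernel/mm/ksm/run stops ksmd and unmerges all currently merged pages (0 = stop, 1 = run), which pairs with the ksm.service and ksmtuned.service units disabled just before:

    echo 2 > /sys/kernel/mm/ksm/run
    cat /sys/kernel/mm/ksm/pages_shared   # drops to 0 once unmerging completes
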
Mar  1 04:36:04 np0005634532 python3.9[45891]: ansible-ansible.builtin.systemd Invoked with name=systemd-sysctl.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Mar  1 04:36:04 np0005634532 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Mar  1 04:36:04 np0005634532 systemd[1]: Stopped Apply Kernel Variables.
Mar  1 04:36:04 np0005634532 systemd[1]: Stopping Apply Kernel Variables...
Mar  1 04:36:04 np0005634532 systemd[1]: Starting Apply Kernel Variables...
Mar  1 04:36:04 np0005634532 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Mar  1 04:36:04 np0005634532 systemd[1]: Finished Apply Kernel Variables.
Mar  1 04:36:04 np0005634532 systemd[1]: session-9.scope: Deactivated successfully.
Mar  1 04:36:04 np0005634532 systemd[1]: session-9.scope: Consumed 2min 6.218s CPU time.
Mar  1 04:36:04 np0005634532 systemd-logind[832]: Session 9 logged out. Waiting for processes to exit.
Mar  1 04:36:04 np0005634532 systemd-logind[832]: Removed session 9.
Mar  1 04:36:10 np0005634532 systemd-logind[832]: New session 10 of user zuul.
Mar  1 04:36:10 np0005634532 systemd[1]: Started Session 10 of User zuul.
Mar  1 04:36:11 np0005634532 python3.9[46078]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Mar  1 04:36:12 np0005634532 python3.9[46235]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Mar  1 04:36:13 np0005634532 python3.9[46389]: ansible-ansible.builtin.group Invoked with gid=42476 name=openvswitch state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Mar  1 04:36:14 np0005634532 python3.9[46548]: ansible-ansible.builtin.user Invoked with comment=openvswitch user group=openvswitch groups=['hugetlbfs'] name=openvswitch shell=/sbin/nologin state=present uid=42476 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Mar  1 04:36:15 np0005634532 python3.9[46710]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Mar  1 04:36:16 np0005634532 python3.9[46795]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Mar  1 04:36:22 np0005634532 python3.9[46960]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Mar  1 04:36:34 np0005634532 kernel: SELinux:  Converting 2738 SID table entries...
Mar  1 04:36:34 np0005634532 kernel: SELinux:  policy capability network_peer_controls=1
Mar  1 04:36:34 np0005634532 kernel: SELinux:  policy capability open_perms=1
Mar  1 04:36:34 np0005634532 kernel: SELinux:  policy capability extended_socket_class=1
Mar  1 04:36:34 np0005634532 kernel: SELinux:  policy capability always_check_network=0
Mar  1 04:36:34 np0005634532 kernel: SELinux:  policy capability cgroup_seclabel=1
Mar  1 04:36:34 np0005634532 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Mar  1 04:36:34 np0005634532 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Mar  1 04:36:34 np0005634532 dbus-broker-launch[823]: avc:  op=load_policy lsm=selinux seqno=7 res=1
Mar  1 04:36:34 np0005634532 systemd[1]: Started daily update of the root trust anchor for DNSSEC.
Mar  1 04:36:35 np0005634532 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Mar  1 04:36:35 np0005634532 systemd[1]: Starting man-db-cache-update.service...
Mar  1 04:36:36 np0005634532 systemd[1]: Reloading.
Mar  1 04:36:36 np0005634532 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Mar  1 04:36:36 np0005634532 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Mar  1 04:36:36 np0005634532 systemd[1]: Queuing reload/restart jobs for marked units…
Mar  1 04:36:36 np0005634532 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Mar  1 04:36:36 np0005634532 systemd[1]: Finished man-db-cache-update.service.
Mar  1 04:36:36 np0005634532 systemd[1]: run-rca56130f05a148eebae97b75e195fa23.service: Deactivated successfully.
Mar  1 04:36:37 np0005634532 python3.9[48095]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Mar  1 04:36:37 np0005634532 systemd[1]: Reloading.
Mar  1 04:36:37 np0005634532 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Mar  1 04:36:37 np0005634532 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Mar  1 04:36:37 np0005634532 systemd[1]: Starting Open vSwitch Database Unit...
Mar  1 04:36:37 np0005634532 chown[48143]: /usr/bin/chown: cannot access '/run/openvswitch': No such file or directory
Mar  1 04:36:38 np0005634532 ovs-ctl[48148]: /etc/openvswitch/conf.db does not exist ... (warning).
Mar  1 04:36:38 np0005634532 ovs-ctl[48148]: Creating empty database /etc/openvswitch/conf.db [  OK  ]
Mar  1 04:36:38 np0005634532 ovs-ctl[48148]: Starting ovsdb-server [  OK  ]
Mar  1 04:36:38 np0005634532 ovs-vsctl[48197]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait -- init -- set Open_vSwitch . db-version=8.5.1
Mar  1 04:36:38 np0005634532 ovs-vsctl[48217]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait set Open_vSwitch . ovs-version=3.3.5-115.el9s "external-ids:system-id=\"90b7dc66-b984-4d8b-9541-ddde79c5f544\"" "external-ids:rundir=\"/var/run/openvswitch\"" "system-type=\"centos\"" "system-version=\"9\""
Mar  1 04:36:38 np0005634532 ovs-ctl[48148]: Configuring Open vSwitch system IDs [  OK  ]
Mar  1 04:36:38 np0005634532 ovs-vsctl[48223]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Mar  1 04:36:38 np0005634532 ovs-ctl[48148]: Enabling remote OVSDB managers [  OK  ]
Mar  1 04:36:38 np0005634532 systemd[1]: Started Open vSwitch Database Unit.
Mar  1 04:36:38 np0005634532 systemd[1]: Starting Open vSwitch Delete Transient Ports...
Mar  1 04:36:38 np0005634532 systemd[1]: Finished Open vSwitch Delete Transient Ports.
Mar  1 04:36:38 np0005634532 systemd[1]: Starting Open vSwitch Forwarding Unit...
Mar  1 04:36:38 np0005634532 kernel: openvswitch: Open vSwitch switching datapath
Mar  1 04:36:38 np0005634532 ovs-ctl[48268]: Inserting openvswitch module [  OK  ]
Mar  1 04:36:38 np0005634532 ovs-ctl[48237]: Starting ovs-vswitchd [  OK  ]
Mar  1 04:36:38 np0005634532 ovs-ctl[48237]: Enabling remote OVSDB managers [  OK  ]
Mar  1 04:36:38 np0005634532 systemd[1]: Started Open vSwitch Forwarding Unit.
Mar  1 04:36:38 np0005634532 ovs-vsctl[48285]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Mar  1 04:36:38 np0005634532 systemd[1]: Starting Open vSwitch...
Mar  1 04:36:38 np0005634532 systemd[1]: Finished Open vSwitch.
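
First OVS start on this node: ovsdb-server creates an empty conf.db, system IDs and the hostname external-id are set, the openvswitch kernel module is inserted, and ovs-vswitchd comes up. A few post-start sanity checks (sketch):

    ovs-vsctl show                                       # empty config; db-version 8.5.1
    ovs-vsctl get Open_vSwitch . external-ids:hostname   # "compute-0"
    systemctl is-active ovsdb-server ovs-vswitchd openvswitch
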
Mar  1 04:36:39 np0005634532 python3.9[48439]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Mar  1 04:36:41 np0005634532 python3.9[48592]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Mar  1 04:36:42 np0005634532 kernel: SELinux:  Converting 2752 SID table entries...
Mar  1 04:36:42 np0005634532 kernel: SELinux:  policy capability network_peer_controls=1
Mar  1 04:36:42 np0005634532 kernel: SELinux:  policy capability open_perms=1
Mar  1 04:36:42 np0005634532 kernel: SELinux:  policy capability extended_socket_class=1
Mar  1 04:36:42 np0005634532 kernel: SELinux:  policy capability always_check_network=0
Mar  1 04:36:42 np0005634532 kernel: SELinux:  policy capability cgroup_seclabel=1
Mar  1 04:36:42 np0005634532 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Mar  1 04:36:42 np0005634532 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
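
The sefcontext task registers a persistent file-context rule so anything under /var/lib/edpm-config defaults to container_file_t (the policy reload logged above follows its commit, since reload=True); the directory itself is created at 04:36:50. Roughly equivalent to:

    # what community.general.sefcontext plus the later file task amount to (selevel s0)
    semanage fcontext -a -t container_file_t '/var/lib/edpm-config(/.*)?'
    mkdir -p /var/lib/edpm-config
    chown zuul: /var/lib/edpm-config && chmod 0755 /var/lib/edpm-config
    restorecon -Rv /var/lib/edpm-config   # apply the new default label
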
Mar  1 04:36:44 np0005634532 python3.9[48747]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Mar  1 04:36:46 np0005634532 dbus-broker-launch[823]: avc:  op=load_policy lsm=selinux seqno=8 res=1
Mar  1 04:36:46 np0005634532 python3.9[48906]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
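
The ansible.legacy.dnf call above is a plain package transaction with state=present. For illustration, the same install driven from Python through the dnf CLI; the package list is copied verbatim from the logged invocation:

    import subprocess

    PKGS = ["driverctl", "lvm2", "crudini", "jq", "nftables", "NetworkManager",
            "openstack-selinux", "python3-libselinux", "python3-pyyaml", "rsync",
            "tmpwatch", "sysstat", "iproute-tc", "ksmtuned", "systemd-container",
            "crypto-policies-scripts", "grubby", "sos"]

    # state=present: dnf installs only what is missing, current packages are untouched
    subprocess.run(["dnf", "-y", "install", *PKGS], check=True)
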
Mar  1 04:36:48 np0005634532 python3.9[49062]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Mar  1 04:36:50 np0005634532 python3.9[49354]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None attributes=None
Mar  1 04:36:50 np0005634532 python3.9[49504]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Mar  1 04:36:51 np0005634532 python3.9[49659]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Mar  1 04:36:53 np0005634532 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Mar  1 04:36:53 np0005634532 systemd[1]: Starting man-db-cache-update.service...
Mar  1 04:36:53 np0005634532 systemd[1]: Reloading.
Mar  1 04:36:53 np0005634532 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Mar  1 04:36:53 np0005634532 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Mar  1 04:36:53 np0005634532 systemd[1]: Queuing reload/restart jobs for marked units…
Mar  1 04:36:53 np0005634532 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Mar  1 04:36:53 np0005634532 systemd[1]: Finished man-db-cache-update.service.
Mar  1 04:36:53 np0005634532 systemd[1]: run-r10336f1bae384e8b92542f1b5401d333.service: Deactivated successfully.
Mar  1 04:36:54 np0005634532 python3.9[49984]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
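
ansible.builtin.systemd with state=restarted produces the SIGTERM/startup pair in the lines that follow. The minimal equivalent (the module drives systemctl under the hood):

    import subprocess

    # scope=system, state=restarted
    subprocess.run(["systemctl", "restart", "NetworkManager.service"], check=True)
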
Mar  1 04:36:54 np0005634532 systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Mar  1 04:36:54 np0005634532 systemd[1]: Stopped Network Manager Wait Online.
Mar  1 04:36:54 np0005634532 systemd[1]: Stopping Network Manager Wait Online...
Mar  1 04:36:54 np0005634532 systemd[1]: Stopping Network Manager...
Mar  1 04:36:54 np0005634532 NetworkManager[7709]: <info>  [1772357814.5308] caught SIGTERM, shutting down normally.
Mar  1 04:36:54 np0005634532 NetworkManager[7709]: <info>  [1772357814.5323] dhcp4 (eth0): canceled DHCP transaction
Mar  1 04:36:54 np0005634532 NetworkManager[7709]: <info>  [1772357814.5323] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Mar  1 04:36:54 np0005634532 NetworkManager[7709]: <info>  [1772357814.5323] dhcp4 (eth0): state changed no lease
Mar  1 04:36:54 np0005634532 NetworkManager[7709]: <info>  [1772357814.5325] manager: NetworkManager state is now CONNECTED_SITE
Mar  1 04:36:54 np0005634532 NetworkManager[7709]: <info>  [1772357814.5377] exiting (success)
Mar  1 04:36:54 np0005634532 systemd[1]: Starting Network Manager Script Dispatcher Service...
Mar  1 04:36:54 np0005634532 systemd[1]: Started Network Manager Script Dispatcher Service.
Mar  1 04:36:54 np0005634532 systemd[1]: NetworkManager.service: Deactivated successfully.
Mar  1 04:36:54 np0005634532 systemd[1]: Stopped Network Manager.
Mar  1 04:36:54 np0005634532 systemd[1]: NetworkManager.service: Consumed 12.363s CPU time, 4.4M memory peak, read 0B from disk, written 37.0K to disk.
Mar  1 04:36:54 np0005634532 systemd[1]: Starting Network Manager...
Mar  1 04:36:54 np0005634532 NetworkManager[49996]: <info>  [1772357814.5850] NetworkManager (version 1.54.3-2.el9) is starting... (after a restart, boot:67233403-8d31-4a6b-a6aa-c5d04326d053)
Mar  1 04:36:54 np0005634532 NetworkManager[49996]: <info>  [1772357814.5851] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Mar  1 04:36:54 np0005634532 NetworkManager[49996]: <info>  [1772357814.5895] manager[0x562c406f3000]: monitoring kernel firmware directory '/lib/firmware'.
Mar  1 04:36:54 np0005634532 systemd[1]: Starting Hostname Service...
Mar  1 04:36:54 np0005634532 systemd[1]: Started Hostname Service.
Mar  1 04:36:54 np0005634532 NetworkManager[49996]: <info>  [1772357814.6577] hostname: hostname: using hostnamed
Mar  1 04:36:54 np0005634532 NetworkManager[49996]: <info>  [1772357814.6577] hostname: static hostname changed from (none) to "compute-0"
Mar  1 04:36:54 np0005634532 NetworkManager[49996]: <info>  [1772357814.6586] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Mar  1 04:36:54 np0005634532 NetworkManager[49996]: <info>  [1772357814.6591] manager[0x562c406f3000]: rfkill: Wi-Fi hardware radio set enabled
Mar  1 04:36:54 np0005634532 NetworkManager[49996]: <info>  [1772357814.6591] manager[0x562c406f3000]: rfkill: WWAN hardware radio set enabled
Mar  1 04:36:54 np0005634532 NetworkManager[49996]: <info>  [1772357814.6611] Loaded device plugin: NMOvsFactory (/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-device-plugin-ovs.so)
Mar  1 04:36:54 np0005634532 NetworkManager[49996]: <info>  [1772357814.6620] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-device-plugin-team.so)
Mar  1 04:36:54 np0005634532 NetworkManager[49996]: <info>  [1772357814.6621] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Mar  1 04:36:54 np0005634532 NetworkManager[49996]: <info>  [1772357814.6621] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Mar  1 04:36:54 np0005634532 NetworkManager[49996]: <info>  [1772357814.6622] manager: Networking is enabled by state file
Mar  1 04:36:54 np0005634532 NetworkManager[49996]: <info>  [1772357814.6624] settings: Loaded settings plugin: keyfile (internal)
Mar  1 04:36:54 np0005634532 NetworkManager[49996]: <info>  [1772357814.6628] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.3-2.el9/libnm-settings-plugin-ifcfg-rh.so")
Mar  1 04:36:54 np0005634532 NetworkManager[49996]: <info>  [1772357814.6656] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Mar  1 04:36:54 np0005634532 NetworkManager[49996]: <info>  [1772357814.6664] dhcp: init: Using DHCP client 'internal'
Mar  1 04:36:54 np0005634532 NetworkManager[49996]: <info>  [1772357814.6666] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Mar  1 04:36:54 np0005634532 NetworkManager[49996]: <info>  [1772357814.6670] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Mar  1 04:36:54 np0005634532 NetworkManager[49996]: <info>  [1772357814.6674] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Mar  1 04:36:54 np0005634532 NetworkManager[49996]: <info>  [1772357814.6680] device (lo): Activation: starting connection 'lo' (c3703ce3-f4b8-446d-9fc7-2e82b0ccaf00)
Mar  1 04:36:54 np0005634532 NetworkManager[49996]: <info>  [1772357814.6685] device (eth0): carrier: link connected
Mar  1 04:36:54 np0005634532 NetworkManager[49996]: <info>  [1772357814.6688] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Mar  1 04:36:54 np0005634532 NetworkManager[49996]: <info>  [1772357814.6692] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Mar  1 04:36:54 np0005634532 NetworkManager[49996]: <info>  [1772357814.6692] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Mar  1 04:36:54 np0005634532 NetworkManager[49996]: <info>  [1772357814.6697] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Mar  1 04:36:54 np0005634532 NetworkManager[49996]: <info>  [1772357814.6702] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Mar  1 04:36:54 np0005634532 NetworkManager[49996]: <info>  [1772357814.6707] device (eth1): carrier: link connected
Mar  1 04:36:54 np0005634532 NetworkManager[49996]: <info>  [1772357814.6711] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Mar  1 04:36:54 np0005634532 NetworkManager[49996]: <info>  [1772357814.6716] manager: (eth1): assume: will attempt to assume matching connection 'ci-private-network' (df70c8b1-de1e-586c-a971-ac86ce783505) (indicated)
Mar  1 04:36:54 np0005634532 NetworkManager[49996]: <info>  [1772357814.6716] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Mar  1 04:36:54 np0005634532 NetworkManager[49996]: <info>  [1772357814.6722] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Mar  1 04:36:54 np0005634532 NetworkManager[49996]: <info>  [1772357814.6729] device (eth1): Activation: starting connection 'ci-private-network' (df70c8b1-de1e-586c-a971-ac86ce783505)
Mar  1 04:36:54 np0005634532 NetworkManager[49996]: <info>  [1772357814.6734] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Mar  1 04:36:54 np0005634532 systemd[1]: Started Network Manager.
Mar  1 04:36:54 np0005634532 NetworkManager[49996]: <info>  [1772357814.6748] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Mar  1 04:36:54 np0005634532 NetworkManager[49996]: <info>  [1772357814.6750] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Mar  1 04:36:54 np0005634532 NetworkManager[49996]: <info>  [1772357814.6753] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Mar  1 04:36:54 np0005634532 NetworkManager[49996]: <info>  [1772357814.6755] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Mar  1 04:36:54 np0005634532 NetworkManager[49996]: <info>  [1772357814.6758] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Mar  1 04:36:54 np0005634532 NetworkManager[49996]: <info>  [1772357814.6761] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Mar  1 04:36:54 np0005634532 NetworkManager[49996]: <info>  [1772357814.6762] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Mar  1 04:36:54 np0005634532 NetworkManager[49996]: <info>  [1772357814.6766] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Mar  1 04:36:54 np0005634532 NetworkManager[49996]: <info>  [1772357814.6775] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Mar  1 04:36:54 np0005634532 NetworkManager[49996]: <info>  [1772357814.6778] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Mar  1 04:36:54 np0005634532 NetworkManager[49996]: <info>  [1772357814.6787] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Mar  1 04:36:54 np0005634532 NetworkManager[49996]: <info>  [1772357814.6800] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Mar  1 04:36:54 np0005634532 NetworkManager[49996]: <info>  [1772357814.6809] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Mar  1 04:36:54 np0005634532 NetworkManager[49996]: <info>  [1772357814.6811] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Mar  1 04:36:54 np0005634532 NetworkManager[49996]: <info>  [1772357814.6815] device (lo): Activation: successful, device activated.
Mar  1 04:36:54 np0005634532 NetworkManager[49996]: <info>  [1772357814.6823] dhcp4 (eth0): state changed new lease, address=38.102.83.94
Mar  1 04:36:54 np0005634532 NetworkManager[49996]: <info>  [1772357814.6830] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Mar  1 04:36:54 np0005634532 NetworkManager[49996]: <info>  [1772357814.6891] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Mar  1 04:36:54 np0005634532 NetworkManager[49996]: <info>  [1772357814.6895] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Mar  1 04:36:54 np0005634532 NetworkManager[49996]: <info>  [1772357814.6900] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Mar  1 04:36:54 np0005634532 NetworkManager[49996]: <info>  [1772357814.6903] manager: NetworkManager state is now CONNECTED_LOCAL
Mar  1 04:36:54 np0005634532 NetworkManager[49996]: <info>  [1772357814.6904] device (eth1): Activation: successful, device activated.
Mar  1 04:36:54 np0005634532 systemd[1]: Starting Network Manager Wait Online...
Mar  1 04:36:54 np0005634532 NetworkManager[49996]: <info>  [1772357814.6913] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Mar  1 04:36:54 np0005634532 NetworkManager[49996]: <info>  [1772357814.6914] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Mar  1 04:36:54 np0005634532 NetworkManager[49996]: <info>  [1772357814.6916] manager: NetworkManager state is now CONNECTED_SITE
Mar  1 04:36:54 np0005634532 NetworkManager[49996]: <info>  [1772357814.6917] device (eth0): Activation: successful, device activated.
Mar  1 04:36:54 np0005634532 NetworkManager[49996]: <info>  [1772357814.6920] manager: NetworkManager state is now CONNECTED_GLOBAL
Mar  1 04:36:54 np0005634532 NetworkManager[49996]: <info>  [1772357814.6922] manager: startup complete
Mar  1 04:36:54 np0005634532 systemd[1]: Finished Network Manager Wait Online.
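
NetworkManager-wait-online.service is a wrapper around nm-online; the "startup complete" line at 04:36:54 is what lets it finish here. A script can block on the same condition; a sketch assuming nm-online from the NetworkManager package:

    import subprocess

    # -s: wait for NetworkManager startup to complete rather than for a connection,
    # -q: quiet; exit code 0 means the network came up within the timeout
    result = subprocess.run(["nm-online", "-s", "-q", "--timeout=60"])
    print("network online" if result.returncode == 0 else "timed out waiting")
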
Mar  1 04:36:55 np0005634532 python3.9[50213]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Mar  1 04:37:00 np0005634532 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Mar  1 04:37:00 np0005634532 systemd[1]: Starting man-db-cache-update.service...
Mar  1 04:37:00 np0005634532 systemd[1]: Reloading.
Mar  1 04:37:00 np0005634532 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Mar  1 04:37:00 np0005634532 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Mar  1 04:37:00 np0005634532 systemd[1]: Queuing reload/restart jobs for marked units…
Mar  1 04:37:01 np0005634532 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Mar  1 04:37:01 np0005634532 systemd[1]: Finished man-db-cache-update.service.
Mar  1 04:37:01 np0005634532 systemd[1]: run-rc159a9f99bd0487096dd8d4a528dfeaf.service: Deactivated successfully.
Mar  1 04:37:02 np0005634532 python3.9[50696]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Mar  1 04:37:03 np0005634532 python3.9[50849]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=no-auto-default path=/etc/NetworkManager/NetworkManager.conf section=main state=present value=* exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:37:04 np0005634532 python3.9[51004]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:37:04 np0005634532 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Mar  1 04:37:05 np0005634532 python3.9[51157]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:37:05 np0005634532 python3.9[51310]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:37:06 np0005634532 python3.9[51463]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
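
The five ini_file tasks above (04:37:03 to 04:37:06) pin NetworkManager's configuration: ensure no-auto-default=* in the [main] section of NetworkManager.conf, and remove any dns / rc-manager overrides from both that file and conf.d/99-cloud-init.conf, so NM keeps managing /etc/resolv.conf. The end state can be sketched with configparser; unlike the module, this simple sketch preserves neither comments nor backups:

    import configparser

    def edit_main(path, set_opts=(), drop_opts=()):
        cfg = configparser.ConfigParser()
        cfg.read(path)                      # a missing file is treated as empty
        if not cfg.has_section("main"):
            cfg.add_section("main")
        for key, value in set_opts:
            cfg.set("main", key, value)
        for key in drop_opts:
            cfg.remove_option("main", key)
        with open(path, "w") as fh:
            # no_extra_spaces=True in the module ~ no spaces around '='
            cfg.write(fh, space_around_delimiters=False)

    edit_main("/etc/NetworkManager/NetworkManager.conf",
              set_opts=[("no-auto-default", "*")], drop_opts=["dns", "rc-manager"])
    edit_main("/etc/NetworkManager/conf.d/99-cloud-init.conf",
              drop_opts=["dns", "rc-manager"])
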
Mar  1 04:37:06 np0005634532 python3.9[51616]: ansible-ansible.legacy.stat Invoked with path=/etc/dhcp/dhclient-enter-hooks follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Mar  1 04:37:07 np0005634532 python3.9[51740]: ansible-ansible.legacy.copy Invoked with dest=/etc/dhcp/dhclient-enter-hooks mode=0755 src=/home/zuul/.ansible/tmp/ansible-tmp-1772357826.5221028-648-22267193685518/.source _original_basename=._fndrwyu follow=False checksum=f6278a40de79a9841f6ed1fc584538225566990c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:37:08 np0005634532 python3.9[51893]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/os-net-config state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:37:09 np0005634532 python3.9[52046]: ansible-edpm_os_net_config_mappings Invoked with net_config_data_lookup={}
Mar  1 04:37:09 np0005634532 python3.9[52199]: ansible-ansible.builtin.file Invoked with path=/var/lib/edpm-config/scripts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:37:11 np0005634532 python3.9[52629]: ansible-ansible.builtin.slurp Invoked with path=/etc/os-net-config/config.yaml src=/etc/os-net-config/config.yaml
Mar  1 04:37:13 np0005634532 ansible-async_wrapper.py[52805]: Invoked with j849716871305 300 /home/zuul/.ansible/tmp/ansible-tmp-1772357832.2170868-846-176564909867202/AnsiballZ_edpm_os_net_config.py _
Mar  1 04:37:13 np0005634532 ansible-async_wrapper.py[52808]: Starting module and watcher
Mar  1 04:37:13 np0005634532 ansible-async_wrapper.py[52808]: Start watching 52809 (300)
Mar  1 04:37:13 np0005634532 ansible-async_wrapper.py[52809]: Start module (52809)
Mar  1 04:37:13 np0005634532 ansible-async_wrapper.py[52805]: Return async_wrapper task started.
Mar  1 04:37:13 np0005634532 python3.9[52810]: ansible-edpm_os_net_config Invoked with cleanup=True config_file=/etc/os-net-config/config.yaml debug=True detailed_exit_codes=True remove_config=False safe_defaults=False use_nmstate=True purge_provider=
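
The module only slurps /etc/os-net-config/config.yaml, so the actual file contents never appear in this log. Judging from the devices NetworkManager creates next (an OVS bridge br-ex carrying eth1 plus internal interfaces vlan20 to vlan23), a plausible but entirely hypothetical reconstruction is sketched below; the code just parses it to show the shape os-net-config expects:

    import yaml  # python3-pyyaml, installed earlier in this run

    # HYPOTHETICAL example; the real config.yaml is not shown in the log
    EXAMPLE_CONFIG = """
    network_config:
      - type: ovs_bridge
        name: br-ex
        use_dhcp: false
        members:
          - type: interface
            name: eth1
            primary: true
          - type: vlan
            vlan_id: 20
          - type: vlan
            vlan_id: 21
          - type: vlan
            vlan_id: 22
          - type: vlan
            vlan_id: 23
    """

    for item in yaml.safe_load(EXAMPLE_CONFIG)["network_config"]:
        print(item["type"], item.get("name", ""))
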
Mar  1 04:37:13 np0005634532 kernel: cfg80211: Loading compiled-in X.509 certificates for regulatory database
Mar  1 04:37:13 np0005634532 kernel: Loaded X.509 cert 'sforshee: 00b28ddf47aef9cea7'
Mar  1 04:37:13 np0005634532 kernel: Loaded X.509 cert 'wens: 61c038651aabdcf94bd0ac7ff06c7248db18c600'
Mar  1 04:37:13 np0005634532 kernel: platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
Mar  1 04:37:13 np0005634532 kernel: cfg80211: failed to load regulatory.db
Mar  1 04:37:15 np0005634532 NetworkManager[49996]: <info>  [1772357835.1317] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=52811 uid=0 result="success"
Mar  1 04:37:15 np0005634532 NetworkManager[49996]: <info>  [1772357835.1334] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=52811 uid=0 result="success"
Mar  1 04:37:15 np0005634532 NetworkManager[49996]: <info>  [1772357835.1892] manager: (br-ex): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/4)
Mar  1 04:37:15 np0005634532 NetworkManager[49996]: <info>  [1772357835.1894] audit: op="connection-add" uuid="9e5a6966-e499-4366-a066-ccf0f551ae97" name="br-ex-br" pid=52811 uid=0 result="success"
Mar  1 04:37:15 np0005634532 NetworkManager[49996]: <info>  [1772357835.1909] manager: (br-ex): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/5)
Mar  1 04:37:15 np0005634532 NetworkManager[49996]: <info>  [1772357835.1911] audit: op="connection-add" uuid="e124dc4a-c201-42f5-a0d8-5ca3dff44a88" name="br-ex-port" pid=52811 uid=0 result="success"
Mar  1 04:37:15 np0005634532 NetworkManager[49996]: <info>  [1772357835.1924] manager: (eth1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/6)
Mar  1 04:37:15 np0005634532 NetworkManager[49996]: <info>  [1772357835.1926] audit: op="connection-add" uuid="b31b9fe3-609d-4838-9aee-9ba51d6484bb" name="eth1-port" pid=52811 uid=0 result="success"
Mar  1 04:37:15 np0005634532 NetworkManager[49996]: <info>  [1772357835.1939] manager: (vlan20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/7)
Mar  1 04:37:15 np0005634532 NetworkManager[49996]: <info>  [1772357835.1940] audit: op="connection-add" uuid="53ddb72d-d5ee-4ec2-9eb1-0232558cb27b" name="vlan20-port" pid=52811 uid=0 result="success"
Mar  1 04:37:15 np0005634532 NetworkManager[49996]: <info>  [1772357835.1953] manager: (vlan21): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/8)
Mar  1 04:37:15 np0005634532 NetworkManager[49996]: <info>  [1772357835.1955] audit: op="connection-add" uuid="60af7959-6731-48d2-b4c5-e3a80e117370" name="vlan21-port" pid=52811 uid=0 result="success"
Mar  1 04:37:15 np0005634532 NetworkManager[49996]: <info>  [1772357835.1969] manager: (vlan22): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/9)
Mar  1 04:37:15 np0005634532 NetworkManager[49996]: <info>  [1772357835.1970] audit: op="connection-add" uuid="fbe77d9f-a00e-4be4-bc4b-ea134a3a8883" name="vlan22-port" pid=52811 uid=0 result="success"
Mar  1 04:37:15 np0005634532 NetworkManager[49996]: <info>  [1772357835.1983] manager: (vlan23): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/10)
Mar  1 04:37:15 np0005634532 NetworkManager[49996]: <info>  [1772357835.1985] audit: op="connection-add" uuid="348eb108-d6c9-492c-b9f0-5397c9aaa321" name="vlan23-port" pid=52811 uid=0 result="success"
Mar  1 04:37:15 np0005634532 NetworkManager[49996]: <info>  [1772357835.2006] audit: op="connection-update" uuid="5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03" name="System eth0" args="ipv6.addr-gen-mode,ipv6.method,ipv6.dhcp-timeout,ipv4.dhcp-client-id,ipv4.dhcp-timeout,connection.autoconnect-priority,connection.timestamp,802-3-ethernet.mtu" pid=52811 uid=0 result="success"
Mar  1 04:37:15 np0005634532 NetworkManager[49996]: <info>  [1772357835.2023] manager: (br-ex): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/11)
Mar  1 04:37:15 np0005634532 NetworkManager[49996]: <info>  [1772357835.2026] audit: op="connection-add" uuid="c27f32c2-5ea3-40c2-8741-689791669e54" name="br-ex-if" pid=52811 uid=0 result="success"
Mar  1 04:37:15 np0005634532 NetworkManager[49996]: <info>  [1772357835.2064] audit: op="connection-update" uuid="df70c8b1-de1e-586c-a971-ac86ce783505" name="ci-private-network" args="ipv6.routes,ipv6.routing-rules,ipv6.dns,ipv6.addr-gen-mode,ipv6.method,ipv6.addresses,ipv4.routes,ipv4.routing-rules,ipv4.dns,ipv4.method,ipv4.never-default,ipv4.addresses,connection.master,connection.timestamp,connection.slave-type,connection.controller,connection.port-type,ovs-interface.type,ovs-external-ids.data" pid=52811 uid=0 result="success"
Mar  1 04:37:15 np0005634532 NetworkManager[49996]: <info>  [1772357835.2081] manager: (vlan20): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/12)
Mar  1 04:37:15 np0005634532 NetworkManager[49996]: <info>  [1772357835.2083] audit: op="connection-add" uuid="830e3721-a6dd-4522-9aa8-dd08816573a2" name="vlan20-if" pid=52811 uid=0 result="success"
Mar  1 04:37:15 np0005634532 NetworkManager[49996]: <info>  [1772357835.2100] manager: (vlan21): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/13)
Mar  1 04:37:15 np0005634532 NetworkManager[49996]: <info>  [1772357835.2102] audit: op="connection-add" uuid="0c1d5dc5-7190-4afa-a253-533aea0854e1" name="vlan21-if" pid=52811 uid=0 result="success"
Mar  1 04:37:15 np0005634532 NetworkManager[49996]: <info>  [1772357835.2118] manager: (vlan22): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/14)
Mar  1 04:37:15 np0005634532 NetworkManager[49996]: <info>  [1772357835.2121] audit: op="connection-add" uuid="2bbcf9af-746d-4568-9a69-9b5d35a38394" name="vlan22-if" pid=52811 uid=0 result="success"
Mar  1 04:37:15 np0005634532 NetworkManager[49996]: <info>  [1772357835.2139] manager: (vlan23): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/15)
Mar  1 04:37:15 np0005634532 NetworkManager[49996]: <info>  [1772357835.2143] audit: op="connection-add" uuid="41b6be50-b86c-40ac-a6ec-71665f550df7" name="vlan23-if" pid=52811 uid=0 result="success"
Mar  1 04:37:15 np0005634532 NetworkManager[49996]: <info>  [1772357835.2154] audit: op="connection-delete" uuid="fcd1fd55-dd6d-3098-b03a-e2e1ca621882" name="Wired connection 1" pid=52811 uid=0 result="success"
Mar  1 04:37:15 np0005634532 NetworkManager[49996]: <info>  [1772357835.2169] device (br-ex)[Open vSwitch Bridge]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Mar  1 04:37:15 np0005634532 NetworkManager[49996]: <warn>  [1772357835.2173] device (br-ex)[Open vSwitch Bridge]: error setting IPv4 forwarding to '1': Success
Mar  1 04:37:15 np0005634532 NetworkManager[49996]: <info>  [1772357835.2182] device (br-ex)[Open vSwitch Bridge]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Mar  1 04:37:15 np0005634532 NetworkManager[49996]: <info>  [1772357835.2187] device (br-ex)[Open vSwitch Bridge]: Activation: starting connection 'br-ex-br' (9e5a6966-e499-4366-a066-ccf0f551ae97)
Mar  1 04:37:15 np0005634532 NetworkManager[49996]: <info>  [1772357835.2189] audit: op="connection-activate" uuid="9e5a6966-e499-4366-a066-ccf0f551ae97" name="br-ex-br" pid=52811 uid=0 result="success"
Mar  1 04:37:15 np0005634532 NetworkManager[49996]: <info>  [1772357835.2192] device (br-ex)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Mar  1 04:37:15 np0005634532 NetworkManager[49996]: <warn>  [1772357835.2193] device (br-ex)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Mar  1 04:37:15 np0005634532 NetworkManager[49996]: <info>  [1772357835.2201] device (br-ex)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Mar  1 04:37:15 np0005634532 NetworkManager[49996]: <info>  [1772357835.2207] device (br-ex)[Open vSwitch Port]: Activation: starting connection 'br-ex-port' (e124dc4a-c201-42f5-a0d8-5ca3dff44a88)
Mar  1 04:37:15 np0005634532 NetworkManager[49996]: <info>  [1772357835.2210] device (eth1)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Mar  1 04:37:15 np0005634532 NetworkManager[49996]: <warn>  [1772357835.2211] device (eth1)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Mar  1 04:37:15 np0005634532 NetworkManager[49996]: <info>  [1772357835.2217] device (eth1)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Mar  1 04:37:15 np0005634532 NetworkManager[49996]: <info>  [1772357835.2223] device (eth1)[Open vSwitch Port]: Activation: starting connection 'eth1-port' (b31b9fe3-609d-4838-9aee-9ba51d6484bb)
Mar  1 04:37:15 np0005634532 NetworkManager[49996]: <info>  [1772357835.2225] device (vlan20)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Mar  1 04:37:15 np0005634532 NetworkManager[49996]: <warn>  [1772357835.2227] device (vlan20)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Mar  1 04:37:15 np0005634532 NetworkManager[49996]: <info>  [1772357835.2233] device (vlan20)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Mar  1 04:37:15 np0005634532 NetworkManager[49996]: <info>  [1772357835.2239] device (vlan20)[Open vSwitch Port]: Activation: starting connection 'vlan20-port' (53ddb72d-d5ee-4ec2-9eb1-0232558cb27b)
Mar  1 04:37:15 np0005634532 NetworkManager[49996]: <info>  [1772357835.2242] device (vlan21)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Mar  1 04:37:15 np0005634532 NetworkManager[49996]: <warn>  [1772357835.2243] device (vlan21)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Mar  1 04:37:15 np0005634532 NetworkManager[49996]: <info>  [1772357835.2250] device (vlan21)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Mar  1 04:37:15 np0005634532 NetworkManager[49996]: <info>  [1772357835.2255] device (vlan21)[Open vSwitch Port]: Activation: starting connection 'vlan21-port' (60af7959-6731-48d2-b4c5-e3a80e117370)
Mar  1 04:37:15 np0005634532 NetworkManager[49996]: <info>  [1772357835.2258] device (vlan22)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Mar  1 04:37:15 np0005634532 NetworkManager[49996]: <warn>  [1772357835.2259] device (vlan22)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Mar  1 04:37:15 np0005634532 NetworkManager[49996]: <info>  [1772357835.2266] device (vlan22)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Mar  1 04:37:15 np0005634532 NetworkManager[49996]: <info>  [1772357835.2271] device (vlan22)[Open vSwitch Port]: Activation: starting connection 'vlan22-port' (fbe77d9f-a00e-4be4-bc4b-ea134a3a8883)
Mar  1 04:37:15 np0005634532 NetworkManager[49996]: <info>  [1772357835.2275] device (vlan23)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Mar  1 04:37:15 np0005634532 NetworkManager[49996]: <warn>  [1772357835.2277] device (vlan23)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Mar  1 04:37:15 np0005634532 NetworkManager[49996]: <info>  [1772357835.2285] device (vlan23)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Mar  1 04:37:15 np0005634532 NetworkManager[49996]: <info>  [1772357835.2292] device (vlan23)[Open vSwitch Port]: Activation: starting connection 'vlan23-port' (348eb108-d6c9-492c-b9f0-5397c9aaa321)
Mar  1 04:37:15 np0005634532 NetworkManager[49996]: <info>  [1772357835.2293] device (br-ex)[Open vSwitch Bridge]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Mar  1 04:37:15 np0005634532 NetworkManager[49996]: <info>  [1772357835.2296] device (br-ex)[Open vSwitch Bridge]: state change: prepare -> config (reason 'none', managed-type: 'full')
Mar  1 04:37:15 np0005634532 NetworkManager[49996]: <info>  [1772357835.2299] device (br-ex)[Open vSwitch Bridge]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Mar  1 04:37:15 np0005634532 NetworkManager[49996]: <info>  [1772357835.2308] device (br-ex)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Mar  1 04:37:15 np0005634532 NetworkManager[49996]: <warn>  [1772357835.2309] device (br-ex)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Mar  1 04:37:15 np0005634532 NetworkManager[49996]: <info>  [1772357835.2314] device (br-ex)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Mar  1 04:37:15 np0005634532 NetworkManager[49996]: <info>  [1772357835.2321] device (br-ex)[Open vSwitch Interface]: Activation: starting connection 'br-ex-if' (c27f32c2-5ea3-40c2-8741-689791669e54)
Mar  1 04:37:15 np0005634532 NetworkManager[49996]: <info>  [1772357835.2323] device (br-ex)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Mar  1 04:37:15 np0005634532 NetworkManager[49996]: <info>  [1772357835.2328] device (br-ex)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Mar  1 04:37:15 np0005634532 NetworkManager[49996]: <info>  [1772357835.2331] device (br-ex)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Mar  1 04:37:15 np0005634532 NetworkManager[49996]: <info>  [1772357835.2333] device (br-ex)[Open vSwitch Port]: Activation: connection 'br-ex-port' attached as port, continuing activation
Mar  1 04:37:15 np0005634532 NetworkManager[49996]: <info>  [1772357835.2335] device (eth1): state change: activated -> deactivating (reason 'new-activation', managed-type: 'full')
Mar  1 04:37:15 np0005634532 NetworkManager[49996]: <info>  [1772357835.2349] device (eth1): disconnecting for new activation request.
Mar  1 04:37:15 np0005634532 NetworkManager[49996]: <info>  [1772357835.2350] device (eth1)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Mar  1 04:37:15 np0005634532 NetworkManager[49996]: <info>  [1772357835.2354] device (eth1)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Mar  1 04:37:15 np0005634532 NetworkManager[49996]: <info>  [1772357835.2356] device (eth1)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Mar  1 04:37:15 np0005634532 NetworkManager[49996]: <info>  [1772357835.2357] device (eth1)[Open vSwitch Port]: Activation: connection 'eth1-port' attached as port, continuing activation
Mar  1 04:37:15 np0005634532 NetworkManager[49996]: <info>  [1772357835.2361] device (vlan20)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Mar  1 04:37:15 np0005634532 NetworkManager[49996]: <warn>  [1772357835.2362] device (vlan20)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Mar  1 04:37:15 np0005634532 NetworkManager[49996]: <info>  [1772357835.2366] device (vlan20)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Mar  1 04:37:15 np0005634532 NetworkManager[49996]: <info>  [1772357835.2373] device (vlan20)[Open vSwitch Interface]: Activation: starting connection 'vlan20-if' (830e3721-a6dd-4522-9aa8-dd08816573a2)
Mar  1 04:37:15 np0005634532 NetworkManager[49996]: <info>  [1772357835.2374] device (vlan20)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Mar  1 04:37:15 np0005634532 NetworkManager[49996]: <info>  [1772357835.2377] device (vlan20)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Mar  1 04:37:15 np0005634532 NetworkManager[49996]: <info>  [1772357835.2381] device (vlan20)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Mar  1 04:37:15 np0005634532 NetworkManager[49996]: <info>  [1772357835.2382] device (vlan20)[Open vSwitch Port]: Activation: connection 'vlan20-port' attached as port, continuing activation
Mar  1 04:37:15 np0005634532 NetworkManager[49996]: <info>  [1772357835.2386] device (vlan21)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Mar  1 04:37:15 np0005634532 NetworkManager[49996]: <warn>  [1772357835.2387] device (vlan21)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Mar  1 04:37:15 np0005634532 NetworkManager[49996]: <info>  [1772357835.2392] device (vlan21)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Mar  1 04:37:15 np0005634532 NetworkManager[49996]: <info>  [1772357835.2399] device (vlan21)[Open vSwitch Interface]: Activation: starting connection 'vlan21-if' (0c1d5dc5-7190-4afa-a253-533aea0854e1)
Mar  1 04:37:15 np0005634532 NetworkManager[49996]: <info>  [1772357835.2400] device (vlan21)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Mar  1 04:37:15 np0005634532 NetworkManager[49996]: <info>  [1772357835.2405] device (vlan21)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Mar  1 04:37:15 np0005634532 NetworkManager[49996]: <info>  [1772357835.2407] device (vlan21)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Mar  1 04:37:15 np0005634532 NetworkManager[49996]: <info>  [1772357835.2409] device (vlan21)[Open vSwitch Port]: Activation: connection 'vlan21-port' attached as port, continuing activation
Mar  1 04:37:15 np0005634532 NetworkManager[49996]: <info>  [1772357835.2414] device (vlan22)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Mar  1 04:37:15 np0005634532 NetworkManager[49996]: <warn>  [1772357835.2415] device (vlan22)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Mar  1 04:37:15 np0005634532 NetworkManager[49996]: <info>  [1772357835.2420] device (vlan22)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Mar  1 04:37:15 np0005634532 NetworkManager[49996]: <info>  [1772357835.2427] device (vlan22)[Open vSwitch Interface]: Activation: starting connection 'vlan22-if' (2bbcf9af-746d-4568-9a69-9b5d35a38394)
Mar  1 04:37:15 np0005634532 NetworkManager[49996]: <info>  [1772357835.2427] device (vlan22)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Mar  1 04:37:15 np0005634532 NetworkManager[49996]: <info>  [1772357835.2432] device (vlan22)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Mar  1 04:37:15 np0005634532 NetworkManager[49996]: <info>  [1772357835.2435] device (vlan22)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Mar  1 04:37:15 np0005634532 NetworkManager[49996]: <info>  [1772357835.2436] device (vlan22)[Open vSwitch Port]: Activation: connection 'vlan22-port' attached as port, continuing activation
Mar  1 04:37:15 np0005634532 NetworkManager[49996]: <info>  [1772357835.2442] device (vlan23)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Mar  1 04:37:15 np0005634532 NetworkManager[49996]: <warn>  [1772357835.2443] device (vlan23)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Mar  1 04:37:15 np0005634532 NetworkManager[49996]: <info>  [1772357835.2448] device (vlan23)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Mar  1 04:37:15 np0005634532 NetworkManager[49996]: <info>  [1772357835.2454] device (vlan23)[Open vSwitch Interface]: Activation: starting connection 'vlan23-if' (41b6be50-b86c-40ac-a6ec-71665f550df7)
Mar  1 04:37:15 np0005634532 NetworkManager[49996]: <info>  [1772357835.2455] device (vlan23)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Mar  1 04:37:15 np0005634532 NetworkManager[49996]: <info>  [1772357835.2461] device (vlan23)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Mar  1 04:37:15 np0005634532 NetworkManager[49996]: <info>  [1772357835.2463] device (vlan23)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Mar  1 04:37:15 np0005634532 NetworkManager[49996]: <info>  [1772357835.2465] device (vlan23)[Open vSwitch Port]: Activation: connection 'vlan23-port' attached as port, continuing activation
Mar  1 04:37:15 np0005634532 NetworkManager[49996]: <info>  [1772357835.2468] device (br-ex)[Open vSwitch Bridge]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Mar  1 04:37:15 np0005634532 NetworkManager[49996]: <info>  [1772357835.2483] audit: op="device-reapply" interface="eth0" ifindex=2 args="ipv6.addr-gen-mode,ipv6.method,ipv4.dhcp-client-id,ipv4.dhcp-timeout,connection.autoconnect-priority,802-3-ethernet.mtu" pid=52811 uid=0 result="success"
Mar  1 04:37:15 np0005634532 NetworkManager[49996]: <info>  [1772357835.2485] device (br-ex)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Mar  1 04:37:15 np0005634532 NetworkManager[49996]: <info>  [1772357835.2490] device (br-ex)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Mar  1 04:37:15 np0005634532 NetworkManager[49996]: <info>  [1772357835.2492] device (br-ex)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Mar  1 04:37:15 np0005634532 NetworkManager[49996]: <info>  [1772357835.2499] device (br-ex)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Mar  1 04:37:15 np0005634532 NetworkManager[49996]: <info>  [1772357835.2504] device (eth1)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Mar  1 04:37:15 np0005634532 NetworkManager[49996]: <info>  [1772357835.2509] device (vlan20)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Mar  1 04:37:15 np0005634532 NetworkManager[49996]: <info>  [1772357835.2515] device (vlan20)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Mar  1 04:37:15 np0005634532 kernel: ovs-system: entered promiscuous mode
Mar  1 04:37:15 np0005634532 NetworkManager[49996]: <info>  [1772357835.2517] device (vlan20)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Mar  1 04:37:15 np0005634532 NetworkManager[49996]: <info>  [1772357835.2523] device (vlan20)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Mar  1 04:37:15 np0005634532 NetworkManager[49996]: <info>  [1772357835.2528] device (vlan21)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Mar  1 04:37:15 np0005634532 NetworkManager[49996]: <info>  [1772357835.2533] device (vlan21)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Mar  1 04:37:15 np0005634532 NetworkManager[49996]: <info>  [1772357835.2536] device (vlan21)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Mar  1 04:37:15 np0005634532 NetworkManager[49996]: <info>  [1772357835.2545] device (vlan21)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Mar  1 04:37:15 np0005634532 kernel: Timeout policy base is empty
Mar  1 04:37:15 np0005634532 NetworkManager[49996]: <info>  [1772357835.2551] device (vlan22)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Mar  1 04:37:15 np0005634532 NetworkManager[49996]: <info>  [1772357835.2557] device (vlan22)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Mar  1 04:37:15 np0005634532 NetworkManager[49996]: <info>  [1772357835.2560] device (vlan22)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Mar  1 04:37:15 np0005634532 systemd-udevd[52817]: Network interface NamePolicy= disabled on kernel command line.
Mar  1 04:37:15 np0005634532 NetworkManager[49996]: <info>  [1772357835.2567] device (vlan22)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Mar  1 04:37:15 np0005634532 NetworkManager[49996]: <info>  [1772357835.2575] device (vlan23)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Mar  1 04:37:15 np0005634532 NetworkManager[49996]: <info>  [1772357835.2580] device (vlan23)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Mar  1 04:37:15 np0005634532 NetworkManager[49996]: <info>  [1772357835.2583] device (vlan23)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Mar  1 04:37:15 np0005634532 NetworkManager[49996]: <info>  [1772357835.2589] device (vlan23)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Mar  1 04:37:15 np0005634532 NetworkManager[49996]: <info>  [1772357835.2596] dhcp4 (eth0): canceled DHCP transaction
Mar  1 04:37:15 np0005634532 NetworkManager[49996]: <info>  [1772357835.2596] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Mar  1 04:37:15 np0005634532 NetworkManager[49996]: <info>  [1772357835.2596] dhcp4 (eth0): state changed no lease
Mar  1 04:37:15 np0005634532 NetworkManager[49996]: <info>  [1772357835.2598] dhcp4 (eth0): activation: beginning transaction (no timeout)
Mar  1 04:37:15 np0005634532 NetworkManager[49996]: <info>  [1772357835.2612] device (br-ex)[Open vSwitch Interface]: Activation: connection 'br-ex-if' attached as port, continuing activation
Mar  1 04:37:15 np0005634532 systemd[1]: Starting Network Manager Script Dispatcher Service...
Mar  1 04:37:15 np0005634532 NetworkManager[49996]: <info>  [1772357835.2625] audit: op="device-reapply" interface="eth1" ifindex=3 pid=52811 uid=0 result="fail" reason="Device is not activated"
Mar  1 04:37:15 np0005634532 NetworkManager[49996]: <info>  [1772357835.2700] device (vlan20)[Open vSwitch Interface]: Activation: connection 'vlan20-if' attached as port, continuing activation
Mar  1 04:37:15 np0005634532 NetworkManager[49996]: <info>  [1772357835.2705] dhcp4 (eth0): state changed new lease, address=38.102.83.94
Mar  1 04:37:15 np0005634532 NetworkManager[49996]: <info>  [1772357835.2711] device (vlan21)[Open vSwitch Interface]: Activation: connection 'vlan21-if' attached as port, continuing activation
Mar  1 04:37:15 np0005634532 NetworkManager[49996]: <info>  [1772357835.2760] device (eth1): disconnecting for new activation request.
Mar  1 04:37:15 np0005634532 NetworkManager[49996]: <info>  [1772357835.2762] audit: op="connection-activate" uuid="df70c8b1-de1e-586c-a971-ac86ce783505" name="ci-private-network" pid=52811 uid=0 result="success"
Mar  1 04:37:15 np0005634532 NetworkManager[49996]: <info>  [1772357835.2764] device (vlan22)[Open vSwitch Interface]: Activation: connection 'vlan22-if' attached as port, continuing activation
Mar  1 04:37:15 np0005634532 NetworkManager[49996]: <info>  [1772357835.2774] device (vlan23)[Open vSwitch Interface]: Activation: connection 'vlan23-if' attached as port, continuing activation
Mar  1 04:37:15 np0005634532 systemd[1]: Started Network Manager Script Dispatcher Service.
Mar  1 04:37:15 np0005634532 NetworkManager[49996]: <info>  [1772357835.2798] device (eth1): state change: deactivating -> disconnected (reason 'new-activation', managed-type: 'full')
Mar  1 04:37:15 np0005634532 kernel: br-ex: entered promiscuous mode
Mar  1 04:37:15 np0005634532 NetworkManager[49996]: <info>  [1772357835.2937] device (eth1): Activation: starting connection 'ci-private-network' (df70c8b1-de1e-586c-a971-ac86ce783505)
Mar  1 04:37:15 np0005634532 kernel: vlan22: entered promiscuous mode
Mar  1 04:37:15 np0005634532 NetworkManager[49996]: <info>  [1772357835.2945] device (br-ex)[Open vSwitch Bridge]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Mar  1 04:37:15 np0005634532 NetworkManager[49996]: <info>  [1772357835.2947] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=52811 uid=0 result="success"
Mar  1 04:37:15 np0005634532 systemd-udevd[52815]: Network interface NamePolicy= disabled on kernel command line.
Mar  1 04:37:15 np0005634532 NetworkManager[49996]: <info>  [1772357835.2959] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Mar  1 04:37:15 np0005634532 NetworkManager[49996]: <info>  [1772357835.2963] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Mar  1 04:37:15 np0005634532 NetworkManager[49996]: <info>  [1772357835.2972] device (br-ex)[Open vSwitch Bridge]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Mar  1 04:37:15 np0005634532 NetworkManager[49996]: <info>  [1772357835.2976] device (br-ex)[Open vSwitch Bridge]: Activation: successful, device activated.
Mar  1 04:37:15 np0005634532 kernel: vlan21: entered promiscuous mode
Mar  1 04:37:15 np0005634532 NetworkManager[49996]: <info>  [1772357835.2988] device (br-ex)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Mar  1 04:37:15 np0005634532 NetworkManager[49996]: <info>  [1772357835.2990] device (eth1)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Mar  1 04:37:15 np0005634532 NetworkManager[49996]: <info>  [1772357835.2992] device (vlan20)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Mar  1 04:37:15 np0005634532 NetworkManager[49996]: <info>  [1772357835.2993] device (vlan21)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Mar  1 04:37:15 np0005634532 NetworkManager[49996]: <info>  [1772357835.2994] device (vlan22)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Mar  1 04:37:15 np0005634532 NetworkManager[49996]: <info>  [1772357835.2995] device (vlan23)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Mar  1 04:37:15 np0005634532 NetworkManager[49996]: <info>  [1772357835.3000] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Mar  1 04:37:15 np0005634532 NetworkManager[49996]: <info>  [1772357835.3007] device (br-ex)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Mar  1 04:37:15 np0005634532 systemd-udevd[52816]: Network interface NamePolicy= disabled on kernel command line.
Mar  1 04:37:15 np0005634532 NetworkManager[49996]: <info>  [1772357835.3010] device (br-ex)[Open vSwitch Port]: Activation: successful, device activated.
Mar  1 04:37:15 np0005634532 NetworkManager[49996]: <info>  [1772357835.3013] device (eth1)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Mar  1 04:37:15 np0005634532 NetworkManager[49996]: <info>  [1772357835.3016] device (eth1)[Open vSwitch Port]: Activation: successful, device activated.
Mar  1 04:37:15 np0005634532 NetworkManager[49996]: <info>  [1772357835.3019] device (vlan20)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Mar  1 04:37:15 np0005634532 NetworkManager[49996]: <info>  [1772357835.3021] device (vlan20)[Open vSwitch Port]: Activation: successful, device activated.
Mar  1 04:37:15 np0005634532 NetworkManager[49996]: <info>  [1772357835.3023] device (vlan21)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Mar  1 04:37:15 np0005634532 NetworkManager[49996]: <info>  [1772357835.3026] device (vlan21)[Open vSwitch Port]: Activation: successful, device activated.
Mar  1 04:37:15 np0005634532 NetworkManager[49996]: <info>  [1772357835.3045] device (vlan22)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Mar  1 04:37:15 np0005634532 NetworkManager[49996]: <info>  [1772357835.3048] device (vlan22)[Open vSwitch Port]: Activation: successful, device activated.
Mar  1 04:37:15 np0005634532 NetworkManager[49996]: <info>  [1772357835.3051] device (vlan23)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Mar  1 04:37:15 np0005634532 NetworkManager[49996]: <info>  [1772357835.3054] device (vlan23)[Open vSwitch Port]: Activation: successful, device activated.
Mar  1 04:37:15 np0005634532 NetworkManager[49996]: <info>  [1772357835.3066] device (eth1): Activation: connection 'ci-private-network' attached as port, continuing activation
Mar  1 04:37:15 np0005634532 kernel: vlan20: entered promiscuous mode
Mar  1 04:37:15 np0005634532 NetworkManager[49996]: <info>  [1772357835.3083] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Mar  1 04:37:15 np0005634532 NetworkManager[49996]: <info>  [1772357835.3093] device (br-ex)[Open vSwitch Interface]: carrier: link connected
Mar  1 04:37:15 np0005634532 NetworkManager[49996]: <info>  [1772357835.3100] device (vlan22)[Open vSwitch Interface]: carrier: link connected
Mar  1 04:37:15 np0005634532 systemd-udevd[52915]: Network interface NamePolicy= disabled on kernel command line.
Mar  1 04:37:15 np0005634532 NetworkManager[49996]: <info>  [1772357835.3124] device (br-ex)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Mar  1 04:37:15 np0005634532 NetworkManager[49996]: <info>  [1772357835.3149] device (vlan22)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Mar  1 04:37:15 np0005634532 kernel: vlan23: entered promiscuous mode
Mar  1 04:37:15 np0005634532 NetworkManager[49996]: <info>  [1772357835.3163] device (vlan21)[Open vSwitch Interface]: carrier: link connected
Mar  1 04:37:15 np0005634532 kernel: virtio_net virtio5 eth1: entered promiscuous mode
Mar  1 04:37:15 np0005634532 NetworkManager[49996]: <info>  [1772357835.3195] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Mar  1 04:37:15 np0005634532 NetworkManager[49996]: <info>  [1772357835.3197] device (br-ex)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Mar  1 04:37:15 np0005634532 NetworkManager[49996]: <info>  [1772357835.3197] device (vlan22)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Mar  1 04:37:15 np0005634532 NetworkManager[49996]: <info>  [1772357835.3203] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Mar  1 04:37:15 np0005634532 NetworkManager[49996]: <info>  [1772357835.3206] device (eth1): Activation: successful, device activated.
Mar  1 04:37:15 np0005634532 NetworkManager[49996]: <info>  [1772357835.3213] device (br-ex)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Mar  1 04:37:15 np0005634532 NetworkManager[49996]: <info>  [1772357835.3217] device (br-ex)[Open vSwitch Interface]: Activation: successful, device activated.
Mar  1 04:37:15 np0005634532 NetworkManager[49996]: <info>  [1772357835.3223] device (vlan22)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Mar  1 04:37:15 np0005634532 NetworkManager[49996]: <info>  [1772357835.3226] device (vlan22)[Open vSwitch Interface]: Activation: successful, device activated.
Mar  1 04:37:15 np0005634532 NetworkManager[49996]: <info>  [1772357835.3239] device (vlan21)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Mar  1 04:37:15 np0005634532 NetworkManager[49996]: <info>  [1772357835.3261] device (vlan20)[Open vSwitch Interface]: carrier: link connected
Mar  1 04:37:15 np0005634532 NetworkManager[49996]: <info>  [1772357835.3276] device (vlan20)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Mar  1 04:37:15 np0005634532 NetworkManager[49996]: <info>  [1772357835.3327] device (vlan23)[Open vSwitch Interface]: carrier: link connected
Mar  1 04:37:15 np0005634532 NetworkManager[49996]: <info>  [1772357835.3335] device (vlan21)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Mar  1 04:37:15 np0005634532 NetworkManager[49996]: <info>  [1772357835.3338] device (vlan20)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Mar  1 04:37:15 np0005634532 NetworkManager[49996]: <info>  [1772357835.3347] device (vlan21)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Mar  1 04:37:15 np0005634532 NetworkManager[49996]: <info>  [1772357835.3354] device (vlan21)[Open vSwitch Interface]: Activation: successful, device activated.
Mar  1 04:37:15 np0005634532 NetworkManager[49996]: <info>  [1772357835.3363] device (vlan20)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Mar  1 04:37:15 np0005634532 NetworkManager[49996]: <info>  [1772357835.3368] device (vlan20)[Open vSwitch Interface]: Activation: successful, device activated.
Mar  1 04:37:15 np0005634532 NetworkManager[49996]: <info>  [1772357835.3391] device (vlan23)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Mar  1 04:37:15 np0005634532 NetworkManager[49996]: <info>  [1772357835.3434] device (vlan23)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Mar  1 04:37:15 np0005634532 NetworkManager[49996]: <info>  [1772357835.3436] device (vlan23)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Mar  1 04:37:15 np0005634532 NetworkManager[49996]: <info>  [1772357835.3442] device (vlan23)[Open vSwitch Interface]: Activation: successful, device activated.
Mar  1 04:37:16 np0005634532 NetworkManager[49996]: <info>  [1772357836.4679] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=52811 uid=0 result="success"
Mar  1 04:37:16 np0005634532 NetworkManager[49996]: <info>  [1772357836.6546] checkpoint[0x562c406c9950]: destroy /org/freedesktop/NetworkManager/Checkpoint/1
Mar  1 04:37:16 np0005634532 NetworkManager[49996]: <info>  [1772357836.6549] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=52811 uid=0 result="success"
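The checkpoint-create / checkpoint-adjust-rollback-timeout / checkpoint-destroy audit entries above are NetworkManager's D-Bus checkpoint API: the tool driving the reconfiguration (pid 52811) snapshots device state so a change that cuts connectivity rolls back automatically, then destroys the checkpoint to commit. A minimal sketch of the same lifecycle with busctl; the empty device array, the 120 s timeout, and flag 0 are illustrative, not taken from this log:

    # snapshot all devices (empty array = all), 120 s rollback timeout, no flags
    busctl call org.freedesktop.NetworkManager /org/freedesktop/NetworkManager \
        org.freedesktop.NetworkManager CheckpointCreate aouu 0 120 0
    # keep extending the timer while the change is still being verified
    busctl call org.freedesktop.NetworkManager /org/freedesktop/NetworkManager \
        org.freedesktop.NetworkManager CheckpointAdjustRollbackTimeout ou \
        /org/freedesktop/NetworkManager/Checkpoint/1 120
    # commit: destroying the checkpoint cancels the pending rollback
    busctl call org.freedesktop.NetworkManager /org/freedesktop/NetworkManager \
        org.freedesktop.NetworkManager CheckpointDestroy o \
        /org/freedesktop/NetworkManager/Checkpoint/1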
Mar  1 04:37:16 np0005634532 python3.9[53170]: ansible-ansible.legacy.async_status Invoked with jid=j849716871305.52805 mode=status _async_dir=/root/.ansible_async
Mar  1 04:37:17 np0005634532 NetworkManager[49996]: <info>  [1772357837.0286] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=52811 uid=0 result="success"
Mar  1 04:37:17 np0005634532 NetworkManager[49996]: <info>  [1772357837.0303] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=52811 uid=0 result="success"
Mar  1 04:37:17 np0005634532 NetworkManager[49996]: <info>  [1772357837.2647] audit: op="networking-control" arg="global-dns-configuration" pid=52811 uid=0 result="success"
Mar  1 04:37:17 np0005634532 NetworkManager[49996]: <info>  [1772357837.2674] config: signal: SET_VALUES,values,values-intern,global-dns-config (/etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf)
Mar  1 04:37:17 np0005634532 NetworkManager[49996]: <info>  [1772357837.2703] audit: op="networking-control" arg="global-dns-configuration" pid=52811 uid=0 result="success"
Mar  1 04:37:17 np0005634532 NetworkManager[49996]: <info>  [1772357837.2735] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=52811 uid=0 result="success"
Mar  1 04:37:17 np0005634532 NetworkManager[49996]: <info>  [1772357837.4233] checkpoint[0x562c406c9a20]: destroy /org/freedesktop/NetworkManager/Checkpoint/2
Mar  1 04:37:17 np0005634532 NetworkManager[49996]: <info>  [1772357837.4240] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=52811 uid=0 result="success"
Mar  1 04:37:17 np0005634532 ansible-async_wrapper.py[52809]: Module complete (52809)
Mar  1 04:37:18 np0005634532 ansible-async_wrapper.py[52808]: Done in kid B.
Mar  1 04:37:20 np0005634532 python3.9[53277]: ansible-ansible.legacy.async_status Invoked with jid=j849716871305.52805 mode=status _async_dir=/root/.ansible_async
Mar  1 04:37:20 np0005634532 python3.9[53378]: ansible-ansible.legacy.async_status Invoked with jid=j849716871305.52805 mode=cleanup _async_dir=/root/.ansible_async
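The ansible-async_wrapper / async_status lines are Ansible's asynchronous task pattern: the network reconfiguration ran detached so it survives the SSH session it may disturb, while the controller polled the job id until completion and then removed the status file under /root/.ansible_async. A rough ad-hoc equivalent (job id illustrative):

    # fire and forget: allow up to 600 s (-B), do not poll (-P 0); prints a job id
    ansible localhost -m ansible.builtin.command -a "sleep 30" -B 600 -P 0
    # poll that job by id, then clean up its status file
    ansible localhost -m ansible.builtin.async_status -a "jid=123456789.10111 mode=status"
    ansible localhost -m ansible.builtin.async_status -a "jid=123456789.10111 mode=cleanup"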
Mar  1 04:37:21 np0005634532 python3.9[53531]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Mar  1 04:37:21 np0005634532 python3.9[53655]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/os-net-config.returncode mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1772357840.996213-927-134121464844315/.source.returncode _original_basename=.wc_dd6bv follow=False checksum=b6589fc6ab0dc82cf12099d1c2d40ab994e8410c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:37:22 np0005634532 python3.9[53808]: ansible-ansible.legacy.stat Invoked with path=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Mar  1 04:37:23 np0005634532 python3.9[53932]: ansible-ansible.legacy.copy Invoked with dest=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1772357842.311074-975-245274324784539/.source.cfg _original_basename=.5ghga184 follow=False checksum=f3c5952a9cd4c6c31b314b25eb897168971cc86e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
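The 99-edpm-disable-network-config.cfg drop-in keeps cloud-init from rewriting network configuration on later boots, now that os-net-config owns it. Only the file's checksum is logged; the conventional content for this cloud-init knob is a single stanza, reconstructed here as an assumption:

    cat > /etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg <<'EOF'
    network: {config: disabled}
    EOF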
Mar  1 04:37:24 np0005634532 python3.9[54086]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Mar  1 04:37:24 np0005634532 systemd[1]: Reloading Network Manager...
Mar  1 04:37:24 np0005634532 NetworkManager[49996]: <info>  [1772357844.2721] audit: op="reload" arg="0" pid=54090 uid=0 result="success"
Mar  1 04:37:24 np0005634532 NetworkManager[49996]: <info>  [1772357844.2728] config: signal: SIGHUP,config-files,values,values-user,no-auto-default (/etc/NetworkManager/NetworkManager.conf, /usr/lib/NetworkManager/conf.d/00-server.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf, /var/lib/NetworkManager/NetworkManager-intern.conf)
Mar  1 04:37:24 np0005634532 systemd[1]: Reloaded Network Manager.
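state=reloaded on the NetworkManager unit maps to the SIGHUP-style reload logged above: NetworkManager.conf and its conf.d drop-ins are re-read without restarting the daemon or bouncing active connections. The same thing by hand:

    systemctl reload NetworkManager
    # or, selectively, only the configuration files:
    nmcli general reload conf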
Mar  1 04:37:24 np0005634532 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Mar  1 04:37:24 np0005634532 systemd[1]: session-10.scope: Deactivated successfully.
Mar  1 04:37:24 np0005634532 systemd[1]: session-10.scope: Consumed 46.506s CPU time.
Mar  1 04:37:24 np0005634532 systemd-logind[832]: Session 10 logged out. Waiting for processes to exit.
Mar  1 04:37:24 np0005634532 systemd-logind[832]: Removed session 10.
Mar  1 04:37:29 np0005634532 systemd-logind[832]: New session 11 of user zuul.
Mar  1 04:37:29 np0005634532 systemd[1]: Started Session 11 of User zuul.
Mar  1 04:37:30 np0005634532 python3.9[54278]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Mar  1 04:37:31 np0005634532 python3.9[54434]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Mar  1 04:37:32 np0005634532 python3.9[54627]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Mar  1 04:37:33 np0005634532 systemd[1]: session-11.scope: Deactivated successfully.
Mar  1 04:37:33 np0005634532 systemd[1]: session-11.scope: Consumed 2.154s CPU time.
Mar  1 04:37:33 np0005634532 systemd-logind[832]: Session 11 logged out. Waiting for processes to exit.
Mar  1 04:37:33 np0005634532 systemd-logind[832]: Removed session 11.
Mar  1 04:37:34 np0005634532 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Mar  1 04:37:38 np0005634532 systemd-logind[832]: New session 12 of user zuul.
Mar  1 04:37:38 np0005634532 systemd[1]: Started Session 12 of User zuul.
Mar  1 04:37:39 np0005634532 python3.9[54811]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Mar  1 04:37:40 np0005634532 python3.9[54966]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Mar  1 04:37:41 np0005634532 python3.9[55123]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Mar  1 04:37:42 np0005634532 python3.9[55208]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Mar  1 04:37:44 np0005634532 python3.9[55365]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Mar  1 04:37:45 np0005634532 python3.9[55561]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:37:46 np0005634532 python3.9[55714]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Mar  1 04:37:46 np0005634532 podman[55716]: 2026-03-01 09:37:46.156309999 +0000 UTC m=+0.065277583 system refresh
Mar  1 04:37:47 np0005634532 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Mar  1 04:37:47 np0005634532 python3.9[55879]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Mar  1 04:37:47 np0005634532 python3.9[56003]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/networks/podman.json group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1772357866.598604-192-59352800144414/.source.json follow=False _original_basename=podman_network_config.j2 checksum=4a4729a4367995140e526790e844cd294f482d7c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
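Writing /etc/containers/networks/podman.json pins the definition of podman's default network for the netavark backend (the `podman network inspect podman` run just before it is what triggered the `system refresh` event above). The file content is not logged; a minimal netavark-style definition might look like the following, where every field value, including the id, is an illustrative assumption:

    cat > /etc/containers/networks/podman.json <<'EOF'
    {
      "name": "podman",
      "id": "2f259bab93aaaaa2542ba43ef33eb990d0999ee1b9924b557b7be53c0b7a1bb9",
      "driver": "bridge",
      "network_interface": "podman0",
      "subnets": [
        { "subnet": "10.88.0.0/16", "gateway": "10.88.0.1" }
      ],
      "ipv6_enabled": false,
      "dns_enabled": false
    }
    EOF
    podman network inspect podman   # confirm podman now reports this definition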
Mar  1 04:37:48 np0005634532 python3.9[56156]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Mar  1 04:37:48 np0005634532 python3.9[56280]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1772357867.9890323-237-33828549941817/.source.conf follow=False _original_basename=registries.conf.j2 checksum=804a0d01b832e60d20f779a331306df708c87b02 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Mar  1 04:37:49 np0005634532 python3.9[56433]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Mar  1 04:37:50 np0005634532 python3.9[56586]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Mar  1 04:37:50 np0005634532 python3.9[56739]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Mar  1 04:37:51 np0005634532 python3.9[56892]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
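The four ini_file invocations above edit /etc/containers/containers.conf in place, preserving whatever else the file holds. Their combined effect, written out as the equivalent TOML (section names and values taken directly from the logged parameters):

    [containers]
    pids_limit = 4096

    [engine]
    events_logger = "journald"
    runtime = "crun"

    [network]
    network_backend = "netavark"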
Mar  1 04:37:52 np0005634532 python3.9[57045]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Mar  1 04:37:54 np0005634532 python3.9[57199]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Mar  1 04:37:55 np0005634532 python3.9[57354]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Mar  1 04:37:55 np0005634532 python3.9[57507]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Mar  1 04:37:56 np0005634532 python3.9[57660]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-system-running _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Mar  1 04:37:57 np0005634532 python3.9[57814]: ansible-service_facts Invoked
Mar  1 04:37:57 np0005634532 network[57831]: You are using the 'network' service provided by 'network-scripts', which is now deprecated.
Mar  1 04:37:57 np0005634532 network[57832]: 'network-scripts' will be removed from the distribution in the near future.
Mar  1 04:37:57 np0005634532 network[57833]: It is advised to switch to 'NetworkManager' for network management instead.
Mar  1 04:38:02 np0005634532 python3.9[58290]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Mar  1 04:38:04 np0005634532 python3.9[58444]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Mar  1 04:38:05 np0005634532 python3.9[58597]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Mar  1 04:38:06 np0005634532 python3.9[58723]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/chrony.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1772357885.5290725-669-82928759334479/.source.conf follow=False _original_basename=chrony.conf.j2 checksum=cfb003e56d02d0d2c65555452eb1a05073fecdad force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:38:07 np0005634532 python3.9[58878]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Mar  1 04:38:07 np0005634532 python3.9[59004]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/sysconfig/chronyd mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1772357886.8080537-714-68751929735356/.source follow=False _original_basename=chronyd.sysconfig.j2 checksum=dd196b1ff1f915b23eebc37ec77405b5dd3df76c force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:38:09 np0005634532 python3.9[59159]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network encoding=utf-8 backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
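PEERNTP=no in /etc/sysconfig/network stops the dhclient hook scripts from injecting DHCP-advertised NTP servers into chrony, leaving only the servers templated into /etc/chrony.conf. The lineinfile semantics (replace a matching line, else append) correspond to roughly:

    if grep -q '^PEERNTP=' /etc/sysconfig/network; then
        sed -i 's/^PEERNTP=.*/PEERNTP=no/' /etc/sysconfig/network
    else
        echo 'PEERNTP=no' >> /etc/sysconfig/network
    fi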
Mar  1 04:38:11 np0005634532 python3.9[59314]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Mar  1 04:38:12 np0005634532 python3.9[59399]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Mar  1 04:38:13 np0005634532 python3.9[59554]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Mar  1 04:38:14 np0005634532 python3.9[59639]: ansible-ansible.legacy.systemd Invoked with name=chronyd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Mar  1 04:38:14 np0005634532 systemd[1]: Stopping NTP client/server...
Mar  1 04:38:14 np0005634532 chronyd[841]: chronyd exiting
Mar  1 04:38:14 np0005634532 systemd[1]: chronyd.service: Deactivated successfully.
Mar  1 04:38:14 np0005634532 systemd[1]: Stopped NTP client/server.
Mar  1 04:38:14 np0005634532 systemd[1]: Starting NTP client/server...
Mar  1 04:38:14 np0005634532 chronyd[59647]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Mar  1 04:38:14 np0005634532 chronyd[59647]: Frequency -24.777 +/- 0.227 ppm read from /var/lib/chrony/drift
Mar  1 04:38:14 np0005634532 chronyd[59647]: Loaded seccomp filter (level 2)
Mar  1 04:38:14 np0005634532 systemd[1]: Started NTP client/server.
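After the restart, chronyd reloads its drift file and seccomp filter, as logged above. Whether it actually synchronises against the newly templated servers can be verified with chrony's own client:

    chronyc tracking    # current reference, offset, frequency
    chronyc sources -v  # configured sources and their reachability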
Mar  1 04:38:15 np0005634532 systemd[1]: session-12.scope: Deactivated successfully.
Mar  1 04:38:15 np0005634532 systemd[1]: session-12.scope: Consumed 22.844s CPU time.
Mar  1 04:38:15 np0005634532 systemd-logind[832]: Session 12 logged out. Waiting for processes to exit.
Mar  1 04:38:15 np0005634532 systemd-logind[832]: Removed session 12.
Mar  1 04:38:20 np0005634532 systemd-logind[832]: New session 13 of user zuul.
Mar  1 04:38:20 np0005634532 systemd[1]: Started Session 13 of User zuul.
Mar  1 04:38:21 np0005634532 python3.9[59832]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:38:21 np0005634532 python3.9[59985]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/ceph-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Mar  1 04:38:22 np0005634532 python3.9[60109]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/ceph-networks.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1772357901.2550771-57-1374358328500/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=729ea8396013e3343245d6e934e0dcef55029ad2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:38:22 np0005634532 systemd[1]: session-13.scope: Deactivated successfully.
Mar  1 04:38:22 np0005634532 systemd[1]: session-13.scope: Consumed 1.486s CPU time.
Mar  1 04:38:22 np0005634532 systemd-logind[832]: Session 13 logged out. Waiting for processes to exit.
Mar  1 04:38:22 np0005634532 systemd-logind[832]: Removed session 13.
Mar  1 04:38:28 np0005634532 systemd-logind[832]: New session 14 of user zuul.
Mar  1 04:38:28 np0005634532 systemd[1]: Started Session 14 of User zuul.
Mar  1 04:38:29 np0005634532 python3.9[60291]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Mar  1 04:38:30 np0005634532 python3.9[60450]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:38:30 np0005634532 python3.9[60626]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Mar  1 04:38:31 np0005634532 python3.9[60750]: ansible-ansible.legacy.copy Invoked with dest=/root/.config/containers/auth.json group=zuul mode=0660 owner=zuul src=/home/zuul/.ansible/tmp/ansible-tmp-1772357910.3384318-78-90205552993034/.source.json _original_basename=.pl0itm4m follow=False checksum=bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:38:32 np0005634532 python3.9[60903]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Mar  1 04:38:33 np0005634532 python3.9[61027]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysconfig/podman_drop_in mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1772357911.9793081-147-153215417721004/.source _original_basename=.90oytrp6 follow=False checksum=125299ce8dea7711a76292961206447f0043248b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:38:33 np0005634532 python3.9[61180]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Mar  1 04:38:34 np0005634532 python3.9[61333]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Mar  1 04:38:35 np0005634532 python3.9[61457]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-container-shutdown group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1772357913.9396582-219-28765589403024/.source _original_basename=edpm-container-shutdown follow=False checksum=632c3792eb3dce4288b33ae7b265b71950d69f13 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Mar  1 04:38:35 np0005634532 python3.9[61610]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Mar  1 04:38:35 np0005634532 python3.9[61734]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-start-podman-container group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1772357915.145134-219-69066456600828/.source _original_basename=edpm-start-podman-container follow=False checksum=b963c569d75a655c0ccae95d9bb4a2a9a4df27d1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Mar  1 04:38:36 np0005634532 python3.9[61887]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:38:37 np0005634532 python3.9[62040]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Mar  1 04:38:38 np0005634532 python3.9[62164]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm-container-shutdown.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1772357917.075411-330-95132840558374/.source.service _original_basename=edpm-container-shutdown-service follow=False checksum=6336835cb0f888670cc99de31e19c8c071444d33 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:38:38 np0005634532 python3.9[62317]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Mar  1 04:38:39 np0005634532 python3.9[62441]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1772357918.3284533-375-255948573183758/.source.preset _original_basename=91-edpm-container-shutdown-preset follow=False checksum=b275e4375287528cb63464dd32f622c4f142a915 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:38:40 np0005634532 python3.9[62594]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Mar  1 04:38:40 np0005634532 systemd[1]: Reloading.
Mar  1 04:38:40 np0005634532 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Mar  1 04:38:40 np0005634532 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update the package to include a native systemd unit file, in order to make it safer and more robust.
Mar  1 04:38:40 np0005634532 systemd[1]: Reloading.
Mar  1 04:38:40 np0005634532 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Mar  1 04:38:40 np0005634532 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update the package to include a native systemd unit file, in order to make it safer and more robust.
Mar  1 04:38:41 np0005634532 systemd[1]: Starting EDPM Container Shutdown...
Mar  1 04:38:41 np0005634532 systemd[1]: Finished EDPM Container Shutdown.
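The unit-plus-preset pair is the standard way to ship a service that should default to enabled: the .service file defines the unit, the 91-*.preset file declares the wanted state, and a preset pass (or the explicit enable in the play) applies it. Sketched by hand; the preset file body is an assumption, since the log records only its checksum:

    cat > /etc/systemd/system-preset/91-edpm-container-shutdown.preset <<'EOF'
    enable edpm-container-shutdown.service
    EOF
    systemctl daemon-reload
    systemctl preset edpm-container-shutdown.service
    systemctl start edpm-container-shutdown.service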
Mar  1 04:38:41 np0005634532 python3.9[62835]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Mar  1 04:38:42 np0005634532 python3.9[62961]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/netns-placeholder.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1772357921.3788018-444-269507062505536/.source.service _original_basename=netns-placeholder-service follow=False checksum=b61b1b5918c20c877b8b226fbf34ff89a082d972 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:38:43 np0005634532 python3.9[63114]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Mar  1 04:38:43 np0005634532 python3.9[63238]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-netns-placeholder.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1772357922.7203588-489-205400933589494/.source.preset _original_basename=91-netns-placeholder-preset follow=False checksum=28b7b9aa893525d134a1eeda8a0a48fb25b736b9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:38:44 np0005634532 python3.9[63391]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Mar  1 04:38:44 np0005634532 systemd[1]: Reloading.
Mar  1 04:38:44 np0005634532 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Mar  1 04:38:44 np0005634532 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update the package to include a native systemd unit file, in order to make it safer and more robust.
Mar  1 04:38:44 np0005634532 systemd[1]: Reloading.
Mar  1 04:38:44 np0005634532 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Mar  1 04:38:44 np0005634532 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update the package to include a native systemd unit file, in order to make it safer and more robust.
Mar  1 04:38:45 np0005634532 systemd[1]: Starting Create netns directory...
Mar  1 04:38:45 np0005634532 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Mar  1 04:38:45 np0005634532 systemd[1]: netns-placeholder.service: Deactivated successfully.
Mar  1 04:38:45 np0005634532 systemd[1]: Finished Create netns directory.
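The netns-placeholder oneshot appears to exist only to get /run/netns created and mounted early, so containers and tools that bind network namespaces there find it in place; the transient run-netns-placeholder.mount deactivating immediately afterwards fits a create-and-delete trick like the following (an assumption, since the unit body is not logged):

    # creating any named netns bind-mounts /run/netns; deleting the netns
    # removes the entry but leaves the /run/netns mount itself behind
    ip netns add placeholder
    ip netns delete placeholder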
Mar  1 04:38:46 np0005634532 python3.9[63630]: ansible-ansible.builtin.service_facts Invoked
Mar  1 04:38:46 np0005634532 network[63647]: You are using the 'network' service provided by 'network-scripts', which is now deprecated.
Mar  1 04:38:46 np0005634532 network[63648]: 'network-scripts' will be removed from the distribution in the near future.
Mar  1 04:38:46 np0005634532 network[63649]: It is advised to switch to 'NetworkManager' for network management instead.
Mar  1 04:38:49 np0005634532 python3.9[63913]: ansible-ansible.builtin.systemd Invoked with enabled=False name=iptables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Mar  1 04:38:49 np0005634532 systemd[1]: Reloading.
Mar  1 04:38:49 np0005634532 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Mar  1 04:38:49 np0005634532 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update the package to include a native systemd unit file, in order to make it safer and more robust.
Mar  1 04:38:49 np0005634532 systemd[1]: Stopping IPv4 firewall with iptables...
Mar  1 04:38:50 np0005634532 iptables.init[63959]: iptables: Setting chains to policy ACCEPT: raw mangle filter nat [  OK  ]
Mar  1 04:38:50 np0005634532 iptables.init[63959]: iptables: Flushing firewall rules: [  OK  ]
Mar  1 04:38:50 np0005634532 systemd[1]: iptables.service: Deactivated successfully.
Mar  1 04:38:50 np0005634532 systemd[1]: Stopped IPv4 firewall with iptables.
Mar  1 04:38:51 np0005634532 python3.9[64156]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ip6tables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Mar  1 04:38:51 np0005634532 python3.9[64311]: ansible-ansible.builtin.systemd Invoked with enabled=True name=nftables state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Mar  1 04:38:51 np0005634532 systemd[1]: Reloading.
Mar  1 04:38:51 np0005634532 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Mar  1 04:38:52 np0005634532 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update the package to include a native systemd unit file, in order to make it safer and more robust.
Mar  1 04:38:52 np0005634532 systemd[1]: Starting Netfilter Tables...
Mar  1 04:38:52 np0005634532 systemd[1]: Finished Netfilter Tables.
Mar  1 04:38:53 np0005634532 python3.9[64512]: ansible-ansible.legacy.command Invoked with _raw_params=nft flush ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
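Taken together, the steps above switch the node from the legacy iptables services to native nftables: iptables/ip6tables are stopped and disabled (the initscript resets policies to ACCEPT and flushes on the way down), nftables.service is enabled, and the ruleset is flushed so the edpm rules start from a clean slate. Condensed:

    systemctl disable --now iptables.service ip6tables.service
    systemctl enable --now nftables.service
    nft flush ruleset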
Mar  1 04:38:54 np0005634532 python3.9[64666]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Mar  1 04:38:54 np0005634532 python3.9[64792]: ansible-ansible.legacy.copy Invoked with dest=/etc/ssh/sshd_config mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1772357933.7054348-696-13176981407368/.source validate=/usr/sbin/sshd -T -f %s follow=False _original_basename=sshd_config_block.j2 checksum=6c79f4cb960ad444688fde322eeacb8402e22d79 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:38:55 np0005634532 python3.9[64946]: ansible-ansible.builtin.systemd Invoked with name=sshd state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Mar  1 04:38:55 np0005634532 systemd[1]: Reloading OpenSSH server daemon...
Mar  1 04:38:55 np0005634532 systemd[1]: Reloaded OpenSSH server daemon.
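The sshd_config copy carried validate=/usr/sbin/sshd -T -f %s, so the candidate file is test-parsed before it replaces the live config, and sshd is only reloaded afterwards; a broken config never reaches the daemon. The same guard rail by hand (candidate path illustrative):

    /usr/sbin/sshd -T -f /tmp/sshd_config.candidate \
        && install -m 0600 /tmp/sshd_config.candidate /etc/ssh/sshd_config \
        && systemctl reload sshd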
Mar  1 04:38:56 np0005634532 python3.9[65103]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:38:56 np0005634532 python3.9[65257]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Mar  1 04:38:57 np0005634532 python3.9[65381]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/sshd-networks.yaml group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1772357936.4582062-789-251831623289809/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=0bfc8440fd8f39002ab90252479fb794f51b5ae8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:38:58 np0005634532 python3.9[65534]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Mar  1 04:38:58 np0005634532 systemd[1]: Starting Time & Date Service...
Mar  1 04:38:58 np0005634532 systemd[1]: Started Time & Date Service.
Mar  1 04:38:59 np0005634532 python3.9[65691]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:39:00 np0005634532 python3.9[65844]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Mar  1 04:39:00 np0005634532 python3.9[65968]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1772357939.671397-894-39542706150621/.source.yaml follow=False _original_basename=base-rules.yaml.j2 checksum=450456afcafded6d4bdecceec7a02e806eebd8b3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:39:01 np0005634532 python3.9[66121]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Mar  1 04:39:01 np0005634532 python3.9[66245]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1772357940.8330617-939-63254327623424/.source.yaml _original_basename=.08ml25ji follow=False checksum=97d170e1550eee4afc0af065b78cda302a97674c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:39:02 np0005634532 python3.9[66398]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Mar  1 04:39:02 np0005634532 python3.9[66522]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/iptables.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1772357942.0229745-984-69813335437163/.source.nft _original_basename=iptables.nft follow=False checksum=3e02df08f1f3ab4a513e94056dbd390e3d38fe30 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:39:03 np0005634532 python3.9[66675]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/iptables.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Mar  1 04:39:05 np0005634532 python3.9[66829]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Mar  1 04:39:06 np0005634532 python3[66983]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Mar  1 04:39:07 np0005634532 python3.9[67136]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Mar  1 04:39:07 np0005634532 python3.9[67260]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1772357946.7183375-1101-200459058285111/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:39:08 np0005634532 python3.9[67413]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Mar  1 04:39:08 np0005634532 python3.9[67537]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1772357948.0032258-1146-55596972601377/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:39:09 np0005634532 python3.9[67690]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Mar  1 04:39:10 np0005634532 python3.9[67814]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1772357949.2270162-1191-260007755615722/.source.nft follow=False _original_basename=flush-chain.j2 checksum=d16337256a56373421842284fe09e4e6c7df417e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:39:10 np0005634532 python3.9[67967]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Mar  1 04:39:11 np0005634532 python3.9[68091]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1772357950.4142835-1236-241726244176755/.source.nft follow=False _original_basename=chains.j2 checksum=2079f3b60590a165d1d502e763170876fc8e2984 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:39:12 np0005634532 python3.9[68246]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Mar  1 04:39:12 np0005634532 python3.9[68370]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1772357951.5654025-1281-237722735781201/.source.nft follow=False _original_basename=ruleset.j2 checksum=693377dc03e5b6b24713cb537b18b88774724e35 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:39:13 np0005634532 python3.9[68523]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:39:13 np0005634532 python3.9[68678]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Mar  1 04:39:14 np0005634532 python3.9[68840]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
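The blockinfile task wires the edpm rule files into boot-time loading: /etc/sysconfig/nftables.conf is what nftables.service reads, and the managed block (its newlines logged as #012) expands to the includes below, with validate= running `nft -c -f` on the result before it is committed:

    # BEGIN ANSIBLE MANAGED BLOCK
    include "/etc/nftables/iptables.nft"
    include "/etc/nftables/edpm-chains.nft"
    include "/etc/nftables/edpm-rules.nft"
    include "/etc/nftables/edpm-jumps.nft"
    # END ANSIBLE MANAGED BLOCK

    nft -c -f /etc/sysconfig/nftables.conf   # dry-run check of the assembled config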
Mar  1 04:39:15 np0005634532 python3.9[68994]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:39:16 np0005634532 python3.9[69147]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:39:17 np0005634532 python3.9[69302]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Mar  1 04:39:17 np0005634532 python3.9[69456]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
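The two mount tasks publish 1 GiB and 2 MiB hugepage pools at dedicated mountpoints; state=mounted both mounts them immediately and persists them to /etc/fstab. Equivalent:

    mkdir -p /dev/hugepages1G /dev/hugepages2M
    mount -t hugetlbfs -o pagesize=1G none /dev/hugepages1G
    mount -t hugetlbfs -o pagesize=2M none /dev/hugepages2M
    # matching fstab entries (dump/passno 0 0):
    #   none /dev/hugepages1G hugetlbfs pagesize=1G 0 0
    #   none /dev/hugepages2M hugetlbfs pagesize=2M 0 0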
Mar  1 04:39:18 np0005634532 systemd[1]: session-14.scope: Deactivated successfully.
Mar  1 04:39:18 np0005634532 systemd[1]: session-14.scope: Consumed 32.839s CPU time.
Mar  1 04:39:18 np0005634532 systemd-logind[832]: Session 14 logged out. Waiting for processes to exit.
Mar  1 04:39:18 np0005634532 systemd-logind[832]: Removed session 14.
Mar  1 04:39:23 np0005634532 systemd-logind[832]: New session 15 of user zuul.
Mar  1 04:39:23 np0005634532 systemd[1]: Started Session 15 of User zuul.
Mar  1 04:39:24 np0005634532 python3.9[69640]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Mar  1 04:39:25 np0005634532 python3.9[69793]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Mar  1 04:39:26 np0005634532 python3.9[69946]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Mar  1 04:39:26 np0005634532 python3.9[70099]: ansible-ansible.builtin.blockinfile Invoked with block=compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDOJ164QLG+v0QCrqtWeikJkAAqlq5h4ZiMVzeVdQOaEkqQhYc/2VA3QSCqAFSmG7JdVFkN/LVtEx5g4NHEpVpppXz8yVYkQPsBR2XYD6WZsiWbaOefR+spqTVxOhMg/I/q5rJ4u1gDDVVc+UK/d99tOEugxHzXGIDFeH5NkXCD1ZYOPISetLGdcqRWIkasLnAEpx/FT+ObN03Tglla0WDb+62BSR0zaPhy8lLS6Q57KfiGZpmDQsbzlXJjAorS1T4XKzyvcDnMeQubWI7IL4YWasSie9Xag/ejOje77NietOgR7VJ/6VTXUTo6m+DvTVsibFfdrpb+a2Lf7YUqcAia2r5ukcmhbckf90+2bvBg0s+w6TJQp2CfsMPbiu9XQAZ1jlkej2GQm5DinOrKxpfv7pi05w4ngJd+IltL7rwIp9SQNh9ywSmua6xPMSMQZgwjK4N1j6ztvQNNppNC47CSv5a3lVSO8A4KnjyW9SSxQm0byALScVy0w5nqFXnta68=#012compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOUemQYd3c9hI8oDWBIKmGQ5QqvNjLewcRYP6Hv4PK7N#012compute-1.ctlplane.example.com,192.168.122.101,compute-1* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJME/3XoZEzUtru+DrmzuJsB9ikDG73pBPEngHZK244wqqblcgz9hmV+MIHN8QeqtxjaJFT4WxbJGxe84tZ/wGU=#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDU08zL9v37ramJn0dw/GX+LnniiZDLVr1ufHB/vIoDMP6yH59FLpNVDjEXrrtecyTkPLwKua9rkeH+LqTXMiB0tr9HYoJOEb348Hyr67QWxzKTdjGFGgEqL0L3VPIjZyqza4c/Idsc23VWcoG2BVjC2P1FvakwqeAGDyD3k9CO4JDwxUZk06JC0RBFaU2R2iQ8B3MpTcuymIJj64xDxFYOChy5pBE+Uhx7TYHKeTsgYvBNYsV8TF4h8RtzMxr4uyTRqlyj/AhdZZRDli02ht2fN9xBLfKoGujAuk0NAUXmUT30qYbWjLdFiLOrwS+9Yk9N/YsXZG7sz14WjxJIarNBUwsfSccZx3STdibyj6N9EOTVjTl1FZEM9XR2DybfWf+gPuXgSYOKGsVN90ATw5TnzAO7AMalEjigAv92Mpvas4SGgqfj/0MQ16DqrdjTF4lcOPC4PWrdyT8oJHM/FoUPA0tv+jdeJxqyAWt7SOCBmACsMG8zQwc5prDRdvaQNdE=#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPd7MfC4EfSXNqkx3ZGh1BTjDNatkRaRNQxJVpfOH7en#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKygj9MIB5PtB8xqboDNvY9c1sHw6GGIMIlKuX4Vf22zEAE/0z3WEsO6MS3bJQKJ0YbOlDB8FirRHsR4xyD5G6g=#012compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDBmOUMfQBRkdlCmLdVTvO1yRyjEcD4t0OKIEpbh4ryAc2GWzRMRNDLU60+739+UcQsrdVyxTiXF+1b4R27n+WM5m+PSUaUS47kzM15ltgEEBIwbd8kDwFei103QQ6PPs1fkZNPV4IY5g6yaqTRNygE3+8d4WEAIkBERkGRuKYKK28m/GirDbl7l9VIuQCla39ATTqNIAuB55hGGVkoC+TE5DA0lgQNdUHCvuTNNhYMozVQCbj0TWAW6LGA6TyLOAmowQp6xPhpY9CkvE12YdSx9sF96i6qh8RI/l/w/F0bwaUWLp/Bd4sC5TSiZHeatJnSxjfxf2Z+hi6yyVBiy1zRmyvgrn/40B3pT/sihT/7GEWNaTXopKzOJTOCXF+R1vIjwO6J6u/e6Vk1RG79gX7agHwtKoRVYzed99IaBe2d7JF1rlq6oXPaPpowgr0cdLi25GovhNGA9h8/y/M2MBBk4ls/Pzhjqj+VNmj0sJtLKAdCOWKchWN8lVG4mz/m/R8=#012compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIL01Ha5eRQ6w0kkkdALy1Rwciw5vN8MWCQgukICidqXU#012compute-2.ctlplane.example.com,192.168.122.102,compute-2* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBI/N47g90Bj7eIRfJGkuhjkyR6CMjBlH0FE3oL+RNHXqGcdV4sHpT/3R+7aiSZj+EXGyAG7KQXVmh9UoTuwFT5k=#012 create=True mode=0644 path=/tmp/ansible.se1ffoun state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:39:27 np0005634532 python3.9[70252]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.se1ffoun' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Mar  1 04:39:28 np0005634532 python3.9[70407]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.se1ffoun state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
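
The three tasks above (04:39:26-04:39:28) distribute the gathered SSH host keys of the compute nodes: blockinfile writes the aggregated key block to a temp file, a shell task copies it over the system-wide known_hosts, and the temp file is removed. A minimal shell sketch of the same sequence (the temp path is the one from the log; the playbook itself is not shown here):

    # equivalent of the ansible.legacy.command and ansible.builtin.file tasks
    cat /tmp/ansible.se1ffoun > /etc/ssh/ssh_known_hosts
    rm -f /tmp/ansible.se1ffoun
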
Mar  1 04:39:28 np0005634532 systemd[1]: systemd-timedated.service: Deactivated successfully.
Mar  1 04:39:28 np0005634532 systemd[1]: session-15.scope: Deactivated successfully.
Mar  1 04:39:28 np0005634532 systemd[1]: session-15.scope: Consumed 3.265s CPU time.
Mar  1 04:39:28 np0005634532 systemd-logind[832]: Session 15 logged out. Waiting for processes to exit.
Mar  1 04:39:28 np0005634532 systemd-logind[832]: Removed session 15.
Mar  1 04:39:33 np0005634532 systemd-logind[832]: New session 16 of user zuul.
Mar  1 04:39:33 np0005634532 systemd[1]: Started Session 16 of User zuul.
Mar  1 04:39:34 np0005634532 python3.9[70589]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Mar  1 04:39:35 np0005634532 python3.9[70746]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Mar  1 04:39:36 np0005634532 python3.9[70901]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Mar  1 04:39:37 np0005634532 python3.9[71055]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Mar  1 04:39:38 np0005634532 python3.9[71209]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Mar  1 04:39:38 np0005634532 python3.9[71364]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Mar  1 04:39:39 np0005634532 python3.9[71520]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:39:40 np0005634532 systemd[1]: session-16.scope: Deactivated successfully.
Mar  1 04:39:40 np0005634532 systemd[1]: session-16.scope: Consumed 4.153s CPU time.
Mar  1 04:39:40 np0005634532 systemd-logind[832]: Session 16 logged out. Waiting for processes to exit.
Mar  1 04:39:40 np0005634532 systemd-logind[832]: Removed session 16.
Mar  1 04:39:45 np0005634532 systemd-logind[832]: New session 17 of user zuul.
Mar  1 04:39:45 np0005634532 systemd[1]: Started Session 17 of User zuul.
Mar  1 04:39:46 np0005634532 python3.9[71699]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Mar  1 04:39:47 np0005634532 python3.9[71856]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Mar  1 04:39:48 np0005634532 python3.9[71941]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Mar  1 04:39:50 np0005634532 python3.9[72094]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Mar  1 04:39:52 np0005634532 python3.9[72245]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
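
Session 17 (04:39:47-04:39:52) is a reboot check: yum-utils is installed for its needs-restarting helper, which is then run with -r, and /var/lib/openstack/reboot_required/ is scanned for marker files. A short sketch of how such a result is typically interpreted (the exit-code semantics come from needs-restarting(1), not from this log):

    # needs-restarting -r exits 1 when core packages (kernel, glibc,
    # systemd, ...) were updated since boot, and 0 otherwise
    if ! needs-restarting -r; then
        echo "reboot required by updated packages"
    fi
    # any marker file here also signals a pending reboot
    find /var/lib/openstack/reboot_required/ -type f
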
Mar  1 04:39:52 np0005634532 python3.9[72395]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Mar  1 04:39:52 np0005634532 rsyslogd[1019]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Mar  1 04:39:53 np0005634532 python3.9[72546]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/nova follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Mar  1 04:39:53 np0005634532 systemd[1]: session-17.scope: Deactivated successfully.
Mar  1 04:39:53 np0005634532 systemd[1]: session-17.scope: Consumed 5.581s CPU time.
Mar  1 04:39:53 np0005634532 systemd-logind[832]: Session 17 logged out. Waiting for processes to exit.
Mar  1 04:39:53 np0005634532 systemd-logind[832]: Removed session 17.
Mar  1 04:40:01 np0005634532 systemd-logind[832]: New session 18 of user zuul.
Mar  1 04:40:01 np0005634532 systemd[1]: Started Session 18 of User zuul.
Mar  1 04:40:06 np0005634532 python3[73318]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Mar  1 04:40:07 np0005634532 python3[73352]: ansible-ansible.legacy.dnf Invoked with name=['jq'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Mar  1 04:40:09 np0005634532 python3[73379]: ansible-ansible.legacy.dnf Invoked with name=['centos-release-ceph-squid'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Mar  1 04:40:11 np0005634532 python3[73436]: ansible-ansible.legacy.dnf Invoked with name=['cephadm'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
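
Session 18 begins by installing the Ceph tooling through three consecutive dnf module calls. Their CLI equivalent (a sketch, not the literal playbook):

    dnf install -y jq                         # JSON post-processing of cephadm output
    dnf install -y centos-release-ceph-squid  # enables the CentOS Storage SIG repo for Ceph Squid
    dnf install -y cephadm                    # pulled from the repo enabled above
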
Mar  1 04:40:14 np0005634532 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Mar  1 04:40:14 np0005634532 systemd[1]: Starting man-db-cache-update.service...
Mar  1 04:40:14 np0005634532 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Mar  1 04:40:14 np0005634532 systemd[1]: Finished man-db-cache-update.service.
Mar  1 04:40:14 np0005634532 systemd[1]: run-raa04f431646b4b14a799fe51a74a0c92.service: Deactivated successfully.
Mar  1 04:40:15 np0005634532 python3[73558]: ansible-ansible.builtin.stat Invoked with path=/usr/sbin/cephadm follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Mar  1 04:40:16 np0005634532 python3[73586]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Mar  1 04:40:17 np0005634532 python3[73682]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Mar  1 04:40:19 np0005634532 python3[73709]: ansible-ansible.builtin.stat Invoked with path=/dev/loop3 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Mar  1 04:40:19 np0005634532 python3[73735]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-0.img bs=1 count=0 seek=20G#012losetup /dev/loop3 /var/lib/ceph-osd-0.img#012lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Mar  1 04:40:19 np0005634532 kernel: loop: module loaded
Mar  1 04:40:19 np0005634532 kernel: loop3: detected capacity change from 0 to 41943040
Mar  1 04:40:20 np0005634532 python3[73770]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop3#012vgcreate ceph_vg0 /dev/loop3#012lvcreate -n ceph_lv0 -l +100%FREE ceph_vg0#012lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Mar  1 04:40:20 np0005634532 lvm[73773]: PV /dev/loop3 not used.
Mar  1 04:40:20 np0005634532 lvm[73775]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Mar  1 04:40:20 np0005634532 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg0.
Mar  1 04:40:20 np0005634532 lvm[73783]:  1 logical volume(s) in volume group "ceph_vg0" now active
Mar  1 04:40:20 np0005634532 lvm[73785]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Mar  1 04:40:20 np0005634532 lvm[73785]: VG ceph_vg0 finished
Mar  1 04:40:20 np0005634532 systemd[1]: lvm-activate-ceph_vg0.service: Deactivated successfully.
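
The two command tasks at 04:40:19-04:40:20 build a file-backed block device for a test OSD. Decoding the #012 newline escapes in their _raw_params gives the actual script; the sparse 20G file explains the kernel's "capacity change from 0 to 41943040" (512-byte sectors):

    dd if=/dev/zero of=/var/lib/ceph-osd-0.img bs=1 count=0 seek=20G   # sparse 20 GiB image
    losetup /dev/loop3 /var/lib/ceph-osd-0.img                         # expose it as a block device
    lsblk
    pvcreate /dev/loop3
    vgcreate ceph_vg0 /dev/loop3
    lvcreate -n ceph_lv0 -l +100%FREE ceph_vg0                         # one LV spanning the VG
    lvs
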
Mar  1 04:40:20 np0005634532 python3[73863]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-0.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Mar  1 04:40:21 np0005634532 python3[73936]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1772358020.5427895-37582-144471719220333/source dest=/etc/systemd/system/ceph-osd-losetup-0.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=427b1db064a970126b729b07acf99fa7d0eecb9c backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:40:21 np0005634532 python3[73986]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-0.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Mar  1 04:40:21 np0005634532 systemd[1]: Reloading.
Mar  1 04:40:21 np0005634532 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Mar  1 04:40:21 np0005634532 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Mar  1 04:40:22 np0005634532 systemd[1]: Starting Ceph OSD losetup...
Mar  1 04:40:22 np0005634532 bash[74033]: /dev/loop3: [64513]:4329457 (/var/lib/ceph-osd-0.img)
Mar  1 04:40:22 np0005634532 systemd[1]: Finished Ceph OSD losetup.
Mar  1 04:40:22 np0005634532 lvm[74034]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Mar  1 04:40:22 np0005634532 lvm[74034]: VG ceph_vg0 finished
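
ceph-osd-losetup-0.service is templated out and started at 04:40:21-04:40:22 so the loop mapping survives reboots; starting it prints the current mapping (the bash[74033] line above). The unit body is not in the log; a hypothetical sketch consistent with the service name and that output:

    # HYPOTHETICAL unit content -- inferred, not copied from the deployed file
    cat > /etc/systemd/system/ceph-osd-losetup-0.service <<'EOF'
    [Unit]
    Description=Ceph OSD losetup

    [Service]
    Type=oneshot
    RemainAfterExit=yes
    # print the mapping if loop3 is already attached, otherwise attach it
    ExecStart=/bin/bash -c 'losetup /dev/loop3 || losetup /dev/loop3 /var/lib/ceph-osd-0.img'

    [Install]
    WantedBy=multi-user.target
    EOF
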
Mar  1 04:40:23 np0005634532 chronyd[59647]: Selected source 167.160.187.179 (pool.ntp.org)
Mar  1 04:40:24 np0005634532 python3[74058]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Mar  1 04:40:26 np0005634532 python3[74153]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/cephadm ls --no-detail _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Mar  1 04:40:26 np0005634532 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Mar  1 04:40:26 np0005634532 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Mar  1 04:40:27 np0005634532 python3[74218]: ansible-ansible.builtin.file Invoked with path=/etc/ceph state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:40:27 np0005634532 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Mar  1 04:40:27 np0005634532 python3[74244]: ansible-ansible.builtin.file Invoked with path=/home/ceph-admin/specs owner=ceph-admin group=ceph-admin mode=0755 state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:40:28 np0005634532 python3[74322]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/specs/ceph_spec.yaml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Mar  1 04:40:28 np0005634532 python3[74395]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1772358027.865291-37763-250496404956285/source dest=/home/ceph-admin/specs/ceph_spec.yaml owner=ceph-admin group=ceph-admin mode=0644 _original_basename=ceph_spec.yml follow=False checksum=a2c84611a4e46cfce32a90c112eae0345cab6abb backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:40:29 np0005634532 python3[74497]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Mar  1 04:40:29 np0005634532 python3[74570]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1772358028.9278898-37781-152769950645251/source dest=/home/ceph-admin/assimilate_ceph.conf owner=ceph-admin group=ceph-admin mode=0644 _original_basename=initial_ceph.conf follow=False checksum=41828f7c2442fdf376911255e33c12863fc3b1b3 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:40:30 np0005634532 python3[74620]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/.ssh/id_rsa follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Mar  1 04:40:30 np0005634532 python3[74650]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/.ssh/id_rsa.pub follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Mar  1 04:40:30 np0005634532 python3[74678]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Mar  1 04:40:31 np0005634532 python3[74704]: ansible-ansible.builtin.stat Invoked with path=/tmp/cephadm_registry.json follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Mar  1 04:40:31 np0005634532 python3[74730]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/cephadm bootstrap --skip-firewalld --ssh-private-key /home/ceph-admin/.ssh/id_rsa --ssh-public-key /home/ceph-admin/.ssh/id_rsa.pub --ssh-user ceph-admin --allow-fqdn-hostname --output-keyring /etc/ceph/ceph.client.admin.keyring --output-config /etc/ceph/ceph.conf --fsid 437b1e74-f995-5d64-af1d-257ce01d77ab --config /home/ceph-admin/assimilate_ceph.conf \--skip-monitoring-stack --skip-dashboard --mon-ip 192.168.122.100#012 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
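
The long command task above is the cluster bootstrap itself. Re-wrapped for readability, with the flags exactly as logged (the stray backslash before --skip-monitoring-stack is a line-continuation artifact from the playbook):

    /usr/sbin/cephadm bootstrap \
        --skip-firewalld \
        --ssh-private-key /home/ceph-admin/.ssh/id_rsa \
        --ssh-public-key /home/ceph-admin/.ssh/id_rsa.pub \
        --ssh-user ceph-admin \
        --allow-fqdn-hostname \
        --output-keyring /etc/ceph/ceph.client.admin.keyring \
        --output-config /etc/ceph/ceph.conf \
        --fsid 437b1e74-f995-5d64-af1d-257ce01d77ab \
        --config /home/ceph-admin/assimilate_ceph.conf \
        --skip-monitoring-stack \
        --skip-dashboard \
        --mon-ip 192.168.122.100
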
Mar  1 04:40:31 np0005634532 systemd-logind[832]: New session 19 of user ceph-admin.
Mar  1 04:40:31 np0005634532 systemd[1]: Created slice User Slice of UID 42477.
Mar  1 04:40:31 np0005634532 systemd[1]: Starting User Runtime Directory /run/user/42477...
Mar  1 04:40:31 np0005634532 systemd[1]: Finished User Runtime Directory /run/user/42477.
Mar  1 04:40:31 np0005634532 systemd[1]: Starting User Manager for UID 42477...
Mar  1 04:40:31 np0005634532 systemd[74738]: Queued start job for default target Main User Target.
Mar  1 04:40:31 np0005634532 systemd[74738]: Created slice User Application Slice.
Mar  1 04:40:31 np0005634532 systemd[74738]: Started Mark boot as successful after the user session has run 2 minutes.
Mar  1 04:40:31 np0005634532 systemd[74738]: Started Daily Cleanup of User's Temporary Directories.
Mar  1 04:40:31 np0005634532 systemd[74738]: Reached target Paths.
Mar  1 04:40:31 np0005634532 systemd[74738]: Reached target Timers.
Mar  1 04:40:31 np0005634532 systemd[74738]: Starting D-Bus User Message Bus Socket...
Mar  1 04:40:31 np0005634532 systemd[74738]: Starting Create User's Volatile Files and Directories...
Mar  1 04:40:31 np0005634532 systemd[74738]: Finished Create User's Volatile Files and Directories.
Mar  1 04:40:31 np0005634532 systemd[74738]: Listening on D-Bus User Message Bus Socket.
Mar  1 04:40:31 np0005634532 systemd[74738]: Reached target Sockets.
Mar  1 04:40:31 np0005634532 systemd[74738]: Reached target Basic System.
Mar  1 04:40:31 np0005634532 systemd[74738]: Reached target Main User Target.
Mar  1 04:40:31 np0005634532 systemd[74738]: Startup finished in 128ms.
Mar  1 04:40:31 np0005634532 systemd[1]: Started User Manager for UID 42477.
Mar  1 04:40:31 np0005634532 systemd[1]: Started Session 19 of User ceph-admin.
Mar  1 04:40:32 np0005634532 systemd[1]: session-19.scope: Deactivated successfully.
Mar  1 04:40:32 np0005634532 systemd-logind[832]: Session 19 logged out. Waiting for processes to exit.
Mar  1 04:40:32 np0005634532 systemd-logind[832]: Removed session 19.
Mar  1 04:40:32 np0005634532 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Mar  1 04:40:32 np0005634532 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Mar  1 04:40:34 np0005634532 systemd[1]: var-lib-containers-storage-overlay-compat3371034525-lower\x2dmapped.mount: Deactivated successfully.
Mar  1 04:40:42 np0005634532 systemd[1]: Stopping User Manager for UID 42477...
Mar  1 04:40:42 np0005634532 systemd[74738]: Activating special unit Exit the Session...
Mar  1 04:40:42 np0005634532 systemd[74738]: Stopped target Main User Target.
Mar  1 04:40:42 np0005634532 systemd[74738]: Stopped target Basic System.
Mar  1 04:40:42 np0005634532 systemd[74738]: Stopped target Paths.
Mar  1 04:40:42 np0005634532 systemd[74738]: Stopped target Sockets.
Mar  1 04:40:42 np0005634532 systemd[74738]: Stopped target Timers.
Mar  1 04:40:42 np0005634532 systemd[74738]: Stopped Mark boot as successful after the user session has run 2 minutes.
Mar  1 04:40:42 np0005634532 systemd[74738]: Stopped Daily Cleanup of User's Temporary Directories.
Mar  1 04:40:42 np0005634532 systemd[74738]: Closed D-Bus User Message Bus Socket.
Mar  1 04:40:42 np0005634532 systemd[74738]: Stopped Create User's Volatile Files and Directories.
Mar  1 04:40:42 np0005634532 systemd[74738]: Removed slice User Application Slice.
Mar  1 04:40:42 np0005634532 systemd[74738]: Reached target Shutdown.
Mar  1 04:40:42 np0005634532 systemd[74738]: Finished Exit the Session.
Mar  1 04:40:42 np0005634532 systemd[74738]: Reached target Exit the Session.
Mar  1 04:40:42 np0005634532 systemd[1]: user@42477.service: Deactivated successfully.
Mar  1 04:40:42 np0005634532 systemd[1]: Stopped User Manager for UID 42477.
Mar  1 04:40:42 np0005634532 systemd[1]: Stopping User Runtime Directory /run/user/42477...
Mar  1 04:40:42 np0005634532 systemd[1]: run-user-42477.mount: Deactivated successfully.
Mar  1 04:40:42 np0005634532 systemd[1]: user-runtime-dir@42477.service: Deactivated successfully.
Mar  1 04:40:42 np0005634532 systemd[1]: Stopped User Runtime Directory /run/user/42477.
Mar  1 04:40:42 np0005634532 systemd[1]: Removed slice User Slice of UID 42477.
Mar  1 04:40:48 np0005634532 podman[74831]: 2026-03-01 09:40:48.265613726 +0000 UTC m=+15.898246071 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Mar  1 04:40:48 np0005634532 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Mar  1 04:40:48 np0005634532 podman[74898]: 2026-03-01 09:40:48.350797515 +0000 UTC m=+0.059978466 container create 1de71310b1f7951674340015c4d4f8ee42a2c458d57e1d5938e0b29963bcd33c (image=quay.io/ceph/ceph:v19, name=fervent_lamarr, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Mar  1 04:40:48 np0005634532 systemd[1]: Created slice Virtual Machine and Container Slice.
Mar  1 04:40:48 np0005634532 systemd[1]: Started libpod-conmon-1de71310b1f7951674340015c4d4f8ee42a2c458d57e1d5938e0b29963bcd33c.scope.
Mar  1 04:40:48 np0005634532 systemd[1]: Started libcrun container.
Mar  1 04:40:48 np0005634532 podman[74898]: 2026-03-01 09:40:48.327620651 +0000 UTC m=+0.036801472 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Mar  1 04:40:48 np0005634532 podman[74898]: 2026-03-01 09:40:48.440049617 +0000 UTC m=+0.149230428 container init 1de71310b1f7951674340015c4d4f8ee42a2c458d57e1d5938e0b29963bcd33c (image=quay.io/ceph/ceph:v19, name=fervent_lamarr, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Mar  1 04:40:48 np0005634532 podman[74898]: 2026-03-01 09:40:48.45028998 +0000 UTC m=+0.159470711 container start 1de71310b1f7951674340015c4d4f8ee42a2c458d57e1d5938e0b29963bcd33c (image=quay.io/ceph/ceph:v19, name=fervent_lamarr, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Mar  1 04:40:48 np0005634532 podman[74898]: 2026-03-01 09:40:48.455093699 +0000 UTC m=+0.164274620 container attach 1de71310b1f7951674340015c4d4f8ee42a2c458d57e1d5938e0b29963bcd33c (image=quay.io/ceph/ceph:v19, name=fervent_lamarr, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Mar  1 04:40:48 np0005634532 fervent_lamarr[74916]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)
Mar  1 04:40:48 np0005634532 systemd[1]: libpod-1de71310b1f7951674340015c4d4f8ee42a2c458d57e1d5938e0b29963bcd33c.scope: Deactivated successfully.
Mar  1 04:40:48 np0005634532 podman[74898]: 2026-03-01 09:40:48.551825206 +0000 UTC m=+0.261005937 container died 1de71310b1f7951674340015c4d4f8ee42a2c458d57e1d5938e0b29963bcd33c (image=quay.io/ceph/ceph:v19, name=fervent_lamarr, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Mar  1 04:40:48 np0005634532 systemd[1]: var-lib-containers-storage-overlay-8662422e43fd3bfe5fd3fb65007e5bce56bf8e026e43411eb316d1f4aaccc9c5-merged.mount: Deactivated successfully.
Mar  1 04:40:48 np0005634532 podman[74898]: 2026-03-01 09:40:48.584622069 +0000 UTC m=+0.293802800 container remove 1de71310b1f7951674340015c4d4f8ee42a2c458d57e1d5938e0b29963bcd33c (image=quay.io/ceph/ceph:v19, name=fervent_lamarr, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Mar  1 04:40:48 np0005634532 systemd[1]: libpod-conmon-1de71310b1f7951674340015c4d4f8ee42a2c458d57e1d5938e0b29963bcd33c.scope: Deactivated successfully.
Mar  1 04:40:48 np0005634532 podman[74932]: 2026-03-01 09:40:48.646256396 +0000 UTC m=+0.042820832 container create d4001ad57d5f618952d8c6082b55c3a7a68a08e09605ea13db59d4d04b9103d2 (image=quay.io/ceph/ceph:v19, name=heuristic_borg, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Mar  1 04:40:48 np0005634532 systemd[1]: Started libpod-conmon-d4001ad57d5f618952d8c6082b55c3a7a68a08e09605ea13db59d4d04b9103d2.scope.
Mar  1 04:40:48 np0005634532 systemd[1]: Started libcrun container.
Mar  1 04:40:48 np0005634532 podman[74932]: 2026-03-01 09:40:48.706798306 +0000 UTC m=+0.103362812 container init d4001ad57d5f618952d8c6082b55c3a7a68a08e09605ea13db59d4d04b9103d2 (image=quay.io/ceph/ceph:v19, name=heuristic_borg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Mar  1 04:40:48 np0005634532 podman[74932]: 2026-03-01 09:40:48.711225675 +0000 UTC m=+0.107790121 container start d4001ad57d5f618952d8c6082b55c3a7a68a08e09605ea13db59d4d04b9103d2 (image=quay.io/ceph/ceph:v19, name=heuristic_borg, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Mar  1 04:40:48 np0005634532 heuristic_borg[74950]: 167 167
Mar  1 04:40:48 np0005634532 systemd[1]: libpod-d4001ad57d5f618952d8c6082b55c3a7a68a08e09605ea13db59d4d04b9103d2.scope: Deactivated successfully.
Mar  1 04:40:48 np0005634532 podman[74932]: 2026-03-01 09:40:48.716074305 +0000 UTC m=+0.112638751 container attach d4001ad57d5f618952d8c6082b55c3a7a68a08e09605ea13db59d4d04b9103d2 (image=quay.io/ceph/ceph:v19, name=heuristic_borg, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Mar  1 04:40:48 np0005634532 podman[74932]: 2026-03-01 09:40:48.717110471 +0000 UTC m=+0.113674887 container died d4001ad57d5f618952d8c6082b55c3a7a68a08e09605ea13db59d4d04b9103d2 (image=quay.io/ceph/ceph:v19, name=heuristic_borg, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Mar  1 04:40:48 np0005634532 podman[74932]: 2026-03-01 09:40:48.624298792 +0000 UTC m=+0.020863218 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Mar  1 04:40:48 np0005634532 podman[74932]: 2026-03-01 09:40:48.759870451 +0000 UTC m=+0.156434897 container remove d4001ad57d5f618952d8c6082b55c3a7a68a08e09605ea13db59d4d04b9103d2 (image=quay.io/ceph/ceph:v19, name=heuristic_borg, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Mar  1 04:40:48 np0005634532 systemd[1]: libpod-conmon-d4001ad57d5f618952d8c6082b55c3a7a68a08e09605ea13db59d4d04b9103d2.scope: Deactivated successfully.
Mar  1 04:40:48 np0005634532 podman[74968]: 2026-03-01 09:40:48.836045718 +0000 UTC m=+0.055766463 container create 64620f1e38494e0cbd86cf022703e684322a448031f803016c7064e8a3ca9e8f (image=quay.io/ceph/ceph:v19, name=sharp_wiles, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Mar  1 04:40:48 np0005634532 systemd[1]: Started libpod-conmon-64620f1e38494e0cbd86cf022703e684322a448031f803016c7064e8a3ca9e8f.scope.
Mar  1 04:40:48 np0005634532 systemd[1]: Started libcrun container.
Mar  1 04:40:48 np0005634532 podman[74968]: 2026-03-01 09:40:48.891264496 +0000 UTC m=+0.110985301 container init 64620f1e38494e0cbd86cf022703e684322a448031f803016c7064e8a3ca9e8f (image=quay.io/ceph/ceph:v19, name=sharp_wiles, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Mar  1 04:40:48 np0005634532 podman[74968]: 2026-03-01 09:40:48.895732727 +0000 UTC m=+0.115453502 container start 64620f1e38494e0cbd86cf022703e684322a448031f803016c7064e8a3ca9e8f (image=quay.io/ceph/ceph:v19, name=sharp_wiles, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Mar  1 04:40:48 np0005634532 podman[74968]: 2026-03-01 09:40:48.899525021 +0000 UTC m=+0.119245846 container attach 64620f1e38494e0cbd86cf022703e684322a448031f803016c7064e8a3ca9e8f (image=quay.io/ceph/ceph:v19, name=sharp_wiles, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Mar  1 04:40:48 np0005634532 podman[74968]: 2026-03-01 09:40:48.808633369 +0000 UTC m=+0.028354124 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Mar  1 04:40:48 np0005634532 sharp_wiles[74984]: AQCgCaRpfrQXNxAA2fS84gp8Qt9TLyEBE6gvGg==
Mar  1 04:40:48 np0005634532 systemd[1]: libpod-64620f1e38494e0cbd86cf022703e684322a448031f803016c7064e8a3ca9e8f.scope: Deactivated successfully.
Mar  1 04:40:48 np0005634532 podman[74968]: 2026-03-01 09:40:48.928280523 +0000 UTC m=+0.148001288 container died 64620f1e38494e0cbd86cf022703e684322a448031f803016c7064e8a3ca9e8f (image=quay.io/ceph/ceph:v19, name=sharp_wiles, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Mar  1 04:40:48 np0005634532 podman[74968]: 2026-03-01 09:40:48.965741201 +0000 UTC m=+0.185461966 container remove 64620f1e38494e0cbd86cf022703e684322a448031f803016c7064e8a3ca9e8f (image=quay.io/ceph/ceph:v19, name=sharp_wiles, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True)
Mar  1 04:40:48 np0005634532 systemd[1]: libpod-conmon-64620f1e38494e0cbd86cf022703e684322a448031f803016c7064e8a3ca9e8f.scope: Deactivated successfully.
Mar  1 04:40:49 np0005634532 podman[75005]: 2026-03-01 09:40:49.024106967 +0000 UTC m=+0.040013222 container create bf6bef9d7999bed22bbe931f49f2377d82cba47b9cd5e44ef95b8f4965b245c3 (image=quay.io/ceph/ceph:v19, name=sharp_swanson, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Mar  1 04:40:49 np0005634532 systemd[1]: Started libpod-conmon-bf6bef9d7999bed22bbe931f49f2377d82cba47b9cd5e44ef95b8f4965b245c3.scope.
Mar  1 04:40:49 np0005634532 systemd[1]: Started libcrun container.
Mar  1 04:40:49 np0005634532 podman[75005]: 2026-03-01 09:40:49.083929959 +0000 UTC m=+0.099836234 container init bf6bef9d7999bed22bbe931f49f2377d82cba47b9cd5e44ef95b8f4965b245c3 (image=quay.io/ceph/ceph:v19, name=sharp_swanson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Mar  1 04:40:49 np0005634532 podman[75005]: 2026-03-01 09:40:49.090597475 +0000 UTC m=+0.106503740 container start bf6bef9d7999bed22bbe931f49f2377d82cba47b9cd5e44ef95b8f4965b245c3 (image=quay.io/ceph/ceph:v19, name=sharp_swanson, CEPH_REF=squid, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Mar  1 04:40:49 np0005634532 podman[75005]: 2026-03-01 09:40:49.096705656 +0000 UTC m=+0.112611921 container attach bf6bef9d7999bed22bbe931f49f2377d82cba47b9cd5e44ef95b8f4965b245c3 (image=quay.io/ceph/ceph:v19, name=sharp_swanson, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Mar  1 04:40:49 np0005634532 podman[75005]: 2026-03-01 09:40:49.006646385 +0000 UTC m=+0.022552680 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Mar  1 04:40:49 np0005634532 sharp_swanson[75021]: AQChCaRpJqWuBhAA+KuvxR0lYiEJVNqoHUOBaw==
Mar  1 04:40:49 np0005634532 systemd[1]: libpod-bf6bef9d7999bed22bbe931f49f2377d82cba47b9cd5e44ef95b8f4965b245c3.scope: Deactivated successfully.
Mar  1 04:40:49 np0005634532 podman[75005]: 2026-03-01 09:40:49.114695232 +0000 UTC m=+0.130601487 container died bf6bef9d7999bed22bbe931f49f2377d82cba47b9cd5e44ef95b8f4965b245c3 (image=quay.io/ceph/ceph:v19, name=sharp_swanson, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Mar  1 04:40:49 np0005634532 podman[75005]: 2026-03-01 09:40:49.150203331 +0000 UTC m=+0.166109586 container remove bf6bef9d7999bed22bbe931f49f2377d82cba47b9cd5e44ef95b8f4965b245c3 (image=quay.io/ceph/ceph:v19, name=sharp_swanson, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Mar  1 04:40:49 np0005634532 systemd[1]: libpod-conmon-bf6bef9d7999bed22bbe931f49f2377d82cba47b9cd5e44ef95b8f4965b245c3.scope: Deactivated successfully.
Mar  1 04:40:49 np0005634532 podman[75041]: 2026-03-01 09:40:49.210861884 +0000 UTC m=+0.043410986 container create 988b1d1e21cbd510590a2e5021367447cad3c8ab092a2f8fd1903c7fb63f4dcc (image=quay.io/ceph/ceph:v19, name=agitated_kirch, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Mar  1 04:40:49 np0005634532 systemd[1]: Started libpod-conmon-988b1d1e21cbd510590a2e5021367447cad3c8ab092a2f8fd1903c7fb63f4dcc.scope.
Mar  1 04:40:49 np0005634532 systemd[1]: Started libcrun container.
Mar  1 04:40:49 np0005634532 podman[75041]: 2026-03-01 09:40:49.18887555 +0000 UTC m=+0.021424642 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Mar  1 04:40:49 np0005634532 podman[75041]: 2026-03-01 09:40:49.296448675 +0000 UTC m=+0.128997777 container init 988b1d1e21cbd510590a2e5021367447cad3c8ab092a2f8fd1903c7fb63f4dcc (image=quay.io/ceph/ceph:v19, name=agitated_kirch, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Mar  1 04:40:49 np0005634532 podman[75041]: 2026-03-01 09:40:49.303410647 +0000 UTC m=+0.135959699 container start 988b1d1e21cbd510590a2e5021367447cad3c8ab092a2f8fd1903c7fb63f4dcc (image=quay.io/ceph/ceph:v19, name=agitated_kirch, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Mar  1 04:40:49 np0005634532 agitated_kirch[75058]: AQChCaRp9yk+ExAAzF+y5P5GL3tJEo13SNfELQ==
Mar  1 04:40:49 np0005634532 systemd[1]: libpod-988b1d1e21cbd510590a2e5021367447cad3c8ab092a2f8fd1903c7fb63f4dcc.scope: Deactivated successfully.
Mar  1 04:40:49 np0005634532 podman[75041]: 2026-03-01 09:40:49.453712931 +0000 UTC m=+0.286262183 container attach 988b1d1e21cbd510590a2e5021367447cad3c8ab092a2f8fd1903c7fb63f4dcc (image=quay.io/ceph/ceph:v19, name=agitated_kirch, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Mar  1 04:40:49 np0005634532 podman[75041]: 2026-03-01 09:40:49.454320826 +0000 UTC m=+0.286869928 container died 988b1d1e21cbd510590a2e5021367447cad3c8ab092a2f8fd1903c7fb63f4dcc (image=quay.io/ceph/ceph:v19, name=agitated_kirch, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Mar  1 04:40:49 np0005634532 systemd[1]: var-lib-containers-storage-overlay-444fa9d5aae7cace748c4004a062af36a8bf1fd18f2f8f689d7e58b37ff141ab-merged.mount: Deactivated successfully.
Mar  1 04:40:50 np0005634532 podman[75041]: 2026-03-01 09:40:50.718848136 +0000 UTC m=+1.551397198 container remove 988b1d1e21cbd510590a2e5021367447cad3c8ab092a2f8fd1903c7fb63f4dcc (image=quay.io/ceph/ceph:v19, name=agitated_kirch, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Mar  1 04:40:50 np0005634532 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Mar  1 04:40:50 np0005634532 podman[75079]: 2026-03-01 09:40:50.7795289 +0000 UTC m=+0.042275399 container create 55624926f305ef97aadf5d380604fd5eb6b5487582c35345154e485947634432 (image=quay.io/ceph/ceph:v19, name=dreamy_bassi, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid)
Mar  1 04:40:50 np0005634532 systemd[1]: libpod-conmon-988b1d1e21cbd510590a2e5021367447cad3c8ab092a2f8fd1903c7fb63f4dcc.scope: Deactivated successfully.
Mar  1 04:40:50 np0005634532 systemd[1]: Started libpod-conmon-55624926f305ef97aadf5d380604fd5eb6b5487582c35345154e485947634432.scope.
Mar  1 04:40:50 np0005634532 podman[75079]: 2026-03-01 09:40:50.759865243 +0000 UTC m=+0.022611542 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Mar  1 04:40:50 np0005634532 systemd[1]: Started libcrun container.
Mar  1 04:40:50 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0865d558c084e3b92d9dcea4bc5e0d0175b4d4c81a25e328577c08ef4de2934/merged/tmp/monmap supports timestamps until 2038 (0x7fffffff)
Mar  1 04:40:50 np0005634532 podman[75079]: 2026-03-01 09:40:50.881484245 +0000 UTC m=+0.144230504 container init 55624926f305ef97aadf5d380604fd5eb6b5487582c35345154e485947634432 (image=quay.io/ceph/ceph:v19, name=dreamy_bassi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Mar  1 04:40:50 np0005634532 podman[75079]: 2026-03-01 09:40:50.888828078 +0000 UTC m=+0.151574337 container start 55624926f305ef97aadf5d380604fd5eb6b5487582c35345154e485947634432 (image=quay.io/ceph/ceph:v19, name=dreamy_bassi, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Mar  1 04:40:50 np0005634532 podman[75079]: 2026-03-01 09:40:50.892489238 +0000 UTC m=+0.155235497 container attach 55624926f305ef97aadf5d380604fd5eb6b5487582c35345154e485947634432 (image=quay.io/ceph/ceph:v19, name=dreamy_bassi, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Mar  1 04:40:50 np0005634532 dreamy_bassi[75095]: /usr/bin/monmaptool: monmap file /tmp/monmap
Mar  1 04:40:50 np0005634532 dreamy_bassi[75095]: setting min_mon_release = quincy
Mar  1 04:40:50 np0005634532 dreamy_bassi[75095]: /usr/bin/monmaptool: set fsid to 437b1e74-f995-5d64-af1d-257ce01d77ab
Mar  1 04:40:50 np0005634532 dreamy_bassi[75095]: /usr/bin/monmaptool: writing epoch 0 to /tmp/monmap (1 monitors)
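
The four monmaptool lines above come from the bootstrap step that seeds the initial monitor map inside the short-lived dreamy_bassi container. The log records only the tool's output, not its arguments; a plausible reconstruction from the fsid and monitor address that appear later in this log (the flags are assumptions, though all are documented monmaptool options) is:

    # Assumed invocation -- not recorded verbatim in the log:
    $ /usr/bin/monmaptool --create --clobber \
          --fsid 437b1e74-f995-5d64-af1d-257ce01d77ab \
          --set-min-mon-release quincy \
          --addv compute-0 '[v2:192.168.122.100:3300,v1:192.168.122.100:6789]' \
          /tmp/monmap
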
Mar  1 04:40:50 np0005634532 systemd[1]: libpod-55624926f305ef97aadf5d380604fd5eb6b5487582c35345154e485947634432.scope: Deactivated successfully.
Mar  1 04:40:50 np0005634532 podman[75079]: 2026-03-01 09:40:50.924605994 +0000 UTC m=+0.187352293 container died 55624926f305ef97aadf5d380604fd5eb6b5487582c35345154e485947634432 (image=quay.io/ceph/ceph:v19, name=dreamy_bassi, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Mar  1 04:40:50 np0005634532 systemd[1]: var-lib-containers-storage-overlay-b0865d558c084e3b92d9dcea4bc5e0d0175b4d4c81a25e328577c08ef4de2934-merged.mount: Deactivated successfully.
Mar  1 04:40:50 np0005634532 podman[75079]: 2026-03-01 09:40:50.961293863 +0000 UTC m=+0.224040132 container remove 55624926f305ef97aadf5d380604fd5eb6b5487582c35345154e485947634432 (image=quay.io/ceph/ceph:v19, name=dreamy_bassi, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Mar  1 04:40:50 np0005634532 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Mar  1 04:40:50 np0005634532 systemd[1]: libpod-conmon-55624926f305ef97aadf5d380604fd5eb6b5487582c35345154e485947634432.scope: Deactivated successfully.
Mar  1 04:40:51 np0005634532 podman[75114]: 2026-03-01 09:40:51.030273082 +0000 UTC m=+0.049833416 container create 26805ab27b2fc6a762f3540d966cf55596b918d4b9d83c6bc18b33744b98c938 (image=quay.io/ceph/ceph:v19, name=charming_hermann, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Mar  1 04:40:51 np0005634532 systemd[1]: Started libpod-conmon-26805ab27b2fc6a762f3540d966cf55596b918d4b9d83c6bc18b33744b98c938.scope.
Mar  1 04:40:51 np0005634532 systemd[1]: Started libcrun container.
Mar  1 04:40:51 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/574659f1050648f5006d3cbafa9d126a899fe0a50191946332a6e1a5fbc14187/merged/tmp/keyring supports timestamps until 2038 (0x7fffffff)
Mar  1 04:40:51 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/574659f1050648f5006d3cbafa9d126a899fe0a50191946332a6e1a5fbc14187/merged/tmp/monmap supports timestamps until 2038 (0x7fffffff)
Mar  1 04:40:51 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/574659f1050648f5006d3cbafa9d126a899fe0a50191946332a6e1a5fbc14187/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Mar  1 04:40:51 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/574659f1050648f5006d3cbafa9d126a899fe0a50191946332a6e1a5fbc14187/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Mar  1 04:40:51 np0005634532 podman[75114]: 2026-03-01 09:40:51.007422896 +0000 UTC m=+0.026983240 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Mar  1 04:40:51 np0005634532 podman[75114]: 2026-03-01 09:40:51.112827308 +0000 UTC m=+0.132387652 container init 26805ab27b2fc6a762f3540d966cf55596b918d4b9d83c6bc18b33744b98c938 (image=quay.io/ceph/ceph:v19, name=charming_hermann, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Mar  1 04:40:51 np0005634532 podman[75114]: 2026-03-01 09:40:51.116906149 +0000 UTC m=+0.136466473 container start 26805ab27b2fc6a762f3540d966cf55596b918d4b9d83c6bc18b33744b98c938 (image=quay.io/ceph/ceph:v19, name=charming_hermann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Mar  1 04:40:51 np0005634532 podman[75114]: 2026-03-01 09:40:51.120451266 +0000 UTC m=+0.140011630 container attach 26805ab27b2fc6a762f3540d966cf55596b918d4b9d83c6bc18b33744b98c938 (image=quay.io/ceph/ceph:v19, name=charming_hermann, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Mar  1 04:40:51 np0005634532 systemd[1]: libpod-26805ab27b2fc6a762f3540d966cf55596b918d4b9d83c6bc18b33744b98c938.scope: Deactivated successfully.
Mar  1 04:40:51 np0005634532 podman[75114]: 2026-03-01 09:40:51.204822487 +0000 UTC m=+0.224382811 container died 26805ab27b2fc6a762f3540d966cf55596b918d4b9d83c6bc18b33744b98c938 (image=quay.io/ceph/ceph:v19, name=charming_hermann, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Mar  1 04:40:51 np0005634532 podman[75114]: 2026-03-01 09:40:51.242339296 +0000 UTC m=+0.261899620 container remove 26805ab27b2fc6a762f3540d966cf55596b918d4b9d83c6bc18b33744b98c938 (image=quay.io/ceph/ceph:v19, name=charming_hermann, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Mar  1 04:40:51 np0005634532 systemd[1]: libpod-conmon-26805ab27b2fc6a762f3540d966cf55596b918d4b9d83c6bc18b33744b98c938.scope: Deactivated successfully.
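
The charming_hermann container bind-mounted /tmp/keyring, /tmp/monmap, /var/log/ceph and the monitor data directory (the four xfs lines above), ran for about 0.1 s and exited: the footprint of the monitor mkfs step. A sketch of what it most likely executed (assumed; only the mounts and the container lifecycle are logged):

    # Assumed shape of the mon mkfs step:
    $ /usr/bin/ceph-mon --mkfs -i compute-0 \
          --monmap /tmp/monmap --keyring /tmp/keyring
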
Mar  1 04:40:51 np0005634532 systemd[1]: Reloading.
Mar  1 04:40:51 np0005634532 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Mar  1 04:40:51 np0005634532 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Mar  1 04:40:51 np0005634532 systemd[1]: Reloading.
Mar  1 04:40:51 np0005634532 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Mar  1 04:40:51 np0005634532 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
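
Each "systemd[1]: Reloading." entry is a daemon-reload issued as cephadm installs its unit files, and every reload re-runs all systemd generators, which is why the rc.local and SysV network notes repeat verbatim. The same pair of notes can be reproduced by hand:

    $ systemctl daemon-reload   # re-runs the generators; logs the two notes above again
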
Mar  1 04:40:51 np0005634532 systemd[1]: Reached target All Ceph clusters and services.
Mar  1 04:40:51 np0005634532 systemd[1]: Reloading.
Mar  1 04:40:51 np0005634532 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Mar  1 04:40:51 np0005634532 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Mar  1 04:40:51 np0005634532 systemd[1]: Reached target Ceph cluster 437b1e74-f995-5d64-af1d-257ce01d77ab.
Mar  1 04:40:52 np0005634532 systemd[1]: Reloading.
Mar  1 04:40:52 np0005634532 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Mar  1 04:40:52 np0005634532 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Mar  1 04:40:52 np0005634532 systemd[1]: Reloading.
Mar  1 04:40:52 np0005634532 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Mar  1 04:40:52 np0005634532 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Mar  1 04:40:52 np0005634532 systemd[1]: Created slice Slice /system/ceph-437b1e74-f995-5d64-af1d-257ce01d77ab.
Mar  1 04:40:52 np0005634532 systemd[1]: Reached target System Time Set.
Mar  1 04:40:52 np0005634532 systemd[1]: Reached target System Time Synchronized.
Mar  1 04:40:52 np0005634532 systemd[1]: Starting Ceph mon.compute-0 for 437b1e74-f995-5d64-af1d-257ce01d77ab...
Mar  1 04:40:52 np0005634532 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Mar  1 04:40:52 np0005634532 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Mar  1 04:40:52 np0005634532 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Mar  1 04:40:52 np0005634532 podman[75443]: 2026-03-01 09:40:52.791975579 +0000 UTC m=+0.041952660 container create f6803567600ecdb1d06281d302ef36c7ec7053783611b6a4c0de944d2340d4ad (image=quay.io/ceph/ceph:v19, name=ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mon-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, OSD_FLAVOR=default)
Mar  1 04:40:52 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/91b68fceb9c5ff53b2ac687f3d51e4e19be1eb1d0bfab05168f5a2030183052c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Mar  1 04:40:52 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/91b68fceb9c5ff53b2ac687f3d51e4e19be1eb1d0bfab05168f5a2030183052c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Mar  1 04:40:52 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/91b68fceb9c5ff53b2ac687f3d51e4e19be1eb1d0bfab05168f5a2030183052c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Mar  1 04:40:52 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/91b68fceb9c5ff53b2ac687f3d51e4e19be1eb1d0bfab05168f5a2030183052c/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Mar  1 04:40:52 np0005634532 podman[75443]: 2026-03-01 09:40:52.864121476 +0000 UTC m=+0.114098587 container init f6803567600ecdb1d06281d302ef36c7ec7053783611b6a4c0de944d2340d4ad (image=quay.io/ceph/ceph:v19, name=ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mon-compute-0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Mar  1 04:40:52 np0005634532 podman[75443]: 2026-03-01 09:40:52.77465492 +0000 UTC m=+0.024636341 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Mar  1 04:40:52 np0005634532 podman[75443]: 2026-03-01 09:40:52.871028127 +0000 UTC m=+0.121005208 container start f6803567600ecdb1d06281d302ef36c7ec7053783611b6a4c0de944d2340d4ad (image=quay.io/ceph/ceph:v19, name=ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mon-compute-0, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Mar  1 04:40:52 np0005634532 bash[75443]: f6803567600ecdb1d06281d302ef36c7ec7053783611b6a4c0de944d2340d4ad
Mar  1 04:40:52 np0005634532 systemd[1]: Started Ceph mon.compute-0 for 437b1e74-f995-5d64-af1d-257ce01d77ab.
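
cephadm wraps each daemon in a templated unit named ceph-<fsid>@<daemon>.service, grouped under a per-cluster target and the global ceph.target (the "Ceph cluster 437b1e74-..." and "All Ceph clusters and services" targets reached above). Assuming that standard naming, the new monitor can be inspected with:

    $ systemctl status 'ceph-437b1e74-f995-5d64-af1d-257ce01d77ab@mon.compute-0.service'
    $ systemctl list-dependencies 'ceph-437b1e74-f995-5d64-af1d-257ce01d77ab.target'
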
Mar  1 04:40:52 np0005634532 ceph-mon[75462]: set uid:gid to 167:167 (ceph:ceph)
Mar  1 04:40:52 np0005634532 ceph-mon[75462]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mon, pid 2
Mar  1 04:40:52 np0005634532 ceph-mon[75462]: pidfile_write: ignore empty --pid-file
Mar  1 04:40:52 np0005634532 ceph-mon[75462]: load: jerasure load: lrc 
Mar  1 04:40:52 np0005634532 ceph-mon[75462]: rocksdb: RocksDB version: 7.9.2
Mar  1 04:40:52 np0005634532 ceph-mon[75462]: rocksdb: Git sha 0
Mar  1 04:40:52 np0005634532 ceph-mon[75462]: rocksdb: Compile date 2025-07-17 03:12:14
Mar  1 04:40:52 np0005634532 ceph-mon[75462]: rocksdb: DB SUMMARY
Mar  1 04:40:52 np0005634532 ceph-mon[75462]: rocksdb: DB Session ID:  8YGEU9IJJSLJ1GAK4OG0
Mar  1 04:40:52 np0005634532 ceph-mon[75462]: rocksdb: CURRENT file:  CURRENT
Mar  1 04:40:52 np0005634532 ceph-mon[75462]: rocksdb: IDENTITY file:  IDENTITY
Mar  1 04:40:52 np0005634532 ceph-mon[75462]: rocksdb: MANIFEST file:  MANIFEST-000005 size: 59 Bytes
Mar  1 04:40:52 np0005634532 ceph-mon[75462]: rocksdb: SST files in /var/lib/ceph/mon/ceph-compute-0/store.db dir, Total Num: 0, files: 
Mar  1 04:40:52 np0005634532 ceph-mon[75462]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-compute-0/store.db: 000004.log size: 807 ; 
Mar  1 04:40:52 np0005634532 ceph-mon[75462]: rocksdb:                         Options.error_if_exists: 0
Mar  1 04:40:52 np0005634532 ceph-mon[75462]: rocksdb:                       Options.create_if_missing: 0
Mar  1 04:40:52 np0005634532 ceph-mon[75462]: rocksdb:                         Options.paranoid_checks: 1
Mar  1 04:40:52 np0005634532 ceph-mon[75462]: rocksdb:             Options.flush_verify_memtable_count: 1
Mar  1 04:40:52 np0005634532 ceph-mon[75462]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Mar  1 04:40:52 np0005634532 ceph-mon[75462]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Mar  1 04:40:52 np0005634532 ceph-mon[75462]: rocksdb:                                     Options.env: 0x562070deec20
Mar  1 04:40:52 np0005634532 ceph-mon[75462]: rocksdb:                                      Options.fs: PosixFileSystem
Mar  1 04:40:52 np0005634532 ceph-mon[75462]: rocksdb:                                Options.info_log: 0x562071e12d60
Mar  1 04:40:52 np0005634532 ceph-mon[75462]: rocksdb:                Options.max_file_opening_threads: 16
Mar  1 04:40:52 np0005634532 ceph-mon[75462]: rocksdb:                              Options.statistics: (nil)
Mar  1 04:40:52 np0005634532 ceph-mon[75462]: rocksdb:                               Options.use_fsync: 0
Mar  1 04:40:52 np0005634532 ceph-mon[75462]: rocksdb:                       Options.max_log_file_size: 0
Mar  1 04:40:52 np0005634532 ceph-mon[75462]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Mar  1 04:40:52 np0005634532 ceph-mon[75462]: rocksdb:                   Options.log_file_time_to_roll: 0
Mar  1 04:40:52 np0005634532 ceph-mon[75462]: rocksdb:                       Options.keep_log_file_num: 1000
Mar  1 04:40:52 np0005634532 ceph-mon[75462]: rocksdb:                    Options.recycle_log_file_num: 0
Mar  1 04:40:52 np0005634532 ceph-mon[75462]: rocksdb:                         Options.allow_fallocate: 1
Mar  1 04:40:52 np0005634532 ceph-mon[75462]: rocksdb:                        Options.allow_mmap_reads: 0
Mar  1 04:40:52 np0005634532 ceph-mon[75462]: rocksdb:                       Options.allow_mmap_writes: 0
Mar  1 04:40:52 np0005634532 ceph-mon[75462]: rocksdb:                        Options.use_direct_reads: 0
Mar  1 04:40:52 np0005634532 ceph-mon[75462]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Mar  1 04:40:52 np0005634532 ceph-mon[75462]: rocksdb:          Options.create_missing_column_families: 0
Mar  1 04:40:52 np0005634532 ceph-mon[75462]: rocksdb:                              Options.db_log_dir: 
Mar  1 04:40:52 np0005634532 ceph-mon[75462]: rocksdb:                                 Options.wal_dir: 
Mar  1 04:40:52 np0005634532 ceph-mon[75462]: rocksdb:                Options.table_cache_numshardbits: 6
Mar  1 04:40:52 np0005634532 ceph-mon[75462]: rocksdb:                         Options.WAL_ttl_seconds: 0
Mar  1 04:40:52 np0005634532 ceph-mon[75462]: rocksdb:                       Options.WAL_size_limit_MB: 0
Mar  1 04:40:52 np0005634532 ceph-mon[75462]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Mar  1 04:40:52 np0005634532 ceph-mon[75462]: rocksdb:             Options.manifest_preallocation_size: 4194304
Mar  1 04:40:52 np0005634532 ceph-mon[75462]: rocksdb:                     Options.is_fd_close_on_exec: 1
Mar  1 04:40:52 np0005634532 ceph-mon[75462]: rocksdb:                   Options.advise_random_on_open: 1
Mar  1 04:40:52 np0005634532 ceph-mon[75462]: rocksdb:                    Options.db_write_buffer_size: 0
Mar  1 04:40:52 np0005634532 ceph-mon[75462]: rocksdb:                    Options.write_buffer_manager: 0x562071e17900
Mar  1 04:40:52 np0005634532 ceph-mon[75462]: rocksdb:         Options.access_hint_on_compaction_start: 1
Mar  1 04:40:52 np0005634532 ceph-mon[75462]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Mar  1 04:40:52 np0005634532 ceph-mon[75462]: rocksdb:                      Options.use_adaptive_mutex: 0
Mar  1 04:40:52 np0005634532 ceph-mon[75462]: rocksdb:                            Options.rate_limiter: (nil)
Mar  1 04:40:52 np0005634532 ceph-mon[75462]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Mar  1 04:40:52 np0005634532 ceph-mon[75462]: rocksdb:                       Options.wal_recovery_mode: 2
Mar  1 04:40:52 np0005634532 ceph-mon[75462]: rocksdb:                  Options.enable_thread_tracking: 0
Mar  1 04:40:52 np0005634532 ceph-mon[75462]: rocksdb:                  Options.enable_pipelined_write: 0
Mar  1 04:40:52 np0005634532 ceph-mon[75462]: rocksdb:                  Options.unordered_write: 0
Mar  1 04:40:52 np0005634532 ceph-mon[75462]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Mar  1 04:40:52 np0005634532 ceph-mon[75462]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Mar  1 04:40:52 np0005634532 ceph-mon[75462]: rocksdb:             Options.write_thread_max_yield_usec: 100
Mar  1 04:40:52 np0005634532 ceph-mon[75462]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Mar  1 04:40:52 np0005634532 ceph-mon[75462]: rocksdb:                               Options.row_cache: None
Mar  1 04:40:52 np0005634532 ceph-mon[75462]: rocksdb:                              Options.wal_filter: None
Mar  1 04:40:52 np0005634532 ceph-mon[75462]: rocksdb:             Options.avoid_flush_during_recovery: 0
Mar  1 04:40:52 np0005634532 ceph-mon[75462]: rocksdb:             Options.allow_ingest_behind: 0
Mar  1 04:40:52 np0005634532 ceph-mon[75462]: rocksdb:             Options.two_write_queues: 0
Mar  1 04:40:52 np0005634532 ceph-mon[75462]: rocksdb:             Options.manual_wal_flush: 0
Mar  1 04:40:52 np0005634532 ceph-mon[75462]: rocksdb:             Options.wal_compression: 0
Mar  1 04:40:52 np0005634532 ceph-mon[75462]: rocksdb:             Options.atomic_flush: 0
Mar  1 04:40:52 np0005634532 ceph-mon[75462]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Mar  1 04:40:52 np0005634532 ceph-mon[75462]: rocksdb:                 Options.persist_stats_to_disk: 0
Mar  1 04:40:52 np0005634532 ceph-mon[75462]: rocksdb:                 Options.write_dbid_to_manifest: 0
Mar  1 04:40:52 np0005634532 ceph-mon[75462]: rocksdb:                 Options.log_readahead_size: 0
Mar  1 04:40:52 np0005634532 ceph-mon[75462]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Mar  1 04:40:52 np0005634532 ceph-mon[75462]: rocksdb:                 Options.best_efforts_recovery: 0
Mar  1 04:40:52 np0005634532 ceph-mon[75462]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Mar  1 04:40:52 np0005634532 ceph-mon[75462]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Mar  1 04:40:52 np0005634532 ceph-mon[75462]: rocksdb:             Options.allow_data_in_errors: 0
Mar  1 04:40:52 np0005634532 ceph-mon[75462]: rocksdb:             Options.db_host_id: __hostname__
Mar  1 04:40:52 np0005634532 ceph-mon[75462]: rocksdb:             Options.enforce_single_del_contracts: true
Mar  1 04:40:52 np0005634532 ceph-mon[75462]: rocksdb:             Options.max_background_jobs: 2
Mar  1 04:40:52 np0005634532 ceph-mon[75462]: rocksdb:             Options.max_background_compactions: -1
Mar  1 04:40:52 np0005634532 ceph-mon[75462]: rocksdb:             Options.max_subcompactions: 1
Mar  1 04:40:52 np0005634532 ceph-mon[75462]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Mar  1 04:40:52 np0005634532 ceph-mon[75462]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Mar  1 04:40:52 np0005634532 ceph-mon[75462]: rocksdb:             Options.delayed_write_rate : 16777216
Mar  1 04:40:52 np0005634532 ceph-mon[75462]: rocksdb:             Options.max_total_wal_size: 0
Mar  1 04:40:52 np0005634532 ceph-mon[75462]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Mar  1 04:40:52 np0005634532 ceph-mon[75462]: rocksdb:                   Options.stats_dump_period_sec: 600
Mar  1 04:40:52 np0005634532 ceph-mon[75462]: rocksdb:                 Options.stats_persist_period_sec: 600
Mar  1 04:40:52 np0005634532 ceph-mon[75462]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Mar  1 04:40:52 np0005634532 ceph-mon[75462]: rocksdb:                          Options.max_open_files: -1
Mar  1 04:40:52 np0005634532 ceph-mon[75462]: rocksdb:                          Options.bytes_per_sync: 0
Mar  1 04:40:52 np0005634532 ceph-mon[75462]: rocksdb:                      Options.wal_bytes_per_sync: 0
Mar  1 04:40:52 np0005634532 ceph-mon[75462]: rocksdb:                   Options.strict_bytes_per_sync: 0
Mar  1 04:40:52 np0005634532 ceph-mon[75462]: rocksdb:       Options.compaction_readahead_size: 0
Mar  1 04:40:52 np0005634532 ceph-mon[75462]: rocksdb:                  Options.max_background_flushes: -1
Mar  1 04:40:52 np0005634532 ceph-mon[75462]: rocksdb: Compression algorithms supported:
Mar  1 04:40:52 np0005634532 ceph-mon[75462]: rocksdb:     kZSTD supported: 0
Mar  1 04:40:52 np0005634532 ceph-mon[75462]: rocksdb:     kXpressCompression supported: 0
Mar  1 04:40:52 np0005634532 ceph-mon[75462]: rocksdb:     kBZip2Compression supported: 0
Mar  1 04:40:52 np0005634532 ceph-mon[75462]: rocksdb:     kZSTDNotFinalCompression supported: 0
Mar  1 04:40:52 np0005634532 ceph-mon[75462]: rocksdb:     kLZ4Compression supported: 1
Mar  1 04:40:52 np0005634532 ceph-mon[75462]: rocksdb:     kZlibCompression supported: 1
Mar  1 04:40:52 np0005634532 ceph-mon[75462]: rocksdb:     kLZ4HCCompression supported: 1
Mar  1 04:40:52 np0005634532 ceph-mon[75462]: rocksdb:     kSnappyCompression supported: 1
Mar  1 04:40:52 np0005634532 ceph-mon[75462]: rocksdb: Fast CRC32 supported: Supported on x86
Mar  1 04:40:52 np0005634532 ceph-mon[75462]: rocksdb: DMutex implementation: pthread_mutex_t
Mar  1 04:40:52 np0005634532 ceph-mon[75462]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000005
Mar  1 04:40:52 np0005634532 ceph-mon[75462]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Mar  1 04:40:52 np0005634532 ceph-mon[75462]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Mar  1 04:40:52 np0005634532 ceph-mon[75462]: rocksdb:           Options.merge_operator: 
Mar  1 04:40:52 np0005634532 ceph-mon[75462]: rocksdb:        Options.compaction_filter: None
Mar  1 04:40:52 np0005634532 ceph-mon[75462]: rocksdb:        Options.compaction_filter_factory: None
Mar  1 04:40:52 np0005634532 ceph-mon[75462]: rocksdb:  Options.sst_partitioner_factory: None
Mar  1 04:40:52 np0005634532 ceph-mon[75462]: rocksdb:         Options.memtable_factory: SkipListFactory
Mar  1 04:40:52 np0005634532 ceph-mon[75462]: rocksdb:            Options.table_factory: BlockBasedTable
Mar  1 04:40:52 np0005634532 ceph-mon[75462]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x562071e12500)
      cache_index_and_filter_blocks: 1
      cache_index_and_filter_blocks_with_high_priority: 0
      pin_l0_filter_and_index_blocks_in_cache: 0
      pin_top_level_index_and_filter: 1
      index_type: 0
      data_block_index_type: 0
      index_shortening: 1
      data_block_hash_table_util_ratio: 0.750000
      checksum: 4
      no_block_cache: 0
      block_cache: 0x562071e37350
      block_cache_name: BinnedLRUCache
      block_cache_options:
        capacity : 536870912
        num_shard_bits : 4
        strict_capacity_limit : 0
        high_pri_pool_ratio: 0.000
      block_cache_compressed: (nil)
      persistent_cache: (nil)
      block_size: 4096
      block_size_deviation: 10
      block_restart_interval: 16
      index_block_restart_interval: 1
      metadata_block_size: 4096
      partition_filters: 0
      use_delta_encoding: 1
      filter_policy: bloomfilter
      whole_key_filtering: 1
      verify_compression: 0
      read_amp_bytes_per_bit: 0
      format_version: 5
      enable_index_compression: 1
      block_align: 0
      max_auto_readahead_size: 262144
      prepopulate_block_cache: 0
      initial_auto_readahead_size: 8192
      num_file_reads_for_auto_readahead: 2
Mar  1 04:40:52 np0005634532 ceph-mon[75462]: rocksdb:        Options.write_buffer_size: 33554432
Mar  1 04:40:52 np0005634532 ceph-mon[75462]: rocksdb:  Options.max_write_buffer_number: 2
Mar  1 04:40:52 np0005634532 ceph-mon[75462]: rocksdb:          Options.compression: NoCompression
Mar  1 04:40:52 np0005634532 ceph-mon[75462]: rocksdb:                  Options.bottommost_compression: Disabled
Mar  1 04:40:52 np0005634532 ceph-mon[75462]: rocksdb:       Options.prefix_extractor: nullptr
Mar  1 04:40:52 np0005634532 ceph-mon[75462]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Mar  1 04:40:52 np0005634532 ceph-mon[75462]: rocksdb:             Options.num_levels: 7
Mar  1 04:40:52 np0005634532 ceph-mon[75462]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Mar  1 04:40:52 np0005634532 ceph-mon[75462]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Mar  1 04:40:52 np0005634532 ceph-mon[75462]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Mar  1 04:40:52 np0005634532 ceph-mon[75462]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Mar  1 04:40:52 np0005634532 ceph-mon[75462]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Mar  1 04:40:52 np0005634532 ceph-mon[75462]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Mar  1 04:40:52 np0005634532 ceph-mon[75462]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Mar  1 04:40:52 np0005634532 ceph-mon[75462]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Mar  1 04:40:52 np0005634532 ceph-mon[75462]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Mar  1 04:40:52 np0005634532 ceph-mon[75462]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Mar  1 04:40:52 np0005634532 ceph-mon[75462]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Mar  1 04:40:52 np0005634532 ceph-mon[75462]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Mar  1 04:40:52 np0005634532 ceph-mon[75462]: rocksdb:            Options.compression_opts.window_bits: -14
Mar  1 04:40:52 np0005634532 ceph-mon[75462]: rocksdb:                  Options.compression_opts.level: 32767
Mar  1 04:40:52 np0005634532 ceph-mon[75462]: rocksdb:               Options.compression_opts.strategy: 0
Mar  1 04:40:52 np0005634532 ceph-mon[75462]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Mar  1 04:40:52 np0005634532 ceph-mon[75462]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Mar  1 04:40:52 np0005634532 ceph-mon[75462]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Mar  1 04:40:52 np0005634532 ceph-mon[75462]: rocksdb:         Options.compression_opts.parallel_threads: 1
Mar  1 04:40:52 np0005634532 ceph-mon[75462]: rocksdb:                  Options.compression_opts.enabled: false
Mar  1 04:40:52 np0005634532 ceph-mon[75462]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Mar  1 04:40:52 np0005634532 ceph-mon[75462]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Mar  1 04:40:52 np0005634532 ceph-mon[75462]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Mar  1 04:40:52 np0005634532 ceph-mon[75462]: rocksdb:              Options.level0_stop_writes_trigger: 36
Mar  1 04:40:52 np0005634532 ceph-mon[75462]: rocksdb:                   Options.target_file_size_base: 67108864
Mar  1 04:40:52 np0005634532 ceph-mon[75462]: rocksdb:             Options.target_file_size_multiplier: 1
Mar  1 04:40:52 np0005634532 ceph-mon[75462]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Mar  1 04:40:52 np0005634532 ceph-mon[75462]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Mar  1 04:40:52 np0005634532 ceph-mon[75462]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Mar  1 04:40:52 np0005634532 ceph-mon[75462]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Mar  1 04:40:52 np0005634532 ceph-mon[75462]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Mar  1 04:40:52 np0005634532 ceph-mon[75462]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Mar  1 04:40:52 np0005634532 ceph-mon[75462]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Mar  1 04:40:52 np0005634532 ceph-mon[75462]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Mar  1 04:40:52 np0005634532 ceph-mon[75462]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Mar  1 04:40:52 np0005634532 ceph-mon[75462]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Mar  1 04:40:52 np0005634532 ceph-mon[75462]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Mar  1 04:40:52 np0005634532 ceph-mon[75462]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Mar  1 04:40:52 np0005634532 ceph-mon[75462]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Mar  1 04:40:52 np0005634532 ceph-mon[75462]: rocksdb:                        Options.arena_block_size: 1048576
Mar  1 04:40:52 np0005634532 ceph-mon[75462]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Mar  1 04:40:52 np0005634532 ceph-mon[75462]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Mar  1 04:40:52 np0005634532 ceph-mon[75462]: rocksdb:                Options.disable_auto_compactions: 0
Mar  1 04:40:52 np0005634532 ceph-mon[75462]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Mar  1 04:40:52 np0005634532 ceph-mon[75462]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Mar  1 04:40:52 np0005634532 ceph-mon[75462]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Mar  1 04:40:52 np0005634532 ceph-mon[75462]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Mar  1 04:40:52 np0005634532 ceph-mon[75462]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Mar  1 04:40:52 np0005634532 ceph-mon[75462]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Mar  1 04:40:52 np0005634532 ceph-mon[75462]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Mar  1 04:40:52 np0005634532 ceph-mon[75462]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Mar  1 04:40:52 np0005634532 ceph-mon[75462]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Mar  1 04:40:52 np0005634532 ceph-mon[75462]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Mar  1 04:40:52 np0005634532 ceph-mon[75462]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Mar  1 04:40:52 np0005634532 ceph-mon[75462]: rocksdb:                   Options.inplace_update_support: 0
Mar  1 04:40:52 np0005634532 ceph-mon[75462]: rocksdb:                 Options.inplace_update_num_locks: 10000
Mar  1 04:40:52 np0005634532 ceph-mon[75462]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Mar  1 04:40:52 np0005634532 ceph-mon[75462]: rocksdb:               Options.memtable_whole_key_filtering: 0
Mar  1 04:40:52 np0005634532 ceph-mon[75462]: rocksdb:   Options.memtable_huge_page_size: 0
Mar  1 04:40:52 np0005634532 ceph-mon[75462]: rocksdb:                           Options.bloom_locality: 0
Mar  1 04:40:52 np0005634532 ceph-mon[75462]: rocksdb:                    Options.max_successive_merges: 0
Mar  1 04:40:52 np0005634532 ceph-mon[75462]: rocksdb:                Options.optimize_filters_for_hits: 0
Mar  1 04:40:52 np0005634532 ceph-mon[75462]: rocksdb:                Options.paranoid_file_checks: 0
Mar  1 04:40:52 np0005634532 ceph-mon[75462]: rocksdb:                Options.force_consistency_checks: 1
Mar  1 04:40:52 np0005634532 ceph-mon[75462]: rocksdb:                Options.report_bg_io_stats: 0
Mar  1 04:40:52 np0005634532 ceph-mon[75462]: rocksdb:                               Options.ttl: 2592000
Mar  1 04:40:52 np0005634532 ceph-mon[75462]: rocksdb:          Options.periodic_compaction_seconds: 0
Mar  1 04:40:52 np0005634532 ceph-mon[75462]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Mar  1 04:40:52 np0005634532 ceph-mon[75462]: rocksdb:    Options.preserve_internal_time_seconds: 0
Mar  1 04:40:52 np0005634532 ceph-mon[75462]: rocksdb:                       Options.enable_blob_files: false
Mar  1 04:40:52 np0005634532 ceph-mon[75462]: rocksdb:                           Options.min_blob_size: 0
Mar  1 04:40:52 np0005634532 ceph-mon[75462]: rocksdb:                          Options.blob_file_size: 268435456
Mar  1 04:40:52 np0005634532 ceph-mon[75462]: rocksdb:                   Options.blob_compression_type: NoCompression
Mar  1 04:40:52 np0005634532 ceph-mon[75462]: rocksdb:          Options.enable_blob_garbage_collection: false
Mar  1 04:40:52 np0005634532 ceph-mon[75462]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Mar  1 04:40:52 np0005634532 ceph-mon[75462]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Mar  1 04:40:52 np0005634532 ceph-mon[75462]: rocksdb:          Options.blob_compaction_readahead_size: 0
Mar  1 04:40:52 np0005634532 ceph-mon[75462]: rocksdb:                Options.blob_file_starting_level: 0
Mar  1 04:40:52 np0005634532 ceph-mon[75462]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Mar  1 04:40:52 np0005634532 ceph-mon[75462]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000005 succeeded,manifest_file_number is 5, next_file_number is 7, last_sequence is 0, log_number is 0,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 0
Mar  1 04:40:52 np0005634532 ceph-mon[75462]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 0
Mar  1 04:40:52 np0005634532 ceph-mon[75462]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: d85a3bc5-3dc5-432f-9fab-fa926ce32d3d
Mar  1 04:40:52 np0005634532 ceph-mon[75462]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772358052915330, "job": 1, "event": "recovery_started", "wal_files": [4]}
Mar  1 04:40:52 np0005634532 ceph-mon[75462]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #4 mode 2
Mar  1 04:40:52 np0005634532 ceph-mon[75462]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772358052917754, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 8, "file_size": 1944, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 1, "largest_seqno": 5, "table_properties": {"data_size": 819, "index_size": 31, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 115, "raw_average_key_size": 23, "raw_value_size": 696, "raw_average_value_size": 139, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772358052, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d85a3bc5-3dc5-432f-9fab-fa926ce32d3d", "db_session_id": "8YGEU9IJJSLJ1GAK4OG0", "orig_file_number": 8, "seqno_to_time_mapping": "N/A"}}
Mar  1 04:40:52 np0005634532 ceph-mon[75462]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772358052917920, "job": 1, "event": "recovery_finished"}
Mar  1 04:40:52 np0005634532 ceph-mon[75462]: rocksdb: [db/version_set.cc:5047] Creating manifest 10
Mar  1 04:40:52 np0005634532 ceph-mon[75462]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000004.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Mar  1 04:40:52 np0005634532 ceph-mon[75462]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x562071e38e00
Mar  1 04:40:52 np0005634532 ceph-mon[75462]: rocksdb: DB pointer 0x562071f42000
Mar  1 04:40:52 np0005634532 ceph-mon[75462]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Mar  1 04:40:52 np0005634532 ceph-mon[75462]: rocksdb: [db/db_impl/db_impl.cc:1111]
    ** DB Stats **
    Uptime(secs): 0.0 total, 0.0 interval
    Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
    Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
    Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
    Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
    Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
    Interval stall: 00:00:0.000 H:M:S, 0.0 percent

    ** Compaction Stats [default] **
    Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
    ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
      L0      1/0    1.90 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.8      0.00              0.00         1    0.002       0      0       0.0       0.0
     Sum      1/0    1.90 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.8      0.00              0.00         1    0.002       0      0       0.0       0.0
     Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.8      0.00              0.00         1    0.002       0      0       0.0       0.0

    ** Compaction Stats [default] **
    Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
    ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
    User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.8      0.00              0.00         1    0.002       0      0       0.0       0.0

    Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

    Uptime(secs): 0.0 total, 0.0 interval
    Flush(GB): cumulative 0.000, interval 0.000
    AddFile(GB): cumulative 0.000, interval 0.000
    AddFile(Total Files): cumulative 0, interval 0
    AddFile(L0 Files): cumulative 0, interval 0
    AddFile(Keys): cumulative 0, interval 0
    Cumulative compaction: 0.00 GB write, 0.10 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
    Interval compaction: 0.00 GB write, 0.10 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
    Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
    Block cache BinnedLRUCache@0x562071e37350#2 capacity: 512.00 MB usage: 1.17 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 5e-05 secs_since: 0
    Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.11 KB,2.08616e-05%) Misc(2,0.95 KB,0.000181794%)

    ** File Read Latency Histogram By Level [default] **
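
The RocksDB instance opened above is the monitor's key-value store at /var/lib/ceph/mon/ceph-compute-0/store.db (the path as seen inside the container). With the monitor stopped, it can be inspected offline with ceph-monstore-tool; a sketch based on the syntax in the Ceph troubleshooting docs, worth verifying against your release:

    # Offline only -- stop the mon first, or the store will be locked.
    $ ceph-monstore-tool /var/lib/ceph/mon/ceph-compute-0 dump-keys | head
    $ ceph-monstore-tool /var/lib/ceph/mon/ceph-compute-0 get monmap -- --out /tmp/monmap.bin
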
Mar  1 04:40:52 np0005634532 ceph-mon[75462]: starting mon.compute-0 rank 0 at public addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] at bind addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon_data /var/lib/ceph/mon/ceph-compute-0 fsid 437b1e74-f995-5d64-af1d-257ce01d77ab
Mar  1 04:40:52 np0005634532 ceph-mon[75462]: mon.compute-0@-1(???) e0 preinit fsid 437b1e74-f995-5d64-af1d-257ce01d77ab
Mar  1 04:40:52 np0005634532 ceph-mon[75462]: mon.compute-0@-1(probing) e0  my rank is now 0 (was -1)
Mar  1 04:40:52 np0005634532 ceph-mon[75462]: mon.compute-0@0(probing) e0 win_standalone_election
Mar  1 04:40:52 np0005634532 ceph-mon[75462]: paxos.0).electionLogic(0) init, first boot, initializing epoch at 1 
Mar  1 04:40:52 np0005634532 ceph-mon[75462]: mon.compute-0@0(electing) e0 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Mar  1 04:40:52 np0005634532 ceph-mon[75462]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Mar  1 04:40:52 np0005634532 ceph-mon[75462]: mon.compute-0@0(leader).osd e0 create_pending setting backfillfull_ratio = 0.9
Mar  1 04:40:52 np0005634532 ceph-mon[75462]: mon.compute-0@0(leader).osd e0 create_pending setting full_ratio = 0.95
Mar  1 04:40:52 np0005634532 ceph-mon[75462]: mon.compute-0@0(leader).osd e0 create_pending setting nearfull_ratio = 0.85
Mar  1 04:40:52 np0005634532 ceph-mon[75462]: mon.compute-0@0(leader).osd e0 do_prune osdmap full prune enabled
Mar  1 04:40:52 np0005634532 ceph-mon[75462]: mon.compute-0@0(leader).osd e0 encode_pending skipping prime_pg_temp; mapping job did not start
Mar  1 04:40:52 np0005634532 ceph-mon[75462]: mon.compute-0@0(leader) e0 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={4=support erasure code pools,5=new-style osdmap encoding,6=support isa/lrc erasure code,7=support shec erasure code}
Mar  1 04:40:52 np0005634532 ceph-mon[75462]: mon.compute-0@0(leader).paxosservice(auth 0..0) refresh upgraded, format 3 -> 0
Mar  1 04:40:52 np0005634532 ceph-mon[75462]: mon.compute-0@0(probing) e1 win_standalone_election
Mar  1 04:40:52 np0005634532 ceph-mon[75462]: paxos.0).electionLogic(2) init, last seen epoch 2
Mar  1 04:40:52 np0005634532 ceph-mon[75462]: mon.compute-0@0(electing) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Mar  1 04:40:52 np0005634532 ceph-mon[75462]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Mar  1 04:40:52 np0005634532 ceph-mon[75462]: log_channel(cluster) log [DBG] : monmap epoch 1
Mar  1 04:40:52 np0005634532 ceph-mon[75462]: log_channel(cluster) log [DBG] : fsid 437b1e74-f995-5d64-af1d-257ce01d77ab
Mar  1 04:40:52 np0005634532 ceph-mon[75462]: log_channel(cluster) log [DBG] : last_changed 2026-03-01T09:40:50.920361+0000
Mar  1 04:40:52 np0005634532 ceph-mon[75462]: log_channel(cluster) log [DBG] : created 2026-03-01T09:40:50.920361+0000
Mar  1 04:40:52 np0005634532 ceph-mon[75462]: log_channel(cluster) log [DBG] : min_mon_release 19 (squid)
Mar  1 04:40:52 np0005634532 ceph-mon[75462]: log_channel(cluster) log [DBG] : election_strategy: 1
Mar  1 04:40:52 np0005634532 ceph-mon[75462]: log_channel(cluster) log [DBG] : 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Mar  1 04:40:52 np0005634532 ceph-mon[75462]: mon.compute-0@0(leader) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Mar  1 04:40:52 np0005634532 ceph-mon[75462]: mgrc update_daemon_metadata mon.compute-0 metadata {addrs=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],arch=x86_64,ceph_release=squid,ceph_version=ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable),ceph_version_short=19.2.3,compression_algorithms=none, snappy, zlib, zstd, lz4,container_hostname=compute-0,container_image=quay.io/ceph/ceph:v19,cpu=AMD EPYC-Rome Processor,device_ids=,device_paths=vda=/dev/disk/by-path/pci-0000:00:04.0,devices=vda,distro=centos,distro_description=CentOS Stream 9,distro_version=9,hostname=compute-0,kernel_description=#1 SMP PREEMPT_DYNAMIC Thu Feb 19 10:49:27 UTC 2026,kernel_version=5.14.0-686.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7864280,os=Linux}
Mar  1 04:40:52 np0005634532 ceph-mon[75462]: mon.compute-0@0(leader).osd e0 create_pending setting backfillfull_ratio = 0.9
Mar  1 04:40:52 np0005634532 ceph-mon[75462]: mon.compute-0@0(leader).osd e0 create_pending setting full_ratio = 0.95
Mar  1 04:40:52 np0005634532 ceph-mon[75462]: mon.compute-0@0(leader).osd e0 create_pending setting nearfull_ratio = 0.85
Mar  1 04:40:52 np0005634532 ceph-mon[75462]: mon.compute-0@0(leader).osd e0 do_prune osdmap full prune enabled
Mar  1 04:40:52 np0005634532 ceph-mon[75462]: mon.compute-0@0(leader).osd e0 encode_pending skipping prime_pg_temp; mapping job did not start
Mar  1 04:40:52 np0005634532 ceph-mon[75462]: mon.compute-0@0(leader) e1 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={8=support monmap features,9=luminous ondisk layout,10=mimic ondisk layout,11=nautilus ondisk layout,12=octopus ondisk layout,13=pacific ondisk layout,14=quincy ondisk layout,15=reef ondisk layout,16=squid ondisk layout}
Mar  1 04:40:52 np0005634532 ceph-mon[75462]: mon.compute-0@0(leader).mds e1 new map
Mar  1 04:40:52 np0005634532 ceph-mon[75462]: mon.compute-0@0(leader).mds e1 print_map
    e1
    btime 2026-03-01T09:40:52:961395+0000
    enable_multiple, ever_enabled_multiple: 1,1
    default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
    legacy client fscid: -1

    No filesystems configured
Mar  1 04:40:52 np0005634532 ceph-mon[75462]: mon.compute-0@0(leader).paxosservice(auth 0..0) refresh upgraded, format 3 -> 0
Mar  1 04:40:52 np0005634532 ceph-mon[75462]: log_channel(cluster) log [DBG] : fsmap 
Mar  1 04:40:52 np0005634532 podman[75463]: 2026-03-01 09:40:52.96920299 +0000 UTC m=+0.060971872 container create f4d5cc2e4db9c1ce3c676be7c5e30b1b7e145aff9dc981c90a45c22b69e5adf1 (image=quay.io/ceph/ceph:v19, name=nervous_meitner, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Mar  1 04:40:52 np0005634532 ceph-mon[75462]: mon.compute-0@0(leader).osd e0 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375
Mar  1 04:40:52 np0005634532 ceph-mon[75462]: mon.compute-0@0(leader).osd e0 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Mar  1 04:40:52 np0005634532 ceph-mon[75462]: mon.compute-0@0(leader).osd e1 e1: 0 total, 0 up, 0 in
Mar  1 04:40:52 np0005634532 ceph-mon[75462]: mon.compute-0@0(leader).osd e1 crush map has features 3314932999778484224, adjusting msgr requires
Mar  1 04:40:52 np0005634532 ceph-mon[75462]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Mar  1 04:40:52 np0005634532 ceph-mon[75462]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Mar  1 04:40:52 np0005634532 ceph-mon[75462]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Mar  1 04:40:52 np0005634532 ceph-mon[75462]: mkfs 437b1e74-f995-5d64-af1d-257ce01d77ab
Mar  1 04:40:52 np0005634532 ceph-mon[75462]: mon.compute-0@0(leader).paxosservice(auth 1..1) refresh upgraded, format 0 -> 3
Mar  1 04:40:52 np0005634532 ceph-mon[75462]: log_channel(cluster) log [DBG] : osdmap e1: 0 total, 0 up, 0 in
Mar  1 04:40:52 np0005634532 ceph-mon[75462]: log_channel(cluster) log [DBG] : mgrmap e1: no daemons active
Mar  1 04:40:52 np0005634532 ceph-mon[75462]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
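
At this point the lone monitor has won a standalone election twice, first on the initial epoch-0 map and again once monmap epoch 1 was committed, and now holds a one-member quorum (rank 0). That state can be confirmed from any client holding the admin keyring:

    $ ceph quorum_status --format json-pretty   # expect "quorum_names": ["compute-0"]
    $ ceph mon dump                             # monmap epoch 1, one mon at 192.168.122.100
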
Mar  1 04:40:53 np0005634532 systemd[1]: Started libpod-conmon-f4d5cc2e4db9c1ce3c676be7c5e30b1b7e145aff9dc981c90a45c22b69e5adf1.scope.
Mar  1 04:40:53 np0005634532 podman[75463]: 2026-03-01 09:40:52.946137308 +0000 UTC m=+0.037906270 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Mar  1 04:40:53 np0005634532 systemd[1]: Started libcrun container.
Mar  1 04:40:53 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63e84121b7b2bcb79e12543dcac87514009a104f55aa4f59eb4acf26d5fd1ae7/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Mar  1 04:40:53 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63e84121b7b2bcb79e12543dcac87514009a104f55aa4f59eb4acf26d5fd1ae7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Mar  1 04:40:53 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/63e84121b7b2bcb79e12543dcac87514009a104f55aa4f59eb4acf26d5fd1ae7/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Mar  1 04:40:53 np0005634532 podman[75463]: 2026-03-01 09:40:53.079292087 +0000 UTC m=+0.171061039 container init f4d5cc2e4db9c1ce3c676be7c5e30b1b7e145aff9dc981c90a45c22b69e5adf1 (image=quay.io/ceph/ceph:v19, name=nervous_meitner, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Mar  1 04:40:53 np0005634532 podman[75463]: 2026-03-01 09:40:53.086308441 +0000 UTC m=+0.178077363 container start f4d5cc2e4db9c1ce3c676be7c5e30b1b7e145aff9dc981c90a45c22b69e5adf1 (image=quay.io/ceph/ceph:v19, name=nervous_meitner, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Mar  1 04:40:53 np0005634532 podman[75463]: 2026-03-01 09:40:53.090343471 +0000 UTC m=+0.182112463 container attach f4d5cc2e4db9c1ce3c676be7c5e30b1b7e145aff9dc981c90a45c22b69e5adf1 (image=quay.io/ceph/ceph:v19, name=nervous_meitner, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Mar  1 04:40:53 np0005634532 ceph-mon[75462]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0)
Mar  1 04:40:53 np0005634532 ceph-mon[75462]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/947982315' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Mar  1 04:40:53 np0005634532 nervous_meitner[75519]:  cluster:
Mar  1 04:40:53 np0005634532 nervous_meitner[75519]:    id:     437b1e74-f995-5d64-af1d-257ce01d77ab
Mar  1 04:40:53 np0005634532 nervous_meitner[75519]:    health: HEALTH_OK
Mar  1 04:40:53 np0005634532 nervous_meitner[75519]: 
Mar  1 04:40:53 np0005634532 nervous_meitner[75519]:  services:
Mar  1 04:40:53 np0005634532 nervous_meitner[75519]:    mon: 1 daemons, quorum compute-0 (age 0.343993s)
Mar  1 04:40:53 np0005634532 nervous_meitner[75519]:    mgr: no daemons active
Mar  1 04:40:53 np0005634532 nervous_meitner[75519]:    osd: 0 osds: 0 up, 0 in
Mar  1 04:40:53 np0005634532 nervous_meitner[75519]: 
Mar  1 04:40:53 np0005634532 nervous_meitner[75519]:  data:
Mar  1 04:40:53 np0005634532 nervous_meitner[75519]:    pools:   0 pools, 0 pgs
Mar  1 04:40:53 np0005634532 nervous_meitner[75519]:    objects: 0 objects, 0 B
Mar  1 04:40:53 np0005634532 nervous_meitner[75519]:    usage:   0 B used, 0 B / 0 B avail
Mar  1 04:40:53 np0005634532 nervous_meitner[75519]:    pgs:     
Mar  1 04:40:53 np0005634532 nervous_meitner[75519]: 
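The indented block above is the stdout of the short-lived nervous_meitner container running `ceph status` against the freshly created monitor (see the mon_command dispatch a few lines earlier). A minimal sketch of the same probe, assuming a host where the `ceph` CLI can reach the cluster with the admin keyring; the script itself is ours, but the JSON keys are standard `ceph status --format json` output:

    import json
    import subprocess

    # Ask the monitor for machine-readable status, as the bootstrap's
    # one-shot container does, and pull out health and quorum.
    out = subprocess.run(
        ["ceph", "status", "--format", "json"],
        check=True, capture_output=True, text=True,
    ).stdout
    status = json.loads(out)
    print(status["health"]["status"])   # expected here: HEALTH_OK
    print(status["quorum_names"])       # expected here: ['compute-0']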
Mar  1 04:40:53 np0005634532 systemd[1]: libpod-f4d5cc2e4db9c1ce3c676be7c5e30b1b7e145aff9dc981c90a45c22b69e5adf1.scope: Deactivated successfully.
Mar  1 04:40:53 np0005634532 podman[75463]: 2026-03-01 09:40:53.3179156 +0000 UTC m=+0.409684512 container died f4d5cc2e4db9c1ce3c676be7c5e30b1b7e145aff9dc981c90a45c22b69e5adf1 (image=quay.io/ceph/ceph:v19, name=nervous_meitner, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Mar  1 04:40:53 np0005634532 podman[75463]: 2026-03-01 09:40:53.371669921 +0000 UTC m=+0.463438813 container remove f4d5cc2e4db9c1ce3c676be7c5e30b1b7e145aff9dc981c90a45c22b69e5adf1 (image=quay.io/ceph/ceph:v19, name=nervous_meitner, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Mar  1 04:40:53 np0005634532 systemd[1]: libpod-conmon-f4d5cc2e4db9c1ce3c676be7c5e30b1b7e145aff9dc981c90a45c22b69e5adf1.scope: Deactivated successfully.
Mar  1 04:40:53 np0005634532 podman[75557]: 2026-03-01 09:40:53.447656204 +0000 UTC m=+0.052249695 container create b7b3aafc15396379e55fb7953b527425a0fc830e0084579e6216ee8037abc0ce (image=quay.io/ceph/ceph:v19, name=exciting_hawking, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True)
Mar  1 04:40:53 np0005634532 systemd[1]: Started libpod-conmon-b7b3aafc15396379e55fb7953b527425a0fc830e0084579e6216ee8037abc0ce.scope.
Mar  1 04:40:53 np0005634532 systemd[1]: Started libcrun container.
Mar  1 04:40:53 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a0e6948378b46d528f2ac34e24c54bc67e6c7b84d6fa6064b6a05ccfbd079b1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Mar  1 04:40:53 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a0e6948378b46d528f2ac34e24c54bc67e6c7b84d6fa6064b6a05ccfbd079b1/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Mar  1 04:40:53 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a0e6948378b46d528f2ac34e24c54bc67e6c7b84d6fa6064b6a05ccfbd079b1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Mar  1 04:40:53 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a0e6948378b46d528f2ac34e24c54bc67e6c7b84d6fa6064b6a05ccfbd079b1/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Mar  1 04:40:53 np0005634532 podman[75557]: 2026-03-01 09:40:53.421758322 +0000 UTC m=+0.026351893 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Mar  1 04:40:53 np0005634532 podman[75557]: 2026-03-01 09:40:53.533225944 +0000 UTC m=+0.137819475 container init b7b3aafc15396379e55fb7953b527425a0fc830e0084579e6216ee8037abc0ce (image=quay.io/ceph/ceph:v19, name=exciting_hawking, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Mar  1 04:40:53 np0005634532 podman[75557]: 2026-03-01 09:40:53.540939115 +0000 UTC m=+0.145532626 container start b7b3aafc15396379e55fb7953b527425a0fc830e0084579e6216ee8037abc0ce (image=quay.io/ceph/ceph:v19, name=exciting_hawking, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Mar  1 04:40:53 np0005634532 podman[75557]: 2026-03-01 09:40:53.545145599 +0000 UTC m=+0.149739130 container attach b7b3aafc15396379e55fb7953b527425a0fc830e0084579e6216ee8037abc0ce (image=quay.io/ceph/ceph:v19, name=exciting_hawking, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Mar  1 04:40:53 np0005634532 ceph-mon[75462]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0)
Mar  1 04:40:53 np0005634532 ceph-mon[75462]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/548832422' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Mar  1 04:40:53 np0005634532 ceph-mon[75462]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/548832422' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Mar  1 04:40:53 np0005634532 exciting_hawking[75574]: 
Mar  1 04:40:53 np0005634532 exciting_hawking[75574]: [global]
Mar  1 04:40:53 np0005634532 exciting_hawking[75574]:     fsid = 437b1e74-f995-5d64-af1d-257ce01d77ab
Mar  1 04:40:53 np0005634532 exciting_hawking[75574]:     mon_host = [v2:192.168.122.100:3300,v1:192.168.122.100:6789]
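The exciting_hawking container ran `ceph config assimilate-conf`, which moves options from a ceph.conf into the mon config database and prints back the minimal remainder ([global] with fsid and mon_host) that still belongs on disk. A hedged sketch of the same call; /tmp/legacy-ceph.conf is a hypothetical input path used for illustration:

    import subprocess

    # Feed a legacy ceph.conf into the monitors' central config store.
    # Stdout is the minimal conf that should remain in /etc/ceph/ceph.conf.
    minimal = subprocess.run(
        ["ceph", "config", "assimilate-conf", "-i", "/tmp/legacy-ceph.conf"],
        check=True, capture_output=True, text=True,
    ).stdout
    print(minimal)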
Mar  1 04:40:53 np0005634532 systemd[1]: libpod-b7b3aafc15396379e55fb7953b527425a0fc830e0084579e6216ee8037abc0ce.scope: Deactivated successfully.
Mar  1 04:40:53 np0005634532 podman[75557]: 2026-03-01 09:40:53.794412375 +0000 UTC m=+0.399005896 container died b7b3aafc15396379e55fb7953b527425a0fc830e0084579e6216ee8037abc0ce (image=quay.io/ceph/ceph:v19, name=exciting_hawking, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True)
Mar  1 04:40:53 np0005634532 systemd[1]: var-lib-containers-storage-overlay-7a0e6948378b46d528f2ac34e24c54bc67e6c7b84d6fa6064b6a05ccfbd079b1-merged.mount: Deactivated successfully.
Mar  1 04:40:53 np0005634532 podman[75557]: 2026-03-01 09:40:53.8438324 +0000 UTC m=+0.448425921 container remove b7b3aafc15396379e55fb7953b527425a0fc830e0084579e6216ee8037abc0ce (image=quay.io/ceph/ceph:v19, name=exciting_hawking, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Mar  1 04:40:53 np0005634532 systemd[1]: libpod-conmon-b7b3aafc15396379e55fb7953b527425a0fc830e0084579e6216ee8037abc0ce.scope: Deactivated successfully.
Mar  1 04:40:53 np0005634532 podman[75612]: 2026-03-01 09:40:53.908708707 +0000 UTC m=+0.047084348 container create 3fe461e0dc478d27148f2a08c4321c33e654b086659922f11f965b8873eb977c (image=quay.io/ceph/ceph:v19, name=zen_mcnulty, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Mar  1 04:40:53 np0005634532 systemd[1]: Started libpod-conmon-3fe461e0dc478d27148f2a08c4321c33e654b086659922f11f965b8873eb977c.scope.
Mar  1 04:40:53 np0005634532 systemd[1]: Started libcrun container.
Mar  1 04:40:53 np0005634532 podman[75612]: 2026-03-01 09:40:53.887783529 +0000 UTC m=+0.026159160 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Mar  1 04:40:53 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d113eb6f7e5be7e48a2fc251998d5d72018fdf88bd7a0bb5052e4d1cf2f1ce5c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Mar  1 04:40:53 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d113eb6f7e5be7e48a2fc251998d5d72018fdf88bd7a0bb5052e4d1cf2f1ce5c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Mar  1 04:40:53 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d113eb6f7e5be7e48a2fc251998d5d72018fdf88bd7a0bb5052e4d1cf2f1ce5c/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Mar  1 04:40:53 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d113eb6f7e5be7e48a2fc251998d5d72018fdf88bd7a0bb5052e4d1cf2f1ce5c/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Mar  1 04:40:53 np0005634532 ceph-mon[75462]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Mar  1 04:40:53 np0005634532 ceph-mon[75462]: from='client.? 192.168.122.100:0/548832422' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Mar  1 04:40:53 np0005634532 ceph-mon[75462]: from='client.? 192.168.122.100:0/548832422' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Mar  1 04:40:54 np0005634532 podman[75612]: 2026-03-01 09:40:54.00531294 +0000 UTC m=+0.143688621 container init 3fe461e0dc478d27148f2a08c4321c33e654b086659922f11f965b8873eb977c (image=quay.io/ceph/ceph:v19, name=zen_mcnulty, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid)
Mar  1 04:40:54 np0005634532 podman[75612]: 2026-03-01 09:40:54.020180069 +0000 UTC m=+0.158555700 container start 3fe461e0dc478d27148f2a08c4321c33e654b086659922f11f965b8873eb977c (image=quay.io/ceph/ceph:v19, name=zen_mcnulty, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Mar  1 04:40:54 np0005634532 podman[75612]: 2026-03-01 09:40:54.024857205 +0000 UTC m=+0.163232836 container attach 3fe461e0dc478d27148f2a08c4321c33e654b086659922f11f965b8873eb977c (image=quay.io/ceph/ceph:v19, name=zen_mcnulty, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Mar  1 04:40:54 np0005634532 ceph-mon[75462]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Mar  1 04:40:54 np0005634532 ceph-mon[75462]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2147017079' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Mar  1 04:40:54 np0005634532 systemd[1]: libpod-3fe461e0dc478d27148f2a08c4321c33e654b086659922f11f965b8873eb977c.scope: Deactivated successfully.
Mar  1 04:40:54 np0005634532 podman[75612]: 2026-03-01 09:40:54.23093111 +0000 UTC m=+0.369306731 container died 3fe461e0dc478d27148f2a08c4321c33e654b086659922f11f965b8873eb977c (image=quay.io/ceph/ceph:v19, name=zen_mcnulty, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Mar  1 04:40:54 np0005634532 systemd[1]: var-lib-containers-storage-overlay-d113eb6f7e5be7e48a2fc251998d5d72018fdf88bd7a0bb5052e4d1cf2f1ce5c-merged.mount: Deactivated successfully.
Mar  1 04:40:54 np0005634532 podman[75612]: 2026-03-01 09:40:54.266889781 +0000 UTC m=+0.405265412 container remove 3fe461e0dc478d27148f2a08c4321c33e654b086659922f11f965b8873eb977c (image=quay.io/ceph/ceph:v19, name=zen_mcnulty, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Mar  1 04:40:54 np0005634532 systemd[1]: libpod-conmon-3fe461e0dc478d27148f2a08c4321c33e654b086659922f11f965b8873eb977c.scope: Deactivated successfully.
Mar  1 04:40:54 np0005634532 systemd[1]: Stopping Ceph mon.compute-0 for 437b1e74-f995-5d64-af1d-257ce01d77ab...
Mar  1 04:40:54 np0005634532 ceph-mon[75462]: received  signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.compute-0 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false  (PID: 1) UID: 0
Mar  1 04:40:54 np0005634532 ceph-mon[75462]: mon.compute-0@0(leader) e1 *** Got Signal Terminated ***
Mar  1 04:40:54 np0005634532 ceph-mon[75462]: mon.compute-0@0(leader) e1 shutdown
Mar  1 04:40:54 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mon-compute-0[75458]: 2026-03-01T09:40:54.438+0000 7f0a27d8b640 -1 received  signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.compute-0 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false  (PID: 1) UID: 0
Mar  1 04:40:54 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mon-compute-0[75458]: 2026-03-01T09:40:54.438+0000 7f0a27d8b640 -1 mon.compute-0@0(leader) e1 *** Got Signal Terminated ***
Mar  1 04:40:54 np0005634532 ceph-mon[75462]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Mar  1 04:40:54 np0005634532 ceph-mon[75462]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Mar  1 04:40:54 np0005634532 podman[75698]: 2026-03-01 09:40:54.581787213 +0000 UTC m=+0.181851276 container died f6803567600ecdb1d06281d302ef36c7ec7053783611b6a4c0de944d2340d4ad (image=quay.io/ceph/ceph:v19, name=ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mon-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True)
Mar  1 04:40:54 np0005634532 systemd[1]: var-lib-containers-storage-overlay-91b68fceb9c5ff53b2ac687f3d51e4e19be1eb1d0bfab05168f5a2030183052c-merged.mount: Deactivated successfully.
Mar  1 04:40:54 np0005634532 podman[75698]: 2026-03-01 09:40:54.61957593 +0000 UTC m=+0.219639953 container remove f6803567600ecdb1d06281d302ef36c7ec7053783611b6a4c0de944d2340d4ad (image=quay.io/ceph/ceph:v19, name=ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mon-compute-0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Mar  1 04:40:54 np0005634532 bash[75698]: ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mon-compute-0
Mar  1 04:40:54 np0005634532 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Mar  1 04:40:54 np0005634532 systemd[1]: ceph-437b1e74-f995-5d64-af1d-257ce01d77ab@mon.compute-0.service: Deactivated successfully.
Mar  1 04:40:54 np0005634532 systemd[1]: Stopped Ceph mon.compute-0 for 437b1e74-f995-5d64-af1d-257ce01d77ab.
Mar  1 04:40:54 np0005634532 systemd[1]: Starting Ceph mon.compute-0 for 437b1e74-f995-5d64-af1d-257ce01d77ab...
Mar  1 04:40:54 np0005634532 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Mar  1 04:40:54 np0005634532 podman[75804]: 2026-03-01 09:40:54.97115336 +0000 UTC m=+0.046675577 container create 6664049ace048b4adddae1365cde7c16773d892ae197c1350f58a5e5b1183392 (image=quay.io/ceph/ceph:v19, name=ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mon-compute-0, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Mar  1 04:40:55 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fecc5e585d750199c01f4f6d2664f163baf3582ba0bb491198b406a4f910d792/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Mar  1 04:40:55 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fecc5e585d750199c01f4f6d2664f163baf3582ba0bb491198b406a4f910d792/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Mar  1 04:40:55 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fecc5e585d750199c01f4f6d2664f163baf3582ba0bb491198b406a4f910d792/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Mar  1 04:40:55 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fecc5e585d750199c01f4f6d2664f163baf3582ba0bb491198b406a4f910d792/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Mar  1 04:40:55 np0005634532 podman[75804]: 2026-03-01 09:40:55.036586201 +0000 UTC m=+0.112108448 container init 6664049ace048b4adddae1365cde7c16773d892ae197c1350f58a5e5b1183392 (image=quay.io/ceph/ceph:v19, name=ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mon-compute-0, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Mar  1 04:40:55 np0005634532 podman[75804]: 2026-03-01 09:40:55.049370878 +0000 UTC m=+0.124893095 container start 6664049ace048b4adddae1365cde7c16773d892ae197c1350f58a5e5b1183392 (image=quay.io/ceph/ceph:v19, name=ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mon-compute-0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Mar  1 04:40:55 np0005634532 podman[75804]: 2026-03-01 09:40:54.953105383 +0000 UTC m=+0.028627610 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Mar  1 04:40:55 np0005634532 bash[75804]: 6664049ace048b4adddae1365cde7c16773d892ae197c1350f58a5e5b1183392
Mar  1 04:40:55 np0005634532 systemd[1]: Started Ceph mon.compute-0 for 437b1e74-f995-5d64-af1d-257ce01d77ab.
Mar  1 04:40:55 np0005634532 ceph-mon[75825]: set uid:gid to 167:167 (ceph:ceph)
Mar  1 04:40:55 np0005634532 ceph-mon[75825]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mon, pid 2
Mar  1 04:40:55 np0005634532 ceph-mon[75825]: pidfile_write: ignore empty --pid-file
Mar  1 04:40:55 np0005634532 ceph-mon[75825]: load: jerasure load: lrc 
Mar  1 04:40:55 np0005634532 ceph-mon[75825]: rocksdb: RocksDB version: 7.9.2
Mar  1 04:40:55 np0005634532 ceph-mon[75825]: rocksdb: Git sha 0
Mar  1 04:40:55 np0005634532 ceph-mon[75825]: rocksdb: Compile date 2025-07-17 03:12:14
Mar  1 04:40:55 np0005634532 ceph-mon[75825]: rocksdb: DB SUMMARY
Mar  1 04:40:55 np0005634532 ceph-mon[75825]: rocksdb: DB Session ID:  FJWJGIYC2V5ZEQGX709M
Mar  1 04:40:55 np0005634532 ceph-mon[75825]: rocksdb: CURRENT file:  CURRENT
Mar  1 04:40:55 np0005634532 ceph-mon[75825]: rocksdb: IDENTITY file:  IDENTITY
Mar  1 04:40:55 np0005634532 ceph-mon[75825]: rocksdb: MANIFEST file:  MANIFEST-000010 size: 179 Bytes
Mar  1 04:40:55 np0005634532 ceph-mon[75825]: rocksdb: SST files in /var/lib/ceph/mon/ceph-compute-0/store.db dir, Total Num: 1, files: 000008.sst 
Mar  1 04:40:55 np0005634532 ceph-mon[75825]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-compute-0/store.db: 000009.log size: 58731 ; 
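The DB SUMMARY above enumerates the mon store's RocksDB files (CURRENT, IDENTITY, MANIFEST-000010, one SST, one WAL). A trivial sketch that lists the same directory, assuming it runs on the host with the store path shown in the log:

    from pathlib import Path

    # Enumerate the mon store's RocksDB files, as the DB SUMMARY does.
    store = Path("/var/lib/ceph/mon/ceph-compute-0/store.db")
    for f in sorted(store.iterdir()):
        print(f"{f.name}\t{f.stat().st_size} bytes")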
Mar  1 04:40:55 np0005634532 ceph-mon[75825]: rocksdb:                         Options.error_if_exists: 0
Mar  1 04:40:55 np0005634532 ceph-mon[75825]: rocksdb:                       Options.create_if_missing: 0
Mar  1 04:40:55 np0005634532 ceph-mon[75825]: rocksdb:                         Options.paranoid_checks: 1
Mar  1 04:40:55 np0005634532 ceph-mon[75825]: rocksdb:             Options.flush_verify_memtable_count: 1
Mar  1 04:40:55 np0005634532 ceph-mon[75825]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Mar  1 04:40:55 np0005634532 ceph-mon[75825]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Mar  1 04:40:55 np0005634532 ceph-mon[75825]: rocksdb:                                     Options.env: 0x563d92fa3c20
Mar  1 04:40:55 np0005634532 ceph-mon[75825]: rocksdb:                                      Options.fs: PosixFileSystem
Mar  1 04:40:55 np0005634532 ceph-mon[75825]: rocksdb:                                Options.info_log: 0x563d94b5dac0
Mar  1 04:40:55 np0005634532 ceph-mon[75825]: rocksdb:                Options.max_file_opening_threads: 16
Mar  1 04:40:55 np0005634532 ceph-mon[75825]: rocksdb:                              Options.statistics: (nil)
Mar  1 04:40:55 np0005634532 ceph-mon[75825]: rocksdb:                               Options.use_fsync: 0
Mar  1 04:40:55 np0005634532 ceph-mon[75825]: rocksdb:                       Options.max_log_file_size: 0
Mar  1 04:40:55 np0005634532 ceph-mon[75825]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Mar  1 04:40:55 np0005634532 ceph-mon[75825]: rocksdb:                   Options.log_file_time_to_roll: 0
Mar  1 04:40:55 np0005634532 ceph-mon[75825]: rocksdb:                       Options.keep_log_file_num: 1000
Mar  1 04:40:55 np0005634532 ceph-mon[75825]: rocksdb:                    Options.recycle_log_file_num: 0
Mar  1 04:40:55 np0005634532 ceph-mon[75825]: rocksdb:                         Options.allow_fallocate: 1
Mar  1 04:40:55 np0005634532 ceph-mon[75825]: rocksdb:                        Options.allow_mmap_reads: 0
Mar  1 04:40:55 np0005634532 ceph-mon[75825]: rocksdb:                       Options.allow_mmap_writes: 0
Mar  1 04:40:55 np0005634532 ceph-mon[75825]: rocksdb:                        Options.use_direct_reads: 0
Mar  1 04:40:55 np0005634532 ceph-mon[75825]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Mar  1 04:40:55 np0005634532 ceph-mon[75825]: rocksdb:          Options.create_missing_column_families: 0
Mar  1 04:40:55 np0005634532 ceph-mon[75825]: rocksdb:                              Options.db_log_dir: 
Mar  1 04:40:55 np0005634532 ceph-mon[75825]: rocksdb:                                 Options.wal_dir: 
Mar  1 04:40:55 np0005634532 ceph-mon[75825]: rocksdb:                Options.table_cache_numshardbits: 6
Mar  1 04:40:55 np0005634532 ceph-mon[75825]: rocksdb:                         Options.WAL_ttl_seconds: 0
Mar  1 04:40:55 np0005634532 ceph-mon[75825]: rocksdb:                       Options.WAL_size_limit_MB: 0
Mar  1 04:40:55 np0005634532 ceph-mon[75825]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Mar  1 04:40:55 np0005634532 ceph-mon[75825]: rocksdb:             Options.manifest_preallocation_size: 4194304
Mar  1 04:40:55 np0005634532 ceph-mon[75825]: rocksdb:                     Options.is_fd_close_on_exec: 1
Mar  1 04:40:55 np0005634532 ceph-mon[75825]: rocksdb:                   Options.advise_random_on_open: 1
Mar  1 04:40:55 np0005634532 ceph-mon[75825]: rocksdb:                    Options.db_write_buffer_size: 0
Mar  1 04:40:55 np0005634532 ceph-mon[75825]: rocksdb:                    Options.write_buffer_manager: 0x563d94b61900
Mar  1 04:40:55 np0005634532 ceph-mon[75825]: rocksdb:         Options.access_hint_on_compaction_start: 1
Mar  1 04:40:55 np0005634532 ceph-mon[75825]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Mar  1 04:40:55 np0005634532 ceph-mon[75825]: rocksdb:                      Options.use_adaptive_mutex: 0
Mar  1 04:40:55 np0005634532 ceph-mon[75825]: rocksdb:                            Options.rate_limiter: (nil)
Mar  1 04:40:55 np0005634532 ceph-mon[75825]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Mar  1 04:40:55 np0005634532 ceph-mon[75825]: rocksdb:                       Options.wal_recovery_mode: 2
Mar  1 04:40:55 np0005634532 ceph-mon[75825]: rocksdb:                  Options.enable_thread_tracking: 0
Mar  1 04:40:55 np0005634532 ceph-mon[75825]: rocksdb:                  Options.enable_pipelined_write: 0
Mar  1 04:40:55 np0005634532 ceph-mon[75825]: rocksdb:                  Options.unordered_write: 0
Mar  1 04:40:55 np0005634532 ceph-mon[75825]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Mar  1 04:40:55 np0005634532 ceph-mon[75825]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Mar  1 04:40:55 np0005634532 ceph-mon[75825]: rocksdb:             Options.write_thread_max_yield_usec: 100
Mar  1 04:40:55 np0005634532 ceph-mon[75825]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Mar  1 04:40:55 np0005634532 ceph-mon[75825]: rocksdb:                               Options.row_cache: None
Mar  1 04:40:55 np0005634532 ceph-mon[75825]: rocksdb:                              Options.wal_filter: None
Mar  1 04:40:55 np0005634532 ceph-mon[75825]: rocksdb:             Options.avoid_flush_during_recovery: 0
Mar  1 04:40:55 np0005634532 ceph-mon[75825]: rocksdb:             Options.allow_ingest_behind: 0
Mar  1 04:40:55 np0005634532 ceph-mon[75825]: rocksdb:             Options.two_write_queues: 0
Mar  1 04:40:55 np0005634532 ceph-mon[75825]: rocksdb:             Options.manual_wal_flush: 0
Mar  1 04:40:55 np0005634532 ceph-mon[75825]: rocksdb:             Options.wal_compression: 0
Mar  1 04:40:55 np0005634532 ceph-mon[75825]: rocksdb:             Options.atomic_flush: 0
Mar  1 04:40:55 np0005634532 ceph-mon[75825]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Mar  1 04:40:55 np0005634532 ceph-mon[75825]: rocksdb:                 Options.persist_stats_to_disk: 0
Mar  1 04:40:55 np0005634532 ceph-mon[75825]: rocksdb:                 Options.write_dbid_to_manifest: 0
Mar  1 04:40:55 np0005634532 ceph-mon[75825]: rocksdb:                 Options.log_readahead_size: 0
Mar  1 04:40:55 np0005634532 ceph-mon[75825]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Mar  1 04:40:55 np0005634532 ceph-mon[75825]: rocksdb:                 Options.best_efforts_recovery: 0
Mar  1 04:40:55 np0005634532 ceph-mon[75825]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Mar  1 04:40:55 np0005634532 ceph-mon[75825]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Mar  1 04:40:55 np0005634532 ceph-mon[75825]: rocksdb:             Options.allow_data_in_errors: 0
Mar  1 04:40:55 np0005634532 ceph-mon[75825]: rocksdb:             Options.db_host_id: __hostname__
Mar  1 04:40:55 np0005634532 ceph-mon[75825]: rocksdb:             Options.enforce_single_del_contracts: true
Mar  1 04:40:55 np0005634532 ceph-mon[75825]: rocksdb:             Options.max_background_jobs: 2
Mar  1 04:40:55 np0005634532 ceph-mon[75825]: rocksdb:             Options.max_background_compactions: -1
Mar  1 04:40:55 np0005634532 ceph-mon[75825]: rocksdb:             Options.max_subcompactions: 1
Mar  1 04:40:55 np0005634532 ceph-mon[75825]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Mar  1 04:40:55 np0005634532 ceph-mon[75825]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Mar  1 04:40:55 np0005634532 ceph-mon[75825]: rocksdb:             Options.delayed_write_rate : 16777216
Mar  1 04:40:55 np0005634532 ceph-mon[75825]: rocksdb:             Options.max_total_wal_size: 0
Mar  1 04:40:55 np0005634532 ceph-mon[75825]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Mar  1 04:40:55 np0005634532 ceph-mon[75825]: rocksdb:                   Options.stats_dump_period_sec: 600
Mar  1 04:40:55 np0005634532 ceph-mon[75825]: rocksdb:                 Options.stats_persist_period_sec: 600
Mar  1 04:40:55 np0005634532 ceph-mon[75825]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Mar  1 04:40:55 np0005634532 ceph-mon[75825]: rocksdb:                          Options.max_open_files: -1
Mar  1 04:40:55 np0005634532 ceph-mon[75825]: rocksdb:                          Options.bytes_per_sync: 0
Mar  1 04:40:55 np0005634532 ceph-mon[75825]: rocksdb:                      Options.wal_bytes_per_sync: 0
Mar  1 04:40:55 np0005634532 ceph-mon[75825]: rocksdb:                   Options.strict_bytes_per_sync: 0
Mar  1 04:40:55 np0005634532 ceph-mon[75825]: rocksdb:       Options.compaction_readahead_size: 0
Mar  1 04:40:55 np0005634532 ceph-mon[75825]: rocksdb:                  Options.max_background_flushes: -1
Mar  1 04:40:55 np0005634532 ceph-mon[75825]: rocksdb: Compression algorithms supported:
Mar  1 04:40:55 np0005634532 ceph-mon[75825]: rocksdb:     kZSTD supported: 0
Mar  1 04:40:55 np0005634532 ceph-mon[75825]: rocksdb:     kXpressCompression supported: 0
Mar  1 04:40:55 np0005634532 ceph-mon[75825]: rocksdb:     kBZip2Compression supported: 0
Mar  1 04:40:55 np0005634532 ceph-mon[75825]: rocksdb:     kZSTDNotFinalCompression supported: 0
Mar  1 04:40:55 np0005634532 ceph-mon[75825]: rocksdb:     kLZ4Compression supported: 1
Mar  1 04:40:55 np0005634532 ceph-mon[75825]: rocksdb:     kZlibCompression supported: 1
Mar  1 04:40:55 np0005634532 ceph-mon[75825]: rocksdb:     kLZ4HCCompression supported: 1
Mar  1 04:40:55 np0005634532 ceph-mon[75825]: rocksdb:     kSnappyCompression supported: 1
Mar  1 04:40:55 np0005634532 ceph-mon[75825]: rocksdb: Fast CRC32 supported: Supported on x86
Mar  1 04:40:55 np0005634532 ceph-mon[75825]: rocksdb: DMutex implementation: pthread_mutex_t
Mar  1 04:40:55 np0005634532 ceph-mon[75825]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000010
Mar  1 04:40:55 np0005634532 ceph-mon[75825]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Mar  1 04:40:55 np0005634532 ceph-mon[75825]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Mar  1 04:40:55 np0005634532 ceph-mon[75825]: rocksdb:           Options.merge_operator: 
Mar  1 04:40:55 np0005634532 ceph-mon[75825]: rocksdb:        Options.compaction_filter: None
Mar  1 04:40:55 np0005634532 ceph-mon[75825]: rocksdb:        Options.compaction_filter_factory: None
Mar  1 04:40:55 np0005634532 ceph-mon[75825]: rocksdb:  Options.sst_partitioner_factory: None
Mar  1 04:40:55 np0005634532 ceph-mon[75825]: rocksdb:         Options.memtable_factory: SkipListFactory
Mar  1 04:40:55 np0005634532 ceph-mon[75825]: rocksdb:            Options.table_factory: BlockBasedTable
Mar  1 04:40:55 np0005634532 ceph-mon[75825]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x563d94b5caa0)
      cache_index_and_filter_blocks: 1
      cache_index_and_filter_blocks_with_high_priority: 0
      pin_l0_filter_and_index_blocks_in_cache: 0
      pin_top_level_index_and_filter: 1
      index_type: 0
      data_block_index_type: 0
      index_shortening: 1
      data_block_hash_table_util_ratio: 0.750000
      checksum: 4
      no_block_cache: 0
      block_cache: 0x563d94b81350
      block_cache_name: BinnedLRUCache
      block_cache_options:
        capacity : 536870912
        num_shard_bits : 4
        strict_capacity_limit : 0
        high_pri_pool_ratio: 0.000
      block_cache_compressed: (nil)
      persistent_cache: (nil)
      block_size: 4096
      block_size_deviation: 10
      block_restart_interval: 16
      index_block_restart_interval: 1
      metadata_block_size: 4096
      partition_filters: 0
      use_delta_encoding: 1
      filter_policy: bloomfilter
      whole_key_filtering: 1
      verify_compression: 0
      read_amp_bytes_per_bit: 0
      format_version: 5
      enable_index_compression: 1
      block_align: 0
      max_auto_readahead_size: 262144
      prepopulate_block_cache: 0
      initial_auto_readahead_size: 8192
      num_file_reads_for_auto_readahead: 2
Mar  1 04:40:55 np0005634532 ceph-mon[75825]: rocksdb:        Options.write_buffer_size: 33554432
Mar  1 04:40:55 np0005634532 ceph-mon[75825]: rocksdb:  Options.max_write_buffer_number: 2
Mar  1 04:40:55 np0005634532 ceph-mon[75825]: rocksdb:          Options.compression: NoCompression
Mar  1 04:40:55 np0005634532 ceph-mon[75825]: rocksdb:                  Options.bottommost_compression: Disabled
Mar  1 04:40:55 np0005634532 ceph-mon[75825]: rocksdb:       Options.prefix_extractor: nullptr
Mar  1 04:40:55 np0005634532 ceph-mon[75825]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Mar  1 04:40:55 np0005634532 ceph-mon[75825]: rocksdb:             Options.num_levels: 7
Mar  1 04:40:55 np0005634532 ceph-mon[75825]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Mar  1 04:40:55 np0005634532 ceph-mon[75825]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Mar  1 04:40:55 np0005634532 ceph-mon[75825]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Mar  1 04:40:55 np0005634532 ceph-mon[75825]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Mar  1 04:40:55 np0005634532 ceph-mon[75825]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Mar  1 04:40:55 np0005634532 ceph-mon[75825]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Mar  1 04:40:55 np0005634532 ceph-mon[75825]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Mar  1 04:40:55 np0005634532 ceph-mon[75825]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Mar  1 04:40:55 np0005634532 ceph-mon[75825]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Mar  1 04:40:55 np0005634532 ceph-mon[75825]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Mar  1 04:40:55 np0005634532 ceph-mon[75825]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Mar  1 04:40:55 np0005634532 ceph-mon[75825]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Mar  1 04:40:55 np0005634532 ceph-mon[75825]: rocksdb:            Options.compression_opts.window_bits: -14
Mar  1 04:40:55 np0005634532 ceph-mon[75825]: rocksdb:                  Options.compression_opts.level: 32767
Mar  1 04:40:55 np0005634532 ceph-mon[75825]: rocksdb:               Options.compression_opts.strategy: 0
Mar  1 04:40:55 np0005634532 ceph-mon[75825]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Mar  1 04:40:55 np0005634532 ceph-mon[75825]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Mar  1 04:40:55 np0005634532 ceph-mon[75825]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Mar  1 04:40:55 np0005634532 ceph-mon[75825]: rocksdb:         Options.compression_opts.parallel_threads: 1
Mar  1 04:40:55 np0005634532 ceph-mon[75825]: rocksdb:                  Options.compression_opts.enabled: false
Mar  1 04:40:55 np0005634532 ceph-mon[75825]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Mar  1 04:40:55 np0005634532 ceph-mon[75825]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Mar  1 04:40:55 np0005634532 ceph-mon[75825]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Mar  1 04:40:55 np0005634532 ceph-mon[75825]: rocksdb:              Options.level0_stop_writes_trigger: 36
Mar  1 04:40:55 np0005634532 ceph-mon[75825]: rocksdb:                   Options.target_file_size_base: 67108864
Mar  1 04:40:55 np0005634532 ceph-mon[75825]: rocksdb:             Options.target_file_size_multiplier: 1
Mar  1 04:40:55 np0005634532 ceph-mon[75825]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Mar  1 04:40:55 np0005634532 ceph-mon[75825]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Mar  1 04:40:55 np0005634532 ceph-mon[75825]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Mar  1 04:40:55 np0005634532 ceph-mon[75825]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Mar  1 04:40:55 np0005634532 ceph-mon[75825]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Mar  1 04:40:55 np0005634532 ceph-mon[75825]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Mar  1 04:40:55 np0005634532 ceph-mon[75825]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Mar  1 04:40:55 np0005634532 ceph-mon[75825]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Mar  1 04:40:55 np0005634532 ceph-mon[75825]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Mar  1 04:40:55 np0005634532 ceph-mon[75825]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Mar  1 04:40:55 np0005634532 ceph-mon[75825]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Mar  1 04:40:55 np0005634532 ceph-mon[75825]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Mar  1 04:40:55 np0005634532 ceph-mon[75825]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Mar  1 04:40:55 np0005634532 ceph-mon[75825]: rocksdb:                        Options.arena_block_size: 1048576
Mar  1 04:40:55 np0005634532 ceph-mon[75825]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Mar  1 04:40:55 np0005634532 ceph-mon[75825]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Mar  1 04:40:55 np0005634532 ceph-mon[75825]: rocksdb:                Options.disable_auto_compactions: 0
Mar  1 04:40:55 np0005634532 ceph-mon[75825]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Mar  1 04:40:55 np0005634532 ceph-mon[75825]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Mar  1 04:40:55 np0005634532 ceph-mon[75825]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Mar  1 04:40:55 np0005634532 ceph-mon[75825]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Mar  1 04:40:55 np0005634532 ceph-mon[75825]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Mar  1 04:40:55 np0005634532 ceph-mon[75825]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Mar  1 04:40:55 np0005634532 ceph-mon[75825]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Mar  1 04:40:55 np0005634532 ceph-mon[75825]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Mar  1 04:40:55 np0005634532 ceph-mon[75825]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Mar  1 04:40:55 np0005634532 ceph-mon[75825]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Mar  1 04:40:55 np0005634532 ceph-mon[75825]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Mar  1 04:40:55 np0005634532 ceph-mon[75825]: rocksdb:                   Options.inplace_update_support: 0
Mar  1 04:40:55 np0005634532 ceph-mon[75825]: rocksdb:                 Options.inplace_update_num_locks: 10000
Mar  1 04:40:55 np0005634532 ceph-mon[75825]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Mar  1 04:40:55 np0005634532 ceph-mon[75825]: rocksdb:               Options.memtable_whole_key_filtering: 0
Mar  1 04:40:55 np0005634532 ceph-mon[75825]: rocksdb:   Options.memtable_huge_page_size: 0
Mar  1 04:40:55 np0005634532 ceph-mon[75825]: rocksdb:                           Options.bloom_locality: 0
Mar  1 04:40:55 np0005634532 ceph-mon[75825]: rocksdb:                    Options.max_successive_merges: 0
Mar  1 04:40:55 np0005634532 ceph-mon[75825]: rocksdb:                Options.optimize_filters_for_hits: 0
Mar  1 04:40:55 np0005634532 ceph-mon[75825]: rocksdb:                Options.paranoid_file_checks: 0
Mar  1 04:40:55 np0005634532 ceph-mon[75825]: rocksdb:                Options.force_consistency_checks: 1
Mar  1 04:40:55 np0005634532 ceph-mon[75825]: rocksdb:                Options.report_bg_io_stats: 0
Mar  1 04:40:55 np0005634532 ceph-mon[75825]: rocksdb:                               Options.ttl: 2592000
Mar  1 04:40:55 np0005634532 ceph-mon[75825]: rocksdb:          Options.periodic_compaction_seconds: 0
Mar  1 04:40:55 np0005634532 ceph-mon[75825]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Mar  1 04:40:55 np0005634532 ceph-mon[75825]: rocksdb:    Options.preserve_internal_time_seconds: 0
Mar  1 04:40:55 np0005634532 ceph-mon[75825]: rocksdb:                       Options.enable_blob_files: false
Mar  1 04:40:55 np0005634532 ceph-mon[75825]: rocksdb:                           Options.min_blob_size: 0
Mar  1 04:40:55 np0005634532 ceph-mon[75825]: rocksdb:                          Options.blob_file_size: 268435456
Mar  1 04:40:55 np0005634532 ceph-mon[75825]: rocksdb:                   Options.blob_compression_type: NoCompression
Mar  1 04:40:55 np0005634532 ceph-mon[75825]: rocksdb:          Options.enable_blob_garbage_collection: false
Mar  1 04:40:55 np0005634532 ceph-mon[75825]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Mar  1 04:40:55 np0005634532 ceph-mon[75825]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Mar  1 04:40:55 np0005634532 ceph-mon[75825]: rocksdb:          Options.blob_compaction_readahead_size: 0
Mar  1 04:40:55 np0005634532 ceph-mon[75825]: rocksdb:                Options.blob_file_starting_level: 0
Mar  1 04:40:55 np0005634532 ceph-mon[75825]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Mar  1 04:40:55 np0005634532 ceph-mon[75825]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000010 succeeded,manifest_file_number is 10, next_file_number is 12, last_sequence is 5, log_number is 5,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 5
Mar  1 04:40:55 np0005634532 ceph-mon[75825]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Mar  1 04:40:55 np0005634532 ceph-mon[75825]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: d85a3bc5-3dc5-432f-9fab-fa926ce32d3d
Mar  1 04:40:55 np0005634532 ceph-mon[75825]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772358055084119, "job": 1, "event": "recovery_started", "wal_files": [9]}
Mar  1 04:40:55 np0005634532 ceph-mon[75825]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #9 mode 2
Mar  1 04:40:55 np0005634532 ceph-mon[75825]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772358055087539, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 13, "file_size": 58482, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 8, "largest_seqno": 137, "table_properties": {"data_size": 56956, "index_size": 168, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 325, "raw_key_size": 3182, "raw_average_key_size": 30, "raw_value_size": 54473, "raw_average_value_size": 523, "num_data_blocks": 9, "num_entries": 104, "num_filter_entries": 104, "num_deletions": 3, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772358055, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d85a3bc5-3dc5-432f-9fab-fa926ce32d3d", "db_session_id": "FJWJGIYC2V5ZEQGX709M", "orig_file_number": 13, "seqno_to_time_mapping": "N/A"}}
Mar  1 04:40:55 np0005634532 ceph-mon[75825]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772358055087621, "job": 1, "event": "recovery_finished"}
Mar  1 04:40:55 np0005634532 ceph-mon[75825]: rocksdb: [db/version_set.cc:5047] Creating manifest 15
Mar  1 04:40:55 np0005634532 ceph-mon[75825]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000009.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Mar  1 04:40:55 np0005634532 ceph-mon[75825]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x563d94b82e00
Mar  1 04:40:55 np0005634532 ceph-mon[75825]: rocksdb: DB pointer 0x563d94c8c000
Mar  1 04:40:55 np0005634532 ceph-mon[75825]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Mar  1 04:40:55 np0005634532 ceph-mon[75825]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 0.0 total, 0.0 interval
Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      2/0   59.01 KB   0.5      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     18.2      0.00              0.00         1    0.003       0      0       0.0       0.0
 Sum      2/0   59.01 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     18.2      0.00              0.00         1    0.003       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     18.2      0.00              0.00         1    0.003       0      0       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     18.2      0.00              0.00         1    0.003       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 0.0 total, 0.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 3.20 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 3.20 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x563d94b81350#2 capacity: 512.00 MB usage: 26.45 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 4.9e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,25.61 KB,0.0048846%) FilterBlock(2,0.48 KB,9.23872e-05%) IndexBlock(2,0.36 KB,6.85453e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **
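
Multi-line daemon output such as this stats dump is normally flattened by rsyslog into a single record, with control characters escaped as '#' plus three octal digits (#012 is a newline). A small sketch, not part of the log, that undoes the escaping when reading such records:

    # Sketch: rsyslog escapes control characters as "#" + three octal
    # digits, so an embedded newline arrives as "#012". This undoes it.
    import re

    CTRL_ESC = re.compile(r"#([0-7]{3})")

    def unescape_syslog(line: str) -> str:
        return CTRL_ESC.sub(lambda m: chr(int(m.group(1), 8)), line)

    print(unescape_syslog("** DB Stats **#012Uptime(secs): 0.0 total"))
    # ** DB Stats **
    # Uptime(secs): 0.0 total
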
Mar  1 04:40:55 np0005634532 ceph-mon[75825]: starting mon.compute-0 rank 0 at public addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] at bind addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon_data /var/lib/ceph/mon/ceph-compute-0 fsid 437b1e74-f995-5d64-af1d-257ce01d77ab
Mar  1 04:40:55 np0005634532 ceph-mon[75825]: mon.compute-0@-1(???) e1 preinit fsid 437b1e74-f995-5d64-af1d-257ce01d77ab
Mar  1 04:40:55 np0005634532 ceph-mon[75825]: mon.compute-0@-1(???).mds e1 new map
Mar  1 04:40:55 np0005634532 ceph-mon[75825]: mon.compute-0@-1(???).mds e1 print_map
e1
btime 2026-03-01T09:40:52:961395+0000
enable_multiple, ever_enabled_multiple: 1,1
default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
legacy client fscid: -1

No filesystems configured
Mar  1 04:40:55 np0005634532 ceph-mon[75825]: mon.compute-0@-1(???).osd e1 crush map has features 3314932999778484224, adjusting msgr requires
Mar  1 04:40:55 np0005634532 ceph-mon[75825]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Mar  1 04:40:55 np0005634532 ceph-mon[75825]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Mar  1 04:40:55 np0005634532 ceph-mon[75825]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Mar  1 04:40:55 np0005634532 ceph-mon[75825]: mon.compute-0@-1(???).paxosservice(auth 1..2) refresh upgraded, format 0 -> 3
Mar  1 04:40:55 np0005634532 ceph-mon[75825]: mon.compute-0@-1(probing) e1  my rank is now 0 (was -1)
Mar  1 04:40:55 np0005634532 ceph-mon[75825]: mon.compute-0@0(probing) e1 win_standalone_election
Mar  1 04:40:55 np0005634532 ceph-mon[75825]: paxos.0).electionLogic(3) init, last seen epoch 3, mid-election, bumping
Mar  1 04:40:55 np0005634532 ceph-mon[75825]: mon.compute-0@0(electing) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Mar  1 04:40:55 np0005634532 ceph-mon[75825]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Mar  1 04:40:55 np0005634532 ceph-mon[75825]: log_channel(cluster) log [DBG] : monmap epoch 1
Mar  1 04:40:55 np0005634532 ceph-mon[75825]: log_channel(cluster) log [DBG] : fsid 437b1e74-f995-5d64-af1d-257ce01d77ab
Mar  1 04:40:55 np0005634532 ceph-mon[75825]: log_channel(cluster) log [DBG] : last_changed 2026-03-01T09:40:50.920361+0000
Mar  1 04:40:55 np0005634532 ceph-mon[75825]: log_channel(cluster) log [DBG] : created 2026-03-01T09:40:50.920361+0000
Mar  1 04:40:55 np0005634532 ceph-mon[75825]: log_channel(cluster) log [DBG] : min_mon_release 19 (squid)
Mar  1 04:40:55 np0005634532 ceph-mon[75825]: log_channel(cluster) log [DBG] : election_strategy: 1
Mar  1 04:40:55 np0005634532 ceph-mon[75825]: log_channel(cluster) log [DBG] : 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Mar  1 04:40:55 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Mar  1 04:40:55 np0005634532 ceph-mon[75825]: log_channel(cluster) log [DBG] : fsmap 
Mar  1 04:40:55 np0005634532 ceph-mon[75825]: log_channel(cluster) log [DBG] : osdmap e1: 0 total, 0 up, 0 in
Mar  1 04:40:55 np0005634532 ceph-mon[75825]: log_channel(cluster) log [DBG] : mgrmap e1: no daemons active
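
The leader is now answering map queries, and the summary it logs here can be pulled from the cluster directly. A minimal sketch, assuming the ceph CLI and an admin keyring are reachable (as they are inside the short-lived containers below):

    # Sketch: fetch the same status the mon is serving, as JSON.
    import json, subprocess

    out = subprocess.run(
        ["ceph", "status", "--format", "json"],
        check=True, capture_output=True, text=True,
    ).stdout
    status = json.loads(out)
    print(status["quorum_names"], status["health"]["status"])
    # ['compute-0'] HEALTH_OK
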
Mar  1 04:40:55 np0005634532 podman[75826]: 2026-03-01 09:40:55.133564784 +0000 UTC m=+0.052183434 container create 0552d95cc404bad326ddf021b9a3f257909014d8ce22be1dc8171dd3d91d0955 (image=quay.io/ceph/ceph:v19, name=clever_bartik, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Mar  1 04:40:55 np0005634532 ceph-mon[75825]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Mar  1 04:40:55 np0005634532 systemd[1]: Started libpod-conmon-0552d95cc404bad326ddf021b9a3f257909014d8ce22be1dc8171dd3d91d0955.scope.
Mar  1 04:40:55 np0005634532 podman[75826]: 2026-03-01 09:40:55.112366299 +0000 UTC m=+0.030984929 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Mar  1 04:40:55 np0005634532 systemd[1]: Started libcrun container.
Mar  1 04:40:55 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/21755eaa1d76d80cdc901545c2eaff4ef72af600f5f65d2cccdf3bf760600a19/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Mar  1 04:40:55 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/21755eaa1d76d80cdc901545c2eaff4ef72af600f5f65d2cccdf3bf760600a19/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Mar  1 04:40:55 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/21755eaa1d76d80cdc901545c2eaff4ef72af600f5f65d2cccdf3bf760600a19/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
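
The 0x7fffffff in these kernel messages is the largest signed 32-bit time_t, which is what XFS inodes can store when the bigtime feature is not enabled; converting it confirms the 2038 cutoff:

    # Sketch: 0x7fffffff seconds after the epoch is the "year 2038" limit.
    from datetime import datetime, timezone

    limit = 0x7FFFFFFF            # 2147483647 seconds since the epoch
    print(datetime.fromtimestamp(limit, tz=timezone.utc))
    # 2038-01-19 03:14:07+00:00
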
Mar  1 04:40:55 np0005634532 podman[75826]: 2026-03-01 09:40:55.25045426 +0000 UTC m=+0.169072970 container init 0552d95cc404bad326ddf021b9a3f257909014d8ce22be1dc8171dd3d91d0955 (image=quay.io/ceph/ceph:v19, name=clever_bartik, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2)
Mar  1 04:40:55 np0005634532 podman[75826]: 2026-03-01 09:40:55.25608726 +0000 UTC m=+0.174705910 container start 0552d95cc404bad326ddf021b9a3f257909014d8ce22be1dc8171dd3d91d0955 (image=quay.io/ceph/ceph:v19, name=clever_bartik, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Mar  1 04:40:55 np0005634532 podman[75826]: 2026-03-01 09:40:55.259518415 +0000 UTC m=+0.178137135 container attach 0552d95cc404bad326ddf021b9a3f257909014d8ce22be1dc8171dd3d91d0955 (image=quay.io/ceph/ceph:v19, name=clever_bartik, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Mar  1 04:40:55 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=public_network}] v 0)
Mar  1 04:40:55 np0005634532 systemd[1]: libpod-0552d95cc404bad326ddf021b9a3f257909014d8ce22be1dc8171dd3d91d0955.scope: Deactivated successfully.
Mar  1 04:40:55 np0005634532 podman[75906]: 2026-03-01 09:40:55.510241286 +0000 UTC m=+0.034737792 container died 0552d95cc404bad326ddf021b9a3f257909014d8ce22be1dc8171dd3d91d0955 (image=quay.io/ceph/ceph:v19, name=clever_bartik, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Mar  1 04:40:55 np0005634532 podman[75906]: 2026-03-01 09:40:55.548960185 +0000 UTC m=+0.073456681 container remove 0552d95cc404bad326ddf021b9a3f257909014d8ce22be1dc8171dd3d91d0955 (image=quay.io/ceph/ceph:v19, name=clever_bartik, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Mar  1 04:40:55 np0005634532 systemd[1]: libpod-conmon-0552d95cc404bad326ddf021b9a3f257909014d8ce22be1dc8171dd3d91d0955.scope: Deactivated successfully.
Mar  1 04:40:55 np0005634532 podman[75921]: 2026-03-01 09:40:55.618937309 +0000 UTC m=+0.044418022 container create 6726f6689b313d44e4a85c8e04b4353cb9b4b5b09e624c03ef6b36760c2383fb (image=quay.io/ceph/ceph:v19, name=brave_curran, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Mar  1 04:40:55 np0005634532 systemd[1]: Started libpod-conmon-6726f6689b313d44e4a85c8e04b4353cb9b4b5b09e624c03ef6b36760c2383fb.scope.
Mar  1 04:40:55 np0005634532 systemd[1]: Started libcrun container.
Mar  1 04:40:55 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/719bebdfe95b8cc090053314a24f6eaf37b2fce4800ed7f4570fb959eebf1d91/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Mar  1 04:40:55 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/719bebdfe95b8cc090053314a24f6eaf37b2fce4800ed7f4570fb959eebf1d91/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Mar  1 04:40:55 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/719bebdfe95b8cc090053314a24f6eaf37b2fce4800ed7f4570fb959eebf1d91/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Mar  1 04:40:55 np0005634532 podman[75921]: 2026-03-01 09:40:55.691819294 +0000 UTC m=+0.117300057 container init 6726f6689b313d44e4a85c8e04b4353cb9b4b5b09e624c03ef6b36760c2383fb (image=quay.io/ceph/ceph:v19, name=brave_curran, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Mar  1 04:40:55 np0005634532 podman[75921]: 2026-03-01 09:40:55.596630656 +0000 UTC m=+0.022111429 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Mar  1 04:40:55 np0005634532 podman[75921]: 2026-03-01 09:40:55.696304415 +0000 UTC m=+0.121785098 container start 6726f6689b313d44e4a85c8e04b4353cb9b4b5b09e624c03ef6b36760c2383fb (image=quay.io/ceph/ceph:v19, name=brave_curran, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Mar  1 04:40:55 np0005634532 podman[75921]: 2026-03-01 09:40:55.699555476 +0000 UTC m=+0.125036219 container attach 6726f6689b313d44e4a85c8e04b4353cb9b4b5b09e624c03ef6b36760c2383fb (image=quay.io/ceph/ceph:v19, name=brave_curran, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325)
Mar  1 04:40:55 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=cluster_network}] v 0)
Mar  1 04:40:55 np0005634532 systemd[1]: libpod-6726f6689b313d44e4a85c8e04b4353cb9b4b5b09e624c03ef6b36760c2383fb.scope: Deactivated successfully.
Mar  1 04:40:55 np0005634532 podman[75921]: 2026-03-01 09:40:55.976045116 +0000 UTC m=+0.401525799 container died 6726f6689b313d44e4a85c8e04b4353cb9b4b5b09e624c03ef6b36760c2383fb (image=quay.io/ceph/ceph:v19, name=brave_curran, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Mar  1 04:40:55 np0005634532 systemd[1]: var-lib-containers-storage-overlay-719bebdfe95b8cc090053314a24f6eaf37b2fce4800ed7f4570fb959eebf1d91-merged.mount: Deactivated successfully.
Mar  1 04:40:56 np0005634532 podman[75921]: 2026-03-01 09:40:56.011855684 +0000 UTC m=+0.437336357 container remove 6726f6689b313d44e4a85c8e04b4353cb9b4b5b09e624c03ef6b36760c2383fb (image=quay.io/ceph/ceph:v19, name=brave_curran, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Mar  1 04:40:56 np0005634532 systemd[1]: libpod-conmon-6726f6689b313d44e4a85c8e04b4353cb9b4b5b09e624c03ef6b36760c2383fb.scope: Deactivated successfully.
Mar  1 04:40:56 np0005634532 systemd[1]: Reloading.
Mar  1 04:40:56 np0005634532 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update the package to include a native systemd unit file, in order to make it safer and more robust.
Mar  1 04:40:56 np0005634532 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Mar  1 04:40:56 np0005634532 systemd[1]: Reloading.
Mar  1 04:40:56 np0005634532 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update the package to include a native systemd unit file, in order to make it safer and more robust.
Mar  1 04:40:56 np0005634532 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Mar  1 04:40:56 np0005634532 systemd[1]: Starting Ceph mgr.compute-0.ebwufc for 437b1e74-f995-5d64-af1d-257ce01d77ab...
Mar  1 04:40:56 np0005634532 podman[76115]: 2026-03-01 09:40:56.808289066 +0000 UTC m=+0.058924091 container create 676788cabaabc7dfea6441d34ead97fc73a6c31e370bf854e7fea7e7753b99d6 (image=quay.io/ceph/ceph:v19, name=ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/)
Mar  1 04:40:56 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d1734b870bcc36e24f4db3f79872ac74b133c5e40811dbfcf9fbb4e1f1f7229/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Mar  1 04:40:56 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d1734b870bcc36e24f4db3f79872ac74b133c5e40811dbfcf9fbb4e1f1f7229/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Mar  1 04:40:56 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d1734b870bcc36e24f4db3f79872ac74b133c5e40811dbfcf9fbb4e1f1f7229/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Mar  1 04:40:56 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d1734b870bcc36e24f4db3f79872ac74b133c5e40811dbfcf9fbb4e1f1f7229/merged/var/lib/ceph/mgr/ceph-compute-0.ebwufc supports timestamps until 2038 (0x7fffffff)
Mar  1 04:40:56 np0005634532 podman[76115]: 2026-03-01 09:40:56.869715918 +0000 UTC m=+0.120350973 container init 676788cabaabc7dfea6441d34ead97fc73a6c31e370bf854e7fea7e7753b99d6 (image=quay.io/ceph/ceph:v19, name=ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Mar  1 04:40:56 np0005634532 podman[76115]: 2026-03-01 09:40:56.783153533 +0000 UTC m=+0.033788608 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Mar  1 04:40:56 np0005634532 podman[76115]: 2026-03-01 09:40:56.883648353 +0000 UTC m=+0.134283378 container start 676788cabaabc7dfea6441d34ead97fc73a6c31e370bf854e7fea7e7753b99d6 (image=quay.io/ceph/ceph:v19, name=ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Mar  1 04:40:56 np0005634532 bash[76115]: 676788cabaabc7dfea6441d34ead97fc73a6c31e370bf854e7fea7e7753b99d6
Mar  1 04:40:56 np0005634532 systemd[1]: Started Ceph mgr.compute-0.ebwufc for 437b1e74-f995-5d64-af1d-257ce01d77ab.
Mar  1 04:40:56 np0005634532 ceph-mgr[76134]: set uid:gid to 167:167 (ceph:ceph)
Mar  1 04:40:56 np0005634532 ceph-mgr[76134]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mgr, pid 2
Mar  1 04:40:56 np0005634532 ceph-mgr[76134]: pidfile_write: ignore empty --pid-file
Mar  1 04:40:56 np0005634532 podman[76135]: 2026-03-01 09:40:56.985936407 +0000 UTC m=+0.061889664 container create ce34a0c5ba26f6c466e46796c074a09d310410d0138772a94a87d9f2e40d7143 (image=quay.io/ceph/ceph:v19, name=magical_colden, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Mar  1 04:40:57 np0005634532 ceph-mgr[76134]: mgr[py] Loading python module 'alerts'
Mar  1 04:40:57 np0005634532 systemd[1]: Started libpod-conmon-ce34a0c5ba26f6c466e46796c074a09d310410d0138772a94a87d9f2e40d7143.scope.
Mar  1 04:40:57 np0005634532 podman[76135]: 2026-03-01 09:40:56.959353379 +0000 UTC m=+0.035306696 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Mar  1 04:40:57 np0005634532 systemd[1]: Started libcrun container.
Mar  1 04:40:57 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4bc41ce54761b6d7f151a809aaa3cd60dac5380dad92735448e5c9c07557fada/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Mar  1 04:40:57 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4bc41ce54761b6d7f151a809aaa3cd60dac5380dad92735448e5c9c07557fada/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Mar  1 04:40:57 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4bc41ce54761b6d7f151a809aaa3cd60dac5380dad92735448e5c9c07557fada/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Mar  1 04:40:57 np0005634532 podman[76135]: 2026-03-01 09:40:57.084791676 +0000 UTC m=+0.160744923 container init ce34a0c5ba26f6c466e46796c074a09d310410d0138772a94a87d9f2e40d7143 (image=quay.io/ceph/ceph:v19, name=magical_colden, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Mar  1 04:40:57 np0005634532 podman[76135]: 2026-03-01 09:40:57.093186944 +0000 UTC m=+0.169140171 container start ce34a0c5ba26f6c466e46796c074a09d310410d0138772a94a87d9f2e40d7143 (image=quay.io/ceph/ceph:v19, name=magical_colden, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Mar  1 04:40:57 np0005634532 podman[76135]: 2026-03-01 09:40:57.09665354 +0000 UTC m=+0.172606767 container attach ce34a0c5ba26f6c466e46796c074a09d310410d0138772a94a87d9f2e40d7143 (image=quay.io/ceph/ceph:v19, name=magical_colden, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Mar  1 04:40:57 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: 2026-03-01T09:40:57.102+0000 7f5026b31140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Mar  1 04:40:57 np0005634532 ceph-mgr[76134]: mgr[py] Module alerts has missing NOTIFY_TYPES member
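
These "-1 mgr[py] Module ... has missing NOTIFY_TYPES member" lines, repeated for most bundled modules below, are the mgr warning that a module does not declare which cluster notifications its notify() hook consumes; the modules still load. For reference, a minimal sketch of a module that does declare them, assuming the squid-era mgr_module API (NotifyType enum); the module name is hypothetical and this only runs inside ceph-mgr:

    # Sketch of a mgr module declaring NOTIFY_TYPES; "Hello" is hypothetical.
    from mgr_module import MgrModule, NotifyType

    class Hello(MgrModule):
        # Which notifications notify() wants; omitting this attribute is
        # what triggers the "has missing NOTIFY_TYPES member" warning above.
        NOTIFY_TYPES = [NotifyType.mon_map, NotifyType.osd_map]

        def notify(self, notify_type: NotifyType, notify_id: str) -> None:
            self.log.info("got %s notification", notify_type)
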
Mar  1 04:40:57 np0005634532 ceph-mgr[76134]: mgr[py] Loading python module 'balancer'
Mar  1 04:40:57 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: 2026-03-01T09:40:57.198+0000 7f5026b31140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Mar  1 04:40:57 np0005634532 ceph-mgr[76134]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Mar  1 04:40:57 np0005634532 ceph-mgr[76134]: mgr[py] Loading python module 'cephadm'
Mar  1 04:40:57 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Mar  1 04:40:57 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/351161391' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Mar  1 04:40:57 np0005634532 magical_colden[76171]: 
Mar  1 04:40:57 np0005634532 magical_colden[76171]: {
Mar  1 04:40:57 np0005634532 magical_colden[76171]:    "fsid": "437b1e74-f995-5d64-af1d-257ce01d77ab",
Mar  1 04:40:57 np0005634532 magical_colden[76171]:    "health": {
Mar  1 04:40:57 np0005634532 magical_colden[76171]:        "status": "HEALTH_OK",
Mar  1 04:40:57 np0005634532 magical_colden[76171]:        "checks": {},
Mar  1 04:40:57 np0005634532 magical_colden[76171]:        "mutes": []
Mar  1 04:40:57 np0005634532 magical_colden[76171]:    },
Mar  1 04:40:57 np0005634532 magical_colden[76171]:    "election_epoch": 5,
Mar  1 04:40:57 np0005634532 magical_colden[76171]:    "quorum": [
Mar  1 04:40:57 np0005634532 magical_colden[76171]:        0
Mar  1 04:40:57 np0005634532 magical_colden[76171]:    ],
Mar  1 04:40:57 np0005634532 magical_colden[76171]:    "quorum_names": [
Mar  1 04:40:57 np0005634532 magical_colden[76171]:        "compute-0"
Mar  1 04:40:57 np0005634532 magical_colden[76171]:    ],
Mar  1 04:40:57 np0005634532 magical_colden[76171]:    "quorum_age": 2,
Mar  1 04:40:57 np0005634532 magical_colden[76171]:    "monmap": {
Mar  1 04:40:57 np0005634532 magical_colden[76171]:        "epoch": 1,
Mar  1 04:40:57 np0005634532 magical_colden[76171]:        "min_mon_release_name": "squid",
Mar  1 04:40:57 np0005634532 magical_colden[76171]:        "num_mons": 1
Mar  1 04:40:57 np0005634532 magical_colden[76171]:    },
Mar  1 04:40:57 np0005634532 magical_colden[76171]:    "osdmap": {
Mar  1 04:40:57 np0005634532 magical_colden[76171]:        "epoch": 1,
Mar  1 04:40:57 np0005634532 magical_colden[76171]:        "num_osds": 0,
Mar  1 04:40:57 np0005634532 magical_colden[76171]:        "num_up_osds": 0,
Mar  1 04:40:57 np0005634532 magical_colden[76171]:        "osd_up_since": 0,
Mar  1 04:40:57 np0005634532 magical_colden[76171]:        "num_in_osds": 0,
Mar  1 04:40:57 np0005634532 magical_colden[76171]:        "osd_in_since": 0,
Mar  1 04:40:57 np0005634532 magical_colden[76171]:        "num_remapped_pgs": 0
Mar  1 04:40:57 np0005634532 magical_colden[76171]:    },
Mar  1 04:40:57 np0005634532 magical_colden[76171]:    "pgmap": {
Mar  1 04:40:57 np0005634532 magical_colden[76171]:        "pgs_by_state": [],
Mar  1 04:40:57 np0005634532 magical_colden[76171]:        "num_pgs": 0,
Mar  1 04:40:57 np0005634532 magical_colden[76171]:        "num_pools": 0,
Mar  1 04:40:57 np0005634532 magical_colden[76171]:        "num_objects": 0,
Mar  1 04:40:57 np0005634532 magical_colden[76171]:        "data_bytes": 0,
Mar  1 04:40:57 np0005634532 magical_colden[76171]:        "bytes_used": 0,
Mar  1 04:40:57 np0005634532 magical_colden[76171]:        "bytes_avail": 0,
Mar  1 04:40:57 np0005634532 magical_colden[76171]:        "bytes_total": 0
Mar  1 04:40:57 np0005634532 magical_colden[76171]:    },
Mar  1 04:40:57 np0005634532 magical_colden[76171]:    "fsmap": {
Mar  1 04:40:57 np0005634532 magical_colden[76171]:        "epoch": 1,
Mar  1 04:40:57 np0005634532 magical_colden[76171]:        "btime": "2026-03-01T09:40:52:961395+0000",
Mar  1 04:40:57 np0005634532 magical_colden[76171]:        "by_rank": [],
Mar  1 04:40:57 np0005634532 magical_colden[76171]:        "up:standby": 0
Mar  1 04:40:57 np0005634532 magical_colden[76171]:    },
Mar  1 04:40:57 np0005634532 magical_colden[76171]:    "mgrmap": {
Mar  1 04:40:57 np0005634532 magical_colden[76171]:        "available": false,
Mar  1 04:40:57 np0005634532 magical_colden[76171]:        "num_standbys": 0,
Mar  1 04:40:57 np0005634532 magical_colden[76171]:        "modules": [
Mar  1 04:40:57 np0005634532 magical_colden[76171]:            "iostat",
Mar  1 04:40:57 np0005634532 magical_colden[76171]:            "nfs",
Mar  1 04:40:57 np0005634532 magical_colden[76171]:            "restful"
Mar  1 04:40:57 np0005634532 magical_colden[76171]:        ],
Mar  1 04:40:57 np0005634532 magical_colden[76171]:        "services": {}
Mar  1 04:40:57 np0005634532 magical_colden[76171]:    },
Mar  1 04:40:57 np0005634532 magical_colden[76171]:    "servicemap": {
Mar  1 04:40:57 np0005634532 magical_colden[76171]:        "epoch": 1,
Mar  1 04:40:57 np0005634532 magical_colden[76171]:        "modified": "2026-03-01T09:40:52.964363+0000",
Mar  1 04:40:57 np0005634532 magical_colden[76171]:        "services": {}
Mar  1 04:40:57 np0005634532 magical_colden[76171]:    },
Mar  1 04:40:57 np0005634532 magical_colden[76171]:    "progress_events": {}
Mar  1 04:40:57 np0005634532 magical_colden[76171]: }
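
Each container's JSON status, like the dump above, reaches the journal one line at a time under the container name. A small sketch, not part of the log, that strips the syslog prefix and reassembles the payload:

    # Sketch: rebuild a container's multi-line JSON output from per-line
    # syslog records of the form "<timestamp> <host> name[pid]: <text>".
    import json, re

    PREFIX = re.compile(r"^.*?\w+\[\d+\]:\s?")

    def parse_container_json(lines):
        body = "\n".join(PREFIX.sub("", ln) for ln in lines)
        return json.loads(body)

    lines = [
        "Mar  1 04:40:57 np0005634532 magical_colden[76171]: {",
        'Mar  1 04:40:57 np0005634532 magical_colden[76171]:    "fsid": "437b1e74-f995-5d64-af1d-257ce01d77ab"',
        "Mar  1 04:40:57 np0005634532 magical_colden[76171]: }",
    ]
    print(parse_container_json(lines)["fsid"])
    # 437b1e74-f995-5d64-af1d-257ce01d77ab
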
Mar  1 04:40:57 np0005634532 systemd[1]: libpod-ce34a0c5ba26f6c466e46796c074a09d310410d0138772a94a87d9f2e40d7143.scope: Deactivated successfully.
Mar  1 04:40:57 np0005634532 conmon[76171]: conmon ce34a0c5ba26f6c466e4 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ce34a0c5ba26f6c466e46796c074a09d310410d0138772a94a87d9f2e40d7143.scope/container/memory.events
Mar  1 04:40:57 np0005634532 podman[76135]: 2026-03-01 09:40:57.323851069 +0000 UTC m=+0.399804336 container died ce34a0c5ba26f6c466e46796c074a09d310410d0138772a94a87d9f2e40d7143 (image=quay.io/ceph/ceph:v19, name=magical_colden, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Mar  1 04:40:57 np0005634532 systemd[1]: var-lib-containers-storage-overlay-4bc41ce54761b6d7f151a809aaa3cd60dac5380dad92735448e5c9c07557fada-merged.mount: Deactivated successfully.
Mar  1 04:40:57 np0005634532 podman[76135]: 2026-03-01 09:40:57.372937636 +0000 UTC m=+0.448890893 container remove ce34a0c5ba26f6c466e46796c074a09d310410d0138772a94a87d9f2e40d7143 (image=quay.io/ceph/ceph:v19, name=magical_colden, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Mar  1 04:40:57 np0005634532 systemd[1]: libpod-conmon-ce34a0c5ba26f6c466e46796c074a09d310410d0138772a94a87d9f2e40d7143.scope: Deactivated successfully.
Mar  1 04:40:57 np0005634532 ceph-mgr[76134]: mgr[py] Loading python module 'crash'
Mar  1 04:40:57 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: 2026-03-01T09:40:57.991+0000 7f5026b31140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Mar  1 04:40:57 np0005634532 ceph-mgr[76134]: mgr[py] Module crash has missing NOTIFY_TYPES member
Mar  1 04:40:57 np0005634532 ceph-mgr[76134]: mgr[py] Loading python module 'dashboard'
Mar  1 04:40:58 np0005634532 ceph-mgr[76134]: mgr[py] Loading python module 'devicehealth'
Mar  1 04:40:58 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: 2026-03-01T09:40:58.676+0000 7f5026b31140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Mar  1 04:40:58 np0005634532 ceph-mgr[76134]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Mar  1 04:40:58 np0005634532 ceph-mgr[76134]: mgr[py] Loading python module 'diskprediction_local'
Mar  1 04:40:58 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Mar  1 04:40:58 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: Improvements in the case of bugs are welcome, but such support is not on the NumPy roadmap, and full support may require significant effort to achieve.
Mar  1 04:40:58 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]:  from numpy import show_config as show_numpy_config
Mar  1 04:40:58 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: 2026-03-01T09:40:58.862+0000 7f5026b31140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Mar  1 04:40:58 np0005634532 ceph-mgr[76134]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Mar  1 04:40:58 np0005634532 ceph-mgr[76134]: mgr[py] Loading python module 'influx'
Mar  1 04:40:58 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: 2026-03-01T09:40:58.928+0000 7f5026b31140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Mar  1 04:40:58 np0005634532 ceph-mgr[76134]: mgr[py] Module influx has missing NOTIFY_TYPES member
Mar  1 04:40:58 np0005634532 ceph-mgr[76134]: mgr[py] Loading python module 'insights'
Mar  1 04:40:58 np0005634532 ceph-mgr[76134]: mgr[py] Loading python module 'iostat'
Mar  1 04:40:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: 2026-03-01T09:40:59.058+0000 7f5026b31140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Mar  1 04:40:59 np0005634532 ceph-mgr[76134]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Mar  1 04:40:59 np0005634532 ceph-mgr[76134]: mgr[py] Loading python module 'k8sevents'
Mar  1 04:40:59 np0005634532 ceph-mgr[76134]: mgr[py] Loading python module 'localpool'
Mar  1 04:40:59 np0005634532 ceph-mgr[76134]: mgr[py] Loading python module 'mds_autoscaler'
Mar  1 04:40:59 np0005634532 podman[76220]: 2026-03-01 09:40:59.474838641 +0000 UTC m=+0.068097798 container create 6bd47b4a1f7b1fe621ae0f8c7dd1adcfed9fea7de05254c2e95cbc16e8aaa8a1 (image=quay.io/ceph/ceph:v19, name=jovial_lumiere, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Mar  1 04:40:59 np0005634532 systemd[1]: Started libpod-conmon-6bd47b4a1f7b1fe621ae0f8c7dd1adcfed9fea7de05254c2e95cbc16e8aaa8a1.scope.
Mar  1 04:40:59 np0005634532 podman[76220]: 2026-03-01 09:40:59.449491673 +0000 UTC m=+0.042750890 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Mar  1 04:40:59 np0005634532 systemd[1]: Started libcrun container.
Mar  1 04:40:59 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/917125728f5790b24b84617327048adab946758299e93c0bf3bc1b274ab029c5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Mar  1 04:40:59 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/917125728f5790b24b84617327048adab946758299e93c0bf3bc1b274ab029c5/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Mar  1 04:40:59 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/917125728f5790b24b84617327048adab946758299e93c0bf3bc1b274ab029c5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Mar  1 04:40:59 np0005634532 podman[76220]: 2026-03-01 09:40:59.561849167 +0000 UTC m=+0.155108344 container init 6bd47b4a1f7b1fe621ae0f8c7dd1adcfed9fea7de05254c2e95cbc16e8aaa8a1 (image=quay.io/ceph/ceph:v19, name=jovial_lumiere, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Mar  1 04:40:59 np0005634532 podman[76220]: 2026-03-01 09:40:59.566849531 +0000 UTC m=+0.160108688 container start 6bd47b4a1f7b1fe621ae0f8c7dd1adcfed9fea7de05254c2e95cbc16e8aaa8a1 (image=quay.io/ceph/ceph:v19, name=jovial_lumiere, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Mar  1 04:40:59 np0005634532 podman[76220]: 2026-03-01 09:40:59.570706556 +0000 UTC m=+0.163965723 container attach 6bd47b4a1f7b1fe621ae0f8c7dd1adcfed9fea7de05254c2e95cbc16e8aaa8a1 (image=quay.io/ceph/ceph:v19, name=jovial_lumiere, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid)
Mar  1 04:40:59 np0005634532 ceph-mgr[76134]: mgr[py] Loading python module 'mirroring'
Mar  1 04:40:59 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Mar  1 04:40:59 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3515316116' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Mar  1 04:40:59 np0005634532 jovial_lumiere[76237]: 
Mar  1 04:40:59 np0005634532 jovial_lumiere[76237]: {
Mar  1 04:40:59 np0005634532 jovial_lumiere[76237]:    "fsid": "437b1e74-f995-5d64-af1d-257ce01d77ab",
Mar  1 04:40:59 np0005634532 jovial_lumiere[76237]:    "health": {
Mar  1 04:40:59 np0005634532 jovial_lumiere[76237]:        "status": "HEALTH_OK",
Mar  1 04:40:59 np0005634532 jovial_lumiere[76237]:        "checks": {},
Mar  1 04:40:59 np0005634532 jovial_lumiere[76237]:        "mutes": []
Mar  1 04:40:59 np0005634532 jovial_lumiere[76237]:    },
Mar  1 04:40:59 np0005634532 jovial_lumiere[76237]:    "election_epoch": 5,
Mar  1 04:40:59 np0005634532 jovial_lumiere[76237]:    "quorum": [
Mar  1 04:40:59 np0005634532 jovial_lumiere[76237]:        0
Mar  1 04:40:59 np0005634532 jovial_lumiere[76237]:    ],
Mar  1 04:40:59 np0005634532 jovial_lumiere[76237]:    "quorum_names": [
Mar  1 04:40:59 np0005634532 jovial_lumiere[76237]:        "compute-0"
Mar  1 04:40:59 np0005634532 jovial_lumiere[76237]:    ],
Mar  1 04:40:59 np0005634532 jovial_lumiere[76237]:    "quorum_age": 4,
Mar  1 04:40:59 np0005634532 jovial_lumiere[76237]:    "monmap": {
Mar  1 04:40:59 np0005634532 jovial_lumiere[76237]:        "epoch": 1,
Mar  1 04:40:59 np0005634532 jovial_lumiere[76237]:        "min_mon_release_name": "squid",
Mar  1 04:40:59 np0005634532 jovial_lumiere[76237]:        "num_mons": 1
Mar  1 04:40:59 np0005634532 jovial_lumiere[76237]:    },
Mar  1 04:40:59 np0005634532 jovial_lumiere[76237]:    "osdmap": {
Mar  1 04:40:59 np0005634532 jovial_lumiere[76237]:        "epoch": 1,
Mar  1 04:40:59 np0005634532 jovial_lumiere[76237]:        "num_osds": 0,
Mar  1 04:40:59 np0005634532 jovial_lumiere[76237]:        "num_up_osds": 0,
Mar  1 04:40:59 np0005634532 jovial_lumiere[76237]:        "osd_up_since": 0,
Mar  1 04:40:59 np0005634532 jovial_lumiere[76237]:        "num_in_osds": 0,
Mar  1 04:40:59 np0005634532 jovial_lumiere[76237]:        "osd_in_since": 0,
Mar  1 04:40:59 np0005634532 jovial_lumiere[76237]:        "num_remapped_pgs": 0
Mar  1 04:40:59 np0005634532 jovial_lumiere[76237]:    },
Mar  1 04:40:59 np0005634532 jovial_lumiere[76237]:    "pgmap": {
Mar  1 04:40:59 np0005634532 jovial_lumiere[76237]:        "pgs_by_state": [],
Mar  1 04:40:59 np0005634532 jovial_lumiere[76237]:        "num_pgs": 0,
Mar  1 04:40:59 np0005634532 jovial_lumiere[76237]:        "num_pools": 0,
Mar  1 04:40:59 np0005634532 jovial_lumiere[76237]:        "num_objects": 0,
Mar  1 04:40:59 np0005634532 jovial_lumiere[76237]:        "data_bytes": 0,
Mar  1 04:40:59 np0005634532 jovial_lumiere[76237]:        "bytes_used": 0,
Mar  1 04:40:59 np0005634532 jovial_lumiere[76237]:        "bytes_avail": 0,
Mar  1 04:40:59 np0005634532 jovial_lumiere[76237]:        "bytes_total": 0
Mar  1 04:40:59 np0005634532 jovial_lumiere[76237]:    },
Mar  1 04:40:59 np0005634532 jovial_lumiere[76237]:    "fsmap": {
Mar  1 04:40:59 np0005634532 jovial_lumiere[76237]:        "epoch": 1,
Mar  1 04:40:59 np0005634532 jovial_lumiere[76237]:        "btime": "2026-03-01T09:40:52:961395+0000",
Mar  1 04:40:59 np0005634532 jovial_lumiere[76237]:        "by_rank": [],
Mar  1 04:40:59 np0005634532 jovial_lumiere[76237]:        "up:standby": 0
Mar  1 04:40:59 np0005634532 jovial_lumiere[76237]:    },
Mar  1 04:40:59 np0005634532 jovial_lumiere[76237]:    "mgrmap": {
Mar  1 04:40:59 np0005634532 jovial_lumiere[76237]:        "available": false,
Mar  1 04:40:59 np0005634532 jovial_lumiere[76237]:        "num_standbys": 0,
Mar  1 04:40:59 np0005634532 jovial_lumiere[76237]:        "modules": [
Mar  1 04:40:59 np0005634532 jovial_lumiere[76237]:            "iostat",
Mar  1 04:40:59 np0005634532 jovial_lumiere[76237]:            "nfs",
Mar  1 04:40:59 np0005634532 jovial_lumiere[76237]:            "restful"
Mar  1 04:40:59 np0005634532 jovial_lumiere[76237]:        ],
Mar  1 04:40:59 np0005634532 jovial_lumiere[76237]:        "services": {}
Mar  1 04:40:59 np0005634532 jovial_lumiere[76237]:    },
Mar  1 04:40:59 np0005634532 jovial_lumiere[76237]:    "servicemap": {
Mar  1 04:40:59 np0005634532 jovial_lumiere[76237]:        "epoch": 1,
Mar  1 04:40:59 np0005634532 jovial_lumiere[76237]:        "modified": "2026-03-01T09:40:52.964363+0000",
Mar  1 04:40:59 np0005634532 jovial_lumiere[76237]:        "services": {}
Mar  1 04:40:59 np0005634532 jovial_lumiere[76237]:    },
Mar  1 04:40:59 np0005634532 jovial_lumiere[76237]:    "progress_events": {}
Mar  1 04:40:59 np0005634532 jovial_lumiere[76237]: }
Mar  1 04:40:59 np0005634532 ceph-mgr[76134]: mgr[py] Loading python module 'nfs'
Mar  1 04:40:59 np0005634532 systemd[1]: libpod-6bd47b4a1f7b1fe621ae0f8c7dd1adcfed9fea7de05254c2e95cbc16e8aaa8a1.scope: Deactivated successfully.
Mar  1 04:40:59 np0005634532 podman[76220]: 2026-03-01 09:40:59.81341986 +0000 UTC m=+0.406679037 container died 6bd47b4a1f7b1fe621ae0f8c7dd1adcfed9fea7de05254c2e95cbc16e8aaa8a1 (image=quay.io/ceph/ceph:v19, name=jovial_lumiere, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Mar  1 04:40:59 np0005634532 systemd[1]: var-lib-containers-storage-overlay-917125728f5790b24b84617327048adab946758299e93c0bf3bc1b274ab029c5-merged.mount: Deactivated successfully.
Mar  1 04:40:59 np0005634532 podman[76220]: 2026-03-01 09:40:59.853567885 +0000 UTC m=+0.446827042 container remove 6bd47b4a1f7b1fe621ae0f8c7dd1adcfed9fea7de05254c2e95cbc16e8aaa8a1 (image=quay.io/ceph/ceph:v19, name=jovial_lumiere, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Mar  1 04:40:59 np0005634532 systemd[1]: libpod-conmon-6bd47b4a1f7b1fe621ae0f8c7dd1adcfed9fea7de05254c2e95cbc16e8aaa8a1.scope: Deactivated successfully.
Mar  1 04:41:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: 2026-03-01T09:41:00.096+0000 7f5026b31140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Mar  1 04:41:00 np0005634532 ceph-mgr[76134]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Mar  1 04:41:00 np0005634532 ceph-mgr[76134]: mgr[py] Loading python module 'orchestrator'
Mar  1 04:41:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: 2026-03-01T09:41:00.346+0000 7f5026b31140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Mar  1 04:41:00 np0005634532 ceph-mgr[76134]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Mar  1 04:41:00 np0005634532 ceph-mgr[76134]: mgr[py] Loading python module 'osd_perf_query'
Mar  1 04:41:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: 2026-03-01T09:41:00.418+0000 7f5026b31140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Mar  1 04:41:00 np0005634532 ceph-mgr[76134]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Mar  1 04:41:00 np0005634532 ceph-mgr[76134]: mgr[py] Loading python module 'osd_support'
Mar  1 04:41:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: 2026-03-01T09:41:00.494+0000 7f5026b31140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Mar  1 04:41:00 np0005634532 ceph-mgr[76134]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Mar  1 04:41:00 np0005634532 ceph-mgr[76134]: mgr[py] Loading python module 'pg_autoscaler'
Mar  1 04:41:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: 2026-03-01T09:41:00.566+0000 7f5026b31140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Mar  1 04:41:00 np0005634532 ceph-mgr[76134]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Mar  1 04:41:00 np0005634532 ceph-mgr[76134]: mgr[py] Loading python module 'progress'
Mar  1 04:41:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: 2026-03-01T09:41:00.630+0000 7f5026b31140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Mar  1 04:41:00 np0005634532 ceph-mgr[76134]: mgr[py] Module progress has missing NOTIFY_TYPES member
Mar  1 04:41:00 np0005634532 ceph-mgr[76134]: mgr[py] Loading python module 'prometheus'
Mar  1 04:41:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: 2026-03-01T09:41:00.944+0000 7f5026b31140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Mar  1 04:41:00 np0005634532 ceph-mgr[76134]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Mar  1 04:41:00 np0005634532 ceph-mgr[76134]: mgr[py] Loading python module 'rbd_support'
Mar  1 04:41:01 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: 2026-03-01T09:41:01.027+0000 7f5026b31140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Mar  1 04:41:01 np0005634532 ceph-mgr[76134]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Mar  1 04:41:01 np0005634532 ceph-mgr[76134]: mgr[py] Loading python module 'restful'
Mar  1 04:41:01 np0005634532 ceph-mgr[76134]: mgr[py] Loading python module 'rgw'
Mar  1 04:41:01 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: 2026-03-01T09:41:01.393+0000 7f5026b31140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Mar  1 04:41:01 np0005634532 ceph-mgr[76134]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Mar  1 04:41:01 np0005634532 ceph-mgr[76134]: mgr[py] Loading python module 'rook'
Mar  1 04:41:01 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: 2026-03-01T09:41:01.854+0000 7f5026b31140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Mar  1 04:41:01 np0005634532 ceph-mgr[76134]: mgr[py] Module rook has missing NOTIFY_TYPES member
Mar  1 04:41:01 np0005634532 ceph-mgr[76134]: mgr[py] Loading python module 'selftest'
Mar  1 04:41:01 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: 2026-03-01T09:41:01.917+0000 7f5026b31140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Mar  1 04:41:01 np0005634532 ceph-mgr[76134]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Mar  1 04:41:01 np0005634532 ceph-mgr[76134]: mgr[py] Loading python module 'snap_schedule'
Mar  1 04:41:01 np0005634532 podman[76276]: 2026-03-01 09:41:01.937617619 +0000 UTC m=+0.062853008 container create 9493d50d626857d40ced4c7b4f38a6b7f00a052c99f338c7d589035308a8dfd7 (image=quay.io/ceph/ceph:v19, name=competent_chatterjee, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Mar  1 04:41:01 np0005634532 systemd[1]: Started libpod-conmon-9493d50d626857d40ced4c7b4f38a6b7f00a052c99f338c7d589035308a8dfd7.scope.
Mar  1 04:41:01 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: 2026-03-01T09:41:01.991+0000 7f5026b31140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Mar  1 04:41:01 np0005634532 ceph-mgr[76134]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Mar  1 04:41:01 np0005634532 ceph-mgr[76134]: mgr[py] Loading python module 'stats'
Mar  1 04:41:01 np0005634532 podman[76276]: 2026-03-01 09:41:01.905813971 +0000 UTC m=+0.031049420 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Mar  1 04:41:02 np0005634532 systemd[1]: Started libcrun container.
Mar  1 04:41:02 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0aacec851b6770dd27a8ae36196088cb3c9e992c1b237e726cd290c6fe5a01db/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Mar  1 04:41:02 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0aacec851b6770dd27a8ae36196088cb3c9e992c1b237e726cd290c6fe5a01db/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Mar  1 04:41:02 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0aacec851b6770dd27a8ae36196088cb3c9e992c1b237e726cd290c6fe5a01db/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Mar  1 04:41:02 np0005634532 podman[76276]: 2026-03-01 09:41:02.017053447 +0000 UTC m=+0.142288866 container init 9493d50d626857d40ced4c7b4f38a6b7f00a052c99f338c7d589035308a8dfd7 (image=quay.io/ceph/ceph:v19, name=competent_chatterjee, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Mar  1 04:41:02 np0005634532 podman[76276]: 2026-03-01 09:41:02.024719597 +0000 UTC m=+0.149954986 container start 9493d50d626857d40ced4c7b4f38a6b7f00a052c99f338c7d589035308a8dfd7 (image=quay.io/ceph/ceph:v19, name=competent_chatterjee, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Mar  1 04:41:02 np0005634532 podman[76276]: 2026-03-01 09:41:02.028698466 +0000 UTC m=+0.153933855 container attach 9493d50d626857d40ced4c7b4f38a6b7f00a052c99f338c7d589035308a8dfd7 (image=quay.io/ceph/ceph:v19, name=competent_chatterjee, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Mar  1 04:41:02 np0005634532 ceph-mgr[76134]: mgr[py] Loading python module 'status'
Mar  1 04:41:02 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: 2026-03-01T09:41:02.155+0000 7f5026b31140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Mar  1 04:41:02 np0005634532 ceph-mgr[76134]: mgr[py] Module status has missing NOTIFY_TYPES member
Mar  1 04:41:02 np0005634532 ceph-mgr[76134]: mgr[py] Loading python module 'telegraf'
Mar  1 04:41:02 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: 2026-03-01T09:41:02.224+0000 7f5026b31140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Mar  1 04:41:02 np0005634532 ceph-mgr[76134]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Mar  1 04:41:02 np0005634532 ceph-mgr[76134]: mgr[py] Loading python module 'telemetry'
Mar  1 04:41:02 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Mar  1 04:41:02 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3127282733' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Mar  1 04:41:02 np0005634532 competent_chatterjee[76292]: 
Mar  1 04:41:02 np0005634532 competent_chatterjee[76292]: {
Mar  1 04:41:02 np0005634532 competent_chatterjee[76292]:    "fsid": "437b1e74-f995-5d64-af1d-257ce01d77ab",
Mar  1 04:41:02 np0005634532 competent_chatterjee[76292]:    "health": {
Mar  1 04:41:02 np0005634532 competent_chatterjee[76292]:        "status": "HEALTH_OK",
Mar  1 04:41:02 np0005634532 competent_chatterjee[76292]:        "checks": {},
Mar  1 04:41:02 np0005634532 competent_chatterjee[76292]:        "mutes": []
Mar  1 04:41:02 np0005634532 competent_chatterjee[76292]:    },
Mar  1 04:41:02 np0005634532 competent_chatterjee[76292]:    "election_epoch": 5,
Mar  1 04:41:02 np0005634532 competent_chatterjee[76292]:    "quorum": [
Mar  1 04:41:02 np0005634532 competent_chatterjee[76292]:        0
Mar  1 04:41:02 np0005634532 competent_chatterjee[76292]:    ],
Mar  1 04:41:02 np0005634532 competent_chatterjee[76292]:    "quorum_names": [
Mar  1 04:41:02 np0005634532 competent_chatterjee[76292]:        "compute-0"
Mar  1 04:41:02 np0005634532 competent_chatterjee[76292]:    ],
Mar  1 04:41:02 np0005634532 competent_chatterjee[76292]:    "quorum_age": 7,
Mar  1 04:41:02 np0005634532 competent_chatterjee[76292]:    "monmap": {
Mar  1 04:41:02 np0005634532 competent_chatterjee[76292]:        "epoch": 1,
Mar  1 04:41:02 np0005634532 competent_chatterjee[76292]:        "min_mon_release_name": "squid",
Mar  1 04:41:02 np0005634532 competent_chatterjee[76292]:        "num_mons": 1
Mar  1 04:41:02 np0005634532 competent_chatterjee[76292]:    },
Mar  1 04:41:02 np0005634532 competent_chatterjee[76292]:    "osdmap": {
Mar  1 04:41:02 np0005634532 competent_chatterjee[76292]:        "epoch": 1,
Mar  1 04:41:02 np0005634532 competent_chatterjee[76292]:        "num_osds": 0,
Mar  1 04:41:02 np0005634532 competent_chatterjee[76292]:        "num_up_osds": 0,
Mar  1 04:41:02 np0005634532 competent_chatterjee[76292]:        "osd_up_since": 0,
Mar  1 04:41:02 np0005634532 competent_chatterjee[76292]:        "num_in_osds": 0,
Mar  1 04:41:02 np0005634532 competent_chatterjee[76292]:        "osd_in_since": 0,
Mar  1 04:41:02 np0005634532 competent_chatterjee[76292]:        "num_remapped_pgs": 0
Mar  1 04:41:02 np0005634532 competent_chatterjee[76292]:    },
Mar  1 04:41:02 np0005634532 competent_chatterjee[76292]:    "pgmap": {
Mar  1 04:41:02 np0005634532 competent_chatterjee[76292]:        "pgs_by_state": [],
Mar  1 04:41:02 np0005634532 competent_chatterjee[76292]:        "num_pgs": 0,
Mar  1 04:41:02 np0005634532 competent_chatterjee[76292]:        "num_pools": 0,
Mar  1 04:41:02 np0005634532 competent_chatterjee[76292]:        "num_objects": 0,
Mar  1 04:41:02 np0005634532 competent_chatterjee[76292]:        "data_bytes": 0,
Mar  1 04:41:02 np0005634532 competent_chatterjee[76292]:        "bytes_used": 0,
Mar  1 04:41:02 np0005634532 competent_chatterjee[76292]:        "bytes_avail": 0,
Mar  1 04:41:02 np0005634532 competent_chatterjee[76292]:        "bytes_total": 0
Mar  1 04:41:02 np0005634532 competent_chatterjee[76292]:    },
Mar  1 04:41:02 np0005634532 competent_chatterjee[76292]:    "fsmap": {
Mar  1 04:41:02 np0005634532 competent_chatterjee[76292]:        "epoch": 1,
Mar  1 04:41:02 np0005634532 competent_chatterjee[76292]:        "btime": "2026-03-01T09:40:52:961395+0000",
Mar  1 04:41:02 np0005634532 competent_chatterjee[76292]:        "by_rank": [],
Mar  1 04:41:02 np0005634532 competent_chatterjee[76292]:        "up:standby": 0
Mar  1 04:41:02 np0005634532 competent_chatterjee[76292]:    },
Mar  1 04:41:02 np0005634532 competent_chatterjee[76292]:    "mgrmap": {
Mar  1 04:41:02 np0005634532 competent_chatterjee[76292]:        "available": false,
Mar  1 04:41:02 np0005634532 competent_chatterjee[76292]:        "num_standbys": 0,
Mar  1 04:41:02 np0005634532 competent_chatterjee[76292]:        "modules": [
Mar  1 04:41:02 np0005634532 competent_chatterjee[76292]:            "iostat",
Mar  1 04:41:02 np0005634532 competent_chatterjee[76292]:            "nfs",
Mar  1 04:41:02 np0005634532 competent_chatterjee[76292]:            "restful"
Mar  1 04:41:02 np0005634532 competent_chatterjee[76292]:        ],
Mar  1 04:41:02 np0005634532 competent_chatterjee[76292]:        "services": {}
Mar  1 04:41:02 np0005634532 competent_chatterjee[76292]:    },
Mar  1 04:41:02 np0005634532 competent_chatterjee[76292]:    "servicemap": {
Mar  1 04:41:02 np0005634532 competent_chatterjee[76292]:        "epoch": 1,
Mar  1 04:41:02 np0005634532 competent_chatterjee[76292]:        "modified": "2026-03-01T09:40:52.964363+0000",
Mar  1 04:41:02 np0005634532 competent_chatterjee[76292]:        "services": {}
Mar  1 04:41:02 np0005634532 competent_chatterjee[76292]:    },
Mar  1 04:41:02 np0005634532 competent_chatterjee[76292]:    "progress_events": {}
Mar  1 04:41:02 np0005634532 competent_chatterjee[76292]: }
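The one-shot podman containers in this stretch (jovial_lumiere, competent_chatterjee, ...) are cephadm bootstrap polling the cluster with `ceph status --format json-pretty`. A minimal sketch of consuming the same output directly on the host, assuming the ceph CLI and admin keyring are reachable there (the variable names are illustrative, not from the log):

    import json, subprocess

    # Run the same query the short-lived helper container runs.
    out = subprocess.run(
        ["ceph", "status", "--format", "json"],
        check=True, capture_output=True, text=True,
    ).stdout
    status = json.loads(out)

    # The fields that matter at this stage of bootstrap:
    print(status["health"]["status"])     # e.g. "HEALTH_OK"
    print(status["quorum_names"])         # e.g. ["compute-0"]
    print(status["mgrmap"]["available"])  # False until a mgr activates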
Mar  1 04:41:02 np0005634532 systemd[1]: libpod-9493d50d626857d40ced4c7b4f38a6b7f00a052c99f338c7d589035308a8dfd7.scope: Deactivated successfully.
Mar  1 04:41:02 np0005634532 podman[76276]: 2026-03-01 09:41:02.283901779 +0000 UTC m=+0.409137128 container died 9493d50d626857d40ced4c7b4f38a6b7f00a052c99f338c7d589035308a8dfd7 (image=quay.io/ceph/ceph:v19, name=competent_chatterjee, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Mar  1 04:41:02 np0005634532 systemd[1]: var-lib-containers-storage-overlay-0aacec851b6770dd27a8ae36196088cb3c9e992c1b237e726cd290c6fe5a01db-merged.mount: Deactivated successfully.
Mar  1 04:41:02 np0005634532 podman[76276]: 2026-03-01 09:41:02.322095455 +0000 UTC m=+0.447330804 container remove 9493d50d626857d40ced4c7b4f38a6b7f00a052c99f338c7d589035308a8dfd7 (image=quay.io/ceph/ceph:v19, name=competent_chatterjee, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Mar  1 04:41:02 np0005634532 systemd[1]: libpod-conmon-9493d50d626857d40ced4c7b4f38a6b7f00a052c99f338c7d589035308a8dfd7.scope: Deactivated successfully.
Mar  1 04:41:02 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: 2026-03-01T09:41:02.400+0000 7f5026b31140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Mar  1 04:41:02 np0005634532 ceph-mgr[76134]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Mar  1 04:41:02 np0005634532 ceph-mgr[76134]: mgr[py] Loading python module 'test_orchestrator'
Mar  1 04:41:02 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: 2026-03-01T09:41:02.651+0000 7f5026b31140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Mar  1 04:41:02 np0005634532 ceph-mgr[76134]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Mar  1 04:41:02 np0005634532 ceph-mgr[76134]: mgr[py] Loading python module 'volumes'
Mar  1 04:41:02 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: 2026-03-01T09:41:02.910+0000 7f5026b31140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Mar  1 04:41:02 np0005634532 ceph-mgr[76134]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Mar  1 04:41:02 np0005634532 ceph-mgr[76134]: mgr[py] Loading python module 'zabbix'
Mar  1 04:41:02 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: 2026-03-01T09:41:02.988+0000 7f5026b31140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Mar  1 04:41:02 np0005634532 ceph-mgr[76134]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
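The repeated "-1 mgr[py] Module <name> has missing NOTIFY_TYPES member" lines come from the mgr's python module loader: recent ceph-mgr expects each module to declare which cluster notifications it consumes, so notify() is only invoked for those. They are warnings, not load failures; every module above still loads. A minimal sketch of a module that declares the member, assuming the in-tree mgr_module API (an illustrative module, not one of those above):

    from typing import List
    from mgr_module import MgrModule, NotifyType

    class Example(MgrModule):
        # Declaring NOTIFY_TYPES silences the loader warning and limits
        # which notification kinds the mgr delivers to notify().
        NOTIFY_TYPES: List[NotifyType] = [NotifyType.mon_map, NotifyType.osd_map]

        def notify(self, notify_type: NotifyType, notify_id: str) -> None:
            self.log.info("notified: %s", notify_type)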
Mar  1 04:41:02 np0005634532 ceph-mgr[76134]: ms_deliver_dispatch: unhandled message 0x56219a4b09c0 mon_map magic: 0 from mon.0 v2:192.168.122.100:3300/0
Mar  1 04:41:02 np0005634532 ceph-mon[75825]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.ebwufc
Mar  1 04:41:03 np0005634532 ceph-mgr[76134]: mgr handle_mgr_map Activating!
Mar  1 04:41:03 np0005634532 ceph-mgr[76134]: mgr handle_mgr_map I am now activating
Mar  1 04:41:03 np0005634532 ceph-mon[75825]: log_channel(cluster) log [DBG] : mgrmap e2: compute-0.ebwufc(active, starting, since 0.0140455s)
Mar  1 04:41:03 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata"} v 0)
Mar  1 04:41:03 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/728708180' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "mds metadata"}]: dispatch
Mar  1 04:41:03 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).mds e1 all = 1
Mar  1 04:41:03 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0)
Mar  1 04:41:03 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/728708180' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd metadata"}]: dispatch
Mar  1 04:41:03 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata"} v 0)
Mar  1 04:41:03 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/728708180' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "mon metadata"}]: dispatch
Mar  1 04:41:03 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Mar  1 04:41:03 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/728708180' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Mar  1 04:41:03 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.ebwufc", "id": "compute-0.ebwufc"} v 0)
Mar  1 04:41:03 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/728708180' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "mgr metadata", "who": "compute-0.ebwufc", "id": "compute-0.ebwufc"}]: dispatch
Mar  1 04:41:03 np0005634532 ceph-mgr[76134]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Mar  1 04:41:03 np0005634532 ceph-mgr[76134]: mgr load Constructed class from module: balancer
Mar  1 04:41:03 np0005634532 ceph-mon[75825]: log_channel(cluster) log [INF] : Manager daemon compute-0.ebwufc is now available
Mar  1 04:41:03 np0005634532 ceph-mgr[76134]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Mar  1 04:41:03 np0005634532 ceph-mgr[76134]: mgr load Constructed class from module: crash
Mar  1 04:41:03 np0005634532 ceph-mgr[76134]: [balancer INFO root] Starting
Mar  1 04:41:03 np0005634532 ceph-mgr[76134]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Mar  1 04:41:03 np0005634532 ceph-mgr[76134]: mgr load Constructed class from module: devicehealth
Mar  1 04:41:03 np0005634532 ceph-mgr[76134]: [balancer INFO root] Optimize plan auto_2026-03-01_09:41:03
Mar  1 04:41:03 np0005634532 ceph-mgr[76134]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Mar  1 04:41:03 np0005634532 ceph-mgr[76134]: [balancer INFO root] do_upmap
Mar  1 04:41:03 np0005634532 ceph-mgr[76134]: [balancer INFO root] No pools available
Mar  1 04:41:03 np0005634532 ceph-mgr[76134]: [devicehealth INFO root] Starting
Mar  1 04:41:03 np0005634532 ceph-mgr[76134]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Mar  1 04:41:03 np0005634532 ceph-mgr[76134]: mgr load Constructed class from module: iostat
Mar  1 04:41:03 np0005634532 ceph-mgr[76134]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Mar  1 04:41:03 np0005634532 ceph-mgr[76134]: mgr load Constructed class from module: nfs
Mar  1 04:41:03 np0005634532 ceph-mgr[76134]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Mar  1 04:41:03 np0005634532 ceph-mgr[76134]: mgr load Constructed class from module: orchestrator
Mar  1 04:41:03 np0005634532 ceph-mgr[76134]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Mar  1 04:41:03 np0005634532 ceph-mgr[76134]: mgr load Constructed class from module: pg_autoscaler
Mar  1 04:41:03 np0005634532 ceph-mgr[76134]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Mar  1 04:41:03 np0005634532 ceph-mgr[76134]: mgr load Constructed class from module: progress
Mar  1 04:41:03 np0005634532 ceph-mgr[76134]: [progress INFO root] Loading...
Mar  1 04:41:03 np0005634532 ceph-mgr[76134]: [progress INFO root] No stored events to load
Mar  1 04:41:03 np0005634532 ceph-mgr[76134]: [progress INFO root] Loaded [] historic events
Mar  1 04:41:03 np0005634532 ceph-mgr[76134]: [progress INFO root] Loaded OSDMap, ready.
Mar  1 04:41:03 np0005634532 ceph-mgr[76134]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Mar  1 04:41:03 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] _maybe_adjust
Mar  1 04:41:03 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] recovery thread starting
Mar  1 04:41:03 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] starting setup
Mar  1 04:41:03 np0005634532 ceph-mgr[76134]: mgr load Constructed class from module: rbd_support
Mar  1 04:41:03 np0005634532 ceph-mgr[76134]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Mar  1 04:41:03 np0005634532 ceph-mgr[76134]: mgr load Constructed class from module: restful
Mar  1 04:41:03 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.ebwufc/mirror_snapshot_schedule"} v 0)
Mar  1 04:41:03 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/728708180' entity='mgr.compute-0.ebwufc' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.ebwufc/mirror_snapshot_schedule"}]: dispatch
Mar  1 04:41:03 np0005634532 ceph-mgr[76134]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Mar  1 04:41:03 np0005634532 ceph-mgr[76134]: mgr load Constructed class from module: status
Mar  1 04:41:03 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Mar  1 04:41:03 np0005634532 ceph-mgr[76134]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Mar  1 04:41:03 np0005634532 ceph-mgr[76134]: mgr load Constructed class from module: telemetry
Mar  1 04:41:03 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Mar  1 04:41:03 np0005634532 ceph-mgr[76134]: [restful INFO root] server_addr: :: server_port: 8003
Mar  1 04:41:03 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] PerfHandler: starting
Mar  1 04:41:03 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] TaskHandler: starting
Mar  1 04:41:03 np0005634532 ceph-mgr[76134]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Mar  1 04:41:03 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.ebwufc/trash_purge_schedule"} v 0)
Mar  1 04:41:03 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/728708180' entity='mgr.compute-0.ebwufc' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.ebwufc/trash_purge_schedule"}]: dispatch
Mar  1 04:41:03 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/report_id}] v 0)
Mar  1 04:41:03 np0005634532 ceph-mgr[76134]: [restful WARNING root] server not running: no certificate configured
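The restful module deliberately stays down here: it refuses to serve plain HTTP, and no TLS certificate has been configured yet. Running `ceph restful create-self-signed-cert` is the stock way to generate one, after which the server binds to the port announced above (8003).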
Mar  1 04:41:03 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Mar  1 04:41:03 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Mar  1 04:41:03 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] setup complete
Mar  1 04:41:03 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/728708180' entity='mgr.compute-0.ebwufc' 
Mar  1 04:41:03 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/salt}] v 0)
Mar  1 04:41:03 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/728708180' entity='mgr.compute-0.ebwufc' 
Mar  1 04:41:03 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/collection}] v 0)
Mar  1 04:41:03 np0005634532 ceph-mgr[76134]: mgr load Constructed class from module: volumes
Mar  1 04:41:03 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/728708180' entity='mgr.compute-0.ebwufc' 
Mar  1 04:41:03 np0005634532 ceph-mon[75825]: Activating manager daemon compute-0.ebwufc
Mar  1 04:41:03 np0005634532 ceph-mon[75825]: Manager daemon compute-0.ebwufc is now available
Mar  1 04:41:03 np0005634532 ceph-mon[75825]: from='mgr.14102 192.168.122.100:0/728708180' entity='mgr.compute-0.ebwufc' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.ebwufc/mirror_snapshot_schedule"}]: dispatch
Mar  1 04:41:03 np0005634532 ceph-mon[75825]: from='mgr.14102 192.168.122.100:0/728708180' entity='mgr.compute-0.ebwufc' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.ebwufc/trash_purge_schedule"}]: dispatch
Mar  1 04:41:03 np0005634532 ceph-mon[75825]: from='mgr.14102 192.168.122.100:0/728708180' entity='mgr.compute-0.ebwufc' 
Mar  1 04:41:03 np0005634532 ceph-mon[75825]: from='mgr.14102 192.168.122.100:0/728708180' entity='mgr.compute-0.ebwufc' 
Mar  1 04:41:03 np0005634532 ceph-mon[75825]: from='mgr.14102 192.168.122.100:0/728708180' entity='mgr.compute-0.ebwufc' 
Mar  1 04:41:04 np0005634532 ceph-mon[75825]: log_channel(cluster) log [DBG] : mgrmap e3: compute-0.ebwufc(active, since 1.0254s)
Mar  1 04:41:04 np0005634532 podman[76410]: 2026-03-01 09:41:04.389108196 +0000 UTC m=+0.048460001 container create 3bc70680afeef53961090fad82ab88ea1e702994a080a44387a4ac25a5607cba (image=quay.io/ceph/ceph:v19, name=eloquent_mestorf, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Mar  1 04:41:04 np0005634532 systemd[1]: Started libpod-conmon-3bc70680afeef53961090fad82ab88ea1e702994a080a44387a4ac25a5607cba.scope.
Mar  1 04:41:04 np0005634532 podman[76410]: 2026-03-01 09:41:04.363775559 +0000 UTC m=+0.023127414 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Mar  1 04:41:04 np0005634532 systemd[1]: Started libcrun container.
Mar  1 04:41:04 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f5f3729c841aefae4c135b3d7f9cfd0c01aa65a5f5f4c749ef39a1afe6856211/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Mar  1 04:41:04 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f5f3729c841aefae4c135b3d7f9cfd0c01aa65a5f5f4c749ef39a1afe6856211/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Mar  1 04:41:04 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f5f3729c841aefae4c135b3d7f9cfd0c01aa65a5f5f4c749ef39a1afe6856211/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Mar  1 04:41:04 np0005634532 podman[76410]: 2026-03-01 09:41:04.483734371 +0000 UTC m=+0.143086176 container init 3bc70680afeef53961090fad82ab88ea1e702994a080a44387a4ac25a5607cba (image=quay.io/ceph/ceph:v19, name=eloquent_mestorf, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Mar  1 04:41:04 np0005634532 podman[76410]: 2026-03-01 09:41:04.487266298 +0000 UTC m=+0.146618093 container start 3bc70680afeef53961090fad82ab88ea1e702994a080a44387a4ac25a5607cba (image=quay.io/ceph/ceph:v19, name=eloquent_mestorf, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Mar  1 04:41:04 np0005634532 podman[76410]: 2026-03-01 09:41:04.491159295 +0000 UTC m=+0.150511070 container attach 3bc70680afeef53961090fad82ab88ea1e702994a080a44387a4ac25a5607cba (image=quay.io/ceph/ceph:v19, name=eloquent_mestorf, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325)
Mar  1 04:41:04 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Mar  1 04:41:04 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/768447378' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Mar  1 04:41:04 np0005634532 eloquent_mestorf[76426]: 
Mar  1 04:41:04 np0005634532 eloquent_mestorf[76426]: {
Mar  1 04:41:04 np0005634532 eloquent_mestorf[76426]:    "fsid": "437b1e74-f995-5d64-af1d-257ce01d77ab",
Mar  1 04:41:04 np0005634532 eloquent_mestorf[76426]:    "health": {
Mar  1 04:41:04 np0005634532 eloquent_mestorf[76426]:        "status": "HEALTH_OK",
Mar  1 04:41:04 np0005634532 eloquent_mestorf[76426]:        "checks": {},
Mar  1 04:41:04 np0005634532 eloquent_mestorf[76426]:        "mutes": []
Mar  1 04:41:04 np0005634532 eloquent_mestorf[76426]:    },
Mar  1 04:41:04 np0005634532 eloquent_mestorf[76426]:    "election_epoch": 5,
Mar  1 04:41:04 np0005634532 eloquent_mestorf[76426]:    "quorum": [
Mar  1 04:41:04 np0005634532 eloquent_mestorf[76426]:        0
Mar  1 04:41:04 np0005634532 eloquent_mestorf[76426]:    ],
Mar  1 04:41:04 np0005634532 eloquent_mestorf[76426]:    "quorum_names": [
Mar  1 04:41:04 np0005634532 eloquent_mestorf[76426]:        "compute-0"
Mar  1 04:41:04 np0005634532 eloquent_mestorf[76426]:    ],
Mar  1 04:41:04 np0005634532 eloquent_mestorf[76426]:    "quorum_age": 9,
Mar  1 04:41:04 np0005634532 eloquent_mestorf[76426]:    "monmap": {
Mar  1 04:41:04 np0005634532 eloquent_mestorf[76426]:        "epoch": 1,
Mar  1 04:41:04 np0005634532 eloquent_mestorf[76426]:        "min_mon_release_name": "squid",
Mar  1 04:41:04 np0005634532 eloquent_mestorf[76426]:        "num_mons": 1
Mar  1 04:41:04 np0005634532 eloquent_mestorf[76426]:    },
Mar  1 04:41:04 np0005634532 eloquent_mestorf[76426]:    "osdmap": {
Mar  1 04:41:04 np0005634532 eloquent_mestorf[76426]:        "epoch": 1,
Mar  1 04:41:04 np0005634532 eloquent_mestorf[76426]:        "num_osds": 0,
Mar  1 04:41:04 np0005634532 eloquent_mestorf[76426]:        "num_up_osds": 0,
Mar  1 04:41:04 np0005634532 eloquent_mestorf[76426]:        "osd_up_since": 0,
Mar  1 04:41:04 np0005634532 eloquent_mestorf[76426]:        "num_in_osds": 0,
Mar  1 04:41:04 np0005634532 eloquent_mestorf[76426]:        "osd_in_since": 0,
Mar  1 04:41:04 np0005634532 eloquent_mestorf[76426]:        "num_remapped_pgs": 0
Mar  1 04:41:04 np0005634532 eloquent_mestorf[76426]:    },
Mar  1 04:41:04 np0005634532 eloquent_mestorf[76426]:    "pgmap": {
Mar  1 04:41:04 np0005634532 eloquent_mestorf[76426]:        "pgs_by_state": [],
Mar  1 04:41:04 np0005634532 eloquent_mestorf[76426]:        "num_pgs": 0,
Mar  1 04:41:04 np0005634532 eloquent_mestorf[76426]:        "num_pools": 0,
Mar  1 04:41:04 np0005634532 eloquent_mestorf[76426]:        "num_objects": 0,
Mar  1 04:41:04 np0005634532 eloquent_mestorf[76426]:        "data_bytes": 0,
Mar  1 04:41:04 np0005634532 eloquent_mestorf[76426]:        "bytes_used": 0,
Mar  1 04:41:04 np0005634532 eloquent_mestorf[76426]:        "bytes_avail": 0,
Mar  1 04:41:04 np0005634532 eloquent_mestorf[76426]:        "bytes_total": 0
Mar  1 04:41:04 np0005634532 eloquent_mestorf[76426]:    },
Mar  1 04:41:04 np0005634532 eloquent_mestorf[76426]:    "fsmap": {
Mar  1 04:41:04 np0005634532 eloquent_mestorf[76426]:        "epoch": 1,
Mar  1 04:41:04 np0005634532 eloquent_mestorf[76426]:        "btime": "2026-03-01T09:40:52:961395+0000",
Mar  1 04:41:04 np0005634532 eloquent_mestorf[76426]:        "by_rank": [],
Mar  1 04:41:04 np0005634532 eloquent_mestorf[76426]:        "up:standby": 0
Mar  1 04:41:04 np0005634532 eloquent_mestorf[76426]:    },
Mar  1 04:41:04 np0005634532 eloquent_mestorf[76426]:    "mgrmap": {
Mar  1 04:41:04 np0005634532 eloquent_mestorf[76426]:        "available": true,
Mar  1 04:41:04 np0005634532 eloquent_mestorf[76426]:        "num_standbys": 0,
Mar  1 04:41:04 np0005634532 eloquent_mestorf[76426]:        "modules": [
Mar  1 04:41:04 np0005634532 eloquent_mestorf[76426]:            "iostat",
Mar  1 04:41:04 np0005634532 eloquent_mestorf[76426]:            "nfs",
Mar  1 04:41:04 np0005634532 eloquent_mestorf[76426]:            "restful"
Mar  1 04:41:04 np0005634532 eloquent_mestorf[76426]:        ],
Mar  1 04:41:04 np0005634532 eloquent_mestorf[76426]:        "services": {}
Mar  1 04:41:04 np0005634532 eloquent_mestorf[76426]:    },
Mar  1 04:41:04 np0005634532 eloquent_mestorf[76426]:    "servicemap": {
Mar  1 04:41:04 np0005634532 eloquent_mestorf[76426]:        "epoch": 1,
Mar  1 04:41:04 np0005634532 eloquent_mestorf[76426]:        "modified": "2026-03-01T09:40:52.964363+0000",
Mar  1 04:41:04 np0005634532 eloquent_mestorf[76426]:        "services": {}
Mar  1 04:41:04 np0005634532 eloquent_mestorf[76426]:    },
Mar  1 04:41:04 np0005634532 eloquent_mestorf[76426]:    "progress_events": {}
Mar  1 04:41:04 np0005634532 eloquent_mestorf[76426]: }
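Compared with the poll at 04:41:02, "mgrmap.available" has flipped from false to true, which is exactly what these repeated one-shot status containers are waiting on. A sketch of that wait loop under the same assumptions as above (the function name and timeouts are illustrative):

    import json, subprocess, time

    def wait_for_mgr(timeout: float = 60.0, interval: float = 2.0) -> None:
        # Poll `ceph status` until an active mgr reports in, mirroring
        # the bootstrap behaviour visible in this log.
        deadline = time.monotonic() + timeout
        while time.monotonic() < deadline:
            out = subprocess.run(
                ["ceph", "status", "--format", "json"],
                check=True, capture_output=True, text=True,
            ).stdout
            if json.loads(out)["mgrmap"]["available"]:
                return
            time.sleep(interval)
        raise TimeoutError("mgr did not become available")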
Mar  1 04:41:04 np0005634532 systemd[1]: libpod-3bc70680afeef53961090fad82ab88ea1e702994a080a44387a4ac25a5607cba.scope: Deactivated successfully.
Mar  1 04:41:04 np0005634532 podman[76410]: 2026-03-01 09:41:04.923115447 +0000 UTC m=+0.582467242 container died 3bc70680afeef53961090fad82ab88ea1e702994a080a44387a4ac25a5607cba (image=quay.io/ceph/ceph:v19, name=eloquent_mestorf, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Mar  1 04:41:04 np0005634532 systemd[1]: var-lib-containers-storage-overlay-f5f3729c841aefae4c135b3d7f9cfd0c01aa65a5f5f4c749ef39a1afe6856211-merged.mount: Deactivated successfully.
Mar  1 04:41:04 np0005634532 podman[76410]: 2026-03-01 09:41:04.955213172 +0000 UTC m=+0.614564947 container remove 3bc70680afeef53961090fad82ab88ea1e702994a080a44387a4ac25a5607cba (image=quay.io/ceph/ceph:v19, name=eloquent_mestorf, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Mar  1 04:41:04 np0005634532 systemd[1]: libpod-conmon-3bc70680afeef53961090fad82ab88ea1e702994a080a44387a4ac25a5607cba.scope: Deactivated successfully.
Mar  1 04:41:05 np0005634532 podman[76464]: 2026-03-01 09:41:05.006698258 +0000 UTC m=+0.034838065 container create 0c5d3c86dbc05595d5c8d6b428599a459c921823b01b3a105cc04209dda59eb2 (image=quay.io/ceph/ceph:v19, name=jolly_hugle, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Mar  1 04:41:05 np0005634532 ceph-mgr[76134]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Mar  1 04:41:05 np0005634532 ceph-mon[75825]: log_channel(cluster) log [DBG] : mgrmap e4: compute-0.ebwufc(active, since 2s)
Mar  1 04:41:05 np0005634532 systemd[1]: Started libpod-conmon-0c5d3c86dbc05595d5c8d6b428599a459c921823b01b3a105cc04209dda59eb2.scope.
Mar  1 04:41:05 np0005634532 systemd[1]: Started libcrun container.
Mar  1 04:41:05 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9b013e4244348ff28d8c6238b9f8b9104acd5632553a287671e33aa108fc9c8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Mar  1 04:41:05 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9b013e4244348ff28d8c6238b9f8b9104acd5632553a287671e33aa108fc9c8/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Mar  1 04:41:05 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9b013e4244348ff28d8c6238b9f8b9104acd5632553a287671e33aa108fc9c8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Mar  1 04:41:05 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9b013e4244348ff28d8c6238b9f8b9104acd5632553a287671e33aa108fc9c8/merged/var/lib/ceph/user.conf supports timestamps until 2038 (0x7fffffff)
Mar  1 04:41:05 np0005634532 podman[76464]: 2026-03-01 09:41:05.064588972 +0000 UTC m=+0.092728779 container init 0c5d3c86dbc05595d5c8d6b428599a459c921823b01b3a105cc04209dda59eb2 (image=quay.io/ceph/ceph:v19, name=jolly_hugle, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Mar  1 04:41:05 np0005634532 podman[76464]: 2026-03-01 09:41:05.070335694 +0000 UTC m=+0.098475501 container start 0c5d3c86dbc05595d5c8d6b428599a459c921823b01b3a105cc04209dda59eb2 (image=quay.io/ceph/ceph:v19, name=jolly_hugle, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Mar  1 04:41:05 np0005634532 podman[76464]: 2026-03-01 09:41:05.073573205 +0000 UTC m=+0.101713012 container attach 0c5d3c86dbc05595d5c8d6b428599a459c921823b01b3a105cc04209dda59eb2 (image=quay.io/ceph/ceph:v19, name=jolly_hugle, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Mar  1 04:41:05 np0005634532 podman[76464]: 2026-03-01 09:41:04.990187699 +0000 UTC m=+0.018327526 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Mar  1 04:41:05 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0)
Mar  1 04:41:05 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3779944078' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Mar  1 04:41:05 np0005634532 jolly_hugle[76481]: 
Mar  1 04:41:05 np0005634532 jolly_hugle[76481]: [global]
Mar  1 04:41:05 np0005634532 jolly_hugle[76481]: 	fsid = 437b1e74-f995-5d64-af1d-257ce01d77ab
Mar  1 04:41:05 np0005634532 jolly_hugle[76481]: 	mon_host = [v2:192.168.122.100:3300,v1:192.168.122.100:6789]
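The jolly_hugle container ran `ceph config assimilate-conf`: the monitors absorb every option they can into the central config database and print back the minimal remainder that must stay in ceph.conf, here just the fsid and mon_host lines. Driving the same step by hand might look like this sketch (the input path is illustrative):

    import subprocess

    # Hand the on-disk conf to the mons; options that can live in the
    # central config db are absorbed, and the minimal remainder is
    # printed back for writing out as the new ceph.conf.
    residue = subprocess.run(
        ["ceph", "config", "assimilate-conf", "-i", "/etc/ceph/ceph.conf"],
        check=True, capture_output=True, text=True,
    ).stdout
    print(residue)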
Mar  1 04:41:05 np0005634532 systemd[1]: libpod-0c5d3c86dbc05595d5c8d6b428599a459c921823b01b3a105cc04209dda59eb2.scope: Deactivated successfully.
Mar  1 04:41:05 np0005634532 podman[76464]: 2026-03-01 09:41:05.409035636 +0000 UTC m=+0.437175443 container died 0c5d3c86dbc05595d5c8d6b428599a459c921823b01b3a105cc04209dda59eb2 (image=quay.io/ceph/ceph:v19, name=jolly_hugle, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1)
Mar  1 04:41:05 np0005634532 systemd[1]: var-lib-containers-storage-overlay-e9b013e4244348ff28d8c6238b9f8b9104acd5632553a287671e33aa108fc9c8-merged.mount: Deactivated successfully.
Mar  1 04:41:05 np0005634532 podman[76464]: 2026-03-01 09:41:05.441057909 +0000 UTC m=+0.469197716 container remove 0c5d3c86dbc05595d5c8d6b428599a459c921823b01b3a105cc04209dda59eb2 (image=quay.io/ceph/ceph:v19, name=jolly_hugle, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Mar  1 04:41:05 np0005634532 systemd[1]: libpod-conmon-0c5d3c86dbc05595d5c8d6b428599a459c921823b01b3a105cc04209dda59eb2.scope: Deactivated successfully.
Mar  1 04:41:05 np0005634532 podman[76519]: 2026-03-01 09:41:05.505794513 +0000 UTC m=+0.048617385 container create 1c1b81acb0069d81e58fdef832f4f5edfcfb6e8453e128f1ff568d5b6d025d4c (image=quay.io/ceph/ceph:v19, name=strange_lewin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Mar  1 04:41:05 np0005634532 systemd[1]: Started libpod-conmon-1c1b81acb0069d81e58fdef832f4f5edfcfb6e8453e128f1ff568d5b6d025d4c.scope.
Mar  1 04:41:05 np0005634532 systemd[1]: Started libcrun container.
Mar  1 04:41:05 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/73f2c62196c7261b9eb16c74ac7f4c9a06c0606ad1b218f7b36823071eca8023/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Mar  1 04:41:05 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/73f2c62196c7261b9eb16c74ac7f4c9a06c0606ad1b218f7b36823071eca8023/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Mar  1 04:41:05 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/73f2c62196c7261b9eb16c74ac7f4c9a06c0606ad1b218f7b36823071eca8023/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Mar  1 04:41:05 np0005634532 podman[76519]: 2026-03-01 09:41:05.482471395 +0000 UTC m=+0.025294277 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Mar  1 04:41:05 np0005634532 podman[76519]: 2026-03-01 09:41:05.587739833 +0000 UTC m=+0.130562725 container init 1c1b81acb0069d81e58fdef832f4f5edfcfb6e8453e128f1ff568d5b6d025d4c (image=quay.io/ceph/ceph:v19, name=strange_lewin, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Mar  1 04:41:05 np0005634532 podman[76519]: 2026-03-01 09:41:05.594025779 +0000 UTC m=+0.136848631 container start 1c1b81acb0069d81e58fdef832f4f5edfcfb6e8453e128f1ff568d5b6d025d4c (image=quay.io/ceph/ceph:v19, name=strange_lewin, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True)
Mar  1 04:41:05 np0005634532 podman[76519]: 2026-03-01 09:41:05.596987392 +0000 UTC m=+0.139810275 container attach 1c1b81acb0069d81e58fdef832f4f5edfcfb6e8453e128f1ff568d5b6d025d4c (image=quay.io/ceph/ceph:v19, name=strange_lewin, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Mar  1 04:41:06 np0005634532 ceph-mon[75825]: from='client.? 192.168.122.100:0/3779944078' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Mar  1 04:41:06 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module enable", "module": "cephadm"} v 0)
Mar  1 04:41:06 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4261469546' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "cephadm"}]: dispatch
Mar  1 04:41:07 np0005634532 ceph-mgr[76134]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Mar  1 04:41:07 np0005634532 ceph-mon[75825]: from='client.? 192.168.122.100:0/4261469546' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "cephadm"}]: dispatch
Mar  1 04:41:07 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4261469546' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished
Mar  1 04:41:07 np0005634532 ceph-mgr[76134]: mgr handle_mgr_map respawning because set of enabled modules changed!
Mar  1 04:41:07 np0005634532 ceph-mgr[76134]: mgr respawn  e: '/usr/bin/ceph-mgr'
Mar  1 04:41:07 np0005634532 ceph-mgr[76134]: mgr respawn  0: '/usr/bin/ceph-mgr'
Mar  1 04:41:07 np0005634532 ceph-mgr[76134]: mgr respawn  1: '-n'
Mar  1 04:41:07 np0005634532 ceph-mgr[76134]: mgr respawn  2: 'mgr.compute-0.ebwufc'
Mar  1 04:41:07 np0005634532 ceph-mgr[76134]: mgr respawn  3: '-f'
Mar  1 04:41:07 np0005634532 ceph-mgr[76134]: mgr respawn  4: '--setuser'
Mar  1 04:41:07 np0005634532 ceph-mgr[76134]: mgr respawn  5: 'ceph'
Mar  1 04:41:07 np0005634532 ceph-mgr[76134]: mgr respawn  6: '--setgroup'
Mar  1 04:41:07 np0005634532 ceph-mgr[76134]: mgr respawn  7: 'ceph'
Mar  1 04:41:07 np0005634532 ceph-mgr[76134]: mgr respawn  8: '--default-log-to-file=false'
Mar  1 04:41:07 np0005634532 ceph-mgr[76134]: mgr respawn  9: '--default-log-to-journald=true'
Mar  1 04:41:07 np0005634532 ceph-mgr[76134]: mgr respawn  10: '--default-log-to-stderr=false'
Mar  1 04:41:07 np0005634532 ceph-mgr[76134]: mgr respawn respawning with exe /usr/bin/ceph-mgr
Mar  1 04:41:07 np0005634532 ceph-mgr[76134]: mgr respawn  exe_path /proc/self/exe
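Enabling the cephadm module changed the set of enabled mgr modules, and the active mgr handles that by re-exec'ing itself rather than reloading in place: the "mgr respawn" lines dump the argv it will reuse, and /proc/self/exe is chosen so the respawn still works if the binary on disk has been replaced. The same pattern, sketched in Python purely for illustration (ceph-mgr itself does this in C++):

    import os, sys

    # Re-exec the current process image with its original argv, as
    # ceph-mgr does when the enabled-module set changes. /proc/self/exe
    # names the running image even if the original path was upgraded.
    os.execv("/proc/self/exe", sys.argv)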
Mar  1 04:41:07 np0005634532 ceph-mon[75825]: log_channel(cluster) log [DBG] : mgrmap e5: compute-0.ebwufc(active, since 4s)
Mar  1 04:41:07 np0005634532 systemd[1]: libpod-1c1b81acb0069d81e58fdef832f4f5edfcfb6e8453e128f1ff568d5b6d025d4c.scope: Deactivated successfully.
Mar  1 04:41:07 np0005634532 conmon[76535]: conmon 1c1b81acb0069d81e58f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-1c1b81acb0069d81e58fdef832f4f5edfcfb6e8453e128f1ff568d5b6d025d4c.scope/container/memory.events
Mar  1 04:41:07 np0005634532 podman[76519]: 2026-03-01 09:41:07.087647874 +0000 UTC m=+1.630470716 container died 1c1b81acb0069d81e58fdef832f4f5edfcfb6e8453e128f1ff568d5b6d025d4c (image=quay.io/ceph/ceph:v19, name=strange_lewin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1)
Mar  1 04:41:07 np0005634532 systemd[1]: var-lib-containers-storage-overlay-73f2c62196c7261b9eb16c74ac7f4c9a06c0606ad1b218f7b36823071eca8023-merged.mount: Deactivated successfully.
Mar  1 04:41:07 np0005634532 podman[76519]: 2026-03-01 09:41:07.12786892 +0000 UTC m=+1.670691762 container remove 1c1b81acb0069d81e58fdef832f4f5edfcfb6e8453e128f1ff568d5b6d025d4c (image=quay.io/ceph/ceph:v19, name=strange_lewin, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Mar  1 04:41:07 np0005634532 systemd[1]: libpod-conmon-1c1b81acb0069d81e58fdef832f4f5edfcfb6e8453e128f1ff568d5b6d025d4c.scope: Deactivated successfully.
Mar  1 04:41:07 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: ignoring --setuser ceph since I am not root
Mar  1 04:41:07 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: ignoring --setgroup ceph since I am not root
Mar  1 04:41:07 np0005634532 ceph-mgr[76134]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mgr, pid 2
Mar  1 04:41:07 np0005634532 ceph-mgr[76134]: pidfile_write: ignore empty --pid-file
Mar  1 04:41:07 np0005634532 ceph-mgr[76134]: mgr[py] Loading python module 'alerts'
Mar  1 04:41:07 np0005634532 podman[76574]: 2026-03-01 09:41:07.189939358 +0000 UTC m=+0.041948970 container create ad8e3db7dd3b2a58bc97bfe9262977f2e052300fb9a54725a4c827b80fd884b7 (image=quay.io/ceph/ceph:v19, name=zen_ardinghelli, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Mar  1 04:41:07 np0005634532 systemd[1]: Started libpod-conmon-ad8e3db7dd3b2a58bc97bfe9262977f2e052300fb9a54725a4c827b80fd884b7.scope.
Mar  1 04:41:07 np0005634532 systemd[1]: Started libcrun container.
Mar  1 04:41:07 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/adbddc8e33b382f3c27baf34d34ccfd0b4669531e539bcc70a09b9d59bbe3202/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Mar  1 04:41:07 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/adbddc8e33b382f3c27baf34d34ccfd0b4669531e539bcc70a09b9d59bbe3202/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Mar  1 04:41:07 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/adbddc8e33b382f3c27baf34d34ccfd0b4669531e539bcc70a09b9d59bbe3202/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Mar  1 04:41:07 np0005634532 podman[76574]: 2026-03-01 09:41:07.168318552 +0000 UTC m=+0.020328164 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Mar  1 04:41:07 np0005634532 ceph-mgr[76134]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Mar  1 04:41:07 np0005634532 ceph-mgr[76134]: mgr[py] Loading python module 'balancer'
Mar  1 04:41:07 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: 2026-03-01T09:41:07.277+0000 7f3139423140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Mar  1 04:41:07 np0005634532 podman[76574]: 2026-03-01 09:41:07.28486985 +0000 UTC m=+0.136879492 container init ad8e3db7dd3b2a58bc97bfe9262977f2e052300fb9a54725a4c827b80fd884b7 (image=quay.io/ceph/ceph:v19, name=zen_ardinghelli, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Mar  1 04:41:07 np0005634532 podman[76574]: 2026-03-01 09:41:07.290380317 +0000 UTC m=+0.142389919 container start ad8e3db7dd3b2a58bc97bfe9262977f2e052300fb9a54725a4c827b80fd884b7 (image=quay.io/ceph/ceph:v19, name=zen_ardinghelli, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Mar  1 04:41:07 np0005634532 podman[76574]: 2026-03-01 09:41:07.294131639 +0000 UTC m=+0.146141331 container attach ad8e3db7dd3b2a58bc97bfe9262977f2e052300fb9a54725a4c827b80fd884b7 (image=quay.io/ceph/ceph:v19, name=zen_ardinghelli, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Mar  1 04:41:07 np0005634532 ceph-mgr[76134]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Mar  1 04:41:07 np0005634532 ceph-mgr[76134]: mgr[py] Loading python module 'cephadm'
Mar  1 04:41:07 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: 2026-03-01T09:41:07.353+0000 7f3139423140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Mar  1 04:41:07 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat"} v 0)
Mar  1 04:41:07 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/571246937' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Mar  1 04:41:07 np0005634532 zen_ardinghelli[76609]: {
Mar  1 04:41:07 np0005634532 zen_ardinghelli[76609]:    "epoch": 5,
Mar  1 04:41:07 np0005634532 zen_ardinghelli[76609]:    "available": true,
Mar  1 04:41:07 np0005634532 zen_ardinghelli[76609]:    "active_name": "compute-0.ebwufc",
Mar  1 04:41:07 np0005634532 zen_ardinghelli[76609]:    "num_standby": 0
Mar  1 04:41:07 np0005634532 zen_ardinghelli[76609]: }
Mar  1 04:41:07 np0005634532 systemd[1]: libpod-ad8e3db7dd3b2a58bc97bfe9262977f2e052300fb9a54725a4c827b80fd884b7.scope: Deactivated successfully.
Mar  1 04:41:07 np0005634532 podman[76574]: 2026-03-01 09:41:07.686495731 +0000 UTC m=+0.538505323 container died ad8e3db7dd3b2a58bc97bfe9262977f2e052300fb9a54725a4c827b80fd884b7 (image=quay.io/ceph/ceph:v19, name=zen_ardinghelli, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Mar  1 04:41:07 np0005634532 systemd[1]: var-lib-containers-storage-overlay-adbddc8e33b382f3c27baf34d34ccfd0b4669531e539bcc70a09b9d59bbe3202-merged.mount: Deactivated successfully.
Mar  1 04:41:07 np0005634532 podman[76574]: 2026-03-01 09:41:07.728573443 +0000 UTC m=+0.580583065 container remove ad8e3db7dd3b2a58bc97bfe9262977f2e052300fb9a54725a4c827b80fd884b7 (image=quay.io/ceph/ceph:v19, name=zen_ardinghelli, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Mar  1 04:41:07 np0005634532 systemd[1]: libpod-conmon-ad8e3db7dd3b2a58bc97bfe9262977f2e052300fb9a54725a4c827b80fd884b7.scope: Deactivated successfully.
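The zen_ardinghelli container above is a one-shot cephadm shell: podman creates, starts, and attaches to it, the ceph CLI inside dispatches a single mon command (the "mgr stat" audit entry), prints the JSON payload logged above, and the container dies and is removed. A sketch of consuming that same output programmatically, assuming a host with a working ceph.conf and client.admin keyring:

    import json
    import subprocess

    # "ceph mgr stat" prints JSON like the payload logged above:
    # {"epoch": 5, "available": true, "active_name": "...", "num_standby": 0}
    out = subprocess.run(
        ["ceph", "mgr", "stat"],
        check=True, capture_output=True, text=True,
    ).stdout
    stat = json.loads(out)
    print(stat["active_name"], stat["available"])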
Mar  1 04:41:07 np0005634532 podman[76659]: 2026-03-01 09:41:07.808110524 +0000 UTC m=+0.055861035 container create 7de181fa8045f1cca06f8e746ebf712bbbbf22b5e7a7dfed202f6e7f91b11fbb (image=quay.io/ceph/ceph:v19, name=beautiful_galileo, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1)
Mar  1 04:41:07 np0005634532 systemd[1]: Started libpod-conmon-7de181fa8045f1cca06f8e746ebf712bbbbf22b5e7a7dfed202f6e7f91b11fbb.scope.
Mar  1 04:41:07 np0005634532 podman[76659]: 2026-03-01 09:41:07.782561231 +0000 UTC m=+0.030311772 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Mar  1 04:41:07 np0005634532 systemd[1]: Started libcrun container.
Mar  1 04:41:07 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/688195a8ff974b1cd9246a78929729a5ce9c25e75b4d42ba8db29ea4aa5210a1/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Mar  1 04:41:07 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/688195a8ff974b1cd9246a78929729a5ce9c25e75b4d42ba8db29ea4aa5210a1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Mar  1 04:41:07 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/688195a8ff974b1cd9246a78929729a5ce9c25e75b4d42ba8db29ea4aa5210a1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Mar  1 04:41:07 np0005634532 podman[76659]: 2026-03-01 09:41:07.908400559 +0000 UTC m=+0.156151040 container init 7de181fa8045f1cca06f8e746ebf712bbbbf22b5e7a7dfed202f6e7f91b11fbb (image=quay.io/ceph/ceph:v19, name=beautiful_galileo, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Mar  1 04:41:07 np0005634532 podman[76659]: 2026-03-01 09:41:07.913226248 +0000 UTC m=+0.160976729 container start 7de181fa8045f1cca06f8e746ebf712bbbbf22b5e7a7dfed202f6e7f91b11fbb (image=quay.io/ceph/ceph:v19, name=beautiful_galileo, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default)
Mar  1 04:41:07 np0005634532 podman[76659]: 2026-03-01 09:41:07.916398947 +0000 UTC m=+0.164149428 container attach 7de181fa8045f1cca06f8e746ebf712bbbbf22b5e7a7dfed202f6e7f91b11fbb (image=quay.io/ceph/ceph:v19, name=beautiful_galileo, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Mar  1 04:41:08 np0005634532 ceph-mgr[76134]: mgr[py] Loading python module 'crash'
Mar  1 04:41:08 np0005634532 ceph-mon[75825]: from='client.? 192.168.122.100:0/4261469546' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished
Mar  1 04:41:08 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: 2026-03-01T09:41:08.118+0000 7f3139423140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Mar  1 04:41:08 np0005634532 ceph-mgr[76134]: mgr[py] Module crash has missing NOTIFY_TYPES member
Mar  1 04:41:08 np0005634532 ceph-mgr[76134]: mgr[py] Loading python module 'dashboard'
Mar  1 04:41:08 np0005634532 ceph-mgr[76134]: mgr[py] Loading python module 'devicehealth'
Mar  1 04:41:08 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: 2026-03-01T09:41:08.751+0000 7f3139423140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Mar  1 04:41:08 np0005634532 ceph-mgr[76134]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Mar  1 04:41:08 np0005634532 ceph-mgr[76134]: mgr[py] Loading python module 'diskprediction_local'
Mar  1 04:41:08 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Mar  1 04:41:08 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Mar  1 04:41:08 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]:  from numpy import show_config as show_numpy_config
Mar  1 04:41:08 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: 2026-03-01T09:41:08.911+0000 7f3139423140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Mar  1 04:41:08 np0005634532 ceph-mgr[76134]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Mar  1 04:41:08 np0005634532 ceph-mgr[76134]: mgr[py] Loading python module 'influx'
Mar  1 04:41:08 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: 2026-03-01T09:41:08.973+0000 7f3139423140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Mar  1 04:41:08 np0005634532 ceph-mgr[76134]: mgr[py] Module influx has missing NOTIFY_TYPES member
Mar  1 04:41:08 np0005634532 ceph-mgr[76134]: mgr[py] Loading python module 'insights'
Mar  1 04:41:09 np0005634532 ceph-mgr[76134]: mgr[py] Loading python module 'iostat'
Mar  1 04:41:09 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: 2026-03-01T09:41:09.095+0000 7f3139423140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Mar  1 04:41:09 np0005634532 ceph-mgr[76134]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Mar  1 04:41:09 np0005634532 ceph-mgr[76134]: mgr[py] Loading python module 'k8sevents'
Mar  1 04:41:09 np0005634532 ceph-mgr[76134]: mgr[py] Loading python module 'localpool'
Mar  1 04:41:09 np0005634532 ceph-mgr[76134]: mgr[py] Loading python module 'mds_autoscaler'
Mar  1 04:41:09 np0005634532 ceph-mgr[76134]: mgr[py] Loading python module 'mirroring'
Mar  1 04:41:09 np0005634532 ceph-mgr[76134]: mgr[py] Loading python module 'nfs'
Mar  1 04:41:09 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: 2026-03-01T09:41:09.952+0000 7f3139423140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Mar  1 04:41:09 np0005634532 ceph-mgr[76134]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Mar  1 04:41:09 np0005634532 ceph-mgr[76134]: mgr[py] Loading python module 'orchestrator'
Mar  1 04:41:10 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: 2026-03-01T09:41:10.147+0000 7f3139423140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Mar  1 04:41:10 np0005634532 ceph-mgr[76134]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Mar  1 04:41:10 np0005634532 ceph-mgr[76134]: mgr[py] Loading python module 'osd_perf_query'
Mar  1 04:41:10 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: 2026-03-01T09:41:10.214+0000 7f3139423140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Mar  1 04:41:10 np0005634532 ceph-mgr[76134]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Mar  1 04:41:10 np0005634532 ceph-mgr[76134]: mgr[py] Loading python module 'osd_support'
Mar  1 04:41:10 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: 2026-03-01T09:41:10.274+0000 7f3139423140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Mar  1 04:41:10 np0005634532 ceph-mgr[76134]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Mar  1 04:41:10 np0005634532 ceph-mgr[76134]: mgr[py] Loading python module 'pg_autoscaler'
Mar  1 04:41:10 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: 2026-03-01T09:41:10.342+0000 7f3139423140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Mar  1 04:41:10 np0005634532 ceph-mgr[76134]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Mar  1 04:41:10 np0005634532 ceph-mgr[76134]: mgr[py] Loading python module 'progress'
Mar  1 04:41:10 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: 2026-03-01T09:41:10.401+0000 7f3139423140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Mar  1 04:41:10 np0005634532 ceph-mgr[76134]: mgr[py] Module progress has missing NOTIFY_TYPES member
Mar  1 04:41:10 np0005634532 ceph-mgr[76134]: mgr[py] Loading python module 'prometheus'
Mar  1 04:41:10 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: 2026-03-01T09:41:10.688+0000 7f3139423140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Mar  1 04:41:10 np0005634532 ceph-mgr[76134]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Mar  1 04:41:10 np0005634532 ceph-mgr[76134]: mgr[py] Loading python module 'rbd_support'
Mar  1 04:41:10 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: 2026-03-01T09:41:10.771+0000 7f3139423140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Mar  1 04:41:10 np0005634532 ceph-mgr[76134]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Mar  1 04:41:10 np0005634532 ceph-mgr[76134]: mgr[py] Loading python module 'restful'
Mar  1 04:41:10 np0005634532 ceph-mgr[76134]: mgr[py] Loading python module 'rgw'
Mar  1 04:41:11 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: 2026-03-01T09:41:11.137+0000 7f3139423140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Mar  1 04:41:11 np0005634532 ceph-mgr[76134]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Mar  1 04:41:11 np0005634532 ceph-mgr[76134]: mgr[py] Loading python module 'rook'
Mar  1 04:41:11 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: 2026-03-01T09:41:11.634+0000 7f3139423140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Mar  1 04:41:11 np0005634532 ceph-mgr[76134]: mgr[py] Module rook has missing NOTIFY_TYPES member
Mar  1 04:41:11 np0005634532 ceph-mgr[76134]: mgr[py] Loading python module 'selftest'
Mar  1 04:41:11 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: 2026-03-01T09:41:11.701+0000 7f3139423140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Mar  1 04:41:11 np0005634532 ceph-mgr[76134]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Mar  1 04:41:11 np0005634532 ceph-mgr[76134]: mgr[py] Loading python module 'snap_schedule'
Mar  1 04:41:11 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: 2026-03-01T09:41:11.777+0000 7f3139423140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Mar  1 04:41:11 np0005634532 ceph-mgr[76134]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Mar  1 04:41:11 np0005634532 ceph-mgr[76134]: mgr[py] Loading python module 'stats'
Mar  1 04:41:11 np0005634532 ceph-mgr[76134]: mgr[py] Loading python module 'status'
Mar  1 04:41:11 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: 2026-03-01T09:41:11.919+0000 7f3139423140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Mar  1 04:41:11 np0005634532 ceph-mgr[76134]: mgr[py] Module status has missing NOTIFY_TYPES member
Mar  1 04:41:11 np0005634532 ceph-mgr[76134]: mgr[py] Loading python module 'telegraf'
Mar  1 04:41:11 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: 2026-03-01T09:41:11.980+0000 7f3139423140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Mar  1 04:41:11 np0005634532 ceph-mgr[76134]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Mar  1 04:41:11 np0005634532 ceph-mgr[76134]: mgr[py] Loading python module 'telemetry'
Mar  1 04:41:12 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: 2026-03-01T09:41:12.116+0000 7f3139423140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Mar  1 04:41:12 np0005634532 ceph-mgr[76134]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Mar  1 04:41:12 np0005634532 ceph-mgr[76134]: mgr[py] Loading python module 'test_orchestrator'
Mar  1 04:41:12 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: 2026-03-01T09:41:12.310+0000 7f3139423140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Mar  1 04:41:12 np0005634532 ceph-mgr[76134]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Mar  1 04:41:12 np0005634532 ceph-mgr[76134]: mgr[py] Loading python module 'volumes'
Mar  1 04:41:12 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: 2026-03-01T09:41:12.550+0000 7f3139423140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Mar  1 04:41:12 np0005634532 ceph-mgr[76134]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Mar  1 04:41:12 np0005634532 ceph-mgr[76134]: mgr[py] Loading python module 'zabbix'
Mar  1 04:41:12 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: 2026-03-01T09:41:12.612+0000 7f3139423140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Mar  1 04:41:12 np0005634532 ceph-mgr[76134]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
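The repeated "missing NOTIFY_TYPES member" lines are harmless: after the respawn, ceph-mgr imports every packaged Python module and checks for an optional class attribute declaring which cluster notifications (osd_map, pg_summary, and so on) the module wants to receive; modules that omit it simply get no notifications. A minimal sketch of a module that declares it, assuming the in-tree mgr_module API of this Squid build (such code only runs inside ceph-mgr, where mgr_module is importable):

    from mgr_module import MgrModule, NotifyType

    class Example(MgrModule):
        # Declaring NOTIFY_TYPES avoids the "missing NOTIFY_TYPES member"
        # warning and subscribes the module to these notifications.
        NOTIFY_TYPES = [NotifyType.osd_map]

        def notify(self, notify_type: NotifyType, notify_id: str) -> None:
            self.log.info("got %s notification", notify_type)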
Mar  1 04:41:12 np0005634532 ceph-mon[75825]: log_channel(cluster) log [INF] : Active manager daemon compute-0.ebwufc restarted
Mar  1 04:41:12 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e1 do_prune osdmap full prune enabled
Mar  1 04:41:12 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e1 encode_pending skipping prime_pg_temp; mapping job did not start
Mar  1 04:41:12 np0005634532 ceph-mon[75825]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.ebwufc
Mar  1 04:41:12 np0005634532 ceph-mgr[76134]: ms_deliver_dispatch: unhandled message 0x56533938ad00 mon_map magic: 0 from mon.0 v2:192.168.122.100:3300/0
Mar  1 04:41:12 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e1 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375
Mar  1 04:41:12 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e1 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Mar  1 04:41:12 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e2 e2: 0 total, 0 up, 0 in
Mar  1 04:41:12 np0005634532 ceph-mgr[76134]: mgr handle_mgr_map Activating!
Mar  1 04:41:12 np0005634532 ceph-mgr[76134]: mgr handle_mgr_map I am now activating
Mar  1 04:41:12 np0005634532 ceph-mon[75825]: log_channel(cluster) log [DBG] : osdmap e2: 0 total, 0 up, 0 in
Mar  1 04:41:12 np0005634532 ceph-mon[75825]: log_channel(cluster) log [DBG] : mgrmap e6: compute-0.ebwufc(active, starting, since 0.0154302s)
Mar  1 04:41:12 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Mar  1 04:41:12 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Mar  1 04:41:12 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.ebwufc", "id": "compute-0.ebwufc"} v 0)
Mar  1 04:41:12 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "mgr metadata", "who": "compute-0.ebwufc", "id": "compute-0.ebwufc"}]: dispatch
Mar  1 04:41:12 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata"} v 0)
Mar  1 04:41:12 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "mds metadata"}]: dispatch
Mar  1 04:41:12 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).mds e1 all = 1
Mar  1 04:41:12 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0)
Mar  1 04:41:12 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd metadata"}]: dispatch
Mar  1 04:41:12 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata"} v 0)
Mar  1 04:41:12 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "mon metadata"}]: dispatch
Mar  1 04:41:12 np0005634532 ceph-mgr[76134]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Mar  1 04:41:12 np0005634532 ceph-mgr[76134]: mgr load Constructed class from module: balancer
Mar  1 04:41:12 np0005634532 ceph-mgr[76134]: [balancer INFO root] Starting
Mar  1 04:41:12 np0005634532 ceph-mon[75825]: log_channel(cluster) log [INF] : Manager daemon compute-0.ebwufc is now available
Mar  1 04:41:12 np0005634532 ceph-mgr[76134]: [cephadm DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Mar  1 04:41:12 np0005634532 ceph-mgr[76134]: [balancer INFO root] Optimize plan auto_2026-03-01_09:41:12
Mar  1 04:41:12 np0005634532 ceph-mgr[76134]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Mar  1 04:41:12 np0005634532 ceph-mgr[76134]: [balancer INFO root] do_upmap
Mar  1 04:41:12 np0005634532 ceph-mgr[76134]: [balancer INFO root] No pools available
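On activation the balancer module immediately builds an automatic optimization plan: mode upmap, with at most 5% of PGs allowed to be misplaced at once. Since the cluster has no pools yet, do_upmap finds nothing to optimize and the plan is a no-op. Its state can be inspected later with a sketch like the following (field names as this release reports them, assuming admin credentials):

    import json
    import subprocess

    # "ceph balancer status" reports the active mode and plan state as JSON.
    status = json.loads(subprocess.run(
        ["ceph", "balancer", "status"],
        check=True, capture_output=True, text=True,
    ).stdout)
    print(status["mode"], status["active"])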
Mar  1 04:41:12 np0005634532 ceph-mgr[76134]: [cephadm INFO cephadm.migrations] Found migration_current of "None". Setting to last migration.
Mar  1 04:41:12 np0005634532 ceph-mgr[76134]: log_channel(cephadm) log [INF] : Found migration_current of "None". Setting to last migration.
Mar  1 04:41:12 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/migration_current}] v 0)
Mar  1 04:41:12 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:41:12 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/config_checks}] v 0)
Mar  1 04:41:12 np0005634532 ceph-mon[75825]: Active manager daemon compute-0.ebwufc restarted
Mar  1 04:41:12 np0005634532 ceph-mon[75825]: Activating manager daemon compute-0.ebwufc
Mar  1 04:41:12 np0005634532 ceph-mon[75825]: Manager daemon compute-0.ebwufc is now available
Mar  1 04:41:12 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:41:12 np0005634532 ceph-mgr[76134]: mgr load Constructed class from module: cephadm
Mar  1 04:41:12 np0005634532 ceph-mgr[76134]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Mar  1 04:41:12 np0005634532 ceph-mgr[76134]: mgr load Constructed class from module: crash
Mar  1 04:41:12 np0005634532 ceph-mgr[76134]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Mar  1 04:41:12 np0005634532 ceph-mgr[76134]: mgr load Constructed class from module: devicehealth
Mar  1 04:41:12 np0005634532 ceph-mgr[76134]: [devicehealth INFO root] Starting
Mar  1 04:41:12 np0005634532 ceph-mgr[76134]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Mar  1 04:41:12 np0005634532 ceph-mgr[76134]: mgr load Constructed class from module: iostat
Mar  1 04:41:12 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Mar  1 04:41:12 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Mar  1 04:41:12 np0005634532 ceph-mgr[76134]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Mar  1 04:41:12 np0005634532 ceph-mgr[76134]: mgr load Constructed class from module: nfs
Mar  1 04:41:12 np0005634532 ceph-mgr[76134]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Mar  1 04:41:12 np0005634532 ceph-mgr[76134]: mgr load Constructed class from module: orchestrator
Mar  1 04:41:12 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Mar  1 04:41:12 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Mar  1 04:41:12 np0005634532 ceph-mgr[76134]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Mar  1 04:41:12 np0005634532 ceph-mgr[76134]: mgr load Constructed class from module: pg_autoscaler
Mar  1 04:41:12 np0005634532 ceph-mgr[76134]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Mar  1 04:41:12 np0005634532 ceph-mgr[76134]: mgr load Constructed class from module: progress
Mar  1 04:41:12 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] _maybe_adjust
Mar  1 04:41:12 np0005634532 ceph-mgr[76134]: [progress INFO root] Loading...
Mar  1 04:41:12 np0005634532 ceph-mgr[76134]: [progress INFO root] No stored events to load
Mar  1 04:41:12 np0005634532 ceph-mgr[76134]: [progress INFO root] Loaded [] historic events
Mar  1 04:41:12 np0005634532 ceph-mgr[76134]: [progress INFO root] Loaded OSDMap, ready.
Mar  1 04:41:12 np0005634532 ceph-mgr[76134]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Mar  1 04:41:12 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] recovery thread starting
Mar  1 04:41:12 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] starting setup
Mar  1 04:41:12 np0005634532 ceph-mgr[76134]: mgr load Constructed class from module: rbd_support
Mar  1 04:41:12 np0005634532 ceph-mgr[76134]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Mar  1 04:41:12 np0005634532 ceph-mgr[76134]: mgr load Constructed class from module: restful
Mar  1 04:41:12 np0005634532 ceph-mgr[76134]: [restful INFO root] server_addr: :: server_port: 8003
Mar  1 04:41:12 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.ebwufc/mirror_snapshot_schedule"} v 0)
Mar  1 04:41:12 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.ebwufc/mirror_snapshot_schedule"}]: dispatch
Mar  1 04:41:12 np0005634532 ceph-mgr[76134]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Mar  1 04:41:12 np0005634532 ceph-mgr[76134]: mgr load Constructed class from module: status
Mar  1 04:41:12 np0005634532 ceph-mgr[76134]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Mar  1 04:41:12 np0005634532 ceph-mgr[76134]: mgr load Constructed class from module: telemetry
Mar  1 04:41:12 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Mar  1 04:41:12 np0005634532 ceph-mgr[76134]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Mar  1 04:41:12 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Mar  1 04:41:12 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] PerfHandler: starting
Mar  1 04:41:12 np0005634532 ceph-mgr[76134]: [restful WARNING root] server not running: no certificate configured
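The restful module loads its class but stays down: it serves HTTPS only, and no certificate has been stored for it yet. The documented remedy is to generate one; a sketch of the equivalent call:

    import subprocess

    # The restful module refuses to start without a TLS certificate.
    # Generating a self-signed one (per the restful module docs) clears
    # the "no certificate configured" warning on the next activation.
    subprocess.run(["ceph", "restful", "create-self-signed-cert"], check=True)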
Mar  1 04:41:12 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] TaskHandler: starting
Mar  1 04:41:12 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.ebwufc/trash_purge_schedule"} v 0)
Mar  1 04:41:12 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.ebwufc/trash_purge_schedule"}]: dispatch
Mar  1 04:41:12 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Mar  1 04:41:12 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Mar  1 04:41:12 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] setup complete
Mar  1 04:41:12 np0005634532 ceph-mgr[76134]: mgr load Constructed class from module: volumes
Mar  1 04:41:13 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cert_store.cert.agent_endpoint_root_cert}] v 0)
Mar  1 04:41:13 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:41:13 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cert_store.key.agent_endpoint_key}] v 0)
Mar  1 04:41:13 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:41:13 np0005634532 ceph-mon[75825]: log_channel(cluster) log [DBG] : mgrmap e7: compute-0.ebwufc(active, since 1.02685s)
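The two config-key writes just above are cephadm persisting the root certificate and key it generated for its agent endpoint into the mon key-value store (under the mgr/cephadm/cert_store.* keys named in the audit lines), so they survive mgr failover; the audit log omits the command payload because config-key values may contain secrets. The stored value can be read back with config-key get, e.g.:

    import subprocess

    # Key name taken verbatim from the audit log entry above.
    cert = subprocess.run(
        ["ceph", "config-key", "get",
         "mgr/cephadm/cert_store.cert.agent_endpoint_root_cert"],
        check=True, capture_output=True, text=True,
    ).stdout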
Mar  1 04:41:13 np0005634532 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14126 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch
Mar  1 04:41:13 np0005634532 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14126 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch
Mar  1 04:41:13 np0005634532 beautiful_galileo[76676]: {
Mar  1 04:41:13 np0005634532 beautiful_galileo[76676]:    "mgrmap_epoch": 7,
Mar  1 04:41:13 np0005634532 beautiful_galileo[76676]:    "initialized": true
Mar  1 04:41:13 np0005634532 beautiful_galileo[76676]: }
Mar  1 04:41:13 np0005634532 systemd[1]: libpod-7de181fa8045f1cca06f8e746ebf712bbbbf22b5e7a7dfed202f6e7f91b11fbb.scope: Deactivated successfully.
Mar  1 04:41:13 np0005634532 podman[76659]: 2026-03-01 09:41:13.676758343 +0000 UTC m=+5.924508864 container died 7de181fa8045f1cca06f8e746ebf712bbbbf22b5e7a7dfed202f6e7f91b11fbb (image=quay.io/ceph/ceph:v19, name=beautiful_galileo, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Mar  1 04:41:13 np0005634532 ceph-mon[75825]: Found migration_current of "None". Setting to last migration.
Mar  1 04:41:13 np0005634532 ceph-mon[75825]: from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:41:13 np0005634532 ceph-mon[75825]: from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:41:13 np0005634532 ceph-mon[75825]: from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.ebwufc/mirror_snapshot_schedule"}]: dispatch
Mar  1 04:41:13 np0005634532 ceph-mon[75825]: from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.ebwufc/trash_purge_schedule"}]: dispatch
Mar  1 04:41:13 np0005634532 ceph-mon[75825]: from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:41:13 np0005634532 ceph-mon[75825]: from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:41:13 np0005634532 systemd[1]: var-lib-containers-storage-overlay-688195a8ff974b1cd9246a78929729a5ce9c25e75b4d42ba8db29ea4aa5210a1-merged.mount: Deactivated successfully.
Mar  1 04:41:13 np0005634532 podman[76659]: 2026-03-01 09:41:13.723480041 +0000 UTC m=+5.971230542 container remove 7de181fa8045f1cca06f8e746ebf712bbbbf22b5e7a7dfed202f6e7f91b11fbb (image=quay.io/ceph/ceph:v19, name=beautiful_galileo, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Mar  1 04:41:13 np0005634532 systemd[1]: libpod-conmon-7de181fa8045f1cca06f8e746ebf712bbbbf22b5e7a7dfed202f6e7f91b11fbb.scope: Deactivated successfully.
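beautiful_galileo is another one-shot shell: it polls the freshly respawned mgr (the "mgr_status" audit dispatch above) until the daemon reports itself initialized; the JSON with "mgrmap_epoch": 7 and "initialized": true is the success case, after which the container exits and is removed. A sketch of an equivalent wait loop built on "ceph mgr stat", whose fields were logged earlier in this boot:

    import json
    import subprocess
    import time

    def wait_for_mgr(timeout: float = 60.0) -> None:
        # Poll until the mgrmap reports an available active mgr, the
        # same condition the bootstrap is waiting on here.
        deadline = time.monotonic() + timeout
        while time.monotonic() < deadline:
            stat = json.loads(subprocess.run(
                ["ceph", "mgr", "stat"],
                check=True, capture_output=True, text=True,
            ).stdout)
            if stat.get("available"):
                return
            time.sleep(1)
        raise TimeoutError("mgr did not become available")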
Mar  1 04:41:13 np0005634532 podman[76828]: 2026-03-01 09:41:13.804438117 +0000 UTC m=+0.053608220 container create 46493fbc3eff627c167c9f02c3bcea5f951891a72a435bb7cf1c8223a7aeea3b (image=quay.io/ceph/ceph:v19, name=romantic_visvesvaraya, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Mar  1 04:41:13 np0005634532 systemd[1]: Started libpod-conmon-46493fbc3eff627c167c9f02c3bcea5f951891a72a435bb7cf1c8223a7aeea3b.scope.
Mar  1 04:41:13 np0005634532 systemd[1]: Started libcrun container.
Mar  1 04:41:13 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b1eac43d1e38ef2e9d8485ab8804d558596ec696825792001a1affe0859dab0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Mar  1 04:41:13 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b1eac43d1e38ef2e9d8485ab8804d558596ec696825792001a1affe0859dab0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Mar  1 04:41:13 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b1eac43d1e38ef2e9d8485ab8804d558596ec696825792001a1affe0859dab0/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Mar  1 04:41:13 np0005634532 podman[76828]: 2026-03-01 09:41:13.786676767 +0000 UTC m=+0.035846890 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Mar  1 04:41:13 np0005634532 podman[76828]: 2026-03-01 09:41:13.887981396 +0000 UTC m=+0.137151589 container init 46493fbc3eff627c167c9f02c3bcea5f951891a72a435bb7cf1c8223a7aeea3b (image=quay.io/ceph/ceph:v19, name=romantic_visvesvaraya, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Mar  1 04:41:13 np0005634532 podman[76828]: 2026-03-01 09:41:13.892482478 +0000 UTC m=+0.141652571 container start 46493fbc3eff627c167c9f02c3bcea5f951891a72a435bb7cf1c8223a7aeea3b (image=quay.io/ceph/ceph:v19, name=romantic_visvesvaraya, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True)
Mar  1 04:41:13 np0005634532 podman[76828]: 2026-03-01 09:41:13.895716688 +0000 UTC m=+0.144886811 container attach 46493fbc3eff627c167c9f02c3bcea5f951891a72a435bb7cf1c8223a7aeea3b (image=quay.io/ceph/ceph:v19, name=romantic_visvesvaraya, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Mar  1 04:41:14 np0005634532 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14134 -' entity='client.admin' cmd=[{"prefix": "orch set backend", "module_name": "cephadm", "target": ["mon-mgr", ""]}]: dispatch
Mar  1 04:41:14 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/orchestrator/orchestrator}] v 0)
Mar  1 04:41:14 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:41:14 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Mar  1 04:41:14 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Mar  1 04:41:14 np0005634532 systemd[1]: libpod-46493fbc3eff627c167c9f02c3bcea5f951891a72a435bb7cf1c8223a7aeea3b.scope: Deactivated successfully.
Mar  1 04:41:14 np0005634532 podman[76828]: 2026-03-01 09:41:14.295992715 +0000 UTC m=+0.545178079 container died 46493fbc3eff627c167c9f02c3bcea5f951891a72a435bb7cf1c8223a7aeea3b (image=quay.io/ceph/ceph:v19, name=romantic_visvesvaraya, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Mar  1 04:41:14 np0005634532 systemd[1]: var-lib-containers-storage-overlay-3b1eac43d1e38ef2e9d8485ab8804d558596ec696825792001a1affe0859dab0-merged.mount: Deactivated successfully.
Mar  1 04:41:14 np0005634532 podman[76828]: 2026-03-01 09:41:14.32967209 +0000 UTC m=+0.578842183 container remove 46493fbc3eff627c167c9f02c3bcea5f951891a72a435bb7cf1c8223a7aeea3b (image=quay.io/ceph/ceph:v19, name=romantic_visvesvaraya, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Mar  1 04:41:14 np0005634532 systemd[1]: libpod-conmon-46493fbc3eff627c167c9f02c3bcea5f951891a72a435bb7cf1c8223a7aeea3b.scope: Deactivated successfully.
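romantic_visvesvaraya ran the "orch set backend" command audited at 04:41:14: the orchestrator module is only a front end, and this stores mgr/orchestrator/orchestrator so that subsequent "ceph orch ..." calls are routed to the cephadm module rather than another backend such as rook. The equivalent calls:

    import subprocess

    # Point the generic orchestrator CLI at the cephadm backend,
    # then confirm the backend is set and available.
    subprocess.run(["ceph", "orch", "set", "backend", "cephadm"], check=True)
    subprocess.run(["ceph", "orch", "status"], check=True)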
Mar  1 04:41:14 np0005634532 podman[76880]: 2026-03-01 09:41:14.410679687 +0000 UTC m=+0.057033924 container create c7e59001e6d1c1ead30b3def65e96ed824278f17ebb78d33dc8c5264df4ec013 (image=quay.io/ceph/ceph:v19, name=peaceful_lalande, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Mar  1 04:41:14 np0005634532 systemd[1]: Started libpod-conmon-c7e59001e6d1c1ead30b3def65e96ed824278f17ebb78d33dc8c5264df4ec013.scope.
Mar  1 04:41:14 np0005634532 systemd[1]: Started libcrun container.
Mar  1 04:41:14 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5f135bffd24d0cf54bbc2034f55b8904aa444fd1bbcf3ac33873b3673cbc347/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Mar  1 04:41:14 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5f135bffd24d0cf54bbc2034f55b8904aa444fd1bbcf3ac33873b3673cbc347/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Mar  1 04:41:14 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5f135bffd24d0cf54bbc2034f55b8904aa444fd1bbcf3ac33873b3673cbc347/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Mar  1 04:41:14 np0005634532 podman[76880]: 2026-03-01 09:41:14.386932959 +0000 UTC m=+0.033287236 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Mar  1 04:41:14 np0005634532 podman[76880]: 2026-03-01 09:41:14.496603226 +0000 UTC m=+0.142957533 container init c7e59001e6d1c1ead30b3def65e96ed824278f17ebb78d33dc8c5264df4ec013 (image=quay.io/ceph/ceph:v19, name=peaceful_lalande, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Mar  1 04:41:14 np0005634532 podman[76880]: 2026-03-01 09:41:14.502971393 +0000 UTC m=+0.149325630 container start c7e59001e6d1c1ead30b3def65e96ed824278f17ebb78d33dc8c5264df4ec013 (image=quay.io/ceph/ceph:v19, name=peaceful_lalande, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Mar  1 04:41:14 np0005634532 podman[76880]: 2026-03-01 09:41:14.507603618 +0000 UTC m=+0.153957855 container attach c7e59001e6d1c1ead30b3def65e96ed824278f17ebb78d33dc8c5264df4ec013 (image=quay.io/ceph/ceph:v19, name=peaceful_lalande, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Mar  1 04:41:14 np0005634532 ceph-mgr[76134]: [cephadm INFO cherrypy.error] [01/Mar/2026:09:41:14] ENGINE Bus STARTING
Mar  1 04:41:14 np0005634532 ceph-mgr[76134]: log_channel(cephadm) log [INF] : [01/Mar/2026:09:41:14] ENGINE Bus STARTING
Mar  1 04:41:14 np0005634532 ceph-mgr[76134]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Mar  1 04:41:14 np0005634532 ceph-mgr[76134]: [cephadm INFO cherrypy.error] [01/Mar/2026:09:41:14] ENGINE Serving on https://192.168.122.100:7150
Mar  1 04:41:14 np0005634532 ceph-mgr[76134]: log_channel(cephadm) log [INF] : [01/Mar/2026:09:41:14] ENGINE Serving on https://192.168.122.100:7150
Mar  1 04:41:14 np0005634532 ceph-mgr[76134]: [cephadm INFO cherrypy.error] [01/Mar/2026:09:41:14] ENGINE Client ('192.168.122.100', 37066) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Mar  1 04:41:14 np0005634532 ceph-mgr[76134]: log_channel(cephadm) log [INF] : [01/Mar/2026:09:41:14] ENGINE Client ('192.168.122.100', 37066) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Mar  1 04:41:14 np0005634532 ceph-mon[75825]: from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:41:14 np0005634532 ceph-mgr[76134]: [cephadm INFO cherrypy.error] [01/Mar/2026:09:41:14] ENGINE Serving on http://192.168.122.100:8765
Mar  1 04:41:14 np0005634532 ceph-mgr[76134]: log_channel(cephadm) log [INF] : [01/Mar/2026:09:41:14] ENGINE Serving on http://192.168.122.100:8765
Mar  1 04:41:14 np0005634532 ceph-mgr[76134]: [cephadm INFO cherrypy.error] [01/Mar/2026:09:41:14] ENGINE Bus STARTED
Mar  1 04:41:14 np0005634532 ceph-mgr[76134]: log_channel(cephadm) log [INF] : [01/Mar/2026:09:41:14] ENGINE Bus STARTED
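With the cephadm module constructed, its embedded CherryPy server comes up on two ports: HTTPS on 7150 (the agent endpoint, using the cert and key stored in the config-key writes above) and plain HTTP on 8765 (the service-discovery endpoint). The "Client ... lost" TLS handshake message is benign; it appears to be a probe that connects and drops the connection before completing the handshake. A quick reachability check (ports as logged; a sketch, not part of cephadm):

    import socket

    # Confirm both cephadm endpoints are accepting TCP connections.
    for port in (7150, 8765):
        with socket.create_connection(("192.168.122.100", port), timeout=5):
            print(f"port {port}: open")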
Mar  1 04:41:14 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Mar  1 04:41:14 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Mar  1 04:41:14 np0005634532 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14136 -' entity='client.admin' cmd=[{"prefix": "cephadm set-user", "user": "ceph-admin", "target": ["mon-mgr", ""]}]: dispatch
Mar  1 04:41:14 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_user}] v 0)
Mar  1 04:41:14 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:41:14 np0005634532 ceph-mgr[76134]: [cephadm INFO root] Set ssh ssh_user
Mar  1 04:41:14 np0005634532 ceph-mgr[76134]: log_channel(cephadm) log [INF] : Set ssh ssh_user
Mar  1 04:41:14 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_config}] v 0)
Mar  1 04:41:14 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:41:14 np0005634532 ceph-mgr[76134]: [cephadm INFO root] Set ssh ssh_config
Mar  1 04:41:14 np0005634532 ceph-mgr[76134]: log_channel(cephadm) log [INF] : Set ssh ssh_config
Mar  1 04:41:14 np0005634532 ceph-mgr[76134]: [cephadm INFO root] ssh user set to ceph-admin. sudo will be used
Mar  1 04:41:14 np0005634532 ceph-mgr[76134]: log_channel(cephadm) log [INF] : ssh user set to ceph-admin. sudo will be used
Mar  1 04:41:14 np0005634532 peaceful_lalande[76899]: ssh user set to ceph-admin. sudo will be used
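
[Annotation] This sequence is the documented cephadm SSH setup: `ceph cephadm set-user` stores config-key mgr/cephadm/ssh_user and `ceph cephadm set-ssh-config -i <file>` stores mgr/cephadm/ssh_config, matching the two config-key set mon_commands above. A non-root user implies sudo, hence "ssh user set to ceph-admin. sudo will be used". A minimal sketch of the same calls (the ssh_config path is a placeholder of mine):

    import subprocess

    def ceph(*args: str) -> str:
        """Run a ceph CLI command and return its stdout."""
        return subprocess.run(
            ["ceph", *args], check=True, capture_output=True, text=True
        ).stdout

    # Stored under config-key mgr/cephadm/ssh_user.
    ceph("cephadm", "set-user", "ceph-admin")

    # Stored under config-key mgr/cephadm/ssh_config.
    ceph("cephadm", "set-ssh-config", "-i", "/path/to/ssh_config")  # placeholder path
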
Mar  1 04:41:14 np0005634532 systemd[1]: libpod-c7e59001e6d1c1ead30b3def65e96ed824278f17ebb78d33dc8c5264df4ec013.scope: Deactivated successfully.
Mar  1 04:41:14 np0005634532 podman[76880]: 2026-03-01 09:41:14.911872314 +0000 UTC m=+0.558226551 container died c7e59001e6d1c1ead30b3def65e96ed824278f17ebb78d33dc8c5264df4ec013 (image=quay.io/ceph/ceph:v19, name=peaceful_lalande, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Mar  1 04:41:14 np0005634532 systemd[1]: var-lib-containers-storage-overlay-d5f135bffd24d0cf54bbc2034f55b8904aa444fd1bbcf3ac33873b3673cbc347-merged.mount: Deactivated successfully.
Mar  1 04:41:14 np0005634532 podman[76880]: 2026-03-01 09:41:14.955067314 +0000 UTC m=+0.601421541 container remove c7e59001e6d1c1ead30b3def65e96ed824278f17ebb78d33dc8c5264df4ec013 (image=quay.io/ceph/ceph:v19, name=peaceful_lalande, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Mar  1 04:41:14 np0005634532 systemd[1]: libpod-conmon-c7e59001e6d1c1ead30b3def65e96ed824278f17ebb78d33dc8c5264df4ec013.scope: Deactivated successfully.
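
[Annotation] Each of the short-lived quay.io/ceph/ceph:v19 containers in this section (peaceful_lalande, jovial_colden, fervent_lichterman, ...) lives for well under a second: podman logs create, init, start, attach, died, remove in quick succession. The pattern is consistent with cephadm running each individual ceph CLI command inside a throwaway container rather than on the host, which is also why the container names echo CLI output ("ssh user set to ..." above). The same lifecycle can be reproduced by hand with `cephadm shell`; this equivalence is my inference from the log, not something it states:

    import subprocess

    # cephadm spins up a throwaway ceph:v19 container, runs one command
    # inside it, and removes it; podman then logs the same
    # create/start/died/remove lifecycle seen above.
    subprocess.run(["cephadm", "shell", "--", "ceph", "-s"], check=True)
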
Mar  1 04:41:15 np0005634532 podman[76963]: 2026-03-01 09:41:15.013732738 +0000 UTC m=+0.039863149 container create 7191d6f14e7afcb786a426a6ee85ac918c1f263c2f61c276cdbd820b9f13580b (image=quay.io/ceph/ceph:v19, name=jovial_colden, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Mar  1 04:41:15 np0005634532 systemd[1]: Started libpod-conmon-7191d6f14e7afcb786a426a6ee85ac918c1f263c2f61c276cdbd820b9f13580b.scope.
Mar  1 04:41:15 np0005634532 systemd[1]: Started libcrun container.
Mar  1 04:41:15 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc1acb488216aa08a29ce6da24567fc1732111235cae1f11f38665c9238bbba6/merged/tmp/cephadm-ssh-key supports timestamps until 2038 (0x7fffffff)
Mar  1 04:41:15 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc1acb488216aa08a29ce6da24567fc1732111235cae1f11f38665c9238bbba6/merged/tmp/cephadm-ssh-key.pub supports timestamps until 2038 (0x7fffffff)
Mar  1 04:41:15 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc1acb488216aa08a29ce6da24567fc1732111235cae1f11f38665c9238bbba6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Mar  1 04:41:15 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc1acb488216aa08a29ce6da24567fc1732111235cae1f11f38665c9238bbba6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Mar  1 04:41:15 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bc1acb488216aa08a29ce6da24567fc1732111235cae1f11f38665c9238bbba6/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Mar  1 04:41:15 np0005634532 podman[76963]: 2026-03-01 09:41:14.992362098 +0000 UTC m=+0.018492499 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Mar  1 04:41:15 np0005634532 podman[76963]: 2026-03-01 09:41:15.104906467 +0000 UTC m=+0.131036858 container init 7191d6f14e7afcb786a426a6ee85ac918c1f263c2f61c276cdbd820b9f13580b (image=quay.io/ceph/ceph:v19, name=jovial_colden, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Mar  1 04:41:15 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1019924717 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Mar  1 04:41:15 np0005634532 podman[76963]: 2026-03-01 09:41:15.114967226 +0000 UTC m=+0.141097637 container start 7191d6f14e7afcb786a426a6ee85ac918c1f263c2f61c276cdbd820b9f13580b (image=quay.io/ceph/ceph:v19, name=jovial_colden, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Mar  1 04:41:15 np0005634532 podman[76963]: 2026-03-01 09:41:15.119603111 +0000 UTC m=+0.145733492 container attach 7191d6f14e7afcb786a426a6ee85ac918c1f263c2f61c276cdbd820b9f13580b (image=quay.io/ceph/ceph:v19, name=jovial_colden, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Mar  1 04:41:15 np0005634532 ceph-mon[75825]: log_channel(cluster) log [DBG] : mgrmap e8: compute-0.ebwufc(active, since 2s)
Mar  1 04:41:15 np0005634532 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14138 -' entity='client.admin' cmd=[{"prefix": "cephadm set-priv-key", "target": ["mon-mgr", ""]}]: dispatch
Mar  1 04:41:15 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_identity_key}] v 0)
Mar  1 04:41:15 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:41:15 np0005634532 ceph-mgr[76134]: [cephadm INFO root] Set ssh ssh_identity_key
Mar  1 04:41:15 np0005634532 ceph-mgr[76134]: log_channel(cephadm) log [INF] : Set ssh ssh_identity_key
Mar  1 04:41:15 np0005634532 ceph-mgr[76134]: [cephadm INFO root] Set ssh private key
Mar  1 04:41:15 np0005634532 ceph-mgr[76134]: log_channel(cephadm) log [INF] : Set ssh private key
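
[Annotation] The `cephadm set-priv-key` dispatch from client.14138 lands as a config-key write of mgr/cephadm/ssh_identity_key. The documented CLI form reads the key from a file, and the xfs remount lines above show /tmp/cephadm-ssh-key being bind-mounted into the CLI container, so that is very likely the file being fed in. Sketch:

    import subprocess

    # Stores the SSH identity under config-key mgr/cephadm/ssh_identity_key.
    # The path matches the bind mount visible in the kernel xfs lines above.
    subprocess.run(
        ["ceph", "cephadm", "set-priv-key", "-i", "/tmp/cephadm-ssh-key"],
        check=True,
    )
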
Mar  1 04:41:15 np0005634532 systemd[1]: libpod-7191d6f14e7afcb786a426a6ee85ac918c1f263c2f61c276cdbd820b9f13580b.scope: Deactivated successfully.
Mar  1 04:41:15 np0005634532 podman[76963]: 2026-03-01 09:41:15.471253553 +0000 UTC m=+0.497383974 container died 7191d6f14e7afcb786a426a6ee85ac918c1f263c2f61c276cdbd820b9f13580b (image=quay.io/ceph/ceph:v19, name=jovial_colden, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Mar  1 04:41:15 np0005634532 systemd[1]: var-lib-containers-storage-overlay-bc1acb488216aa08a29ce6da24567fc1732111235cae1f11f38665c9238bbba6-merged.mount: Deactivated successfully.
Mar  1 04:41:15 np0005634532 podman[76963]: 2026-03-01 09:41:15.515239743 +0000 UTC m=+0.541370154 container remove 7191d6f14e7afcb786a426a6ee85ac918c1f263c2f61c276cdbd820b9f13580b (image=quay.io/ceph/ceph:v19, name=jovial_colden, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Mar  1 04:41:15 np0005634532 systemd[1]: libpod-conmon-7191d6f14e7afcb786a426a6ee85ac918c1f263c2f61c276cdbd820b9f13580b.scope: Deactivated successfully.
Mar  1 04:41:15 np0005634532 podman[77019]: 2026-03-01 09:41:15.595646975 +0000 UTC m=+0.059013323 container create 38429f61a378a73c6e3486dd1b6d5794b8319e13af824527d93f986377a39219 (image=quay.io/ceph/ceph:v19, name=fervent_lichterman, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Mar  1 04:41:15 np0005634532 systemd[1]: Started libpod-conmon-38429f61a378a73c6e3486dd1b6d5794b8319e13af824527d93f986377a39219.scope.
Mar  1 04:41:15 np0005634532 podman[77019]: 2026-03-01 09:41:15.56879505 +0000 UTC m=+0.032161408 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Mar  1 04:41:15 np0005634532 systemd[1]: Started libcrun container.
Mar  1 04:41:15 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/79ad89e27ad2c403fbed30e5796cda8a1ccbd1c22daf94648bd0a59eb7c9f24b/merged/tmp/cephadm-ssh-key supports timestamps until 2038 (0x7fffffff)
Mar  1 04:41:15 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/79ad89e27ad2c403fbed30e5796cda8a1ccbd1c22daf94648bd0a59eb7c9f24b/merged/tmp/cephadm-ssh-key.pub supports timestamps until 2038 (0x7fffffff)
Mar  1 04:41:15 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/79ad89e27ad2c403fbed30e5796cda8a1ccbd1c22daf94648bd0a59eb7c9f24b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Mar  1 04:41:15 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/79ad89e27ad2c403fbed30e5796cda8a1ccbd1c22daf94648bd0a59eb7c9f24b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Mar  1 04:41:15 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/79ad89e27ad2c403fbed30e5796cda8a1ccbd1c22daf94648bd0a59eb7c9f24b/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Mar  1 04:41:15 np0005634532 ceph-mon[75825]: [01/Mar/2026:09:41:14] ENGINE Bus STARTING
Mar  1 04:41:15 np0005634532 ceph-mon[75825]: [01/Mar/2026:09:41:14] ENGINE Serving on https://192.168.122.100:7150
Mar  1 04:41:15 np0005634532 ceph-mon[75825]: [01/Mar/2026:09:41:14] ENGINE Client ('192.168.122.100', 37066) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Mar  1 04:41:15 np0005634532 ceph-mon[75825]: [01/Mar/2026:09:41:14] ENGINE Serving on http://192.168.122.100:8765
Mar  1 04:41:15 np0005634532 ceph-mon[75825]: [01/Mar/2026:09:41:14] ENGINE Bus STARTED
Mar  1 04:41:15 np0005634532 ceph-mon[75825]: from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:41:15 np0005634532 ceph-mon[75825]: Set ssh ssh_user
Mar  1 04:41:15 np0005634532 ceph-mon[75825]: from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:41:15 np0005634532 ceph-mon[75825]: Set ssh ssh_config
Mar  1 04:41:15 np0005634532 ceph-mon[75825]: ssh user set to ceph-admin. sudo will be used
Mar  1 04:41:15 np0005634532 ceph-mon[75825]: from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:41:15 np0005634532 podman[77019]: 2026-03-01 09:41:15.705925508 +0000 UTC m=+0.169291876 container init 38429f61a378a73c6e3486dd1b6d5794b8319e13af824527d93f986377a39219 (image=quay.io/ceph/ceph:v19, name=fervent_lichterman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Mar  1 04:41:15 np0005634532 podman[77019]: 2026-03-01 09:41:15.712127341 +0000 UTC m=+0.175493709 container start 38429f61a378a73c6e3486dd1b6d5794b8319e13af824527d93f986377a39219 (image=quay.io/ceph/ceph:v19, name=fervent_lichterman, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Mar  1 04:41:15 np0005634532 podman[77019]: 2026-03-01 09:41:15.716472229 +0000 UTC m=+0.179838637 container attach 38429f61a378a73c6e3486dd1b6d5794b8319e13af824527d93f986377a39219 (image=quay.io/ceph/ceph:v19, name=fervent_lichterman, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Mar  1 04:41:16 np0005634532 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14140 -' entity='client.admin' cmd=[{"prefix": "cephadm set-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Mar  1 04:41:16 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_identity_pub}] v 0)
Mar  1 04:41:16 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:41:16 np0005634532 ceph-mgr[76134]: [cephadm INFO root] Set ssh ssh_identity_pub
Mar  1 04:41:16 np0005634532 ceph-mgr[76134]: log_channel(cephadm) log [INF] : Set ssh ssh_identity_pub
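
[Annotation] The matching public half goes in through `cephadm set-pub-key`, stored as config-key mgr/cephadm/ssh_identity_pub; /tmp/cephadm-ssh-key.pub in the earlier xfs remount lines is the bind-mounted file being read. Sketch:

    import subprocess

    # Stores the public key under config-key mgr/cephadm/ssh_identity_pub.
    subprocess.run(
        ["ceph", "cephadm", "set-pub-key", "-i", "/tmp/cephadm-ssh-key.pub"],
        check=True,
    )
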
Mar  1 04:41:16 np0005634532 systemd[1]: libpod-38429f61a378a73c6e3486dd1b6d5794b8319e13af824527d93f986377a39219.scope: Deactivated successfully.
Mar  1 04:41:16 np0005634532 podman[77019]: 2026-03-01 09:41:16.063311662 +0000 UTC m=+0.526678020 container died 38429f61a378a73c6e3486dd1b6d5794b8319e13af824527d93f986377a39219 (image=quay.io/ceph/ceph:v19, name=fervent_lichterman, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Mar  1 04:41:16 np0005634532 systemd[1]: var-lib-containers-storage-overlay-79ad89e27ad2c403fbed30e5796cda8a1ccbd1c22daf94648bd0a59eb7c9f24b-merged.mount: Deactivated successfully.
Mar  1 04:41:16 np0005634532 podman[77019]: 2026-03-01 09:41:16.109763553 +0000 UTC m=+0.573129911 container remove 38429f61a378a73c6e3486dd1b6d5794b8319e13af824527d93f986377a39219 (image=quay.io/ceph/ceph:v19, name=fervent_lichterman, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Mar  1 04:41:16 np0005634532 systemd[1]: libpod-conmon-38429f61a378a73c6e3486dd1b6d5794b8319e13af824527d93f986377a39219.scope: Deactivated successfully.
Mar  1 04:41:16 np0005634532 podman[77073]: 2026-03-01 09:41:16.185746536 +0000 UTC m=+0.057134407 container create 33f679b7d5fba00a023966e65f96da9ea4e0262771ae9f7bbbd4304559bf39bc (image=quay.io/ceph/ceph:v19, name=serene_elgamal, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Mar  1 04:41:16 np0005634532 systemd[1]: Started libpod-conmon-33f679b7d5fba00a023966e65f96da9ea4e0262771ae9f7bbbd4304559bf39bc.scope.
Mar  1 04:41:16 np0005634532 systemd[1]: Started libcrun container.
Mar  1 04:41:16 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/73e75fee6e80df4622a7fcdafee1491e16f9f02ff3642998a6a07f92e87967d5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Mar  1 04:41:16 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/73e75fee6e80df4622a7fcdafee1491e16f9f02ff3642998a6a07f92e87967d5/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Mar  1 04:41:16 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/73e75fee6e80df4622a7fcdafee1491e16f9f02ff3642998a6a07f92e87967d5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Mar  1 04:41:16 np0005634532 podman[77073]: 2026-03-01 09:41:16.162262014 +0000 UTC m=+0.033649925 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Mar  1 04:41:16 np0005634532 podman[77073]: 2026-03-01 09:41:16.276389712 +0000 UTC m=+0.147777623 container init 33f679b7d5fba00a023966e65f96da9ea4e0262771ae9f7bbbd4304559bf39bc (image=quay.io/ceph/ceph:v19, name=serene_elgamal, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Mar  1 04:41:16 np0005634532 podman[77073]: 2026-03-01 09:41:16.283129688 +0000 UTC m=+0.154517559 container start 33f679b7d5fba00a023966e65f96da9ea4e0262771ae9f7bbbd4304559bf39bc (image=quay.io/ceph/ceph:v19, name=serene_elgamal, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Mar  1 04:41:16 np0005634532 podman[77073]: 2026-03-01 09:41:16.286793339 +0000 UTC m=+0.158181200 container attach 33f679b7d5fba00a023966e65f96da9ea4e0262771ae9f7bbbd4304559bf39bc (image=quay.io/ceph/ceph:v19, name=serene_elgamal, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Mar  1 04:41:16 np0005634532 ceph-mgr[76134]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Mar  1 04:41:16 np0005634532 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14142 -' entity='client.admin' cmd=[{"prefix": "cephadm get-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Mar  1 04:41:16 np0005634532 serene_elgamal[77090]: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC4r/h9SObOHSuiw5A+WOZBta0fbyUAW9df409G5elVqx5xEkzHRyqc5swGO7jA52VpkG7z7sAhp/u9rz0WOpHL2scZmY+LpD0muh8PEWxCQmZtCeTVom8HkMdE0YZeUMhBZyDg8pzptPCs24b6izYHGUB9JlhXuJxRQaw8Vkfa+3qAkUvgjtAzkbi8hHv4FN1X7G7rcnQGFRaFOxFEajsYvOIrEC3Inx8o+759C6qP5HbI8EIbhQ0lqKD1m0DjP2uMvqiz+x3cfI4R5D9EjAuCNokPREK8GbbOMpifW7/wJpCZwN3TAzyCC82ISMYN5Fs4jaD59mgk0xOINCevEocRVXDE8ImclQJzMPMQSPLZJyQM/BxZCogUeFHgA3cX+7TA6J+jUtaKbDuP8iFBEajwgUS5NzwZpB/hwJIrwNfKK+DjI4L/R5XPMmvAKCuNT4SvQH3JnioEWQXNg9b3nCELg4dF4+IXxk6L8cMcKJdS5JWKClzE09IZ4qIP2RD6Ljs= zuul@controller
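
[Annotation] `cephadm get-pub-key` (client.14142) simply echoes the stored public key, which the serene_elgamal container prints above. Its usual role is seeding authorized_keys on hosts the orchestrator should manage; the ssh-copy-id step below is my assumption about the operator's next move, though it matches the standard cephadm host-onboarding workflow:

    import pathlib
    import subprocess

    # Fetch the key the orchestrator will authenticate with ...
    pub = subprocess.run(
        ["ceph", "cephadm", "get-pub-key"],
        check=True, capture_output=True, text=True,
    ).stdout
    pathlib.Path("ceph.pub").write_text(pub)

    # ... and authorize it for the SSH user on each managed host.
    subprocess.run(
        ["ssh-copy-id", "-f", "-i", "ceph.pub", "ceph-admin@192.168.122.100"],
        check=True,
    )
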
Mar  1 04:41:16 np0005634532 systemd[1]: libpod-33f679b7d5fba00a023966e65f96da9ea4e0262771ae9f7bbbd4304559bf39bc.scope: Deactivated successfully.
Mar  1 04:41:16 np0005634532 podman[77116]: 2026-03-01 09:41:16.732332778 +0000 UTC m=+0.017248038 container died 33f679b7d5fba00a023966e65f96da9ea4e0262771ae9f7bbbd4304559bf39bc (image=quay.io/ceph/ceph:v19, name=serene_elgamal, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Mar  1 04:41:16 np0005634532 systemd[1]: var-lib-containers-storage-overlay-73e75fee6e80df4622a7fcdafee1491e16f9f02ff3642998a6a07f92e87967d5-merged.mount: Deactivated successfully.
Mar  1 04:41:16 np0005634532 podman[77116]: 2026-03-01 09:41:16.767859268 +0000 UTC m=+0.052774538 container remove 33f679b7d5fba00a023966e65f96da9ea4e0262771ae9f7bbbd4304559bf39bc (image=quay.io/ceph/ceph:v19, name=serene_elgamal, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Mar  1 04:41:16 np0005634532 systemd[1]: libpod-conmon-33f679b7d5fba00a023966e65f96da9ea4e0262771ae9f7bbbd4304559bf39bc.scope: Deactivated successfully.
Mar  1 04:41:16 np0005634532 podman[77131]: 2026-03-01 09:41:16.833885794 +0000 UTC m=+0.045462147 container create 4b65f98004ce124d051b08602c9a87b5df204fa11dae5ef36287114554b9ac57 (image=quay.io/ceph/ceph:v19, name=adoring_chebyshev, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Mar  1 04:41:16 np0005634532 systemd[1]: Started libpod-conmon-4b65f98004ce124d051b08602c9a87b5df204fa11dae5ef36287114554b9ac57.scope.
Mar  1 04:41:16 np0005634532 systemd[1]: Started libcrun container.
Mar  1 04:41:16 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3aab04ac57e9860f0ae269e74e13d5794a8f84c42a3bcc88d262a84f475ce588/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Mar  1 04:41:16 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3aab04ac57e9860f0ae269e74e13d5794a8f84c42a3bcc88d262a84f475ce588/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Mar  1 04:41:16 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3aab04ac57e9860f0ae269e74e13d5794a8f84c42a3bcc88d262a84f475ce588/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Mar  1 04:41:16 np0005634532 podman[77131]: 2026-03-01 09:41:16.811361026 +0000 UTC m=+0.022937439 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Mar  1 04:41:16 np0005634532 podman[77131]: 2026-03-01 09:41:16.920152801 +0000 UTC m=+0.131729154 container init 4b65f98004ce124d051b08602c9a87b5df204fa11dae5ef36287114554b9ac57 (image=quay.io/ceph/ceph:v19, name=adoring_chebyshev, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Mar  1 04:41:16 np0005634532 podman[77131]: 2026-03-01 09:41:16.926718453 +0000 UTC m=+0.138294786 container start 4b65f98004ce124d051b08602c9a87b5df204fa11dae5ef36287114554b9ac57 (image=quay.io/ceph/ceph:v19, name=adoring_chebyshev, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Mar  1 04:41:16 np0005634532 podman[77131]: 2026-03-01 09:41:16.929731558 +0000 UTC m=+0.141307891 container attach 4b65f98004ce124d051b08602c9a87b5df204fa11dae5ef36287114554b9ac57 (image=quay.io/ceph/ceph:v19, name=adoring_chebyshev, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Mar  1 04:41:17 np0005634532 ceph-mon[75825]: Set ssh ssh_identity_key
Mar  1 04:41:17 np0005634532 ceph-mon[75825]: Set ssh private key
Mar  1 04:41:17 np0005634532 ceph-mon[75825]: from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:41:17 np0005634532 ceph-mon[75825]: Set ssh ssh_identity_pub
Mar  1 04:41:17 np0005634532 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14144 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "compute-0", "addr": "192.168.122.100", "target": ["mon-mgr", ""]}]: dispatch
Mar  1 04:41:17 np0005634532 systemd[1]: Created slice User Slice of UID 42477.
Mar  1 04:41:17 np0005634532 systemd[1]: Starting User Runtime Directory /run/user/42477...
Mar  1 04:41:17 np0005634532 systemd-logind[832]: New session 21 of user ceph-admin.
Mar  1 04:41:17 np0005634532 systemd[1]: Finished User Runtime Directory /run/user/42477.
Mar  1 04:41:17 np0005634532 systemd[1]: Starting User Manager for UID 42477...
Mar  1 04:41:17 np0005634532 systemd[77178]: Queued start job for default target Main User Target.
Mar  1 04:41:17 np0005634532 systemd[77178]: Created slice User Application Slice.
Mar  1 04:41:17 np0005634532 systemd[77178]: Started Mark boot as successful after the user session has run 2 minutes.
Mar  1 04:41:17 np0005634532 systemd[77178]: Started Daily Cleanup of User's Temporary Directories.
Mar  1 04:41:17 np0005634532 systemd[77178]: Reached target Paths.
Mar  1 04:41:17 np0005634532 systemd[77178]: Reached target Timers.
Mar  1 04:41:17 np0005634532 systemd[77178]: Starting D-Bus User Message Bus Socket...
Mar  1 04:41:17 np0005634532 systemd[77178]: Starting Create User's Volatile Files and Directories...
Mar  1 04:41:17 np0005634532 systemd[77178]: Finished Create User's Volatile Files and Directories.
Mar  1 04:41:17 np0005634532 systemd[77178]: Listening on D-Bus User Message Bus Socket.
Mar  1 04:41:17 np0005634532 systemd[77178]: Reached target Sockets.
Mar  1 04:41:17 np0005634532 systemd[77178]: Reached target Basic System.
Mar  1 04:41:17 np0005634532 systemd[77178]: Reached target Main User Target.
Mar  1 04:41:17 np0005634532 systemd[77178]: Startup finished in 99ms.
Mar  1 04:41:17 np0005634532 systemd[1]: Started User Manager for UID 42477.
Mar  1 04:41:17 np0005634532 systemd[1]: Started Session 21 of User ceph-admin.
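
[Annotation] The burst of logind sessions for ceph-admin beginning here (21, 23, 24, ...) is the cephadm module opening SSH connections to the host it is about to manage; each connection as the configured ssh user runs through PAM and produces one "New session N" line. A hand-rolled equivalent of a single such connection, with the identity file and sudo use inferred from the ssh-user and set-priv-key steps earlier:

    import subprocess

    # One orchestrator-style connection: SSH in as the configured user and
    # escalate with sudo. Each such call yields a logind session like the
    # ones recorded above.
    subprocess.run(
        [
            "ssh", "-i", "/tmp/cephadm-ssh-key",
            "ceph-admin@192.168.122.100",
            "sudo", "true",
        ],
        check=True,
    )
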
Mar  1 04:41:17 np0005634532 systemd-logind[832]: New session 23 of user ceph-admin.
Mar  1 04:41:17 np0005634532 systemd[1]: Started Session 23 of User ceph-admin.
Mar  1 04:41:18 np0005634532 systemd-logind[832]: New session 24 of user ceph-admin.
Mar  1 04:41:18 np0005634532 systemd[1]: Started Session 24 of User ceph-admin.
Mar  1 04:41:18 np0005634532 systemd-logind[832]: New session 25 of user ceph-admin.
Mar  1 04:41:18 np0005634532 systemd[1]: Started Session 25 of User ceph-admin.
Mar  1 04:41:18 np0005634532 ceph-mgr[76134]: [cephadm INFO cephadm.serve] Deploying cephadm binary to compute-0
Mar  1 04:41:18 np0005634532 ceph-mgr[76134]: log_channel(cephadm) log [INF] : Deploying cephadm binary to compute-0
Mar  1 04:41:18 np0005634532 ceph-mgr[76134]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Mar  1 04:41:18 np0005634532 systemd-logind[832]: New session 26 of user ceph-admin.
Mar  1 04:41:18 np0005634532 systemd[1]: Started Session 26 of User ceph-admin.
Mar  1 04:41:19 np0005634532 systemd-logind[832]: New session 27 of user ceph-admin.
Mar  1 04:41:19 np0005634532 systemd[1]: Started Session 27 of User ceph-admin.
Mar  1 04:41:19 np0005634532 systemd-logind[832]: New session 28 of user ceph-admin.
Mar  1 04:41:19 np0005634532 systemd[1]: Started Session 28 of User ceph-admin.
Mar  1 04:41:19 np0005634532 systemd-logind[832]: New session 29 of user ceph-admin.
Mar  1 04:41:19 np0005634532 systemd[1]: Started Session 29 of User ceph-admin.
Mar  1 04:41:20 np0005634532 ceph-mon[75825]: Deploying cephadm binary to compute-0
Mar  1 04:41:20 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020053077 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Mar  1 04:41:20 np0005634532 systemd-logind[832]: New session 30 of user ceph-admin.
Mar  1 04:41:20 np0005634532 systemd[1]: Started Session 30 of User ceph-admin.
Mar  1 04:41:20 np0005634532 systemd-logind[832]: New session 31 of user ceph-admin.
Mar  1 04:41:20 np0005634532 systemd[1]: Started Session 31 of User ceph-admin.
Mar  1 04:41:20 np0005634532 ceph-mgr[76134]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Mar  1 04:41:21 np0005634532 systemd-logind[832]: New session 32 of user ceph-admin.
Mar  1 04:41:21 np0005634532 systemd[1]: Started Session 32 of User ceph-admin.
Mar  1 04:41:22 np0005634532 systemd-logind[832]: New session 33 of user ceph-admin.
Mar  1 04:41:22 np0005634532 systemd[1]: Started Session 33 of User ceph-admin.
Mar  1 04:41:22 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Mar  1 04:41:22 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:41:22 np0005634532 ceph-mgr[76134]: [cephadm INFO root] Added host compute-0
Mar  1 04:41:22 np0005634532 ceph-mgr[76134]: log_channel(cephadm) log [INF] : Added host compute-0
Mar  1 04:41:22 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Mar  1 04:41:22 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Mar  1 04:41:22 np0005634532 adoring_chebyshev[77148]: Added host 'compute-0' with addr '192.168.122.100'
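
[Annotation] The client.14144 dispatch and the container output above are the two ends of a single `ceph orch host add`: the CLI (running in adoring_chebyshev) issues the command, the cephadm module verifies SSH access (the session burst), deploys its binary to compute-0, writes the updated config-key mgr/cephadm/inventory, and reports "Added host 'compute-0'". Equivalent call:

    import subprocess

    # Matches cmd=[{"prefix": "orch host add", "hostname": "compute-0",
    #               "addr": "192.168.122.100"}] from the audit log above.
    subprocess.run(
        ["ceph", "orch", "host", "add", "compute-0", "192.168.122.100"],
        check=True,
    )
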
Mar  1 04:41:22 np0005634532 systemd[1]: libpod-4b65f98004ce124d051b08602c9a87b5df204fa11dae5ef36287114554b9ac57.scope: Deactivated successfully.
Mar  1 04:41:22 np0005634532 podman[77131]: 2026-03-01 09:41:22.536320785 +0000 UTC m=+5.747897128 container died 4b65f98004ce124d051b08602c9a87b5df204fa11dae5ef36287114554b9ac57 (image=quay.io/ceph/ceph:v19, name=adoring_chebyshev, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Mar  1 04:41:22 np0005634532 systemd[1]: var-lib-containers-storage-overlay-3aab04ac57e9860f0ae269e74e13d5794a8f84c42a3bcc88d262a84f475ce588-merged.mount: Deactivated successfully.
Mar  1 04:41:22 np0005634532 podman[77131]: 2026-03-01 09:41:22.586043397 +0000 UTC m=+5.797619750 container remove 4b65f98004ce124d051b08602c9a87b5df204fa11dae5ef36287114554b9ac57 (image=quay.io/ceph/ceph:v19, name=adoring_chebyshev, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Mar  1 04:41:22 np0005634532 systemd[1]: libpod-conmon-4b65f98004ce124d051b08602c9a87b5df204fa11dae5ef36287114554b9ac57.scope: Deactivated successfully.
Mar  1 04:41:22 np0005634532 ceph-mgr[76134]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Mar  1 04:41:22 np0005634532 podman[77568]: 2026-03-01 09:41:22.658204905 +0000 UTC m=+0.047882328 container create f1cc66485ba551fffba427f556fe6e9bff0036eaf079947905be8547cfc326d7 (image=quay.io/ceph/ceph:v19, name=sleepy_liskov, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Mar  1 04:41:22 np0005634532 systemd[1]: Started libpod-conmon-f1cc66485ba551fffba427f556fe6e9bff0036eaf079947905be8547cfc326d7.scope.
Mar  1 04:41:22 np0005634532 systemd[1]: Started libcrun container.
Mar  1 04:41:22 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8c8a0f0092406683392d0104dfff62b6b0d6b4b6e89dc6a39a52522b1e9499b/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Mar  1 04:41:22 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8c8a0f0092406683392d0104dfff62b6b0d6b4b6e89dc6a39a52522b1e9499b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Mar  1 04:41:22 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8c8a0f0092406683392d0104dfff62b6b0d6b4b6e89dc6a39a52522b1e9499b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Mar  1 04:41:22 np0005634532 podman[77568]: 2026-03-01 09:41:22.640889736 +0000 UTC m=+0.030567179 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Mar  1 04:41:22 np0005634532 podman[77568]: 2026-03-01 09:41:22.749900087 +0000 UTC m=+0.139577530 container init f1cc66485ba551fffba427f556fe6e9bff0036eaf079947905be8547cfc326d7 (image=quay.io/ceph/ceph:v19, name=sleepy_liskov, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Mar  1 04:41:22 np0005634532 podman[77568]: 2026-03-01 09:41:22.755794153 +0000 UTC m=+0.145471576 container start f1cc66485ba551fffba427f556fe6e9bff0036eaf079947905be8547cfc326d7 (image=quay.io/ceph/ceph:v19, name=sleepy_liskov, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Mar  1 04:41:22 np0005634532 podman[77568]: 2026-03-01 09:41:22.759249088 +0000 UTC m=+0.148926531 container attach f1cc66485ba551fffba427f556fe6e9bff0036eaf079947905be8547cfc326d7 (image=quay.io/ceph/ceph:v19, name=sleepy_liskov, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2)
Mar  1 04:41:23 np0005634532 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14146 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "target": ["mon-mgr", ""]}]: dispatch
Mar  1 04:41:23 np0005634532 ceph-mgr[76134]: [cephadm INFO root] Saving service mon spec with placement count:5
Mar  1 04:41:23 np0005634532 ceph-mgr[76134]: log_channel(cephadm) log [INF] : Saving service mon spec with placement count:5
Mar  1 04:41:23 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Mar  1 04:41:23 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:41:23 np0005634532 sleepy_liskov[77609]: Scheduled mon update...
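
[Annotation] "Saving service mon spec with placement count:5" records an `orch apply mon` with a bare numeric placement; the spec is persisted as config-key mgr/cephadm/spec.mon (the mon_command above) and the CLI container sleepy_liskov prints "Scheduled mon update...". Equivalent call:

    import subprocess

    # A bare count placement: run five monitors across the managed hosts.
    # Persisted as config-key mgr/cephadm/spec.mon per the mon_command above.
    subprocess.run(
        ["ceph", "orch", "apply", "mon", "--placement=5"],
        check=True,
    )
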
Mar  1 04:41:23 np0005634532 systemd[1]: libpod-f1cc66485ba551fffba427f556fe6e9bff0036eaf079947905be8547cfc326d7.scope: Deactivated successfully.
Mar  1 04:41:23 np0005634532 podman[77568]: 2026-03-01 09:41:23.165966315 +0000 UTC m=+0.555643768 container died f1cc66485ba551fffba427f556fe6e9bff0036eaf079947905be8547cfc326d7 (image=quay.io/ceph/ceph:v19, name=sleepy_liskov, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Mar  1 04:41:23 np0005634532 systemd[1]: var-lib-containers-storage-overlay-e8c8a0f0092406683392d0104dfff62b6b0d6b4b6e89dc6a39a52522b1e9499b-merged.mount: Deactivated successfully.
Mar  1 04:41:23 np0005634532 podman[77568]: 2026-03-01 09:41:23.205402252 +0000 UTC m=+0.595079685 container remove f1cc66485ba551fffba427f556fe6e9bff0036eaf079947905be8547cfc326d7 (image=quay.io/ceph/ceph:v19, name=sleepy_liskov, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Mar  1 04:41:23 np0005634532 systemd[1]: libpod-conmon-f1cc66485ba551fffba427f556fe6e9bff0036eaf079947905be8547cfc326d7.scope: Deactivated successfully.
Mar  1 04:41:23 np0005634532 podman[77672]: 2026-03-01 09:41:23.278684738 +0000 UTC m=+0.055117737 container create ec4f57fe958baa627c2feadb881a3ee671ad45c256bef5813cdb1483c4d8b384 (image=quay.io/ceph/ceph:v19, name=compassionate_thompson, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Mar  1 04:41:23 np0005634532 systemd[1]: Started libpod-conmon-ec4f57fe958baa627c2feadb881a3ee671ad45c256bef5813cdb1483c4d8b384.scope.
Mar  1 04:41:23 np0005634532 systemd[1]: Started libcrun container.
Mar  1 04:41:23 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99322878398e916ebd813f2af72f4e30fd24bda01eed1a605114947545e37a5c/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Mar  1 04:41:23 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99322878398e916ebd813f2af72f4e30fd24bda01eed1a605114947545e37a5c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Mar  1 04:41:23 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99322878398e916ebd813f2af72f4e30fd24bda01eed1a605114947545e37a5c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Mar  1 04:41:23 np0005634532 podman[77672]: 2026-03-01 09:41:23.253267018 +0000 UTC m=+0.029700077 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Mar  1 04:41:23 np0005634532 podman[77672]: 2026-03-01 09:41:23.371835816 +0000 UTC m=+0.148268795 container init ec4f57fe958baa627c2feadb881a3ee671ad45c256bef5813cdb1483c4d8b384 (image=quay.io/ceph/ceph:v19, name=compassionate_thompson, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Mar  1 04:41:23 np0005634532 podman[77672]: 2026-03-01 09:41:23.37644571 +0000 UTC m=+0.152878679 container start ec4f57fe958baa627c2feadb881a3ee671ad45c256bef5813cdb1483c4d8b384 (image=quay.io/ceph/ceph:v19, name=compassionate_thompson, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Mar  1 04:41:23 np0005634532 podman[77672]: 2026-03-01 09:41:23.380839279 +0000 UTC m=+0.157272248 container attach ec4f57fe958baa627c2feadb881a3ee671ad45c256bef5813cdb1483c4d8b384 (image=quay.io/ceph/ceph:v19, name=compassionate_thompson, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Mar  1 04:41:23 np0005634532 podman[77627]: 2026-03-01 09:41:23.441576664 +0000 UTC m=+0.565964344 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Mar  1 04:41:23 np0005634532 ceph-mon[75825]: from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:41:23 np0005634532 ceph-mon[75825]: Added host compute-0
Mar  1 04:41:23 np0005634532 ceph-mon[75825]: Saving service mon spec with placement count:5
Mar  1 04:41:23 np0005634532 ceph-mon[75825]: from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:41:23 np0005634532 podman[77724]: 2026-03-01 09:41:23.52859805 +0000 UTC m=+0.036145577 container create 74a6fd1101001ba797db41bcc53ba9378c92aae87b1f3fcc1da71c0d42f2067e (image=quay.io/ceph/ceph:v19, name=recursing_margulis, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Mar  1 04:41:23 np0005634532 systemd[1]: Started libpod-conmon-74a6fd1101001ba797db41bcc53ba9378c92aae87b1f3fcc1da71c0d42f2067e.scope.
Mar  1 04:41:23 np0005634532 systemd[1]: Started libcrun container.
Mar  1 04:41:23 np0005634532 podman[77724]: 2026-03-01 09:41:23.583447808 +0000 UTC m=+0.090995385 container init 74a6fd1101001ba797db41bcc53ba9378c92aae87b1f3fcc1da71c0d42f2067e (image=quay.io/ceph/ceph:v19, name=recursing_margulis, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Mar  1 04:41:23 np0005634532 podman[77724]: 2026-03-01 09:41:23.590452652 +0000 UTC m=+0.098000219 container start 74a6fd1101001ba797db41bcc53ba9378c92aae87b1f3fcc1da71c0d42f2067e (image=quay.io/ceph/ceph:v19, name=recursing_margulis, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Mar  1 04:41:23 np0005634532 podman[77724]: 2026-03-01 09:41:23.594298837 +0000 UTC m=+0.101846404 container attach 74a6fd1101001ba797db41bcc53ba9378c92aae87b1f3fcc1da71c0d42f2067e (image=quay.io/ceph/ceph:v19, name=recursing_margulis, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Mar  1 04:41:23 np0005634532 podman[77724]: 2026-03-01 09:41:23.514242074 +0000 UTC m=+0.021789631 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Mar  1 04:41:23 np0005634532 recursing_margulis[77740]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)
Mar  1 04:41:23 np0005634532 systemd[1]: libpod-74a6fd1101001ba797db41bcc53ba9378c92aae87b1f3fcc1da71c0d42f2067e.scope: Deactivated successfully.
Mar  1 04:41:23 np0005634532 podman[77724]: 2026-03-01 09:41:23.681404055 +0000 UTC m=+0.188951612 container died 74a6fd1101001ba797db41bcc53ba9378c92aae87b1f3fcc1da71c0d42f2067e (image=quay.io/ceph/ceph:v19, name=recursing_margulis, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid)
Mar  1 04:41:23 np0005634532 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14148 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "target": ["mon-mgr", ""]}]: dispatch
Mar  1 04:41:23 np0005634532 ceph-mgr[76134]: [cephadm INFO root] Saving service mgr spec with placement count:2
Mar  1 04:41:23 np0005634532 ceph-mgr[76134]: log_channel(cephadm) log [INF] : Saving service mgr spec with placement count:2
Mar  1 04:41:23 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Mar  1 04:41:23 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:41:23 np0005634532 compassionate_thompson[77688]: Scheduled mgr update...
Mar  1 04:41:23 np0005634532 systemd[1]: var-lib-containers-storage-overlay-39287b10da8dd3ca07611b54e3c78784c475b2d50d6010b7c7ff861a3660ec00-merged.mount: Deactivated successfully.
Mar  1 04:41:23 np0005634532 podman[77724]: 2026-03-01 09:41:23.723916909 +0000 UTC m=+0.231464436 container remove 74a6fd1101001ba797db41bcc53ba9378c92aae87b1f3fcc1da71c0d42f2067e (image=quay.io/ceph/ceph:v19, name=recursing_margulis, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True)
Mar  1 04:41:23 np0005634532 systemd[1]: libpod-ec4f57fe958baa627c2feadb881a3ee671ad45c256bef5813cdb1483c4d8b384.scope: Deactivated successfully.
Mar  1 04:41:23 np0005634532 systemd[1]: libpod-conmon-74a6fd1101001ba797db41bcc53ba9378c92aae87b1f3fcc1da71c0d42f2067e.scope: Deactivated successfully.
Mar  1 04:41:23 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=container_image}] v 0)
Mar  1 04:41:23 np0005634532 podman[77759]: 2026-03-01 09:41:23.777272181 +0000 UTC m=+0.037184663 container died ec4f57fe958baa627c2feadb881a3ee671ad45c256bef5813cdb1483c4d8b384 (image=quay.io/ceph/ceph:v19, name=compassionate_thompson, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, ceph=True)
Mar  1 04:41:23 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:41:23 np0005634532 systemd[1]: var-lib-containers-storage-overlay-99322878398e916ebd813f2af72f4e30fd24bda01eed1a605114947545e37a5c-merged.mount: Deactivated successfully.
Mar  1 04:41:23 np0005634532 podman[77759]: 2026-03-01 09:41:23.80670881 +0000 UTC m=+0.066621292 container remove ec4f57fe958baa627c2feadb881a3ee671ad45c256bef5813cdb1483c4d8b384 (image=quay.io/ceph/ceph:v19, name=compassionate_thompson, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Mar  1 04:41:23 np0005634532 systemd[1]: libpod-conmon-ec4f57fe958baa627c2feadb881a3ee671ad45c256bef5813cdb1483c4d8b384.scope: Deactivated successfully.
Mar  1 04:41:23 np0005634532 podman[77786]: 2026-03-01 09:41:23.861453086 +0000 UTC m=+0.036729141 container create 8e6d999cbf2eeea0ed6db194988d4a231ee9948fc81e84a3a817a9bf07516cdb (image=quay.io/ceph/ceph:v19, name=elegant_pare, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Mar  1 04:41:23 np0005634532 systemd[1]: Started libpod-conmon-8e6d999cbf2eeea0ed6db194988d4a231ee9948fc81e84a3a817a9bf07516cdb.scope.
Mar  1 04:41:23 np0005634532 systemd[1]: Started libcrun container.
Mar  1 04:41:23 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68ded65d06c7d37c48a65660a44906eb4dc7adafbb54e9d3124673ecc3f51cc9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Mar  1 04:41:23 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68ded65d06c7d37c48a65660a44906eb4dc7adafbb54e9d3124673ecc3f51cc9/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Mar  1 04:41:23 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68ded65d06c7d37c48a65660a44906eb4dc7adafbb54e9d3124673ecc3f51cc9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Mar  1 04:41:23 np0005634532 podman[77786]: 2026-03-01 09:41:23.845089511 +0000 UTC m=+0.020365556 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Mar  1 04:41:23 np0005634532 podman[77786]: 2026-03-01 09:41:23.944096274 +0000 UTC m=+0.119372319 container init 8e6d999cbf2eeea0ed6db194988d4a231ee9948fc81e84a3a817a9bf07516cdb (image=quay.io/ceph/ceph:v19, name=elegant_pare, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.40.1)
Mar  1 04:41:23 np0005634532 podman[77786]: 2026-03-01 09:41:23.948557934 +0000 UTC m=+0.123833989 container start 8e6d999cbf2eeea0ed6db194988d4a231ee9948fc81e84a3a817a9bf07516cdb (image=quay.io/ceph/ceph:v19, name=elegant_pare, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Mar  1 04:41:23 np0005634532 podman[77786]: 2026-03-01 09:41:23.952673696 +0000 UTC m=+0.127949761 container attach 8e6d999cbf2eeea0ed6db194988d4a231ee9948fc81e84a3a817a9bf07516cdb (image=quay.io/ceph/ceph:v19, name=elegant_pare, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Mar  1 04:41:24 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Mar  1 04:41:24 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:41:24 np0005634532 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14150 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "crash", "target": ["mon-mgr", ""]}]: dispatch
Mar  1 04:41:24 np0005634532 ceph-mgr[76134]: [cephadm INFO root] Saving service crash spec with placement *
Mar  1 04:41:24 np0005634532 ceph-mgr[76134]: log_channel(cephadm) log [INF] : Saving service crash spec with placement *
Mar  1 04:41:24 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Mar  1 04:41:24 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:41:24 np0005634532 elegant_pare[77837]: Scheduled crash update...
Mar  1 04:41:24 np0005634532 systemd[1]: libpod-8e6d999cbf2eeea0ed6db194988d4a231ee9948fc81e84a3a817a9bf07516cdb.scope: Deactivated successfully.
Mar  1 04:41:24 np0005634532 podman[77935]: 2026-03-01 09:41:24.359782142 +0000 UTC m=+0.024481828 container died 8e6d999cbf2eeea0ed6db194988d4a231ee9948fc81e84a3a817a9bf07516cdb (image=quay.io/ceph/ceph:v19, name=elegant_pare, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Mar  1 04:41:24 np0005634532 systemd[1]: var-lib-containers-storage-overlay-68ded65d06c7d37c48a65660a44906eb4dc7adafbb54e9d3124673ecc3f51cc9-merged.mount: Deactivated successfully.
Mar  1 04:41:24 np0005634532 podman[77935]: 2026-03-01 09:41:24.400637944 +0000 UTC m=+0.065337600 container remove 8e6d999cbf2eeea0ed6db194988d4a231ee9948fc81e84a3a817a9bf07516cdb (image=quay.io/ceph/ceph:v19, name=elegant_pare, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Mar  1 04:41:24 np0005634532 systemd[1]: libpod-conmon-8e6d999cbf2eeea0ed6db194988d4a231ee9948fc81e84a3a817a9bf07516cdb.scope: Deactivated successfully.
Mar  1 04:41:24 np0005634532 podman[77950]: 2026-03-01 09:41:24.461846891 +0000 UTC m=+0.041376116 container create 34c417fdc194201b732782237c535e1378b6c74efe0065d86a47cbeabe32b8e9 (image=quay.io/ceph/ceph:v19, name=practical_spence, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Mar  1 04:41:24 np0005634532 systemd[1]: Started libpod-conmon-34c417fdc194201b732782237c535e1378b6c74efe0065d86a47cbeabe32b8e9.scope.
Mar  1 04:41:24 np0005634532 systemd[1]: Started libcrun container.
Mar  1 04:41:24 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9083622eced6ad6fd7f78eb0623fd3db19281cfdf84e3e7f1add073ff656ca77/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Mar  1 04:41:24 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9083622eced6ad6fd7f78eb0623fd3db19281cfdf84e3e7f1add073ff656ca77/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Mar  1 04:41:24 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9083622eced6ad6fd7f78eb0623fd3db19281cfdf84e3e7f1add073ff656ca77/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Mar  1 04:41:24 np0005634532 podman[77950]: 2026-03-01 09:41:24.444172353 +0000 UTC m=+0.023701578 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Mar  1 04:41:24 np0005634532 podman[77950]: 2026-03-01 09:41:24.541299639 +0000 UTC m=+0.120828904 container init 34c417fdc194201b732782237c535e1378b6c74efe0065d86a47cbeabe32b8e9 (image=quay.io/ceph/ceph:v19, name=practical_spence, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Mar  1 04:41:24 np0005634532 podman[77950]: 2026-03-01 09:41:24.548407945 +0000 UTC m=+0.127937170 container start 34c417fdc194201b732782237c535e1378b6c74efe0065d86a47cbeabe32b8e9 (image=quay.io/ceph/ceph:v19, name=practical_spence, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1)
Mar  1 04:41:24 np0005634532 podman[77950]: 2026-03-01 09:41:24.551666096 +0000 UTC m=+0.131195351 container attach 34c417fdc194201b732782237c535e1378b6c74efe0065d86a47cbeabe32b8e9 (image=quay.io/ceph/ceph:v19, name=practical_spence, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Mar  1 04:41:24 np0005634532 ceph-mgr[76134]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Mar  1 04:41:24 np0005634532 ceph-mon[75825]: Saving service mgr spec with placement count:2
Mar  1 04:41:24 np0005634532 ceph-mon[75825]: from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:41:24 np0005634532 ceph-mon[75825]: from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:41:24 np0005634532 ceph-mon[75825]: from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:41:24 np0005634532 ceph-mon[75825]: from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:41:24 np0005634532 podman[78060]: 2026-03-01 09:41:24.839364284 +0000 UTC m=+0.058324156 container exec 6664049ace048b4adddae1365cde7c16773d892ae197c1350f58a5e5b1183392 (image=quay.io/ceph/ceph:v19, name=ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mon-compute-0, CEPH_REF=squid, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Mar  1 04:41:24 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/container_init}] v 0)
Mar  1 04:41:24 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3126774340' entity='client.admin' 
Mar  1 04:41:24 np0005634532 podman[78060]: 2026-03-01 09:41:24.952780964 +0000 UTC m=+0.171740826 container exec_died 6664049ace048b4adddae1365cde7c16773d892ae197c1350f58a5e5b1183392 (image=quay.io/ceph/ceph:v19, name=ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mon-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Mar  1 04:41:24 np0005634532 systemd[1]: libpod-34c417fdc194201b732782237c535e1378b6c74efe0065d86a47cbeabe32b8e9.scope: Deactivated successfully.
Mar  1 04:41:24 np0005634532 podman[77950]: 2026-03-01 09:41:24.965471098 +0000 UTC m=+0.545000353 container died 34c417fdc194201b732782237c535e1378b6c74efe0065d86a47cbeabe32b8e9 (image=quay.io/ceph/ceph:v19, name=practical_spence, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Mar  1 04:41:24 np0005634532 systemd[1]: var-lib-containers-storage-overlay-9083622eced6ad6fd7f78eb0623fd3db19281cfdf84e3e7f1add073ff656ca77-merged.mount: Deactivated successfully.
Mar  1 04:41:25 np0005634532 podman[77950]: 2026-03-01 09:41:25.002133197 +0000 UTC m=+0.581662432 container remove 34c417fdc194201b732782237c535e1378b6c74efe0065d86a47cbeabe32b8e9 (image=quay.io/ceph/ceph:v19, name=practical_spence, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Mar  1 04:41:25 np0005634532 systemd[1]: libpod-conmon-34c417fdc194201b732782237c535e1378b6c74efe0065d86a47cbeabe32b8e9.scope: Deactivated successfully.
Mar  1 04:41:25 np0005634532 podman[78116]: 2026-03-01 09:41:25.078125289 +0000 UTC m=+0.057963357 container create ecf59478fb1cc73e48aa1d7339e73e0dbd75b9f8b867f7b470db48a1b9fe78d2 (image=quay.io/ceph/ceph:v19, name=reverent_varahamihira, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Mar  1 04:41:25 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Mar  1 04:41:25 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:41:25 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054710 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Mar  1 04:41:25 np0005634532 systemd[1]: Started libpod-conmon-ecf59478fb1cc73e48aa1d7339e73e0dbd75b9f8b867f7b470db48a1b9fe78d2.scope.
Mar  1 04:41:25 np0005634532 systemd[1]: Started libcrun container.
Mar  1 04:41:25 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/812943c2eecb7ad26ab6719767ecd14ae1699cd1be34dfad3984f5782af7f3a3/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Mar  1 04:41:25 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/812943c2eecb7ad26ab6719767ecd14ae1699cd1be34dfad3984f5782af7f3a3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Mar  1 04:41:25 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/812943c2eecb7ad26ab6719767ecd14ae1699cd1be34dfad3984f5782af7f3a3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Mar  1 04:41:25 np0005634532 podman[78116]: 2026-03-01 09:41:25.059359025 +0000 UTC m=+0.039197113 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Mar  1 04:41:25 np0005634532 podman[78116]: 2026-03-01 09:41:25.164242893 +0000 UTC m=+0.144080961 container init ecf59478fb1cc73e48aa1d7339e73e0dbd75b9f8b867f7b470db48a1b9fe78d2 (image=quay.io/ceph/ceph:v19, name=reverent_varahamihira, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Mar  1 04:41:25 np0005634532 podman[78116]: 2026-03-01 09:41:25.171513993 +0000 UTC m=+0.151352061 container start ecf59478fb1cc73e48aa1d7339e73e0dbd75b9f8b867f7b470db48a1b9fe78d2 (image=quay.io/ceph/ceph:v19, name=reverent_varahamihira, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Mar  1 04:41:25 np0005634532 podman[78116]: 2026-03-01 09:41:25.175128123 +0000 UTC m=+0.154966191 container attach ecf59478fb1cc73e48aa1d7339e73e0dbd75b9f8b867f7b470db48a1b9fe78d2 (image=quay.io/ceph/ceph:v19, name=reverent_varahamihira, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Mar  1 04:41:25 np0005634532 systemd[1]: proc-sys-fs-binfmt_misc.automount: Got automount request for /proc/sys/fs/binfmt_misc, triggered by 78231 (sysctl)
Mar  1 04:41:25 np0005634532 systemd[1]: Mounting Arbitrary Executable File Formats File System...
Mar  1 04:41:25 np0005634532 systemd[1]: Mounted Arbitrary Executable File Formats File System.
Mar  1 04:41:25 np0005634532 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14154 -' entity='client.admin' cmd=[{"prefix": "orch client-keyring set", "entity": "client.admin", "placement": "label:_admin", "target": ["mon-mgr", ""]}]: dispatch
Mar  1 04:41:25 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/client_keyrings}] v 0)
Mar  1 04:41:25 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:41:25 np0005634532 systemd[1]: libpod-ecf59478fb1cc73e48aa1d7339e73e0dbd75b9f8b867f7b470db48a1b9fe78d2.scope: Deactivated successfully.
Mar  1 04:41:25 np0005634532 podman[78251]: 2026-03-01 09:41:25.607259749 +0000 UTC m=+0.028656861 container died ecf59478fb1cc73e48aa1d7339e73e0dbd75b9f8b867f7b470db48a1b9fe78d2 (image=quay.io/ceph/ceph:v19, name=reverent_varahamihira, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Mar  1 04:41:25 np0005634532 systemd[1]: var-lib-containers-storage-overlay-812943c2eecb7ad26ab6719767ecd14ae1699cd1be34dfad3984f5782af7f3a3-merged.mount: Deactivated successfully.
Mar  1 04:41:25 np0005634532 podman[78251]: 2026-03-01 09:41:25.634283869 +0000 UTC m=+0.055680981 container remove ecf59478fb1cc73e48aa1d7339e73e0dbd75b9f8b867f7b470db48a1b9fe78d2 (image=quay.io/ceph/ceph:v19, name=reverent_varahamihira, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Mar  1 04:41:25 np0005634532 systemd[1]: libpod-conmon-ecf59478fb1cc73e48aa1d7339e73e0dbd75b9f8b867f7b470db48a1b9fe78d2.scope: Deactivated successfully.
Mar  1 04:41:25 np0005634532 podman[78283]: 2026-03-01 09:41:25.693121297 +0000 UTC m=+0.040476534 container create 49704f7a8fce6eb2bd125a5d702e1a8c64f153d3bb5c339e3c56c6a89f039315 (image=quay.io/ceph/ceph:v19, name=silly_antonelli, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Mar  1 04:41:25 np0005634532 systemd[1]: Started libpod-conmon-49704f7a8fce6eb2bd125a5d702e1a8c64f153d3bb5c339e3c56c6a89f039315.scope.
Mar  1 04:41:25 np0005634532 systemd[1]: Started libcrun container.
Mar  1 04:41:25 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae7ff94313c34de972fc051163b3fcf1f558ee5895f30802e8953f7a55044ee5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Mar  1 04:41:25 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae7ff94313c34de972fc051163b3fcf1f558ee5895f30802e8953f7a55044ee5/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Mar  1 04:41:25 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae7ff94313c34de972fc051163b3fcf1f558ee5895f30802e8953f7a55044ee5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Mar  1 04:41:25 np0005634532 podman[78283]: 2026-03-01 09:41:25.677625253 +0000 UTC m=+0.024980570 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Mar  1 04:41:25 np0005634532 podman[78283]: 2026-03-01 09:41:25.778928862 +0000 UTC m=+0.126284129 container init 49704f7a8fce6eb2bd125a5d702e1a8c64f153d3bb5c339e3c56c6a89f039315 (image=quay.io/ceph/ceph:v19, name=silly_antonelli, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, ceph=True)
Mar  1 04:41:25 np0005634532 podman[78283]: 2026-03-01 09:41:25.78609408 +0000 UTC m=+0.133449317 container start 49704f7a8fce6eb2bd125a5d702e1a8c64f153d3bb5c339e3c56c6a89f039315 (image=quay.io/ceph/ceph:v19, name=silly_antonelli, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Mar  1 04:41:25 np0005634532 podman[78283]: 2026-03-01 09:41:25.790283134 +0000 UTC m=+0.137638381 container attach 49704f7a8fce6eb2bd125a5d702e1a8c64f153d3bb5c339e3c56c6a89f039315 (image=quay.io/ceph/ceph:v19, name=silly_antonelli, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Mar  1 04:41:25 np0005634532 ceph-mon[75825]: Saving service crash spec with placement *
Mar  1 04:41:25 np0005634532 ceph-mon[75825]: from='client.? 192.168.122.100:0/3126774340' entity='client.admin' 
Mar  1 04:41:25 np0005634532 ceph-mon[75825]: from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:41:25 np0005634532 ceph-mon[75825]: from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:41:26 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Mar  1 04:41:26 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:41:26 np0005634532 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14156 -' entity='client.admin' cmd=[{"prefix": "orch host label add", "hostname": "compute-0", "label": "_admin", "target": ["mon-mgr", ""]}]: dispatch
Mar  1 04:41:26 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Mar  1 04:41:26 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:41:26 np0005634532 ceph-mgr[76134]: [cephadm INFO root] Added label _admin to host compute-0
Mar  1 04:41:26 np0005634532 ceph-mgr[76134]: log_channel(cephadm) log [INF] : Added label _admin to host compute-0
Mar  1 04:41:26 np0005634532 silly_antonelli[78335]: Added label _admin to host compute-0
Mar  1 04:41:26 np0005634532 systemd[1]: libpod-49704f7a8fce6eb2bd125a5d702e1a8c64f153d3bb5c339e3c56c6a89f039315.scope: Deactivated successfully.
Mar  1 04:41:26 np0005634532 podman[78283]: 2026-03-01 09:41:26.191392912 +0000 UTC m=+0.538748189 container died 49704f7a8fce6eb2bd125a5d702e1a8c64f153d3bb5c339e3c56c6a89f039315 (image=quay.io/ceph/ceph:v19, name=silly_antonelli, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Mar  1 04:41:26 np0005634532 systemd[1]: var-lib-containers-storage-overlay-ae7ff94313c34de972fc051163b3fcf1f558ee5895f30802e8953f7a55044ee5-merged.mount: Deactivated successfully.
Mar  1 04:41:26 np0005634532 podman[78283]: 2026-03-01 09:41:26.233149836 +0000 UTC m=+0.580505103 container remove 49704f7a8fce6eb2bd125a5d702e1a8c64f153d3bb5c339e3c56c6a89f039315 (image=quay.io/ceph/ceph:v19, name=silly_antonelli, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Mar  1 04:41:26 np0005634532 systemd[1]: libpod-conmon-49704f7a8fce6eb2bd125a5d702e1a8c64f153d3bb5c339e3c56c6a89f039315.scope: Deactivated successfully.
Mar  1 04:41:26 np0005634532 podman[78442]: 2026-03-01 09:41:26.332844436 +0000 UTC m=+0.072228020 container create 264e1b738d1f4e870dc29148dabf9b61ec2041b2fc22677c5209205e2447acbd (image=quay.io/ceph/ceph:v19, name=strange_heisenberg, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Mar  1 04:41:26 np0005634532 systemd[1]: Started libpod-conmon-264e1b738d1f4e870dc29148dabf9b61ec2041b2fc22677c5209205e2447acbd.scope.
Mar  1 04:41:26 np0005634532 podman[78442]: 2026-03-01 09:41:26.305332515 +0000 UTC m=+0.044716119 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Mar  1 04:41:26 np0005634532 systemd[1]: Started libcrun container.
Mar  1 04:41:26 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98f26f4606eb262b56b5e0a6cb3c87fbd7cc765157bdc24c2c516ba62e00417f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Mar  1 04:41:26 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98f26f4606eb262b56b5e0a6cb3c87fbd7cc765157bdc24c2c516ba62e00417f/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Mar  1 04:41:26 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98f26f4606eb262b56b5e0a6cb3c87fbd7cc765157bdc24c2c516ba62e00417f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Mar  1 04:41:26 np0005634532 podman[78442]: 2026-03-01 09:41:26.41976579 +0000 UTC m=+0.159149354 container init 264e1b738d1f4e870dc29148dabf9b61ec2041b2fc22677c5209205e2447acbd (image=quay.io/ceph/ceph:v19, name=strange_heisenberg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1)
Mar  1 04:41:26 np0005634532 podman[78442]: 2026-03-01 09:41:26.426730172 +0000 UTC m=+0.166113746 container start 264e1b738d1f4e870dc29148dabf9b61ec2041b2fc22677c5209205e2447acbd (image=quay.io/ceph/ceph:v19, name=strange_heisenberg, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Mar  1 04:41:26 np0005634532 podman[78442]: 2026-03-01 09:41:26.430391393 +0000 UTC m=+0.169774967 container attach 264e1b738d1f4e870dc29148dabf9b61ec2041b2fc22677c5209205e2447acbd (image=quay.io/ceph/ceph:v19, name=strange_heisenberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Mar  1 04:41:26 np0005634532 podman[78500]: 2026-03-01 09:41:26.554913938 +0000 UTC m=+0.046677987 container create e76633a4ab60acaa91e6056c671c44e2ee8fc53b7539d0e8c80c31c8fb5f79cd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_buck, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Mar  1 04:41:26 np0005634532 systemd[1]: Started libpod-conmon-e76633a4ab60acaa91e6056c671c44e2ee8fc53b7539d0e8c80c31c8fb5f79cd.scope.
Mar  1 04:41:26 np0005634532 systemd[1]: Started libcrun container.
Mar  1 04:41:26 np0005634532 podman[78500]: 2026-03-01 09:41:26.62398929 +0000 UTC m=+0.115753379 container init e76633a4ab60acaa91e6056c671c44e2ee8fc53b7539d0e8c80c31c8fb5f79cd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_buck, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Mar  1 04:41:26 np0005634532 podman[78500]: 2026-03-01 09:41:26.628621124 +0000 UTC m=+0.120385183 container start e76633a4ab60acaa91e6056c671c44e2ee8fc53b7539d0e8c80c31c8fb5f79cd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_buck, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Mar  1 04:41:26 np0005634532 gallant_buck[78533]: 167 167
Mar  1 04:41:26 np0005634532 systemd[1]: libpod-e76633a4ab60acaa91e6056c671c44e2ee8fc53b7539d0e8c80c31c8fb5f79cd.scope: Deactivated successfully.
Mar  1 04:41:26 np0005634532 podman[78500]: 2026-03-01 09:41:26.634798107 +0000 UTC m=+0.126562236 container attach e76633a4ab60acaa91e6056c671c44e2ee8fc53b7539d0e8c80c31c8fb5f79cd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_buck, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Mar  1 04:41:26 np0005634532 podman[78500]: 2026-03-01 09:41:26.636590552 +0000 UTC m=+0.128354621 container died e76633a4ab60acaa91e6056c671c44e2ee8fc53b7539d0e8c80c31c8fb5f79cd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_buck, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Mar  1 04:41:26 np0005634532 podman[78500]: 2026-03-01 09:41:26.539854505 +0000 UTC m=+0.031618574 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Mar  1 04:41:26 np0005634532 ceph-mgr[76134]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Mar  1 04:41:26 np0005634532 systemd[1]: var-lib-containers-storage-overlay-a4ec1dd3b8e1d45231d86fba0c2bf31d7eace1ccc497a267c0f2d94128000c5b-merged.mount: Deactivated successfully.
Mar  1 04:41:26 np0005634532 podman[78500]: 2026-03-01 09:41:26.675975538 +0000 UTC m=+0.167739597 container remove e76633a4ab60acaa91e6056c671c44e2ee8fc53b7539d0e8c80c31c8fb5f79cd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_buck, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Mar  1 04:41:26 np0005634532 systemd[1]: libpod-conmon-e76633a4ab60acaa91e6056c671c44e2ee8fc53b7539d0e8c80c31c8fb5f79cd.scope: Deactivated successfully.
Mar  1 04:41:26 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/dashboard/cluster/status}] v 0)
Mar  1 04:41:26 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3612194953' entity='client.admin' 
Mar  1 04:41:26 np0005634532 strange_heisenberg[78471]: set mgr/dashboard/cluster/status
Mar  1 04:41:26 np0005634532 systemd[1]: libpod-264e1b738d1f4e870dc29148dabf9b61ec2041b2fc22677c5209205e2447acbd.scope: Deactivated successfully.
Mar  1 04:41:26 np0005634532 podman[78442]: 2026-03-01 09:41:26.871975984 +0000 UTC m=+0.611359578 container died 264e1b738d1f4e870dc29148dabf9b61ec2041b2fc22677c5209205e2447acbd (image=quay.io/ceph/ceph:v19, name=strange_heisenberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Mar  1 04:41:26 np0005634532 systemd[1]: var-lib-containers-storage-overlay-98f26f4606eb262b56b5e0a6cb3c87fbd7cc765157bdc24c2c516ba62e00417f-merged.mount: Deactivated successfully.
Mar  1 04:41:26 np0005634532 podman[78442]: 2026-03-01 09:41:26.910256522 +0000 UTC m=+0.649640076 container remove 264e1b738d1f4e870dc29148dabf9b61ec2041b2fc22677c5209205e2447acbd (image=quay.io/ceph/ceph:v19, name=strange_heisenberg, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Mar  1 04:41:26 np0005634532 systemd[1]: libpod-conmon-264e1b738d1f4e870dc29148dabf9b61ec2041b2fc22677c5209205e2447acbd.scope: Deactivated successfully.
Mar  1 04:41:27 np0005634532 ceph-mon[75825]: from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:41:27 np0005634532 ceph-mon[75825]: from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:41:27 np0005634532 ceph-mon[75825]: from='client.? 192.168.122.100:0/3612194953' entity='client.admin' 
Mar  1 04:41:27 np0005634532 podman[78574]: 2026-03-01 09:41:27.053100341 +0000 UTC m=+0.040859693 container create c604246f292de1dcaf215ff69e23481fccdbd58684a977f6426f8df9a1e40906 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_goldstine, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Mar  1 04:41:27 np0005634532 systemd[1]: Started libpod-conmon-c604246f292de1dcaf215ff69e23481fccdbd58684a977f6426f8df9a1e40906.scope.
Mar  1 04:41:27 np0005634532 systemd[1]: Started libcrun container.
Mar  1 04:41:27 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68f1d9265d8d887f7d2bafec40b656f2f67389adab60534e3c21c59892f1054c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Mar  1 04:41:27 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68f1d9265d8d887f7d2bafec40b656f2f67389adab60534e3c21c59892f1054c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Mar  1 04:41:27 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68f1d9265d8d887f7d2bafec40b656f2f67389adab60534e3c21c59892f1054c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Mar  1 04:41:27 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68f1d9265d8d887f7d2bafec40b656f2f67389adab60534e3c21c59892f1054c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Mar  1 04:41:27 np0005634532 podman[78574]: 2026-03-01 09:41:27.03527437 +0000 UTC m=+0.023033702 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Mar  1 04:41:27 np0005634532 podman[78574]: 2026-03-01 09:41:27.148423113 +0000 UTC m=+0.136182465 container init c604246f292de1dcaf215ff69e23481fccdbd58684a977f6426f8df9a1e40906 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_goldstine, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0)
Mar  1 04:41:27 np0005634532 podman[78574]: 2026-03-01 09:41:27.161298952 +0000 UTC m=+0.149058304 container start c604246f292de1dcaf215ff69e23481fccdbd58684a977f6426f8df9a1e40906 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_goldstine, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Mar  1 04:41:27 np0005634532 podman[78574]: 2026-03-01 09:41:27.165566528 +0000 UTC m=+0.153325880 container attach c604246f292de1dcaf215ff69e23481fccdbd58684a977f6426f8df9a1e40906 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_goldstine, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Mar  1 04:41:27 np0005634532 python3[78624]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 437b1e74-f995-5d64-af1d-257ce01d77ab -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/cephadm/use_repo_digest false#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Mar  1 04:41:27 np0005634532 podman[78638]: 2026-03-01 09:41:27.620181661 +0000 UTC m=+0.054607234 container create 3a09791ac007f161640f3d18c883130b5ed3b42aa4c7341ab1783679a0f8d4e7 (image=quay.io/ceph/ceph:v19, name=frosty_swartz, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Mar  1 04:41:27 np0005634532 systemd[1]: Started libpod-conmon-3a09791ac007f161640f3d18c883130b5ed3b42aa4c7341ab1783679a0f8d4e7.scope.
Mar  1 04:41:27 np0005634532 systemd[1]: Started libcrun container.
Mar  1 04:41:27 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/65526b8252f6f8f6ea43de995779460e8df32cf3d5c8ee01dc37d9e8f311f424/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Mar  1 04:41:27 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/65526b8252f6f8f6ea43de995779460e8df32cf3d5c8ee01dc37d9e8f311f424/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Mar  1 04:41:27 np0005634532 podman[78638]: 2026-03-01 09:41:27.601285603 +0000 UTC m=+0.035711256 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Mar  1 04:41:27 np0005634532 podman[78638]: 2026-03-01 09:41:27.703959926 +0000 UTC m=+0.138385579 container init 3a09791ac007f161640f3d18c883130b5ed3b42aa4c7341ab1783679a0f8d4e7 (image=quay.io/ceph/ceph:v19, name=frosty_swartz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Mar  1 04:41:27 np0005634532 podman[78638]: 2026-03-01 09:41:27.710949759 +0000 UTC m=+0.145375342 container start 3a09791ac007f161640f3d18c883130b5ed3b42aa4c7341ab1783679a0f8d4e7 (image=quay.io/ceph/ceph:v19, name=frosty_swartz, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Mar  1 04:41:27 np0005634532 podman[78638]: 2026-03-01 09:41:27.715137773 +0000 UTC m=+0.149563376 container attach 3a09791ac007f161640f3d18c883130b5ed3b42aa4c7341ab1783679a0f8d4e7 (image=quay.io/ceph/ceph:v19, name=frosty_swartz, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Mar  1 04:41:27 np0005634532 condescending_goldstine[78591]: [
Mar  1 04:41:27 np0005634532 condescending_goldstine[78591]:    {
Mar  1 04:41:27 np0005634532 condescending_goldstine[78591]:        "available": false,
Mar  1 04:41:27 np0005634532 condescending_goldstine[78591]:        "being_replaced": false,
Mar  1 04:41:27 np0005634532 condescending_goldstine[78591]:        "ceph_device_lvm": false,
Mar  1 04:41:27 np0005634532 condescending_goldstine[78591]:        "device_id": "QEMU_DVD-ROM_QM00001",
Mar  1 04:41:27 np0005634532 condescending_goldstine[78591]:        "lsm_data": {},
Mar  1 04:41:27 np0005634532 condescending_goldstine[78591]:        "lvs": [],
Mar  1 04:41:27 np0005634532 condescending_goldstine[78591]:        "path": "/dev/sr0",
Mar  1 04:41:27 np0005634532 condescending_goldstine[78591]:        "rejected_reasons": [
Mar  1 04:41:27 np0005634532 condescending_goldstine[78591]:            "Has a FileSystem",
Mar  1 04:41:27 np0005634532 condescending_goldstine[78591]:            "Insufficient space (<5GB)"
Mar  1 04:41:27 np0005634532 condescending_goldstine[78591]:        ],
Mar  1 04:41:27 np0005634532 condescending_goldstine[78591]:        "sys_api": {
Mar  1 04:41:27 np0005634532 condescending_goldstine[78591]:            "actuators": null,
Mar  1 04:41:27 np0005634532 condescending_goldstine[78591]:            "device_nodes": [
Mar  1 04:41:27 np0005634532 condescending_goldstine[78591]:                "sr0"
Mar  1 04:41:27 np0005634532 condescending_goldstine[78591]:            ],
Mar  1 04:41:27 np0005634532 condescending_goldstine[78591]:            "devname": "sr0",
Mar  1 04:41:27 np0005634532 condescending_goldstine[78591]:            "human_readable_size": "482.00 KB",
Mar  1 04:41:27 np0005634532 condescending_goldstine[78591]:            "id_bus": "ata",
Mar  1 04:41:27 np0005634532 condescending_goldstine[78591]:            "model": "QEMU DVD-ROM",
Mar  1 04:41:27 np0005634532 condescending_goldstine[78591]:            "nr_requests": "2",
Mar  1 04:41:27 np0005634532 condescending_goldstine[78591]:            "parent": "/dev/sr0",
Mar  1 04:41:27 np0005634532 condescending_goldstine[78591]:            "partitions": {},
Mar  1 04:41:27 np0005634532 condescending_goldstine[78591]:            "path": "/dev/sr0",
Mar  1 04:41:27 np0005634532 condescending_goldstine[78591]:            "removable": "1",
Mar  1 04:41:27 np0005634532 condescending_goldstine[78591]:            "rev": "2.5+",
Mar  1 04:41:27 np0005634532 condescending_goldstine[78591]:            "ro": "0",
Mar  1 04:41:27 np0005634532 condescending_goldstine[78591]:            "rotational": "1",
Mar  1 04:41:27 np0005634532 condescending_goldstine[78591]:            "sas_address": "",
Mar  1 04:41:27 np0005634532 condescending_goldstine[78591]:            "sas_device_handle": "",
Mar  1 04:41:27 np0005634532 condescending_goldstine[78591]:            "scheduler_mode": "mq-deadline",
Mar  1 04:41:27 np0005634532 condescending_goldstine[78591]:            "sectors": 0,
Mar  1 04:41:27 np0005634532 condescending_goldstine[78591]:            "sectorsize": "2048",
Mar  1 04:41:27 np0005634532 condescending_goldstine[78591]:            "size": 493568.0,
Mar  1 04:41:27 np0005634532 condescending_goldstine[78591]:            "support_discard": "2048",
Mar  1 04:41:27 np0005634532 condescending_goldstine[78591]:            "type": "disk",
Mar  1 04:41:27 np0005634532 condescending_goldstine[78591]:            "vendor": "QEMU"
Mar  1 04:41:27 np0005634532 condescending_goldstine[78591]:        }
Mar  1 04:41:27 np0005634532 condescending_goldstine[78591]:    }
Mar  1 04:41:27 np0005634532 condescending_goldstine[78591]: ]
Mar  1 04:41:27 np0005634532 systemd[1]: libpod-c604246f292de1dcaf215ff69e23481fccdbd58684a977f6426f8df9a1e40906.scope: Deactivated successfully.
Mar  1 04:41:27 np0005634532 conmon[78591]: conmon c604246f292de1dcaf21 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c604246f292de1dcaf215ff69e23481fccdbd58684a977f6426f8df9a1e40906.scope/container/memory.events
Mar  1 04:41:27 np0005634532 podman[78574]: 2026-03-01 09:41:27.960121463 +0000 UTC m=+0.947880805 container died c604246f292de1dcaf215ff69e23481fccdbd58684a977f6426f8df9a1e40906 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_goldstine, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Mar  1 04:41:27 np0005634532 systemd[1]: var-lib-containers-storage-overlay-68f1d9265d8d887f7d2bafec40b656f2f67389adab60534e3c21c59892f1054c-merged.mount: Deactivated successfully.
Mar  1 04:41:28 np0005634532 podman[78574]: 2026-03-01 09:41:28.008524422 +0000 UTC m=+0.996283774 container remove c604246f292de1dcaf215ff69e23481fccdbd58684a977f6426f8df9a1e40906 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_goldstine, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Mar  1 04:41:28 np0005634532 systemd[1]: libpod-conmon-c604246f292de1dcaf215ff69e23481fccdbd58684a977f6426f8df9a1e40906.scope: Deactivated successfully.
Mar  1 04:41:28 np0005634532 ceph-mon[75825]: Added label _admin to host compute-0
Mar  1 04:41:28 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Mar  1 04:41:28 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:41:28 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Mar  1 04:41:28 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:41:28 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Mar  1 04:41:28 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:41:28 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Mar  1 04:41:28 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:41:28 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Mar  1 04:41:28 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Mar  1 04:41:28 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Mar  1 04:41:28 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Mar  1 04:41:28 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Mar  1 04:41:28 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Mar  1 04:41:28 np0005634532 ceph-mgr[76134]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.conf
Mar  1 04:41:28 np0005634532 ceph-mgr[76134]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.conf
Mar  1 04:41:28 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/use_repo_digest}] v 0)
Mar  1 04:41:28 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/914218683' entity='client.admin' 
Mar  1 04:41:28 np0005634532 systemd[1]: libpod-3a09791ac007f161640f3d18c883130b5ed3b42aa4c7341ab1783679a0f8d4e7.scope: Deactivated successfully.
Mar  1 04:41:28 np0005634532 podman[78638]: 2026-03-01 09:41:28.177801981 +0000 UTC m=+0.612227554 container died 3a09791ac007f161640f3d18c883130b5ed3b42aa4c7341ab1783679a0f8d4e7 (image=quay.io/ceph/ceph:v19, name=frosty_swartz, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2)
Mar  1 04:41:28 np0005634532 systemd[1]: var-lib-containers-storage-overlay-65526b8252f6f8f6ea43de995779460e8df32cf3d5c8ee01dc37d9e8f311f424-merged.mount: Deactivated successfully.
Mar  1 04:41:28 np0005634532 podman[78638]: 2026-03-01 09:41:28.213383029 +0000 UTC m=+0.647808602 container remove 3a09791ac007f161640f3d18c883130b5ed3b42aa4c7341ab1783679a0f8d4e7 (image=quay.io/ceph/ceph:v19, name=frosty_swartz, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Mar  1 04:41:28 np0005634532 systemd[1]: libpod-conmon-3a09791ac007f161640f3d18c883130b5ed3b42aa4c7341ab1783679a0f8d4e7.scope: Deactivated successfully.
Mar  1 04:41:28 np0005634532 ceph-mgr[76134]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Mar  1 04:41:28 np0005634532 ceph-mgr[76134]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/437b1e74-f995-5d64-af1d-257ce01d77ab/config/ceph.conf
Mar  1 04:41:28 np0005634532 ceph-mgr[76134]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/437b1e74-f995-5d64-af1d-257ce01d77ab/config/ceph.conf
Mar  1 04:41:29 np0005634532 ceph-mon[75825]: from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:41:29 np0005634532 ceph-mon[75825]: from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:41:29 np0005634532 ceph-mon[75825]: from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:41:29 np0005634532 ceph-mon[75825]: from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:41:29 np0005634532 ceph-mon[75825]: from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Mar  1 04:41:29 np0005634532 ceph-mon[75825]: from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Mar  1 04:41:29 np0005634532 ceph-mon[75825]: Updating compute-0:/etc/ceph/ceph.conf
Mar  1 04:41:29 np0005634532 ceph-mon[75825]: from='client.? 192.168.122.100:0/914218683' entity='client.admin' 
Mar  1 04:41:29 np0005634532 ansible-async_wrapper.py[80350]: Invoked with j703835770029 30 /home/zuul/.ansible/tmp/ansible-tmp-1772358088.6434085-37838-150524086635770/AnsiballZ_command.py _
Mar  1 04:41:29 np0005634532 ceph-mgr[76134]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Mar  1 04:41:29 np0005634532 ceph-mgr[76134]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Mar  1 04:41:29 np0005634532 ansible-async_wrapper.py[80431]: Starting module and watcher
Mar  1 04:41:29 np0005634532 ansible-async_wrapper.py[80431]: Start watching 80432 (30)
Mar  1 04:41:29 np0005634532 ansible-async_wrapper.py[80432]: Start module (80432)
Mar  1 04:41:29 np0005634532 ansible-async_wrapper.py[80350]: Return async_wrapper task started.
Mar  1 04:41:29 np0005634532 python3[80435]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 437b1e74-f995-5d64-af1d-257ce01d77ab -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Mar  1 04:41:29 np0005634532 podman[80506]: 2026-03-01 09:41:29.449074735 +0000 UTC m=+0.045622300 container create 3201228b443c80b7f612c2a85edd5c8248d7bcabbb58c047d36cb46954d9d8eb (image=quay.io/ceph/ceph:v19, name=crazy_kowalevski, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Mar  1 04:41:29 np0005634532 systemd[1]: Started libpod-conmon-3201228b443c80b7f612c2a85edd5c8248d7bcabbb58c047d36cb46954d9d8eb.scope.
Mar  1 04:41:29 np0005634532 systemd[1]: Started libcrun container.
Mar  1 04:41:29 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae589dafa6ba63c049b9970f7dcc02ce7742f9f4f9bff1759806864a018bb5ff/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Mar  1 04:41:29 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae589dafa6ba63c049b9970f7dcc02ce7742f9f4f9bff1759806864a018bb5ff/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Mar  1 04:41:29 np0005634532 podman[80506]: 2026-03-01 09:41:29.431976122 +0000 UTC m=+0.028523707 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Mar  1 04:41:29 np0005634532 podman[80506]: 2026-03-01 09:41:29.538993884 +0000 UTC m=+0.135541479 container init 3201228b443c80b7f612c2a85edd5c8248d7bcabbb58c047d36cb46954d9d8eb (image=quay.io/ceph/ceph:v19, name=crazy_kowalevski, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Mar  1 04:41:29 np0005634532 podman[80506]: 2026-03-01 09:41:29.544827838 +0000 UTC m=+0.141375413 container start 3201228b443c80b7f612c2a85edd5c8248d7bcabbb58c047d36cb46954d9d8eb (image=quay.io/ceph/ceph:v19, name=crazy_kowalevski, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Mar  1 04:41:29 np0005634532 podman[80506]: 2026-03-01 09:41:29.550496848 +0000 UTC m=+0.147044413 container attach 3201228b443c80b7f612c2a85edd5c8248d7bcabbb58c047d36cb46954d9d8eb (image=quay.io/ceph/ceph:v19, name=crazy_kowalevski, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325)
Mar  1 04:41:29 np0005634532 ceph-mgr[76134]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/437b1e74-f995-5d64-af1d-257ce01d77ab/config/ceph.client.admin.keyring
Mar  1 04:41:29 np0005634532 ceph-mgr[76134]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/437b1e74-f995-5d64-af1d-257ce01d77ab/config/ceph.client.admin.keyring
Mar  1 04:41:29 np0005634532 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14162 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Mar  1 04:41:29 np0005634532 crazy_kowalevski[80570]: 
Mar  1 04:41:29 np0005634532 crazy_kowalevski[80570]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Mar  1 04:41:29 np0005634532 systemd[1]: libpod-3201228b443c80b7f612c2a85edd5c8248d7bcabbb58c047d36cb46954d9d8eb.scope: Deactivated successfully.
Mar  1 04:41:29 np0005634532 podman[80506]: 2026-03-01 09:41:29.909968257 +0000 UTC m=+0.506515812 container died 3201228b443c80b7f612c2a85edd5c8248d7bcabbb58c047d36cb46954d9d8eb (image=quay.io/ceph/ceph:v19, name=crazy_kowalevski, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS)
Mar  1 04:41:29 np0005634532 systemd[1]: var-lib-containers-storage-overlay-ae589dafa6ba63c049b9970f7dcc02ce7742f9f4f9bff1759806864a018bb5ff-merged.mount: Deactivated successfully.
Mar  1 04:41:29 np0005634532 podman[80506]: 2026-03-01 09:41:29.946310043 +0000 UTC m=+0.542857598 container remove 3201228b443c80b7f612c2a85edd5c8248d7bcabbb58c047d36cb46954d9d8eb (image=quay.io/ceph/ceph:v19, name=crazy_kowalevski, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Mar  1 04:41:29 np0005634532 systemd[1]: libpod-conmon-3201228b443c80b7f612c2a85edd5c8248d7bcabbb58c047d36cb46954d9d8eb.scope: Deactivated successfully.
Mar  1 04:41:29 np0005634532 ansible-async_wrapper.py[80432]: Module complete (80432)
Mar  1 04:41:30 np0005634532 ceph-mon[75825]: Updating compute-0:/var/lib/ceph/437b1e74-f995-5d64-af1d-257ce01d77ab/config/ceph.conf
Mar  1 04:41:30 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Mar  1 04:41:30 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Mar  1 04:41:30 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:41:30 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Mar  1 04:41:30 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:41:30 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Mar  1 04:41:30 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:41:30 np0005634532 ceph-mgr[76134]: [progress INFO root] update: starting ev e77affb7-70ec-41d6-8587-30d1ae270c97 (Updating crash deployment (+1 -> 1))
Mar  1 04:41:30 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0)
Mar  1 04:41:30 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Mar  1 04:41:30 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Mar  1 04:41:30 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Mar  1 04:41:30 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Mar  1 04:41:30 np0005634532 ceph-mgr[76134]: [cephadm INFO cephadm.serve] Deploying daemon crash.compute-0 on compute-0
Mar  1 04:41:30 np0005634532 ceph-mgr[76134]: log_channel(cephadm) log [INF] : Deploying daemon crash.compute-0 on compute-0
Mar  1 04:41:30 np0005634532 ceph-mgr[76134]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Mar  1 04:41:30 np0005634532 podman[81072]: 2026-03-01 09:41:30.691418614 +0000 UTC m=+0.045569799 container create 6a45abe47f58f81e7d2a6569b4e2005a9b0c3421d4d088770a670dc1ccb72368 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_antonelli, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Mar  1 04:41:30 np0005634532 python3[81047]: ansible-ansible.legacy.async_status Invoked with jid=j703835770029.80350 mode=status _async_dir=/root/.ansible_async
Mar  1 04:41:30 np0005634532 systemd[1]: Started libpod-conmon-6a45abe47f58f81e7d2a6569b4e2005a9b0c3421d4d088770a670dc1ccb72368.scope.
Mar  1 04:41:30 np0005634532 systemd[1]: Started libcrun container.
Mar  1 04:41:30 np0005634532 podman[81072]: 2026-03-01 09:41:30.668389874 +0000 UTC m=+0.022541049 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Mar  1 04:41:30 np0005634532 podman[81072]: 2026-03-01 09:41:30.775693813 +0000 UTC m=+0.129845058 container init 6a45abe47f58f81e7d2a6569b4e2005a9b0c3421d4d088770a670dc1ccb72368 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_antonelli, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Mar  1 04:41:30 np0005634532 podman[81072]: 2026-03-01 09:41:30.781566078 +0000 UTC m=+0.135717223 container start 6a45abe47f58f81e7d2a6569b4e2005a9b0c3421d4d088770a670dc1ccb72368 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_antonelli, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Mar  1 04:41:30 np0005634532 podman[81072]: 2026-03-01 09:41:30.784938495 +0000 UTC m=+0.139089650 container attach 6a45abe47f58f81e7d2a6569b4e2005a9b0c3421d4d088770a670dc1ccb72368 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_antonelli, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Mar  1 04:41:30 np0005634532 focused_antonelli[81096]: 167 167
Mar  1 04:41:30 np0005634532 systemd[1]: libpod-6a45abe47f58f81e7d2a6569b4e2005a9b0c3421d4d088770a670dc1ccb72368.scope: Deactivated successfully.
Mar  1 04:41:30 np0005634532 podman[81072]: 2026-03-01 09:41:30.787592086 +0000 UTC m=+0.141743241 container died 6a45abe47f58f81e7d2a6569b4e2005a9b0c3421d4d088770a670dc1ccb72368 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_antonelli, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325)
Mar  1 04:41:30 np0005634532 systemd[1]: var-lib-containers-storage-overlay-79f7aa6fe2f7bf38d7605a00c665c9860cc3b5d2386b1c18ca119ccfe3f206d9-merged.mount: Deactivated successfully.
Mar  1 04:41:30 np0005634532 podman[81072]: 2026-03-01 09:41:30.825246173 +0000 UTC m=+0.179397328 container remove 6a45abe47f58f81e7d2a6569b4e2005a9b0c3421d4d088770a670dc1ccb72368 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_antonelli, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Mar  1 04:41:30 np0005634532 systemd[1]: libpod-conmon-6a45abe47f58f81e7d2a6569b4e2005a9b0c3421d4d088770a670dc1ccb72368.scope: Deactivated successfully.
Mar  1 04:41:30 np0005634532 systemd[1]: Reloading.
Mar  1 04:41:30 np0005634532 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Mar  1 04:41:30 np0005634532 python3[81152]: ansible-ansible.legacy.async_status Invoked with jid=j703835770029.80350 mode=cleanup _async_dir=/root/.ansible_async
Mar  1 04:41:31 np0005634532 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Mar  1 04:41:31 np0005634532 ceph-mon[75825]: Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Mar  1 04:41:31 np0005634532 ceph-mon[75825]: Updating compute-0:/var/lib/ceph/437b1e74-f995-5d64-af1d-257ce01d77ab/config/ceph.client.admin.keyring
Mar  1 04:41:31 np0005634532 ceph-mon[75825]: from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:41:31 np0005634532 ceph-mon[75825]: from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:41:31 np0005634532 ceph-mon[75825]: from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:41:31 np0005634532 ceph-mon[75825]: from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Mar  1 04:41:31 np0005634532 ceph-mon[75825]: from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Mar  1 04:41:31 np0005634532 systemd[1]: Reloading.
Mar  1 04:41:31 np0005634532 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Mar  1 04:41:31 np0005634532 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Mar  1 04:41:31 np0005634532 systemd[1]: Starting Ceph crash.compute-0 for 437b1e74-f995-5d64-af1d-257ce01d77ab...
Mar  1 04:41:31 np0005634532 python3[81271]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/specs/ceph_spec.yaml follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Mar  1 04:41:31 np0005634532 podman[81324]: 2026-03-01 09:41:31.631448278 +0000 UTC m=+0.047994456 container create 98fa546a64cdb4e5fdf4a6bff6b08f5201a7c8fb63cc9bce7dddf7bf76bea34b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-crash-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Mar  1 04:41:31 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d68731b0f29b08325a847e7c6c0e515106d35e62a0564d44d3ae32f415216926/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Mar  1 04:41:31 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d68731b0f29b08325a847e7c6c0e515106d35e62a0564d44d3ae32f415216926/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Mar  1 04:41:31 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d68731b0f29b08325a847e7c6c0e515106d35e62a0564d44d3ae32f415216926/merged/etc/ceph/ceph.client.crash.compute-0.keyring supports timestamps until 2038 (0x7fffffff)
Mar  1 04:41:31 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d68731b0f29b08325a847e7c6c0e515106d35e62a0564d44d3ae32f415216926/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Mar  1 04:41:31 np0005634532 podman[81324]: 2026-03-01 09:41:31.689136045 +0000 UTC m=+0.105682243 container init 98fa546a64cdb4e5fdf4a6bff6b08f5201a7c8fb63cc9bce7dddf7bf76bea34b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-crash-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Mar  1 04:41:31 np0005634532 podman[81324]: 2026-03-01 09:41:31.69417019 +0000 UTC m=+0.110716398 container start 98fa546a64cdb4e5fdf4a6bff6b08f5201a7c8fb63cc9bce7dddf7bf76bea34b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-crash-compute-0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Mar  1 04:41:31 np0005634532 bash[81324]: 98fa546a64cdb4e5fdf4a6bff6b08f5201a7c8fb63cc9bce7dddf7bf76bea34b
Mar  1 04:41:31 np0005634532 podman[81324]: 2026-03-01 09:41:31.607376334 +0000 UTC m=+0.023922592 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Mar  1 04:41:31 np0005634532 systemd[1]: Started Ceph crash.compute-0 for 437b1e74-f995-5d64-af1d-257ce01d77ab.
Mar  1 04:41:31 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-crash-compute-0[81339]: INFO:ceph-crash:pinging cluster to exercise our key
Mar  1 04:41:31 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Mar  1 04:41:31 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:41:31 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Mar  1 04:41:31 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:41:31 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Mar  1 04:41:31 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:41:31 np0005634532 ceph-mgr[76134]: [progress INFO root] complete: finished ev e77affb7-70ec-41d6-8587-30d1ae270c97 (Updating crash deployment (+1 -> 1))
Mar  1 04:41:31 np0005634532 ceph-mgr[76134]: [progress INFO root] Completed event e77affb7-70ec-41d6-8587-30d1ae270c97 (Updating crash deployment (+1 -> 1)) in 2 seconds
Mar  1 04:41:31 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Mar  1 04:41:31 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:41:31 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Mar  1 04:41:31 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:41:31 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Mar  1 04:41:31 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:41:31 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-crash-compute-0[81339]: 2026-03-01T09:41:31.842+0000 7f7b07d0f640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Mar  1 04:41:31 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-crash-compute-0[81339]: 2026-03-01T09:41:31.842+0000 7f7b07d0f640 -1 AuthRegistry(0x7f7b000698f0) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Mar  1 04:41:31 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-crash-compute-0[81339]: 2026-03-01T09:41:31.843+0000 7f7b07d0f640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Mar  1 04:41:31 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-crash-compute-0[81339]: 2026-03-01T09:41:31.843+0000 7f7b07d0f640 -1 AuthRegistry(0x7f7b07d0dff0) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Mar  1 04:41:31 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-crash-compute-0[81339]: 2026-03-01T09:41:31.843+0000 7f7b05a84640 -1 monclient(hunting): handle_auth_bad_method server allowed_methods [2] but i only support [1]
Mar  1 04:41:31 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-crash-compute-0[81339]: 2026-03-01T09:41:31.843+0000 7f7b07d0f640 -1 monclient: authenticate NOTE: no keyring found; disabled cephx authentication
Mar  1 04:41:31 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-crash-compute-0[81339]: [errno 13] RADOS permission denied (error connecting to the cluster)
Mar  1 04:41:31 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-crash-compute-0[81339]: INFO:ceph-crash:monitoring path /var/lib/ceph/crash, delay 600s
Mar  1 04:41:31 np0005634532 python3[81405]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 437b1e74-f995-5d64-af1d-257ce01d77ab -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Mar  1 04:41:32 np0005634532 podman[81458]: 2026-03-01 09:41:32.034937339 +0000 UTC m=+0.044833432 container create 66e75245f10c42bf37315080e88703f16cf6a17bc214ec619fb68a40ddb22264 (image=quay.io/ceph/ceph:v19, name=vigorous_davinci, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Mar  1 04:41:32 np0005634532 systemd[1]: Started libpod-conmon-66e75245f10c42bf37315080e88703f16cf6a17bc214ec619fb68a40ddb22264.scope.
Mar  1 04:41:32 np0005634532 systemd[1]: Started libcrun container.
Mar  1 04:41:32 np0005634532 podman[81458]: 2026-03-01 09:41:32.015350149 +0000 UTC m=+0.025246262 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Mar  1 04:41:32 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37b1e629b88328146da698e4fb8de9b1e83f8e53a2a5e60747f0804f1dd00bc6/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Mar  1 04:41:32 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37b1e629b88328146da698e4fb8de9b1e83f8e53a2a5e60747f0804f1dd00bc6/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Mar  1 04:41:32 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/37b1e629b88328146da698e4fb8de9b1e83f8e53a2a5e60747f0804f1dd00bc6/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Mar  1 04:41:32 np0005634532 podman[81458]: 2026-03-01 09:41:32.127885578 +0000 UTC m=+0.137781721 container init 66e75245f10c42bf37315080e88703f16cf6a17bc214ec619fb68a40ddb22264 (image=quay.io/ceph/ceph:v19, name=vigorous_davinci, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Mar  1 04:41:32 np0005634532 podman[81458]: 2026-03-01 09:41:32.136199159 +0000 UTC m=+0.146095252 container start 66e75245f10c42bf37315080e88703f16cf6a17bc214ec619fb68a40ddb22264 (image=quay.io/ceph/ceph:v19, name=vigorous_davinci, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Mar  1 04:41:32 np0005634532 podman[81458]: 2026-03-01 09:41:32.139699969 +0000 UTC m=+0.149596092 container attach 66e75245f10c42bf37315080e88703f16cf6a17bc214ec619fb68a40ddb22264 (image=quay.io/ceph/ceph:v19, name=vigorous_davinci, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Mar  1 04:41:32 np0005634532 ceph-mon[75825]: Deploying daemon crash.compute-0 on compute-0
Mar  1 04:41:32 np0005634532 ceph-mon[75825]: from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:41:32 np0005634532 ceph-mon[75825]: from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:41:32 np0005634532 ceph-mon[75825]: from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:41:32 np0005634532 ceph-mon[75825]: from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:41:32 np0005634532 ceph-mon[75825]: from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:41:32 np0005634532 ceph-mon[75825]: from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:41:32 np0005634532 podman[81567]: 2026-03-01 09:41:32.400408027 +0000 UTC m=+0.049001299 container exec 6664049ace048b4adddae1365cde7c16773d892ae197c1350f58a5e5b1183392 (image=quay.io/ceph/ceph:v19, name=ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Mar  1 04:41:32 np0005634532 podman[81567]: 2026-03-01 09:41:32.510308165 +0000 UTC m=+0.158901477 container exec_died 6664049ace048b4adddae1365cde7c16773d892ae197c1350f58a5e5b1183392 (image=quay.io/ceph/ceph:v19, name=ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mon-compute-0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Mar  1 04:41:32 np0005634532 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14164 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Mar  1 04:41:32 np0005634532 vigorous_davinci[81473]: 
Mar  1 04:41:32 np0005634532 vigorous_davinci[81473]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Mar  1 04:41:32 np0005634532 systemd[1]: libpod-66e75245f10c42bf37315080e88703f16cf6a17bc214ec619fb68a40ddb22264.scope: Deactivated successfully.
Mar  1 04:41:32 np0005634532 podman[81458]: 2026-03-01 09:41:32.53442486 +0000 UTC m=+0.544320953 container died 66e75245f10c42bf37315080e88703f16cf6a17bc214ec619fb68a40ddb22264 (image=quay.io/ceph/ceph:v19, name=vigorous_davinci, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Mar  1 04:41:32 np0005634532 systemd[1]: var-lib-containers-storage-overlay-37b1e629b88328146da698e4fb8de9b1e83f8e53a2a5e60747f0804f1dd00bc6-merged.mount: Deactivated successfully.
Mar  1 04:41:32 np0005634532 podman[81458]: 2026-03-01 09:41:32.568965614 +0000 UTC m=+0.578861707 container remove 66e75245f10c42bf37315080e88703f16cf6a17bc214ec619fb68a40ddb22264 (image=quay.io/ceph/ceph:v19, name=vigorous_davinci, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Mar  1 04:41:32 np0005634532 systemd[1]: libpod-conmon-66e75245f10c42bf37315080e88703f16cf6a17bc214ec619fb68a40ddb22264.scope: Deactivated successfully.
Mar  1 04:41:32 np0005634532 ceph-mgr[76134]: mgr.server send_report Giving up on OSDs that haven't reported yet, sending potentially incomplete PG state to mon
Mar  1 04:41:32 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v3: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Mar  1 04:41:32 np0005634532 ceph-mon[75825]: log_channel(cluster) log [WRN] : Health check failed: OSD count 0 < osd_pool_default_size 1 (TOO_FEW_OSDS)
Mar  1 04:41:32 np0005634532 ceph-mgr[76134]: [progress INFO root] Writing back 1 completed events
Mar  1 04:41:32 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Mar  1 04:41:32 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:41:32 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Mar  1 04:41:32 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:41:32 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Mar  1 04:41:32 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:41:32 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Mar  1 04:41:32 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Mar  1 04:41:32 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Mar  1 04:41:32 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Mar  1 04:41:32 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Mar  1 04:41:32 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:41:32 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/alertmanager/web_user}] v 0)
Mar  1 04:41:32 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:41:32 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/alertmanager/web_password}] v 0)
Mar  1 04:41:32 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:41:32 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/prometheus/web_user}] v 0)
Mar  1 04:41:32 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:41:32 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/prometheus/web_password}] v 0)
Mar  1 04:41:32 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:41:32 np0005634532 ceph-mgr[76134]: [cephadm INFO cephadm.serve] Reconfiguring mon.compute-0 (unknown last config time)...
Mar  1 04:41:32 np0005634532 ceph-mgr[76134]: log_channel(cephadm) log [INF] : Reconfiguring mon.compute-0 (unknown last config time)...
Mar  1 04:41:32 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0)
Mar  1 04:41:32 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Mar  1 04:41:32 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0)
Mar  1 04:41:32 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Mar  1 04:41:32 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Mar  1 04:41:32 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Mar  1 04:41:32 np0005634532 ceph-mgr[76134]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.compute-0 on compute-0
Mar  1 04:41:32 np0005634532 ceph-mgr[76134]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.compute-0 on compute-0
Mar  1 04:41:33 np0005634532 python3[81725]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 437b1e74-f995-5d64-af1d-257ce01d77ab -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set global log_to_file true _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Mar  1 04:41:33 np0005634532 podman[81752]: 2026-03-01 09:41:33.099591951 +0000 UTC m=+0.038329943 container create c4db47c0b0e7a0520194c5200460f1255930b5a4ea341bc801b2b6a756434727 (image=quay.io/ceph/ceph:v19, name=dreamy_hamilton, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Mar  1 04:41:33 np0005634532 systemd[1]: Started libpod-conmon-c4db47c0b0e7a0520194c5200460f1255930b5a4ea341bc801b2b6a756434727.scope.
Mar  1 04:41:33 np0005634532 ceph-mon[75825]: Health check failed: OSD count 0 < osd_pool_default_size 1 (TOO_FEW_OSDS)
Mar  1 04:41:33 np0005634532 ceph-mon[75825]: from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:41:33 np0005634532 ceph-mon[75825]: from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:41:33 np0005634532 ceph-mon[75825]: from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:41:33 np0005634532 ceph-mon[75825]: from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Mar  1 04:41:33 np0005634532 ceph-mon[75825]: from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:41:33 np0005634532 ceph-mon[75825]: from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:41:33 np0005634532 ceph-mon[75825]: from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:41:33 np0005634532 ceph-mon[75825]: from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:41:33 np0005634532 ceph-mon[75825]: from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:41:33 np0005634532 ceph-mon[75825]: from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Mar  1 04:41:33 np0005634532 systemd[1]: Started libcrun container.
Mar  1 04:41:33 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/33dbafed666672e73c3fea8f724e0b0eccd29d1088fbbaa85e749eb4d60f9188/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Mar  1 04:41:33 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/33dbafed666672e73c3fea8f724e0b0eccd29d1088fbbaa85e749eb4d60f9188/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Mar  1 04:41:33 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/33dbafed666672e73c3fea8f724e0b0eccd29d1088fbbaa85e749eb4d60f9188/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Mar  1 04:41:33 np0005634532 podman[81752]: 2026-03-01 09:41:33.176287075 +0000 UTC m=+0.115025097 container init c4db47c0b0e7a0520194c5200460f1255930b5a4ea341bc801b2b6a756434727 (image=quay.io/ceph/ceph:v19, name=dreamy_hamilton, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Mar  1 04:41:33 np0005634532 podman[81752]: 2026-03-01 09:41:33.081923624 +0000 UTC m=+0.020661626 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Mar  1 04:41:33 np0005634532 podman[81752]: 2026-03-01 09:41:33.18085747 +0000 UTC m=+0.119595462 container start c4db47c0b0e7a0520194c5200460f1255930b5a4ea341bc801b2b6a756434727 (image=quay.io/ceph/ceph:v19, name=dreamy_hamilton, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Mar  1 04:41:33 np0005634532 podman[81752]: 2026-03-01 09:41:33.185426435 +0000 UTC m=+0.124164517 container attach c4db47c0b0e7a0520194c5200460f1255930b5a4ea341bc801b2b6a756434727 (image=quay.io/ceph/ceph:v19, name=dreamy_hamilton, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Mar  1 04:41:33 np0005634532 podman[81787]: 2026-03-01 09:41:33.281866454 +0000 UTC m=+0.053965883 container create 3c1d9c5e6eff3f7a3c5be96685a94a71ee2847788ad5b1dbdb7459c63275647c (image=quay.io/ceph/ceph:v19, name=mystifying_moore, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Mar  1 04:41:33 np0005634532 systemd[1]: Started libpod-conmon-3c1d9c5e6eff3f7a3c5be96685a94a71ee2847788ad5b1dbdb7459c63275647c.scope.
Mar  1 04:41:33 np0005634532 podman[81787]: 2026-03-01 09:41:33.249152311 +0000 UTC m=+0.021251790 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Mar  1 04:41:33 np0005634532 systemd[1]: Started libcrun container.
Mar  1 04:41:33 np0005634532 podman[81787]: 2026-03-01 09:41:33.36950501 +0000 UTC m=+0.141604409 container init 3c1d9c5e6eff3f7a3c5be96685a94a71ee2847788ad5b1dbdb7459c63275647c (image=quay.io/ceph/ceph:v19, name=mystifying_moore, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Mar  1 04:41:33 np0005634532 podman[81787]: 2026-03-01 09:41:33.376361578 +0000 UTC m=+0.148461007 container start 3c1d9c5e6eff3f7a3c5be96685a94a71ee2847788ad5b1dbdb7459c63275647c (image=quay.io/ceph/ceph:v19, name=mystifying_moore, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Mar  1 04:41:33 np0005634532 systemd[1]: libpod-3c1d9c5e6eff3f7a3c5be96685a94a71ee2847788ad5b1dbdb7459c63275647c.scope: Deactivated successfully.
Mar  1 04:41:33 np0005634532 mystifying_moore[81822]: 167 167
Mar  1 04:41:33 np0005634532 conmon[81822]: conmon 3c1d9c5e6eff3f7a3c5b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-3c1d9c5e6eff3f7a3c5be96685a94a71ee2847788ad5b1dbdb7459c63275647c.scope/container/memory.events
Mar  1 04:41:33 np0005634532 podman[81787]: 2026-03-01 09:41:33.381757852 +0000 UTC m=+0.153857251 container attach 3c1d9c5e6eff3f7a3c5be96685a94a71ee2847788ad5b1dbdb7459c63275647c (image=quay.io/ceph/ceph:v19, name=mystifying_moore, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Mar  1 04:41:33 np0005634532 podman[81787]: 2026-03-01 09:41:33.382362526 +0000 UTC m=+0.154461945 container died 3c1d9c5e6eff3f7a3c5be96685a94a71ee2847788ad5b1dbdb7459c63275647c (image=quay.io/ceph/ceph:v19, name=mystifying_moore, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0)
Mar  1 04:41:33 np0005634532 systemd[1]: var-lib-containers-storage-overlay-15f27dd06d2b46a0db2cc9e696fe5f3370e30679a68cfc10ad93553879cef59c-merged.mount: Deactivated successfully.
Mar  1 04:41:33 np0005634532 podman[81787]: 2026-03-01 09:41:33.426668215 +0000 UTC m=+0.198767644 container remove 3c1d9c5e6eff3f7a3c5be96685a94a71ee2847788ad5b1dbdb7459c63275647c (image=quay.io/ceph/ceph:v19, name=mystifying_moore, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Mar  1 04:41:33 np0005634532 systemd[1]: libpod-conmon-3c1d9c5e6eff3f7a3c5be96685a94a71ee2847788ad5b1dbdb7459c63275647c.scope: Deactivated successfully.
Mar  1 04:41:33 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Mar  1 04:41:33 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:41:33 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Mar  1 04:41:33 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:41:33 np0005634532 ceph-mgr[76134]: [cephadm INFO cephadm.serve] Reconfiguring mgr.compute-0.ebwufc (unknown last config time)...
Mar  1 04:41:33 np0005634532 ceph-mgr[76134]: log_channel(cephadm) log [INF] : Reconfiguring mgr.compute-0.ebwufc (unknown last config time)...
Mar  1 04:41:33 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-0.ebwufc", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Mar  1 04:41:33 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.ebwufc", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Mar  1 04:41:33 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0)
Mar  1 04:41:33 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "mgr services"}]: dispatch
Mar  1 04:41:33 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Mar  1 04:41:33 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Mar  1 04:41:33 np0005634532 ceph-mgr[76134]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.compute-0.ebwufc on compute-0
Mar  1 04:41:33 np0005634532 ceph-mgr[76134]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.compute-0.ebwufc on compute-0
Mar  1 04:41:33 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=log_to_file}] v 0)
Mar  1 04:41:33 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/561192617' entity='client.admin' 
Mar  1 04:41:33 np0005634532 systemd[1]: libpod-c4db47c0b0e7a0520194c5200460f1255930b5a4ea341bc801b2b6a756434727.scope: Deactivated successfully.
Mar  1 04:41:33 np0005634532 podman[81867]: 2026-03-01 09:41:33.604078446 +0000 UTC m=+0.022382466 container died c4db47c0b0e7a0520194c5200460f1255930b5a4ea341bc801b2b6a756434727 (image=quay.io/ceph/ceph:v19, name=dreamy_hamilton, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Mar  1 04:41:33 np0005634532 systemd[1]: var-lib-containers-storage-overlay-33dbafed666672e73c3fea8f724e0b0eccd29d1088fbbaa85e749eb4d60f9188-merged.mount: Deactivated successfully.
Mar  1 04:41:33 np0005634532 podman[81867]: 2026-03-01 09:41:33.638679022 +0000 UTC m=+0.056983012 container remove c4db47c0b0e7a0520194c5200460f1255930b5a4ea341bc801b2b6a756434727 (image=quay.io/ceph/ceph:v19, name=dreamy_hamilton, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Mar  1 04:41:33 np0005634532 systemd[1]: libpod-conmon-c4db47c0b0e7a0520194c5200460f1255930b5a4ea341bc801b2b6a756434727.scope: Deactivated successfully.
Mar  1 04:41:33 np0005634532 podman[81949]: 2026-03-01 09:41:33.91722255 +0000 UTC m=+0.034286470 container create 545f025637a813dd81f0216ceb489db42b0ba42ae9a4d61c15fd1777d2b37e62 (image=quay.io/ceph/ceph:v19, name=festive_beaver, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Mar  1 04:41:33 np0005634532 python3[81931]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 437b1e74-f995-5d64-af1d-257ce01d77ab -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set global mon_cluster_log_to_file true _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Mar  1 04:41:33 np0005634532 systemd[1]: Started libpod-conmon-545f025637a813dd81f0216ceb489db42b0ba42ae9a4d61c15fd1777d2b37e62.scope.
Mar  1 04:41:33 np0005634532 systemd[1]: Started libcrun container.
Mar  1 04:41:33 np0005634532 podman[81949]: 2026-03-01 09:41:33.997935406 +0000 UTC m=+0.114999366 container init 545f025637a813dd81f0216ceb489db42b0ba42ae9a4d61c15fd1777d2b37e62 (image=quay.io/ceph/ceph:v19, name=festive_beaver, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Mar  1 04:41:33 np0005634532 podman[81949]: 2026-03-01 09:41:33.901819535 +0000 UTC m=+0.018883485 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Mar  1 04:41:34 np0005634532 podman[81949]: 2026-03-01 09:41:34.006964954 +0000 UTC m=+0.124028874 container start 545f025637a813dd81f0216ceb489db42b0ba42ae9a4d61c15fd1777d2b37e62 (image=quay.io/ceph/ceph:v19, name=festive_beaver, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Mar  1 04:41:34 np0005634532 festive_beaver[81966]: 167 167
Mar  1 04:41:34 np0005634532 systemd[1]: libpod-545f025637a813dd81f0216ceb489db42b0ba42ae9a4d61c15fd1777d2b37e62.scope: Deactivated successfully.
Mar  1 04:41:34 np0005634532 podman[81949]: 2026-03-01 09:41:34.010840623 +0000 UTC m=+0.127904633 container attach 545f025637a813dd81f0216ceb489db42b0ba42ae9a4d61c15fd1777d2b37e62 (image=quay.io/ceph/ceph:v19, name=festive_beaver, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_REF=squid)
Mar  1 04:41:34 np0005634532 podman[81949]: 2026-03-01 09:41:34.011247643 +0000 UTC m=+0.128311613 container died 545f025637a813dd81f0216ceb489db42b0ba42ae9a4d61c15fd1777d2b37e62 (image=quay.io/ceph/ceph:v19, name=festive_beaver, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Mar  1 04:41:34 np0005634532 podman[81965]: 2026-03-01 09:41:34.038707524 +0000 UTC m=+0.067969374 container create 1f590d2f727eb01c06c3c1d1feb581a9bff7efe592f0414e0441aedd5cd9f71f (image=quay.io/ceph/ceph:v19, name=hopeful_cray, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Mar  1 04:41:34 np0005634532 podman[81949]: 2026-03-01 09:41:34.051652192 +0000 UTC m=+0.168716122 container remove 545f025637a813dd81f0216ceb489db42b0ba42ae9a4d61c15fd1777d2b37e62 (image=quay.io/ceph/ceph:v19, name=festive_beaver, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Mar  1 04:41:34 np0005634532 systemd[1]: libpod-conmon-545f025637a813dd81f0216ceb489db42b0ba42ae9a4d61c15fd1777d2b37e62.scope: Deactivated successfully.
Mar  1 04:41:34 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Mar  1 04:41:34 np0005634532 systemd[1]: Started libpod-conmon-1f590d2f727eb01c06c3c1d1feb581a9bff7efe592f0414e0441aedd5cd9f71f.scope.
Mar  1 04:41:34 np0005634532 podman[81965]: 2026-03-01 09:41:34.013483294 +0000 UTC m=+0.042745184 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Mar  1 04:41:34 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:41:34 np0005634532 systemd[1]: var-lib-containers-storage-overlay-c3c2a79ae502853570f3d95ecd60c98502857af698fb552296988ba92213280d-merged.mount: Deactivated successfully.
Mar  1 04:41:34 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Mar  1 04:41:34 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:41:34 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Mar  1 04:41:34 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Mar  1 04:41:34 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Mar  1 04:41:34 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Mar  1 04:41:34 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Mar  1 04:41:34 np0005634532 systemd[1]: Started libcrun container.
Mar  1 04:41:34 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:41:34 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/444303a2bc6ee604d446cb00b449925bd7c61332508e2f225dd6dafee91aaf2a/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Mar  1 04:41:34 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/444303a2bc6ee604d446cb00b449925bd7c61332508e2f225dd6dafee91aaf2a/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Mar  1 04:41:34 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/444303a2bc6ee604d446cb00b449925bd7c61332508e2f225dd6dafee91aaf2a/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Mar  1 04:41:34 np0005634532 podman[81965]: 2026-03-01 09:41:34.141449938 +0000 UTC m=+0.170711798 container init 1f590d2f727eb01c06c3c1d1feb581a9bff7efe592f0414e0441aedd5cd9f71f (image=quay.io/ceph/ceph:v19, name=hopeful_cray, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Mar  1 04:41:34 np0005634532 podman[81965]: 2026-03-01 09:41:34.145812658 +0000 UTC m=+0.175074488 container start 1f590d2f727eb01c06c3c1d1feb581a9bff7efe592f0414e0441aedd5cd9f71f (image=quay.io/ceph/ceph:v19, name=hopeful_cray, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Mar  1 04:41:34 np0005634532 podman[81965]: 2026-03-01 09:41:34.15025869 +0000 UTC m=+0.179520510 container attach 1f590d2f727eb01c06c3c1d1feb581a9bff7efe592f0414e0441aedd5cd9f71f (image=quay.io/ceph/ceph:v19, name=hopeful_cray, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Mar  1 04:41:34 np0005634532 ceph-mon[75825]: Reconfiguring mon.compute-0 (unknown last config time)...
Mar  1 04:41:34 np0005634532 ceph-mon[75825]: Reconfiguring daemon mon.compute-0 on compute-0
Mar  1 04:41:34 np0005634532 ceph-mon[75825]: from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:41:34 np0005634532 ceph-mon[75825]: from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:41:34 np0005634532 ceph-mon[75825]: from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.ebwufc", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Mar  1 04:41:34 np0005634532 ceph-mon[75825]: from='client.? 192.168.122.100:0/561192617' entity='client.admin' 
Mar  1 04:41:34 np0005634532 ceph-mon[75825]: from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:41:34 np0005634532 ceph-mon[75825]: from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:41:34 np0005634532 ceph-mon[75825]: from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Mar  1 04:41:34 np0005634532 ceph-mon[75825]: from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:41:34 np0005634532 ansible-async_wrapper.py[80431]: Done in kid B.
Mar  1 04:41:34 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mon_cluster_log_to_file}] v 0)
Mar  1 04:41:34 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2562191376' entity='client.admin' 
Mar  1 04:41:34 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Mar  1 04:41:34 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Mar  1 04:41:34 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Mar  1 04:41:34 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Mar  1 04:41:34 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Mar  1 04:41:34 np0005634532 systemd[1]: libpod-1f590d2f727eb01c06c3c1d1feb581a9bff7efe592f0414e0441aedd5cd9f71f.scope: Deactivated successfully.
Mar  1 04:41:34 np0005634532 podman[81965]: 2026-03-01 09:41:34.522447262 +0000 UTC m=+0.551709062 container died 1f590d2f727eb01c06c3c1d1feb581a9bff7efe592f0414e0441aedd5cd9f71f (image=quay.io/ceph/ceph:v19, name=hopeful_cray, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Mar  1 04:41:34 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:41:34 np0005634532 systemd[1]: var-lib-containers-storage-overlay-444303a2bc6ee604d446cb00b449925bd7c61332508e2f225dd6dafee91aaf2a-merged.mount: Deactivated successfully.
Mar  1 04:41:34 np0005634532 podman[81965]: 2026-03-01 09:41:34.565771619 +0000 UTC m=+0.595033469 container remove 1f590d2f727eb01c06c3c1d1feb581a9bff7efe592f0414e0441aedd5cd9f71f (image=quay.io/ceph/ceph:v19, name=hopeful_cray, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Mar  1 04:41:34 np0005634532 systemd[1]: libpod-conmon-1f590d2f727eb01c06c3c1d1feb581a9bff7efe592f0414e0441aedd5cd9f71f.scope: Deactivated successfully.
Mar  1 04:41:34 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v4: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Mar  1 04:41:34 np0005634532 python3[82106]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 437b1e74-f995-5d64-af1d-257ce01d77ab -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd set-require-min-compat-client mimic#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Mar  1 04:41:34 np0005634532 podman[82107]: 2026-03-01 09:41:34.991936541 +0000 UTC m=+0.041028864 container create 2c6f8754fdca249a544f98aebf5c581a1f69a94b75c6991b34102a11afca167a (image=quay.io/ceph/ceph:v19, name=festive_payne, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Mar  1 04:41:35 np0005634532 systemd[1]: Started libpod-conmon-2c6f8754fdca249a544f98aebf5c581a1f69a94b75c6991b34102a11afca167a.scope.
Mar  1 04:41:35 np0005634532 systemd[1]: Started libcrun container.
Mar  1 04:41:35 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f953edb2a193faa9df1ed9b0f414d0541e00380937ab378264472d923a1dcc9b/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Mar  1 04:41:35 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f953edb2a193faa9df1ed9b0f414d0541e00380937ab378264472d923a1dcc9b/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Mar  1 04:41:35 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f953edb2a193faa9df1ed9b0f414d0541e00380937ab378264472d923a1dcc9b/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Mar  1 04:41:35 np0005634532 podman[82107]: 2026-03-01 09:41:34.968693577 +0000 UTC m=+0.017785900 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Mar  1 04:41:35 np0005634532 podman[82107]: 2026-03-01 09:41:35.082156957 +0000 UTC m=+0.131249330 container init 2c6f8754fdca249a544f98aebf5c581a1f69a94b75c6991b34102a11afca167a (image=quay.io/ceph/ceph:v19, name=festive_payne, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Mar  1 04:41:35 np0005634532 podman[82107]: 2026-03-01 09:41:35.087175252 +0000 UTC m=+0.136267575 container start 2c6f8754fdca249a544f98aebf5c581a1f69a94b75c6991b34102a11afca167a (image=quay.io/ceph/ceph:v19, name=festive_payne, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default)
Mar  1 04:41:35 np0005634532 podman[82107]: 2026-03-01 09:41:35.091264496 +0000 UTC m=+0.140356879 container attach 2c6f8754fdca249a544f98aebf5c581a1f69a94b75c6991b34102a11afca167a (image=quay.io/ceph/ceph:v19, name=festive_payne, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Mar  1 04:41:35 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Mar  1 04:41:35 np0005634532 ceph-mon[75825]: Reconfiguring mgr.compute-0.ebwufc (unknown last config time)...
Mar  1 04:41:35 np0005634532 ceph-mon[75825]: Reconfiguring daemon mgr.compute-0.ebwufc on compute-0
Mar  1 04:41:35 np0005634532 ceph-mon[75825]: from='client.? 192.168.122.100:0/2562191376' entity='client.admin' 
Mar  1 04:41:35 np0005634532 ceph-mon[75825]: from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Mar  1 04:41:35 np0005634532 ceph-mon[75825]: from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:41:35 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd set-require-min-compat-client", "version": "mimic"} v 0)
Mar  1 04:41:35 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/11515952' entity='client.admin' cmd=[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]: dispatch
Mar  1 04:41:36 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e2 do_prune osdmap full prune enabled
Mar  1 04:41:36 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e2 encode_pending skipping prime_pg_temp; mapping job did not start
Mar  1 04:41:36 np0005634532 ceph-mon[75825]: from='client.? 192.168.122.100:0/11515952' entity='client.admin' cmd=[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]: dispatch
Mar  1 04:41:36 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/11515952' entity='client.admin' cmd='[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]': finished
Mar  1 04:41:36 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e3 e3: 0 total, 0 up, 0 in
Mar  1 04:41:36 np0005634532 festive_payne[82121]: set require_min_compat_client to mimic
Mar  1 04:41:36 np0005634532 ceph-mon[75825]: log_channel(cluster) log [DBG] : osdmap e3: 0 total, 0 up, 0 in
Mar  1 04:41:36 np0005634532 systemd[1]: libpod-2c6f8754fdca249a544f98aebf5c581a1f69a94b75c6991b34102a11afca167a.scope: Deactivated successfully.
Mar  1 04:41:36 np0005634532 podman[82107]: 2026-03-01 09:41:36.19515173 +0000 UTC m=+1.244244013 container died 2c6f8754fdca249a544f98aebf5c581a1f69a94b75c6991b34102a11afca167a (image=quay.io/ceph/ceph:v19, name=festive_payne, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default)
Mar  1 04:41:36 np0005634532 systemd[1]: var-lib-containers-storage-overlay-f953edb2a193faa9df1ed9b0f414d0541e00380937ab378264472d923a1dcc9b-merged.mount: Deactivated successfully.
Mar  1 04:41:36 np0005634532 podman[82107]: 2026-03-01 09:41:36.229163383 +0000 UTC m=+1.278255666 container remove 2c6f8754fdca249a544f98aebf5c581a1f69a94b75c6991b34102a11afca167a (image=quay.io/ceph/ceph:v19, name=festive_payne, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Mar  1 04:41:36 np0005634532 systemd[1]: libpod-conmon-2c6f8754fdca249a544f98aebf5c581a1f69a94b75c6991b34102a11afca167a.scope: Deactivated successfully.
Mar  1 04:41:36 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v6: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Mar  1 04:41:36 np0005634532 python3[82184]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 437b1e74-f995-5d64-af1d-257ce01d77ab -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Mar  1 04:41:36 np0005634532 podman[82185]: 2026-03-01 09:41:36.870295629 +0000 UTC m=+0.051409874 container create 7fdcfa450ea4bb8d0931250e1c586702478119f734f2bfc4c9b0ae6fec52336f (image=quay.io/ceph/ceph:v19, name=distracted_davinci, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Mar  1 04:41:36 np0005634532 systemd[1]: Started libpod-conmon-7fdcfa450ea4bb8d0931250e1c586702478119f734f2bfc4c9b0ae6fec52336f.scope.
Mar  1 04:41:36 np0005634532 podman[82185]: 2026-03-01 09:41:36.844075575 +0000 UTC m=+0.025189890 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Mar  1 04:41:36 np0005634532 systemd[1]: Started libcrun container.
Mar  1 04:41:36 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/213fdc63937cbd26a7506567883f7d0bdcdec88162abf1bcc1359875586b2799/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Mar  1 04:41:36 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/213fdc63937cbd26a7506567883f7d0bdcdec88162abf1bcc1359875586b2799/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Mar  1 04:41:36 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/213fdc63937cbd26a7506567883f7d0bdcdec88162abf1bcc1359875586b2799/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Mar  1 04:41:36 np0005634532 podman[82185]: 2026-03-01 09:41:36.967485405 +0000 UTC m=+0.148599670 container init 7fdcfa450ea4bb8d0931250e1c586702478119f734f2bfc4c9b0ae6fec52336f (image=quay.io/ceph/ceph:v19, name=distracted_davinci, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Mar  1 04:41:36 np0005634532 podman[82185]: 2026-03-01 09:41:36.976114813 +0000 UTC m=+0.157229038 container start 7fdcfa450ea4bb8d0931250e1c586702478119f734f2bfc4c9b0ae6fec52336f (image=quay.io/ceph/ceph:v19, name=distracted_davinci, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Mar  1 04:41:36 np0005634532 podman[82185]: 2026-03-01 09:41:36.979547162 +0000 UTC m=+0.160661427 container attach 7fdcfa450ea4bb8d0931250e1c586702478119f734f2bfc4c9b0ae6fec52336f (image=quay.io/ceph/ceph:v19, name=distracted_davinci, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Mar  1 04:41:37 np0005634532 ceph-mon[75825]: from='client.? 192.168.122.100:0/11515952' entity='client.admin' cmd='[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]': finished
Mar  1 04:41:37 np0005634532 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14172 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Mar  1 04:41:37 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Mar  1 04:41:37 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:41:37 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Mar  1 04:41:37 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:41:37 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Mar  1 04:41:37 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:41:37 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Mar  1 04:41:37 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:41:37 np0005634532 ceph-mgr[76134]: [cephadm INFO root] Added host compute-0
Mar  1 04:41:37 np0005634532 ceph-mgr[76134]: log_channel(cephadm) log [INF] : Added host compute-0
Mar  1 04:41:37 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Mar  1 04:41:37 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Mar  1 04:41:37 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Mar  1 04:41:37 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Mar  1 04:41:37 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Mar  1 04:41:37 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:41:38 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v7: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Mar  1 04:41:38 np0005634532 ceph-mon[75825]: from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:41:38 np0005634532 ceph-mon[75825]: from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:41:38 np0005634532 ceph-mon[75825]: from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:41:38 np0005634532 ceph-mon[75825]: from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:41:38 np0005634532 ceph-mon[75825]: Added host compute-0
Mar  1 04:41:38 np0005634532 ceph-mon[75825]: from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Mar  1 04:41:38 np0005634532 ceph-mon[75825]: from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:41:39 np0005634532 ceph-mgr[76134]: [cephadm INFO cephadm.serve] Deploying cephadm binary to compute-1
Mar  1 04:41:39 np0005634532 ceph-mgr[76134]: log_channel(cephadm) log [INF] : Deploying cephadm binary to compute-1
Mar  1 04:41:40 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Mar  1 04:41:40 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v8: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Mar  1 04:41:40 np0005634532 ceph-mon[75825]: Deploying cephadm binary to compute-1
Mar  1 04:41:42 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v9: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Mar  1 04:41:42 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Mar  1 04:41:42 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Mar  1 04:41:42 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Mar  1 04:41:42 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Mar  1 04:41:42 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Mar  1 04:41:42 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Mar  1 04:41:43 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Mar  1 04:41:43 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:41:43 np0005634532 ceph-mgr[76134]: [cephadm INFO root] Added host compute-1
Mar  1 04:41:43 np0005634532 ceph-mgr[76134]: log_channel(cephadm) log [INF] : Added host compute-1
Mar  1 04:41:43 np0005634532 ceph-mon[75825]: from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:41:43 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Mar  1 04:41:43 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:41:44 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Mar  1 04:41:44 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:41:44 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v10: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Mar  1 04:41:44 np0005634532 ceph-mgr[76134]: [cephadm INFO cephadm.serve] Deploying cephadm binary to compute-2
Mar  1 04:41:44 np0005634532 ceph-mgr[76134]: log_channel(cephadm) log [INF] : Deploying cephadm binary to compute-2
Mar  1 04:41:44 np0005634532 ceph-mon[75825]: Added host compute-1
Mar  1 04:41:44 np0005634532 ceph-mon[75825]: from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:41:44 np0005634532 ceph-mon[75825]: from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:41:45 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Mar  1 04:41:45 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Mar  1 04:41:45 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:41:45 np0005634532 ceph-mon[75825]: Deploying cephadm binary to compute-2
Mar  1 04:41:45 np0005634532 ceph-mon[75825]: from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:41:46 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v11: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Mar  1 04:41:48 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v12: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Mar  1 04:41:48 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0)
Mar  1 04:41:48 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:41:48 np0005634532 ceph-mgr[76134]: [cephadm INFO root] Added host compute-2
Mar  1 04:41:48 np0005634532 ceph-mgr[76134]: log_channel(cephadm) log [INF] : Added host compute-2
Mar  1 04:41:48 np0005634532 ceph-mgr[76134]: [cephadm INFO root] Saving service mon spec with placement compute-0;compute-1;compute-2
Mar  1 04:41:48 np0005634532 ceph-mgr[76134]: log_channel(cephadm) log [INF] : Saving service mon spec with placement compute-0;compute-1;compute-2
Mar  1 04:41:48 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Mar  1 04:41:48 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:41:48 np0005634532 ceph-mgr[76134]: [cephadm INFO root] Saving service mgr spec with placement compute-0;compute-1;compute-2
Mar  1 04:41:48 np0005634532 ceph-mgr[76134]: log_channel(cephadm) log [INF] : Saving service mgr spec with placement compute-0;compute-1;compute-2
Mar  1 04:41:48 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Mar  1 04:41:48 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:41:48 np0005634532 ceph-mgr[76134]: [cephadm INFO root] Marking host: compute-0 for OSDSpec preview refresh.
Mar  1 04:41:48 np0005634532 ceph-mgr[76134]: log_channel(cephadm) log [INF] : Marking host: compute-0 for OSDSpec preview refresh.
Mar  1 04:41:48 np0005634532 ceph-mgr[76134]: [cephadm INFO root] Marking host: compute-1 for OSDSpec preview refresh.
Mar  1 04:41:48 np0005634532 ceph-mgr[76134]: log_channel(cephadm) log [INF] : Marking host: compute-1 for OSDSpec preview refresh.
Mar  1 04:41:48 np0005634532 ceph-mgr[76134]: [cephadm INFO root] Saving service osd.default_drive_group spec with placement compute-0;compute-1;compute-2
Mar  1 04:41:48 np0005634532 ceph-mgr[76134]: log_channel(cephadm) log [INF] : Saving service osd.default_drive_group spec with placement compute-0;compute-1;compute-2
Mar  1 04:41:48 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.osd.default_drive_group}] v 0)
Mar  1 04:41:48 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:41:48 np0005634532 distracted_davinci[82200]: Added host 'compute-0' with addr '192.168.122.100'
Mar  1 04:41:48 np0005634532 distracted_davinci[82200]: Added host 'compute-1' with addr '192.168.122.101'
Mar  1 04:41:48 np0005634532 distracted_davinci[82200]: Added host 'compute-2' with addr '192.168.122.102'
Mar  1 04:41:48 np0005634532 distracted_davinci[82200]: Scheduled mon update...
Mar  1 04:41:48 np0005634532 distracted_davinci[82200]: Scheduled mgr update...
Mar  1 04:41:48 np0005634532 distracted_davinci[82200]: Scheduled osd.default_drive_group update...
Mar  1 04:41:48 np0005634532 systemd[1]: libpod-7fdcfa450ea4bb8d0931250e1c586702478119f734f2bfc4c9b0ae6fec52336f.scope: Deactivated successfully.
Mar  1 04:41:48 np0005634532 podman[82185]: 2026-03-01 09:41:48.829809605 +0000 UTC m=+12.010923860 container died 7fdcfa450ea4bb8d0931250e1c586702478119f734f2bfc4c9b0ae6fec52336f (image=quay.io/ceph/ceph:v19, name=distracted_davinci, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Mar  1 04:41:48 np0005634532 systemd[1]: var-lib-containers-storage-overlay-213fdc63937cbd26a7506567883f7d0bdcdec88162abf1bcc1359875586b2799-merged.mount: Deactivated successfully.
Mar  1 04:41:48 np0005634532 podman[82185]: 2026-03-01 09:41:48.873139452 +0000 UTC m=+12.054253717 container remove 7fdcfa450ea4bb8d0931250e1c586702478119f734f2bfc4c9b0ae6fec52336f (image=quay.io/ceph/ceph:v19, name=distracted_davinci, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Mar  1 04:41:48 np0005634532 systemd[1]: libpod-conmon-7fdcfa450ea4bb8d0931250e1c586702478119f734f2bfc4c9b0ae6fec52336f.scope: Deactivated successfully.
Mar  1 04:41:49 np0005634532 python3[82356]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 437b1e74-f995-5d64-af1d-257ce01d77ab -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Mar  1 04:41:49 np0005634532 podman[82358]: 2026-03-01 09:41:49.355726323 +0000 UTC m=+0.046303306 container create ff4ae4bfe0c079c4d31d792fd7d65f4f21410773f18045f0be287bea563ea6d7 (image=quay.io/ceph/ceph:v19, name=loving_almeida, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Mar  1 04:41:49 np0005634532 systemd[1]: Started libpod-conmon-ff4ae4bfe0c079c4d31d792fd7d65f4f21410773f18045f0be287bea563ea6d7.scope.
Mar  1 04:41:49 np0005634532 systemd[1]: Started libcrun container.
Mar  1 04:41:49 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b7b3665682184aac6a36d294fef891915a164ce68baf79a71ae1ec123528a518/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Mar  1 04:41:49 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b7b3665682184aac6a36d294fef891915a164ce68baf79a71ae1ec123528a518/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Mar  1 04:41:49 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b7b3665682184aac6a36d294fef891915a164ce68baf79a71ae1ec123528a518/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Mar  1 04:41:49 np0005634532 podman[82358]: 2026-03-01 09:41:49.337687138 +0000 UTC m=+0.028264141 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Mar  1 04:41:49 np0005634532 podman[82358]: 2026-03-01 09:41:49.448432516 +0000 UTC m=+0.139009579 container init ff4ae4bfe0c079c4d31d792fd7d65f4f21410773f18045f0be287bea563ea6d7 (image=quay.io/ceph/ceph:v19, name=loving_almeida, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Mar  1 04:41:49 np0005634532 podman[82358]: 2026-03-01 09:41:49.4542586 +0000 UTC m=+0.144835583 container start ff4ae4bfe0c079c4d31d792fd7d65f4f21410773f18045f0be287bea563ea6d7 (image=quay.io/ceph/ceph:v19, name=loving_almeida, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True)
Mar  1 04:41:49 np0005634532 podman[82358]: 2026-03-01 09:41:49.465357795 +0000 UTC m=+0.155934818 container attach ff4ae4bfe0c079c4d31d792fd7d65f4f21410773f18045f0be287bea563ea6d7 (image=quay.io/ceph/ceph:v19, name=loving_almeida, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Mar  1 04:41:49 np0005634532 ceph-mon[75825]: from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:41:49 np0005634532 ceph-mon[75825]: Added host compute-2
Mar  1 04:41:49 np0005634532 ceph-mon[75825]: Saving service mon spec with placement compute-0;compute-1;compute-2
Mar  1 04:41:49 np0005634532 ceph-mon[75825]: from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:41:49 np0005634532 ceph-mon[75825]: Saving service mgr spec with placement compute-0;compute-1;compute-2
Mar  1 04:41:49 np0005634532 ceph-mon[75825]: from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:41:49 np0005634532 ceph-mon[75825]: Marking host: compute-0 for OSDSpec preview refresh.
Mar  1 04:41:49 np0005634532 ceph-mon[75825]: Marking host: compute-1 for OSDSpec preview refresh.
Mar  1 04:41:49 np0005634532 ceph-mon[75825]: Saving service osd.default_drive_group spec with placement compute-0;compute-1;compute-2
Mar  1 04:41:49 np0005634532 ceph-mon[75825]: from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:41:49 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Mar  1 04:41:49 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/492851404' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Mar  1 04:41:49 np0005634532 loving_almeida[82374]: 
Mar  1 04:41:49 np0005634532 loving_almeida[82374]: {"fsid":"437b1e74-f995-5d64-af1d-257ce01d77ab","health":{"status":"HEALTH_WARN","checks":{"TOO_FEW_OSDS":{"severity":"HEALTH_WARN","summary":{"message":"OSD count 0 < osd_pool_default_size 1","count":1},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":54,"monmap":{"epoch":1,"min_mon_release_name":"squid","num_mons":1},"osdmap":{"epoch":3,"num_osds":0,"num_up_osds":0,"osd_up_since":0,"num_in_osds":0,"osd_in_since":0,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[],"num_pgs":0,"num_pools":0,"num_objects":0,"data_bytes":0,"bytes_used":0,"bytes_avail":0,"bytes_total":0},"fsmap":{"epoch":1,"btime":"2026-03-01T09:40:52:961395+0000","by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":1,"modified":"2026-03-01T09:40:52.964363+0000","services":{}},"progress_events":{}}
Mar  1 04:41:49 np0005634532 systemd[1]: libpod-ff4ae4bfe0c079c4d31d792fd7d65f4f21410773f18045f0be287bea563ea6d7.scope: Deactivated successfully.
Mar  1 04:41:49 np0005634532 podman[82358]: 2026-03-01 09:41:49.873043504 +0000 UTC m=+0.563620567 container died ff4ae4bfe0c079c4d31d792fd7d65f4f21410773f18045f0be287bea563ea6d7 (image=quay.io/ceph/ceph:v19, name=loving_almeida, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Mar  1 04:41:49 np0005634532 systemd[1]: var-lib-containers-storage-overlay-b7b3665682184aac6a36d294fef891915a164ce68baf79a71ae1ec123528a518-merged.mount: Deactivated successfully.
Mar  1 04:41:49 np0005634532 podman[82358]: 2026-03-01 09:41:49.911529299 +0000 UTC m=+0.602106282 container remove ff4ae4bfe0c079c4d31d792fd7d65f4f21410773f18045f0be287bea563ea6d7 (image=quay.io/ceph/ceph:v19, name=loving_almeida, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Mar  1 04:41:49 np0005634532 systemd[1]: libpod-conmon-ff4ae4bfe0c079c4d31d792fd7d65f4f21410773f18045f0be287bea563ea6d7.scope: Deactivated successfully.
Mar  1 04:41:50 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Mar  1 04:41:50 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v13: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Mar  1 04:41:52 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v14: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Mar  1 04:41:54 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v15: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Mar  1 04:41:55 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Mar  1 04:41:56 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v16: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Mar  1 04:41:58 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v17: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Mar  1 04:42:00 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Mar  1 04:42:00 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v18: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Mar  1 04:42:01 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Mar  1 04:42:01 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:42:01 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Mar  1 04:42:01 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:42:01 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Mar  1 04:42:01 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:42:01 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Mar  1 04:42:01 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:42:01 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0)
Mar  1 04:42:01 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Mar  1 04:42:01 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Mar  1 04:42:01 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Mar  1 04:42:01 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Mar  1 04:42:01 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Mar  1 04:42:01 np0005634532 ceph-mgr[76134]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.conf
Mar  1 04:42:01 np0005634532 ceph-mgr[76134]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.conf
Mar  1 04:42:01 np0005634532 ceph-mon[75825]: from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:42:01 np0005634532 ceph-mon[75825]: from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:42:01 np0005634532 ceph-mon[75825]: from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:42:01 np0005634532 ceph-mon[75825]: from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:42:01 np0005634532 ceph-mon[75825]: from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Mar  1 04:42:01 np0005634532 ceph-mon[75825]: from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Mar  1 04:42:02 np0005634532 ceph-mgr[76134]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/437b1e74-f995-5d64-af1d-257ce01d77ab/config/ceph.conf
Mar  1 04:42:02 np0005634532 ceph-mgr[76134]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/437b1e74-f995-5d64-af1d-257ce01d77ab/config/ceph.conf
Mar  1 04:42:02 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v19: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Mar  1 04:42:02 np0005634532 ceph-mon[75825]: Updating compute-1:/etc/ceph/ceph.conf
Mar  1 04:42:02 np0005634532 ceph-mgr[76134]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Mar  1 04:42:02 np0005634532 ceph-mgr[76134]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Mar  1 04:42:03 np0005634532 ceph-mgr[76134]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/437b1e74-f995-5d64-af1d-257ce01d77ab/config/ceph.client.admin.keyring
Mar  1 04:42:03 np0005634532 ceph-mgr[76134]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/437b1e74-f995-5d64-af1d-257ce01d77ab/config/ceph.client.admin.keyring
Mar  1 04:42:03 np0005634532 ceph-mon[75825]: Updating compute-1:/var/lib/ceph/437b1e74-f995-5d64-af1d-257ce01d77ab/config/ceph.conf
Mar  1 04:42:03 np0005634532 ceph-mon[75825]: Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Mar  1 04:42:04 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Mar  1 04:42:04 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:42:04 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Mar  1 04:42:04 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:42:04 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Mar  1 04:42:04 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:42:04 np0005634532 ceph-mgr[76134]: [cephadm ERROR cephadm.serve] Failed to apply mon spec MONSpec.from_json(yaml.safe_load('''service_type: mon#012service_name: mon#012placement:#012  hosts:#012  - compute-0#012  - compute-1#012  - compute-2#012''')): Cannot place <MONSpec for service_name=mon> on compute-2: Unknown hosts
Mar  1 04:42:04 np0005634532 ceph-mgr[76134]: log_channel(cephadm) log [ERR] : Failed to apply mon spec MONSpec.from_json(yaml.safe_load('''service_type: mon#012service_name: mon#012placement:#012  hosts:#012  - compute-0#012  - compute-1#012  - compute-2#012''')): Cannot place <MONSpec for service_name=mon> on compute-2: Unknown hosts
Mar  1 04:42:04 np0005634532 ceph-mgr[76134]: [cephadm ERROR cephadm.serve] Failed to apply mgr spec ServiceSpec.from_json(yaml.safe_load('''service_type: mgr#012service_name: mgr#012placement:#012  hosts:#012  - compute-0#012  - compute-1#012  - compute-2#012''')): Cannot place <ServiceSpec for service_name=mgr> on compute-2: Unknown hosts
Mar  1 04:42:04 np0005634532 ceph-mgr[76134]: log_channel(cephadm) log [ERR] : Failed to apply mgr spec ServiceSpec.from_json(yaml.safe_load('''service_type: mgr#012service_name: mgr#012placement:#012  hosts:#012  - compute-0#012  - compute-1#012  - compute-2#012''')): Cannot place <ServiceSpec for service_name=mgr> on compute-2: Unknown hosts
Mar  1 04:42:04 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v20: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Mar  1 04:42:04 np0005634532 ceph-mgr[76134]: [progress INFO root] update: starting ev dbf09567-82c3-4214-a043-b3a918dd480d (Updating crash deployment (+1 -> 2))
Mar  1 04:42:04 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: 2026-03-01T09:42:04.165+0000 7f30c7086640 -1 log_channel(cephadm) log [ERR] : Failed to apply mon spec MONSpec.from_json(yaml.safe_load('''service_type: mon
Mar  1 04:42:04 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: service_name: mon
Mar  1 04:42:04 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: placement:
Mar  1 04:42:04 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]:  hosts:
Mar  1 04:42:04 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]:  - compute-0
Mar  1 04:42:04 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]:  - compute-1
Mar  1 04:42:04 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]:  - compute-2
Mar  1 04:42:04 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: ''')): Cannot place <MONSpec for service_name=mon> on compute-2: Unknown hosts
Mar  1 04:42:04 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0)
Mar  1 04:42:04 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: 2026-03-01T09:42:04.165+0000 7f30c7086640 -1 log_channel(cephadm) log [ERR] : Failed to apply mgr spec ServiceSpec.from_json(yaml.safe_load('''service_type: mgr
Mar  1 04:42:04 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: service_name: mgr
Mar  1 04:42:04 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: placement:
Mar  1 04:42:04 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]:  hosts:
Mar  1 04:42:04 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]:  - compute-0
Mar  1 04:42:04 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]:  - compute-1
Mar  1 04:42:04 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]:  - compute-2
Mar  1 04:42:04 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: ''')): Cannot place <ServiceSpec for service_name=mgr> on compute-2: Unknown hosts
Mar  1 04:42:04 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Mar  1 04:42:04 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Mar  1 04:42:04 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Mar  1 04:42:04 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Mar  1 04:42:04 np0005634532 ceph-mgr[76134]: [cephadm INFO cephadm.serve] Deploying daemon crash.compute-1 on compute-1
Mar  1 04:42:04 np0005634532 ceph-mgr[76134]: log_channel(cephadm) log [INF] : Deploying daemon crash.compute-1 on compute-1
Mar  1 04:42:05 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Mar  1 04:42:05 np0005634532 ceph-mon[75825]: Updating compute-1:/var/lib/ceph/437b1e74-f995-5d64-af1d-257ce01d77ab/config/ceph.client.admin.keyring
Mar  1 04:42:05 np0005634532 ceph-mon[75825]: from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:42:05 np0005634532 ceph-mon[75825]: from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:42:05 np0005634532 ceph-mon[75825]: from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:42:05 np0005634532 ceph-mon[75825]: Failed to apply mon spec MONSpec.from_json(yaml.safe_load('''service_type: mon#012service_name: mon#012placement:#012  hosts:#012  - compute-0#012  - compute-1#012  - compute-2#012''')): Cannot place <MONSpec for service_name=mon> on compute-2: Unknown hosts
Mar  1 04:42:05 np0005634532 ceph-mon[75825]: Failed to apply mgr spec ServiceSpec.from_json(yaml.safe_load('''service_type: mgr#012service_name: mgr#012placement:#012  hosts:#012  - compute-0#012  - compute-1#012  - compute-2#012''')): Cannot place <ServiceSpec for service_name=mgr> on compute-2: Unknown hosts
Mar  1 04:42:05 np0005634532 ceph-mon[75825]: from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Mar  1 04:42:05 np0005634532 ceph-mon[75825]: from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Mar  1 04:42:05 np0005634532 ceph-mon[75825]: log_channel(cluster) log [WRN] : Health check failed: Failed to apply 2 service(s): mon,mgr (CEPHADM_APPLY_SPEC_FAIL)
Mar  1 04:42:06 np0005634532 ceph-mon[75825]: Deploying daemon crash.compute-1 on compute-1
Mar  1 04:42:06 np0005634532 ceph-mon[75825]: Health check failed: Failed to apply 2 service(s): mon,mgr (CEPHADM_APPLY_SPEC_FAIL)
Mar  1 04:42:06 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v21: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Mar  1 04:42:06 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Mar  1 04:42:06 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:42:06 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Mar  1 04:42:06 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:42:06 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Mar  1 04:42:06 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:42:06 np0005634532 ceph-mgr[76134]: [progress INFO root] complete: finished ev dbf09567-82c3-4214-a043-b3a918dd480d (Updating crash deployment (+1 -> 2))
Mar  1 04:42:06 np0005634532 ceph-mgr[76134]: [progress INFO root] Completed event dbf09567-82c3-4214-a043-b3a918dd480d (Updating crash deployment (+1 -> 2)) in 2 seconds
Mar  1 04:42:06 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Mar  1 04:42:06 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:42:06 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Mar  1 04:42:06 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Mar  1 04:42:06 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Mar  1 04:42:06 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Mar  1 04:42:06 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Mar  1 04:42:06 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Mar  1 04:42:06 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Mar  1 04:42:06 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Mar  1 04:42:06 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Mar  1 04:42:06 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Mar  1 04:42:06 np0005634532 podman[82506]: 2026-03-01 09:42:06.956681612 +0000 UTC m=+0.036757326 container create c472d1fba71314a77222f2c736fa82f46de78840fe15f48bd980e109a1c91e5b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_kirch, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Mar  1 04:42:06 np0005634532 systemd[1]: Started libpod-conmon-c472d1fba71314a77222f2c736fa82f46de78840fe15f48bd980e109a1c91e5b.scope.
Mar  1 04:42:07 np0005634532 systemd[1]: Started libcrun container.
Mar  1 04:42:07 np0005634532 podman[82506]: 2026-03-01 09:42:07.030944331 +0000 UTC m=+0.111020085 container init c472d1fba71314a77222f2c736fa82f46de78840fe15f48bd980e109a1c91e5b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_kirch, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Mar  1 04:42:07 np0005634532 podman[82506]: 2026-03-01 09:42:06.940638963 +0000 UTC m=+0.020714707 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Mar  1 04:42:07 np0005634532 podman[82506]: 2026-03-01 09:42:07.03877991 +0000 UTC m=+0.118855624 container start c472d1fba71314a77222f2c736fa82f46de78840fe15f48bd980e109a1c91e5b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_kirch, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Mar  1 04:42:07 np0005634532 podman[82506]: 2026-03-01 09:42:07.043082099 +0000 UTC m=+0.123157863 container attach c472d1fba71314a77222f2c736fa82f46de78840fe15f48bd980e109a1c91e5b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_kirch, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Mar  1 04:42:07 np0005634532 magical_kirch[82522]: 167 167
Mar  1 04:42:07 np0005634532 systemd[1]: libpod-c472d1fba71314a77222f2c736fa82f46de78840fe15f48bd980e109a1c91e5b.scope: Deactivated successfully.
Mar  1 04:42:07 np0005634532 conmon[82522]: conmon c472d1fba71314a77222 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c472d1fba71314a77222f2c736fa82f46de78840fe15f48bd980e109a1c91e5b.scope/container/memory.events
Mar  1 04:42:07 np0005634532 podman[82506]: 2026-03-01 09:42:07.047537271 +0000 UTC m=+0.127612985 container died c472d1fba71314a77222f2c736fa82f46de78840fe15f48bd980e109a1c91e5b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_kirch, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Mar  1 04:42:07 np0005634532 systemd[1]: var-lib-containers-storage-overlay-6f213ae101e669100dab58363593005f277f0881e1a7bbb76df19861c334d47d-merged.mount: Deactivated successfully.
Mar  1 04:42:07 np0005634532 podman[82506]: 2026-03-01 09:42:07.081146775 +0000 UTC m=+0.161222519 container remove c472d1fba71314a77222f2c736fa82f46de78840fe15f48bd980e109a1c91e5b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=magical_kirch, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Mar  1 04:42:07 np0005634532 systemd[1]: libpod-conmon-c472d1fba71314a77222f2c736fa82f46de78840fe15f48bd980e109a1c91e5b.scope: Deactivated successfully.
Mar  1 04:42:07 np0005634532 ceph-mon[75825]: from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:42:07 np0005634532 ceph-mon[75825]: from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:42:07 np0005634532 ceph-mon[75825]: from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:42:07 np0005634532 ceph-mon[75825]: from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:42:07 np0005634532 ceph-mon[75825]: from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Mar  1 04:42:07 np0005634532 ceph-mon[75825]: from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
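These two "auth get" dispatches are the mgr fetching the client.bootstrap-osd key for the OSD-provisioning run that follows: ceph-volume authenticates as that entity (note the --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring flag in the commands logged below) when it asks the mon for a new OSD id. The same entity can be inspected by hand with the stock CLI:

    # show the bootstrap-osd key and its (deliberately narrow) caps
    ceph auth get client.bootstrap-osd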
Mar  1 04:42:07 np0005634532 podman[82546]: 2026-03-01 09:42:07.266955209 +0000 UTC m=+0.055319074 container create c7320e38a6a94788572fcbca810e49765617e5af209f92d5b72986ab353a7770 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_hertz, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Mar  1 04:42:07 np0005634532 systemd[1]: Started libpod-conmon-c7320e38a6a94788572fcbca810e49765617e5af209f92d5b72986ab353a7770.scope.
Mar  1 04:42:07 np0005634532 podman[82546]: 2026-03-01 09:42:07.244063512 +0000 UTC m=+0.032427437 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Mar  1 04:42:07 np0005634532 systemd[1]: Started libcrun container.
Mar  1 04:42:07 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22c56869e6c3f3a312042a2fd98f4e68772b87bbd6a4e08ceabd3537410812b6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Mar  1 04:42:07 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22c56869e6c3f3a312042a2fd98f4e68772b87bbd6a4e08ceabd3537410812b6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Mar  1 04:42:07 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22c56869e6c3f3a312042a2fd98f4e68772b87bbd6a4e08ceabd3537410812b6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Mar  1 04:42:07 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22c56869e6c3f3a312042a2fd98f4e68772b87bbd6a4e08ceabd3537410812b6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Mar  1 04:42:07 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22c56869e6c3f3a312042a2fd98f4e68772b87bbd6a4e08ceabd3537410812b6/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
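The repeated "supports timestamps until 2038" notices are informational: every bind mount into the container triggers a remount message because the backing xfs filesystem was created without the 64-bit "bigtime" timestamp feature. A quick check, assuming an xfsprogs recent enough to report the flag:

    # bigtime=0 matches the kernel notices above; bigtime=1 is y2038-safe
    xfs_info /var/lib/containers | grep -o 'bigtime=[01]'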
Mar  1 04:42:07 np0005634532 podman[82546]: 2026-03-01 09:42:07.371936214 +0000 UTC m=+0.160300119 container init c7320e38a6a94788572fcbca810e49765617e5af209f92d5b72986ab353a7770 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_hertz, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Mar  1 04:42:07 np0005634532 podman[82546]: 2026-03-01 09:42:07.382606419 +0000 UTC m=+0.170970264 container start c7320e38a6a94788572fcbca810e49765617e5af209f92d5b72986ab353a7770 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_hertz, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True)
Mar  1 04:42:07 np0005634532 podman[82546]: 2026-03-01 09:42:07.387064742 +0000 UTC m=+0.175428677 container attach c7320e38a6a94788572fcbca810e49765617e5af209f92d5b72986ab353a7770 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_hertz, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Mar  1 04:42:07 np0005634532 ceph-mgr[76134]: [progress INFO root] Writing back 2 completed events
Mar  1 04:42:07 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Mar  1 04:42:07 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:42:07 np0005634532 amazing_hertz[82563]: --> passed data devices: 0 physical, 1 LVM
Mar  1 04:42:07 np0005634532 amazing_hertz[82563]: Running command: /usr/bin/ceph-authtool --gen-print-key
Mar  1 04:42:07 np0005634532 amazing_hertz[82563]: Running command: /usr/bin/ceph-authtool --gen-print-key
Mar  1 04:42:07 np0005634532 amazing_hertz[82563]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new e5da778e-73b7-4ea1-8a91-750fe3f6aa68
Mar  1 04:42:08 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "e5da778e-73b7-4ea1-8a91-750fe3f6aa68"} v 0)
Mar  1 04:42:08 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/395872988' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "e5da778e-73b7-4ea1-8a91-750fe3f6aa68"}]: dispatch
Mar  1 04:42:08 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e3 do_prune osdmap full prune enabled
Mar  1 04:42:08 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e3 encode_pending skipping prime_pg_temp; mapping job did not start
Mar  1 04:42:08 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/395872988' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "e5da778e-73b7-4ea1-8a91-750fe3f6aa68"}]': finished
Mar  1 04:42:08 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e4 e4: 1 total, 0 up, 1 in
Mar  1 04:42:08 np0005634532 ceph-mon[75825]: log_channel(cluster) log [DBG] : osdmap e4: 1 total, 0 up, 1 in
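"1 total, 0 up, 1 in" decodes as: osd.0 now exists in the osdmap and is weighted in for data, but its daemon has not booted and beaconed yet, so it is not up. The counts flip to "up" once the daemon starts later in the deployment. The equivalent live queries:

    ceph osd stat   # e.g. "1 osds: 0 up, 1 in"
    ceph osd tree   # per-OSD up/down status and CRUSH weight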
Mar  1 04:42:08 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Mar  1 04:42:08 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Mar  1 04:42:08 np0005634532 ceph-mgr[76134]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Mar  1 04:42:08 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v23: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Mar  1 04:42:08 np0005634532 amazing_hertz[82563]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-0
Mar  1 04:42:08 np0005634532 amazing_hertz[82563]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg0/ceph_lv0
Mar  1 04:42:08 np0005634532 amazing_hertz[82563]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Mar  1 04:42:08 np0005634532 amazing_hertz[82563]: Running command: /usr/bin/ln -s /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Mar  1 04:42:08 np0005634532 amazing_hertz[82563]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-0/activate.monmap
Mar  1 04:42:08 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "5c75a2df-237b-4193-9208-5d16a78b0f53"} v 0)
Mar  1 04:42:08 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='client.? 192.168.122.101:0/972808086' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "5c75a2df-237b-4193-9208-5d16a78b0f53"}]: dispatch
Mar  1 04:42:08 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e4 do_prune osdmap full prune enabled
Mar  1 04:42:08 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e4 encode_pending skipping prime_pg_temp; mapping job did not start
Mar  1 04:42:08 np0005634532 lvm[82625]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Mar  1 04:42:08 np0005634532 lvm[82625]: VG ceph_vg0 finished
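The lvm[] lines are event-driven autoactivation: udev saw the last physical volume of ceph_vg0 appear (here a loop device), pvscan recorded the VG as complete, and its logical volumes became activatable. The layout can be confirmed with stock lvm2 commands:

    pvs /dev/loop3                    # PV and the VG it belongs to
    vgs ceph_vg0                      # volume group summary
    lvs ceph_vg0 -o lv_name,lv_path   # ceph_lv0, the LV backing osd.0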
Mar  1 04:42:08 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='client.? 192.168.122.101:0/972808086' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "5c75a2df-237b-4193-9208-5d16a78b0f53"}]': finished
Mar  1 04:42:08 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e5 e5: 2 total, 0 up, 2 in
Mar  1 04:42:08 np0005634532 ceph-mon[75825]: log_channel(cluster) log [DBG] : osdmap e5: 2 total, 0 up, 2 in
Mar  1 04:42:08 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Mar  1 04:42:08 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Mar  1 04:42:08 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Mar  1 04:42:08 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Mar  1 04:42:08 np0005634532 ceph-mgr[76134]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Mar  1 04:42:08 np0005634532 ceph-mgr[76134]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
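The "failed to return metadata for osd.N: (2) No such file or directory" errors are transient rather than fatal: the mgr polls for OSD metadata as soon as the ids are allocated, but the mon only stores metadata once each ceph-osd daemon boots and reports it. After the daemons come up, the same query succeeds:

    # ENOENT while the OSD is still being prepared; a JSON blob once it boots
    ceph osd metadata 0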
Mar  1 04:42:08 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0)
Mar  1 04:42:08 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1766469908' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Mar  1 04:42:08 np0005634532 amazing_hertz[82563]: stderr: got monmap epoch 1
Mar  1 04:42:08 np0005634532 amazing_hertz[82563]: --> Creating keyring file for osd.0
Mar  1 04:42:08 np0005634532 ceph-mon[75825]: from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:42:08 np0005634532 ceph-mon[75825]: from='client.? 192.168.122.100:0/395872988' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "e5da778e-73b7-4ea1-8a91-750fe3f6aa68"}]: dispatch
Mar  1 04:42:08 np0005634532 ceph-mon[75825]: from='client.? 192.168.122.100:0/395872988' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "e5da778e-73b7-4ea1-8a91-750fe3f6aa68"}]': finished
Mar  1 04:42:08 np0005634532 ceph-mon[75825]: from='client.? 192.168.122.101:0/972808086' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "5c75a2df-237b-4193-9208-5d16a78b0f53"}]: dispatch
Mar  1 04:42:08 np0005634532 ceph-mon[75825]: from='client.? 192.168.122.101:0/972808086' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "5c75a2df-237b-4193-9208-5d16a78b0f53"}]': finished
Mar  1 04:42:08 np0005634532 amazing_hertz[82563]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/keyring
Mar  1 04:42:08 np0005634532 amazing_hertz[82563]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/
Mar  1 04:42:08 np0005634532 amazing_hertz[82563]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 0 --monmap /var/lib/ceph/osd/ceph-0/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-0/ --osd-uuid e5da778e-73b7-4ea1-8a91-750fe3f6aa68 --setuser ceph --setgroup ceph
Mar  1 04:42:08 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0)
Mar  1 04:42:08 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/3189777372' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Mar  1 04:42:09 np0005634532 ceph-mon[75825]: log_channel(cluster) log [INF] : Health check cleared: TOO_FEW_OSDS (was: OSD count 0 < osd_pool_default_size 1)
Mar  1 04:42:09 np0005634532 ceph-mon[75825]: Health check cleared: TOO_FEW_OSDS (was: OSD count 0 < osd_pool_default_size 1)
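TOO_FEW_OSDS clears here because the OSD count just reached osd_pool_default_size, which the warning text shows is set to 1 on this cluster (the usual default is 3). To verify the setting and the current health state:

    ceph config get mon osd_pool_default_size   # 1, per the cleared warning above
    ceph health detail                          # any remaining health checks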
Mar  1 04:42:10 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e5 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Mar  1 04:42:10 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v25: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Mar  1 04:42:12 np0005634532 amazing_hertz[82563]: stderr: 2026-03-01T09:42:08.831+0000 7f9c03d34740 -1 bluestore(/var/lib/ceph/osd/ceph-0//block) No valid bdev label found
Mar  1 04:42:12 np0005634532 amazing_hertz[82563]: stderr: 2026-03-01T09:42:09.093+0000 7f9c03d34740 -1 bluestore(/var/lib/ceph/osd/ceph-0/) _read_fsid unparsable uuid
Mar  1 04:42:12 np0005634532 amazing_hertz[82563]: --> ceph-volume lvm prepare successful for: ceph_vg0/ceph_lv0
Mar  1 04:42:12 np0005634532 amazing_hertz[82563]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Mar  1 04:42:12 np0005634532 amazing_hertz[82563]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-0 --no-mon-config
Mar  1 04:42:12 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v26: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Mar  1 04:42:12 np0005634532 amazing_hertz[82563]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Mar  1 04:42:12 np0005634532 amazing_hertz[82563]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
Mar  1 04:42:12 np0005634532 amazing_hertz[82563]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Mar  1 04:42:12 np0005634532 amazing_hertz[82563]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Mar  1 04:42:12 np0005634532 amazing_hertz[82563]: --> ceph-volume lvm activate successful for osd ID: 0
Mar  1 04:42:12 np0005634532 amazing_hertz[82563]: --> ceph-volume lvm create successful for: ceph_vg0/ceph_lv0
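This run was `ceph-volume lvm create`, which chains prepare and activate: allocate the OSD id via "osd new", mount a tmpfs OSD directory, symlink block to the LV, run ceph-osd --mkfs, then prime the directory with ceph-bluestore-tool and activate. The two stderr lines from 09:42:08-09:42:09 ("No valid bdev label found", "_read_fsid unparsable uuid") are expected on a never-formatted device: mkfs probes for an existing bluestore label before writing one. Once mkfs has run, the label it wrote can be dumped:

    # read back the bluestore superblock label created by --mkfs
    ceph-bluestore-tool show-label --dev /dev/ceph_vg0/ceph_lv0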
Mar  1 04:42:12 np0005634532 systemd[1]: libpod-c7320e38a6a94788572fcbca810e49765617e5af209f92d5b72986ab353a7770.scope: Deactivated successfully.
Mar  1 04:42:12 np0005634532 systemd[1]: libpod-c7320e38a6a94788572fcbca810e49765617e5af209f92d5b72986ab353a7770.scope: Consumed 1.883s CPU time.
Mar  1 04:42:12 np0005634532 podman[82546]: 2026-03-01 09:42:12.512164469 +0000 UTC m=+5.300528344 container died c7320e38a6a94788572fcbca810e49765617e5af209f92d5b72986ab353a7770 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_hertz, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Mar  1 04:42:12 np0005634532 systemd[1]: var-lib-containers-storage-overlay-22c56869e6c3f3a312042a2fd98f4e68772b87bbd6a4e08ceabd3537410812b6-merged.mount: Deactivated successfully.
Mar  1 04:42:12 np0005634532 podman[82546]: 2026-03-01 09:42:12.559298963 +0000 UTC m=+5.347662808 container remove c7320e38a6a94788572fcbca810e49765617e5af209f92d5b72986ab353a7770 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_hertz, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Mar  1 04:42:12 np0005634532 systemd[1]: libpod-conmon-c7320e38a6a94788572fcbca810e49765617e5af209f92d5b72986ab353a7770.scope: Deactivated successfully.
Mar  1 04:42:12 np0005634532 ceph-mgr[76134]: [balancer INFO root] Optimize plan auto_2026-03-01_09:42:12
Mar  1 04:42:12 np0005634532 ceph-mgr[76134]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Mar  1 04:42:12 np0005634532 ceph-mgr[76134]: [balancer INFO root] do_upmap
Mar  1 04:42:12 np0005634532 ceph-mgr[76134]: [balancer INFO root] No pools available
Mar  1 04:42:12 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] _maybe_adjust
Mar  1 04:42:12 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Mar  1 04:42:12 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Mar  1 04:42:12 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Mar  1 04:42:12 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Mar  1 04:42:12 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Mar  1 04:42:12 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Mar  1 04:42:12 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Mar  1 04:42:12 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Mar  1 04:42:13 np0005634532 podman[83634]: 2026-03-01 09:42:13.068384774 +0000 UTC m=+0.057180796 container create 9da2ca0fa72ccc4e7992535b732d40904cf0948bed17a7aca0eaba9b68e6cb25 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_wilson, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Mar  1 04:42:13 np0005634532 systemd[1]: Started libpod-conmon-9da2ca0fa72ccc4e7992535b732d40904cf0948bed17a7aca0eaba9b68e6cb25.scope.
Mar  1 04:42:13 np0005634532 systemd[1]: Started libcrun container.
Mar  1 04:42:13 np0005634532 podman[83634]: 2026-03-01 09:42:13.03993956 +0000 UTC m=+0.028735642 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Mar  1 04:42:13 np0005634532 podman[83634]: 2026-03-01 09:42:13.147679259 +0000 UTC m=+0.136475291 container init 9da2ca0fa72ccc4e7992535b732d40904cf0948bed17a7aca0eaba9b68e6cb25 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_wilson, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Mar  1 04:42:13 np0005634532 podman[83634]: 2026-03-01 09:42:13.155231652 +0000 UTC m=+0.144027674 container start 9da2ca0fa72ccc4e7992535b732d40904cf0948bed17a7aca0eaba9b68e6cb25 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_wilson, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Mar  1 04:42:13 np0005634532 podman[83634]: 2026-03-01 09:42:13.158995149 +0000 UTC m=+0.147791231 container attach 9da2ca0fa72ccc4e7992535b732d40904cf0948bed17a7aca0eaba9b68e6cb25 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_wilson, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Mar  1 04:42:13 np0005634532 hungry_wilson[83651]: 167 167
Mar  1 04:42:13 np0005634532 systemd[1]: libpod-9da2ca0fa72ccc4e7992535b732d40904cf0948bed17a7aca0eaba9b68e6cb25.scope: Deactivated successfully.
Mar  1 04:42:13 np0005634532 podman[83634]: 2026-03-01 09:42:13.162321425 +0000 UTC m=+0.151117427 container died 9da2ca0fa72ccc4e7992535b732d40904cf0948bed17a7aca0eaba9b68e6cb25 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_wilson, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Mar  1 04:42:13 np0005634532 systemd[1]: var-lib-containers-storage-overlay-ef269d5c8887200b9e18413eeeac5c03ee96b9c744481270982c959d5d15390e-merged.mount: Deactivated successfully.
Mar  1 04:42:13 np0005634532 podman[83634]: 2026-03-01 09:42:13.195845437 +0000 UTC m=+0.184641439 container remove 9da2ca0fa72ccc4e7992535b732d40904cf0948bed17a7aca0eaba9b68e6cb25 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_wilson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Mar  1 04:42:13 np0005634532 systemd[1]: libpod-conmon-9da2ca0fa72ccc4e7992535b732d40904cf0948bed17a7aca0eaba9b68e6cb25.scope: Deactivated successfully.
Mar  1 04:42:13 np0005634532 podman[83676]: 2026-03-01 09:42:13.379239625 +0000 UTC m=+0.055162550 container create b8514f23e0ef8f5de10f3df6458a809a72a2cd7914993025a353b92228dbaf9c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_varahamihira, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Mar  1 04:42:13 np0005634532 systemd[1]: Started libpod-conmon-b8514f23e0ef8f5de10f3df6458a809a72a2cd7914993025a353b92228dbaf9c.scope.
Mar  1 04:42:13 np0005634532 systemd[1]: Started libcrun container.
Mar  1 04:42:13 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a64283d603ce89136df23c68913c7eaf72cdb32832f88581d51642e52dc59e6c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Mar  1 04:42:13 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a64283d603ce89136df23c68913c7eaf72cdb32832f88581d51642e52dc59e6c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Mar  1 04:42:13 np0005634532 podman[83676]: 2026-03-01 09:42:13.357946346 +0000 UTC m=+0.033869251 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Mar  1 04:42:13 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a64283d603ce89136df23c68913c7eaf72cdb32832f88581d51642e52dc59e6c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Mar  1 04:42:13 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a64283d603ce89136df23c68913c7eaf72cdb32832f88581d51642e52dc59e6c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Mar  1 04:42:13 np0005634532 podman[83676]: 2026-03-01 09:42:13.480391322 +0000 UTC m=+0.156314207 container init b8514f23e0ef8f5de10f3df6458a809a72a2cd7914993025a353b92228dbaf9c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_varahamihira, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True)
Mar  1 04:42:13 np0005634532 podman[83676]: 2026-03-01 09:42:13.486354449 +0000 UTC m=+0.162277374 container start b8514f23e0ef8f5de10f3df6458a809a72a2cd7914993025a353b92228dbaf9c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_varahamihira, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Mar  1 04:42:13 np0005634532 podman[83676]: 2026-03-01 09:42:13.490982776 +0000 UTC m=+0.166905721 container attach b8514f23e0ef8f5de10f3df6458a809a72a2cd7914993025a353b92228dbaf9c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_varahamihira, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True)
Mar  1 04:42:13 np0005634532 epic_varahamihira[83693]: {
Mar  1 04:42:13 np0005634532 epic_varahamihira[83693]:    "0": [
Mar  1 04:42:13 np0005634532 epic_varahamihira[83693]:        {
Mar  1 04:42:13 np0005634532 epic_varahamihira[83693]:            "devices": [
Mar  1 04:42:13 np0005634532 epic_varahamihira[83693]:                "/dev/loop3"
Mar  1 04:42:13 np0005634532 epic_varahamihira[83693]:            ],
Mar  1 04:42:13 np0005634532 epic_varahamihira[83693]:            "lv_name": "ceph_lv0",
Mar  1 04:42:13 np0005634532 epic_varahamihira[83693]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Mar  1 04:42:13 np0005634532 epic_varahamihira[83693]:            "lv_size": "21470642176",
Mar  1 04:42:13 np0005634532 epic_varahamihira[83693]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=0RN1y3-WDwf-vmRn-3Uec-9cdU-WGcX-8Z6LEG,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=437b1e74-f995-5d64-af1d-257ce01d77ab,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=e5da778e-73b7-4ea1-8a91-750fe3f6aa68,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Mar  1 04:42:13 np0005634532 epic_varahamihira[83693]:            "lv_uuid": "0RN1y3-WDwf-vmRn-3Uec-9cdU-WGcX-8Z6LEG",
Mar  1 04:42:13 np0005634532 epic_varahamihira[83693]:            "name": "ceph_lv0",
Mar  1 04:42:13 np0005634532 epic_varahamihira[83693]:            "path": "/dev/ceph_vg0/ceph_lv0",
Mar  1 04:42:13 np0005634532 epic_varahamihira[83693]:            "tags": {
Mar  1 04:42:13 np0005634532 epic_varahamihira[83693]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Mar  1 04:42:13 np0005634532 epic_varahamihira[83693]:                "ceph.block_uuid": "0RN1y3-WDwf-vmRn-3Uec-9cdU-WGcX-8Z6LEG",
Mar  1 04:42:13 np0005634532 epic_varahamihira[83693]:                "ceph.cephx_lockbox_secret": "",
Mar  1 04:42:13 np0005634532 epic_varahamihira[83693]:                "ceph.cluster_fsid": "437b1e74-f995-5d64-af1d-257ce01d77ab",
Mar  1 04:42:13 np0005634532 epic_varahamihira[83693]:                "ceph.cluster_name": "ceph",
Mar  1 04:42:13 np0005634532 epic_varahamihira[83693]:                "ceph.crush_device_class": "",
Mar  1 04:42:13 np0005634532 epic_varahamihira[83693]:                "ceph.encrypted": "0",
Mar  1 04:42:13 np0005634532 epic_varahamihira[83693]:                "ceph.osd_fsid": "e5da778e-73b7-4ea1-8a91-750fe3f6aa68",
Mar  1 04:42:13 np0005634532 epic_varahamihira[83693]:                "ceph.osd_id": "0",
Mar  1 04:42:13 np0005634532 epic_varahamihira[83693]:                "ceph.osdspec_affinity": "default_drive_group",
Mar  1 04:42:13 np0005634532 epic_varahamihira[83693]:                "ceph.type": "block",
Mar  1 04:42:13 np0005634532 epic_varahamihira[83693]:                "ceph.vdo": "0",
Mar  1 04:42:13 np0005634532 epic_varahamihira[83693]:                "ceph.with_tpm": "0"
Mar  1 04:42:13 np0005634532 epic_varahamihira[83693]:            },
Mar  1 04:42:13 np0005634532 epic_varahamihira[83693]:            "type": "block",
Mar  1 04:42:13 np0005634532 epic_varahamihira[83693]:            "vg_name": "ceph_vg0"
Mar  1 04:42:13 np0005634532 epic_varahamihira[83693]:        }
Mar  1 04:42:13 np0005634532 epic_varahamihira[83693]:    ]
Mar  1 04:42:13 np0005634532 epic_varahamihira[83693]: }
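The JSON above is `ceph-volume lvm list` output for osd.0. The ceph.* LVM tags duplicated in lv_tags and tags are the persistent metadata ceph-volume relies on to rediscover and activate OSDs after a reboot: cluster fsid and name, osd id and fsid, device role (type=block), and encryption state. The same record is visible with plain LVM tooling:

    # raw tags on the logical volume, no ceph tooling required
    lvs -o lv_name,lv_tags ceph_vg0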
Mar  1 04:42:13 np0005634532 systemd[1]: libpod-b8514f23e0ef8f5de10f3df6458a809a72a2cd7914993025a353b92228dbaf9c.scope: Deactivated successfully.
Mar  1 04:42:13 np0005634532 podman[83676]: 2026-03-01 09:42:13.771399377 +0000 UTC m=+0.447322282 container died b8514f23e0ef8f5de10f3df6458a809a72a2cd7914993025a353b92228dbaf9c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_varahamihira, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Mar  1 04:42:13 np0005634532 systemd[1]: var-lib-containers-storage-overlay-a64283d603ce89136df23c68913c7eaf72cdb32832f88581d51642e52dc59e6c-merged.mount: Deactivated successfully.
Mar  1 04:42:13 np0005634532 podman[83676]: 2026-03-01 09:42:13.815936571 +0000 UTC m=+0.491859486 container remove b8514f23e0ef8f5de10f3df6458a809a72a2cd7914993025a353b92228dbaf9c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_varahamihira, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Mar  1 04:42:13 np0005634532 systemd[1]: libpod-conmon-b8514f23e0ef8f5de10f3df6458a809a72a2cd7914993025a353b92228dbaf9c.scope: Deactivated successfully.
Mar  1 04:42:13 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.0"} v 0)
Mar  1 04:42:13 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Mar  1 04:42:13 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Mar  1 04:42:13 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Mar  1 04:42:13 np0005634532 ceph-mgr[76134]: [cephadm INFO cephadm.serve] Deploying daemon osd.0 on compute-0
Mar  1 04:42:13 np0005634532 ceph-mgr[76134]: log_channel(cephadm) log [INF] : Deploying daemon osd.0 on compute-0
Mar  1 04:42:13 np0005634532 ceph-mon[75825]: from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Mar  1 04:42:13 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.1"} v 0)
Mar  1 04:42:13 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Mar  1 04:42:13 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Mar  1 04:42:13 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Mar  1 04:42:13 np0005634532 ceph-mgr[76134]: [cephadm INFO cephadm.serve] Deploying daemon osd.1 on compute-1
Mar  1 04:42:13 np0005634532 ceph-mgr[76134]: log_channel(cephadm) log [INF] : Deploying daemon osd.1 on compute-1
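"Deploying daemon osd.N" is the cephadm step that writes the daemon's config and keyring under /var/lib/ceph/<fsid>/osd.N/ on the target host and installs a systemd unit for it; cephadm names these units ceph-<fsid>@<daemon>, which matches the "Starting Ceph osd.0 for 437b1e74-..." line further down. To inspect the unit on this host:

    systemctl status 'ceph-437b1e74-f995-5d64-af1d-257ce01d77ab@osd.0.service'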
Mar  1 04:42:14 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v27: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Mar  1 04:42:14 np0005634532 podman[83801]: 2026-03-01 09:42:14.387090419 +0000 UTC m=+0.054986536 container create 1aa10caf9cffd0cd54052566a921a68a3d8a41c809bd0fbb2f0b47e1555ad58c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_hopper, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Mar  1 04:42:14 np0005634532 systemd[1]: Started libpod-conmon-1aa10caf9cffd0cd54052566a921a68a3d8a41c809bd0fbb2f0b47e1555ad58c.scope.
Mar  1 04:42:14 np0005634532 systemd[1]: Started libcrun container.
Mar  1 04:42:14 np0005634532 podman[83801]: 2026-03-01 09:42:14.367645802 +0000 UTC m=+0.035541919 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Mar  1 04:42:14 np0005634532 podman[83801]: 2026-03-01 09:42:14.476159578 +0000 UTC m=+0.144055695 container init 1aa10caf9cffd0cd54052566a921a68a3d8a41c809bd0fbb2f0b47e1555ad58c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_hopper, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Mar  1 04:42:14 np0005634532 podman[83801]: 2026-03-01 09:42:14.485680717 +0000 UTC m=+0.153576834 container start 1aa10caf9cffd0cd54052566a921a68a3d8a41c809bd0fbb2f0b47e1555ad58c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_hopper, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_REF=squid)
Mar  1 04:42:14 np0005634532 podman[83801]: 2026-03-01 09:42:14.489637708 +0000 UTC m=+0.157533795 container attach 1aa10caf9cffd0cd54052566a921a68a3d8a41c809bd0fbb2f0b47e1555ad58c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_hopper, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_REF=squid)
Mar  1 04:42:14 np0005634532 romantic_hopper[83817]: 167 167
Mar  1 04:42:14 np0005634532 systemd[1]: libpod-1aa10caf9cffd0cd54052566a921a68a3d8a41c809bd0fbb2f0b47e1555ad58c.scope: Deactivated successfully.
Mar  1 04:42:14 np0005634532 podman[83801]: 2026-03-01 09:42:14.494594482 +0000 UTC m=+0.162490589 container died 1aa10caf9cffd0cd54052566a921a68a3d8a41c809bd0fbb2f0b47e1555ad58c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_hopper, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Mar  1 04:42:14 np0005634532 systemd[1]: var-lib-containers-storage-overlay-177b1179438f883df2ada0eb2a463137ef5066255f08c92c950b279c257fc9f6-merged.mount: Deactivated successfully.
Mar  1 04:42:14 np0005634532 podman[83801]: 2026-03-01 09:42:14.53969384 +0000 UTC m=+0.207589917 container remove 1aa10caf9cffd0cd54052566a921a68a3d8a41c809bd0fbb2f0b47e1555ad58c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_hopper, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Mar  1 04:42:14 np0005634532 systemd[1]: libpod-conmon-1aa10caf9cffd0cd54052566a921a68a3d8a41c809bd0fbb2f0b47e1555ad58c.scope: Deactivated successfully.
Mar  1 04:42:14 np0005634532 podman[83846]: 2026-03-01 09:42:14.832142267 +0000 UTC m=+0.060346169 container create e49641f75f14883897a3e392216a708700a0e2c6e6a3e4778cbaf277ad603195 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-osd-0-activate-test, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Mar  1 04:42:14 np0005634532 systemd[1]: Started libpod-conmon-e49641f75f14883897a3e392216a708700a0e2c6e6a3e4778cbaf277ad603195.scope.
Mar  1 04:42:14 np0005634532 podman[83846]: 2026-03-01 09:42:14.81010983 +0000 UTC m=+0.038313742 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Mar  1 04:42:14 np0005634532 systemd[1]: Started libcrun container.
Mar  1 04:42:14 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/77344902e70853603b3bdc90a81baa8a7dc132157cb57e66e2841f5d9f0ed204/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Mar  1 04:42:14 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/77344902e70853603b3bdc90a81baa8a7dc132157cb57e66e2841f5d9f0ed204/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Mar  1 04:42:14 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/77344902e70853603b3bdc90a81baa8a7dc132157cb57e66e2841f5d9f0ed204/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Mar  1 04:42:14 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/77344902e70853603b3bdc90a81baa8a7dc132157cb57e66e2841f5d9f0ed204/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Mar  1 04:42:14 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/77344902e70853603b3bdc90a81baa8a7dc132157cb57e66e2841f5d9f0ed204/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Mar  1 04:42:14 np0005634532 ceph-mon[75825]: Deploying daemon osd.0 on compute-0
Mar  1 04:42:14 np0005634532 ceph-mon[75825]: from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Mar  1 04:42:14 np0005634532 ceph-mon[75825]: Deploying daemon osd.1 on compute-1
Mar  1 04:42:14 np0005634532 podman[83846]: 2026-03-01 09:42:14.942839994 +0000 UTC m=+0.171043856 container init e49641f75f14883897a3e392216a708700a0e2c6e6a3e4778cbaf277ad603195 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-osd-0-activate-test, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Mar  1 04:42:14 np0005634532 podman[83846]: 2026-03-01 09:42:14.960336676 +0000 UTC m=+0.188540568 container start e49641f75f14883897a3e392216a708700a0e2c6e6a3e4778cbaf277ad603195 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-osd-0-activate-test, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Mar  1 04:42:14 np0005634532 podman[83846]: 2026-03-01 09:42:14.965576467 +0000 UTC m=+0.193780359 container attach e49641f75f14883897a3e392216a708700a0e2c6e6a3e4778cbaf277ad603195 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-osd-0-activate-test, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Mar  1 04:42:15 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e5 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Mar  1 04:42:15 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-osd-0-activate-test[83862]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_FSID]
Mar  1 04:42:15 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-osd-0-activate-test[83862]:                            [--no-systemd] [--no-tmpfs]
Mar  1 04:42:15 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-osd-0-activate-test[83862]: ceph-volume activate: error: unrecognized arguments: --bad-option
Mar  1 04:42:15 np0005634532 systemd[1]: libpod-e49641f75f14883897a3e392216a708700a0e2c6e6a3e4778cbaf277ad603195.scope: Deactivated successfully.
Mar  1 04:42:15 np0005634532 podman[83846]: 2026-03-01 09:42:15.15829874 +0000 UTC m=+0.386502632 container died e49641f75f14883897a3e392216a708700a0e2c6e6a3e4778cbaf277ad603195 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-osd-0-activate-test, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Mar  1 04:42:15 np0005634532 systemd[1]: var-lib-containers-storage-overlay-77344902e70853603b3bdc90a81baa8a7dc132157cb57e66e2841f5d9f0ed204-merged.mount: Deactivated successfully.
Mar  1 04:42:15 np0005634532 podman[83846]: 2026-03-01 09:42:15.20350711 +0000 UTC m=+0.431711002 container remove e49641f75f14883897a3e392216a708700a0e2c6e6a3e4778cbaf277ad603195 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-osd-0-activate-test, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Mar  1 04:42:15 np0005634532 systemd[1]: libpod-conmon-e49641f75f14883897a3e392216a708700a0e2c6e6a3e4778cbaf277ad603195.scope: Deactivated successfully.
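The "-osd-0-activate-test" container's usage error is intentional, not a failure: cephadm runs `ceph-volume activate --bad-option` in a throwaway container to probe whether the image's ceph-volume supports the generic activate subcommand. "unrecognized arguments: --bad-option" is the success signal (the subcommand exists, so the real "-osd-0-activate" container below uses it); an unknown-subcommand error would instead have forced a fallback to `ceph-volume lvm activate`. A rough reproduction of the probe, using the image digest from this log:

    podman run --rm --entrypoint ceph-volume \
      quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec \
      activate --bad-option 2>&1 | grep -q 'unrecognized arguments' \
      && echo 'generic activate supported' || echo 'fall back to ceph-volume lvm activate'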
Mar  1 04:42:15 np0005634532 systemd[1]: Reloading.
Mar  1 04:42:15 np0005634532 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Mar  1 04:42:15 np0005634532 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Mar  1 04:42:15 np0005634532 systemd[1]: Reloading.
Mar  1 04:42:15 np0005634532 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Mar  1 04:42:15 np0005634532 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
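The back-to-back "Reloading." passes are systemd daemon-reloads triggered as cephadm installs the new osd.0 unit; the sysv-generator complaint about the legacy network init script and the rc.local notice are emitted on every reload and are unrelated to Ceph. Once the reload completes, the freshly installed unit can be viewed:

    # print the cephadm-generated unit file (name per the convention noted above)
    systemctl cat 'ceph-437b1e74-f995-5d64-af1d-257ce01d77ab@osd.0.service'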
Mar  1 04:42:16 np0005634532 systemd[1]: Starting Ceph osd.0 for 437b1e74-f995-5d64-af1d-257ce01d77ab...
Mar  1 04:42:16 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v28: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Mar  1 04:42:16 np0005634532 podman[84042]: 2026-03-01 09:42:16.277112357 +0000 UTC m=+0.050003921 container create 05a1e74a1636433a39473882f0b15571c9a36a5b5969b167a53245196f6b02c3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-osd-0-activate, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Mar  1 04:42:16 np0005634532 systemd[1]: Started libcrun container.
Mar  1 04:42:16 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d1a5cf689f46b8bcef8603624598199ce48f98316bc1db5d4b6deab56c6d3326/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Mar  1 04:42:16 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d1a5cf689f46b8bcef8603624598199ce48f98316bc1db5d4b6deab56c6d3326/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Mar  1 04:42:16 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d1a5cf689f46b8bcef8603624598199ce48f98316bc1db5d4b6deab56c6d3326/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Mar  1 04:42:16 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d1a5cf689f46b8bcef8603624598199ce48f98316bc1db5d4b6deab56c6d3326/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Mar  1 04:42:16 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d1a5cf689f46b8bcef8603624598199ce48f98316bc1db5d4b6deab56c6d3326/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Mar  1 04:42:16 np0005634532 podman[84042]: 2026-03-01 09:42:16.252706596 +0000 UTC m=+0.025598220 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Mar  1 04:42:16 np0005634532 podman[84042]: 2026-03-01 09:42:16.363900084 +0000 UTC m=+0.136791618 container init 05a1e74a1636433a39473882f0b15571c9a36a5b5969b167a53245196f6b02c3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-osd-0-activate, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Mar  1 04:42:16 np0005634532 podman[84042]: 2026-03-01 09:42:16.37111591 +0000 UTC m=+0.144007444 container start 05a1e74a1636433a39473882f0b15571c9a36a5b5969b167a53245196f6b02c3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-osd-0-activate, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid)
Mar  1 04:42:16 np0005634532 podman[84042]: 2026-03-01 09:42:16.374217721 +0000 UTC m=+0.147109255 container attach 05a1e74a1636433a39473882f0b15571c9a36a5b5969b167a53245196f6b02c3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-osd-0-activate, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Mar  1 04:42:16 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-osd-0-activate[84057]: Running command: /usr/bin/ceph-authtool --gen-print-key
Mar  1 04:42:16 np0005634532 bash[84042]: Running command: /usr/bin/ceph-authtool --gen-print-key
Mar  1 04:42:16 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-osd-0-activate[84057]: Running command: /usr/bin/ceph-authtool --gen-print-key
Mar  1 04:42:16 np0005634532 bash[84042]: Running command: /usr/bin/ceph-authtool --gen-print-key
Mar  1 04:42:17 np0005634532 lvm[84138]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Mar  1 04:42:17 np0005634532 lvm[84138]: VG ceph_vg0 finished
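The udev-triggered pvscan has just reported the volume group complete, so the LVM pieces the OSD needs are all present. To confirm them by hand (names taken from the log):

    pvs /dev/loop3            # the backing physical volume
    vgs ceph_vg0              # the volume group reported complete above
    lvs ceph_vg0/ceph_lv0     # the logical volume the OSD block symlink points at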
Mar  1 04:42:17 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-osd-0-activate[84057]: --> Failed to activate via raw: did not find any matching OSD to activate
Mar  1 04:42:17 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-osd-0-activate[84057]: Running command: /usr/bin/ceph-authtool --gen-print-key
Mar  1 04:42:17 np0005634532 bash[84042]: --> Failed to activate via raw: did not find any matching OSD to activate
Mar  1 04:42:17 np0005634532 bash[84042]: Running command: /usr/bin/ceph-authtool --gen-print-key
Mar  1 04:42:17 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-osd-0-activate[84057]: Running command: /usr/bin/ceph-authtool --gen-print-key
Mar  1 04:42:17 np0005634532 bash[84042]: Running command: /usr/bin/ceph-authtool --gen-print-key
Mar  1 04:42:17 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-osd-0-activate[84057]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Mar  1 04:42:17 np0005634532 bash[84042]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Mar  1 04:42:17 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-osd-0-activate[84057]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-0 --no-mon-config
Mar  1 04:42:17 np0005634532 bash[84042]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-0 --no-mon-config
Mar  1 04:42:17 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-osd-0-activate[84057]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Mar  1 04:42:17 np0005634532 bash[84042]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Mar  1 04:42:17 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-osd-0-activate[84057]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
Mar  1 04:42:17 np0005634532 bash[84042]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
Mar  1 04:42:17 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-osd-0-activate[84057]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Mar  1 04:42:17 np0005634532 bash[84042]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Mar  1 04:42:17 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-osd-0-activate[84057]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Mar  1 04:42:17 np0005634532 bash[84042]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Mar  1 04:42:17 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-osd-0-activate[84057]: --> ceph-volume lvm activate successful for osd ID: 0
Mar  1 04:42:17 np0005634532 bash[84042]: --> ceph-volume lvm activate successful for osd ID: 0
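The raw activator found nothing to do (this is an LVM-backed OSD, so the "Failed to activate via raw" line is expected), and the lvm activator then primed the OSD directory, linked the block device, and fixed ownership. Condensed, the same steps it logged look like this; normally one would leave them to ceph-volume rather than run them by hand:

    ceph-bluestore-tool --cluster=ceph prime-osd-dir \
        --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-0 --no-mon-config
    ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
    chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
    chown -R ceph:ceph /dev/dm-0 /var/lib/ceph/osd/ceph-0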
Mar  1 04:42:17 np0005634532 systemd[1]: libpod-05a1e74a1636433a39473882f0b15571c9a36a5b5969b167a53245196f6b02c3.scope: Deactivated successfully.
Mar  1 04:42:17 np0005634532 podman[84042]: 2026-03-01 09:42:17.682480647 +0000 UTC m=+1.455372191 container died 05a1e74a1636433a39473882f0b15571c9a36a5b5969b167a53245196f6b02c3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-osd-0-activate, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True)
Mar  1 04:42:17 np0005634532 systemd[1]: libpod-05a1e74a1636433a39473882f0b15571c9a36a5b5969b167a53245196f6b02c3.scope: Consumed 1.449s CPU time.
Mar  1 04:42:17 np0005634532 systemd[1]: var-lib-containers-storage-overlay-d1a5cf689f46b8bcef8603624598199ce48f98316bc1db5d4b6deab56c6d3326-merged.mount: Deactivated successfully.
Mar  1 04:42:17 np0005634532 podman[84042]: 2026-03-01 09:42:17.726510569 +0000 UTC m=+1.499402093 container remove 05a1e74a1636433a39473882f0b15571c9a36a5b5969b167a53245196f6b02c3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-osd-0-activate, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Mar  1 04:42:17 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Mar  1 04:42:17 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:42:17 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Mar  1 04:42:17 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
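These audit entries record the mgr persisting per-host device inventory into the monitor's config-key store. To list and read those keys from any node with admin credentials, for example:

    ceph config-key ls | grep 'mgr/cephadm/host'
    ceph config-key get mgr/cephadm/host.compute-1.devices.0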
Mar  1 04:42:17 np0005634532 podman[84290]: 2026-03-01 09:42:17.940499531 +0000 UTC m=+0.047002282 container create 5aef2e36e2e1b8957fae728e57d303760058bc33d5dcbfeb03b4933da584bc87 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-osd-0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Mar  1 04:42:17 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/71905ec1bebb998eb108e332d0afb4fbe9cdf5c7b86253212bc9ffca0f0833ca/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Mar  1 04:42:17 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/71905ec1bebb998eb108e332d0afb4fbe9cdf5c7b86253212bc9ffca0f0833ca/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Mar  1 04:42:17 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/71905ec1bebb998eb108e332d0afb4fbe9cdf5c7b86253212bc9ffca0f0833ca/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Mar  1 04:42:17 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/71905ec1bebb998eb108e332d0afb4fbe9cdf5c7b86253212bc9ffca0f0833ca/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Mar  1 04:42:17 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/71905ec1bebb998eb108e332d0afb4fbe9cdf5c7b86253212bc9ffca0f0833ca/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Mar  1 04:42:18 np0005634532 podman[84290]: 2026-03-01 09:42:18.007721937 +0000 UTC m=+0.114224698 container init 5aef2e36e2e1b8957fae728e57d303760058bc33d5dcbfeb03b4933da584bc87 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-osd-0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Mar  1 04:42:18 np0005634532 podman[84290]: 2026-03-01 09:42:17.917351399 +0000 UTC m=+0.023854190 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Mar  1 04:42:18 np0005634532 podman[84290]: 2026-03-01 09:42:18.021792291 +0000 UTC m=+0.128295032 container start 5aef2e36e2e1b8957fae728e57d303760058bc33d5dcbfeb03b4933da584bc87 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-osd-0, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Mar  1 04:42:18 np0005634532 bash[84290]: 5aef2e36e2e1b8957fae728e57d303760058bc33d5dcbfeb03b4933da584bc87
Mar  1 04:42:18 np0005634532 systemd[1]: Started Ceph osd.0 for 437b1e74-f995-5d64-af1d-257ce01d77ab.
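The OSD is now running under what appears to be the usual cephadm unit template, ceph-<fsid>@<daemon>.service. Assuming that template, the unit for this daemon could be checked with:

    systemctl status 'ceph-437b1e74-f995-5d64-af1d-257ce01d77ab@osd.0.service'
    journalctl -u 'ceph-437b1e74-f995-5d64-af1d-257ce01d77ab@osd.0.service' -n 50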
Mar  1 04:42:18 np0005634532 ceph-osd[84309]: set uid:gid to 167:167 (ceph:ceph)
Mar  1 04:42:18 np0005634532 ceph-osd[84309]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-osd, pid 2
Mar  1 04:42:18 np0005634532 ceph-osd[84309]: pidfile_write: ignore empty --pid-file
Mar  1 04:42:18 np0005634532 ceph-osd[84309]: bdev(0x55d021deb800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Mar  1 04:42:18 np0005634532 ceph-osd[84309]: bdev(0x55d021deb800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Mar  1 04:42:18 np0005634532 ceph-osd[84309]: bdev(0x55d021deb800 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Mar  1 04:42:18 np0005634532 ceph-osd[84309]: bdev(0x55d021deb800 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Mar  1 04:42:18 np0005634532 ceph-osd[84309]: bdev(0x55d021deb800 /var/lib/ceph/osd/ceph-0/block) close
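These open/close cycles are the OSD repeatedly probing its BlueStore device label during startup; the F_SET_FILE_RW_HINT failure and the 512-versus-4096 block-size note are generally harmless on a logical volume, as the "anyway" in the message suggests. The same label can be dumped directly, e.g.:

    # Read the BlueStore superblock label from the block symlink:
    ceph-bluestore-tool show-label --dev /var/lib/ceph/osd/ceph-0/block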
Mar  1 04:42:18 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Mar  1 04:42:18 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:42:18 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Mar  1 04:42:18 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:42:18 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v29: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Mar  1 04:42:18 np0005634532 ceph-osd[84309]: bdev(0x55d021deb800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Mar  1 04:42:18 np0005634532 ceph-osd[84309]: bdev(0x55d021deb800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Mar  1 04:42:18 np0005634532 ceph-osd[84309]: bdev(0x55d021deb800 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Mar  1 04:42:18 np0005634532 ceph-osd[84309]: bdev(0x55d021deb800 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Mar  1 04:42:18 np0005634532 ceph-osd[84309]: bdev(0x55d021deb800 /var/lib/ceph/osd/ceph-0/block) close
Mar  1 04:42:18 np0005634532 ceph-osd[84309]: bdev(0x55d021deb800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Mar  1 04:42:18 np0005634532 ceph-osd[84309]: bdev(0x55d021deb800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Mar  1 04:42:18 np0005634532 ceph-osd[84309]: bdev(0x55d021deb800 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Mar  1 04:42:18 np0005634532 ceph-osd[84309]: bdev(0x55d021deb800 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Mar  1 04:42:18 np0005634532 ceph-osd[84309]: bdev(0x55d021deb800 /var/lib/ceph/osd/ceph-0/block) close
Mar  1 04:42:18 np0005634532 ceph-osd[84309]: bdev(0x55d021deb800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Mar  1 04:42:18 np0005634532 ceph-osd[84309]: bdev(0x55d021deb800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Mar  1 04:42:18 np0005634532 ceph-osd[84309]: bdev(0x55d021deb800 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Mar  1 04:42:18 np0005634532 ceph-osd[84309]: bdev(0x55d021deb800 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Mar  1 04:42:18 np0005634532 ceph-osd[84309]: bdev(0x55d021deb800 /var/lib/ceph/osd/ceph-0/block) close
Mar  1 04:42:18 np0005634532 podman[84419]: 2026-03-01 09:42:18.602274465 +0000 UTC m=+0.038976148 container create 607b54f685303cd80b837d7976e20bd7910bd77b44307418286da480c0f8a952 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_herschel, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Mar  1 04:42:18 np0005634532 systemd[1]: Started libpod-conmon-607b54f685303cd80b837d7976e20bd7910bd77b44307418286da480c0f8a952.scope.
Mar  1 04:42:18 np0005634532 systemd[1]: Started libcrun container.
Mar  1 04:42:18 np0005634532 podman[84419]: 2026-03-01 09:42:18.679834459 +0000 UTC m=+0.116536202 container init 607b54f685303cd80b837d7976e20bd7910bd77b44307418286da480c0f8a952 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_herschel, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Mar  1 04:42:18 np0005634532 podman[84419]: 2026-03-01 09:42:18.584038495 +0000 UTC m=+0.020740218 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Mar  1 04:42:18 np0005634532 podman[84419]: 2026-03-01 09:42:18.687386352 +0000 UTC m=+0.124088085 container start 607b54f685303cd80b837d7976e20bd7910bd77b44307418286da480c0f8a952 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_herschel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325)
Mar  1 04:42:18 np0005634532 podman[84419]: 2026-03-01 09:42:18.691908257 +0000 UTC m=+0.128609980 container attach 607b54f685303cd80b837d7976e20bd7910bd77b44307418286da480c0f8a952 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_herschel, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Mar  1 04:42:18 np0005634532 jolly_herschel[84435]: 167 167
Mar  1 04:42:18 np0005634532 systemd[1]: libpod-607b54f685303cd80b837d7976e20bd7910bd77b44307418286da480c0f8a952.scope: Deactivated successfully.
Mar  1 04:42:18 np0005634532 podman[84419]: 2026-03-01 09:42:18.694298892 +0000 UTC m=+0.131000625 container died 607b54f685303cd80b837d7976e20bd7910bd77b44307418286da480c0f8a952 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_herschel, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Mar  1 04:42:18 np0005634532 ceph-osd[84309]: bdev(0x55d021deb800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Mar  1 04:42:18 np0005634532 ceph-osd[84309]: bdev(0x55d021deb800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Mar  1 04:42:18 np0005634532 ceph-osd[84309]: bdev(0x55d021deb800 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Mar  1 04:42:18 np0005634532 ceph-osd[84309]: bdev(0x55d021deb800 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Mar  1 04:42:18 np0005634532 ceph-osd[84309]: bdev(0x55d021deb800 /var/lib/ceph/osd/ceph-0/block) close
Mar  1 04:42:18 np0005634532 systemd[1]: var-lib-containers-storage-overlay-5c532172bba3783588dcd97949a961a165507386d7279bf095fd88ff6d473feb-merged.mount: Deactivated successfully.
Mar  1 04:42:18 np0005634532 ceph-osd[84309]: bdev(0x55d021deb800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Mar  1 04:42:18 np0005634532 ceph-osd[84309]: bdev(0x55d021deb800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Mar  1 04:42:18 np0005634532 ceph-osd[84309]: bdev(0x55d021deb800 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Mar  1 04:42:18 np0005634532 ceph-osd[84309]: bdev(0x55d021deb800 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Mar  1 04:42:18 np0005634532 ceph-osd[84309]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Mar  1 04:42:18 np0005634532 ceph-osd[84309]: bdev(0x55d021debc00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Mar  1 04:42:18 np0005634532 ceph-osd[84309]: bdev(0x55d021debc00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Mar  1 04:42:18 np0005634532 ceph-osd[84309]: bdev(0x55d021debc00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Mar  1 04:42:18 np0005634532 ceph-osd[84309]: bdev(0x55d021debc00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Mar  1 04:42:18 np0005634532 ceph-osd[84309]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 20 GiB
Mar  1 04:42:18 np0005634532 ceph-osd[84309]: bdev(0x55d021debc00 /var/lib/ceph/osd/ceph-0/block) close
Mar  1 04:42:18 np0005634532 podman[84419]: 2026-03-01 09:42:18.740471174 +0000 UTC m=+0.177172877 container remove 607b54f685303cd80b837d7976e20bd7910bd77b44307418286da480c0f8a952 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_herschel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Mar  1 04:42:18 np0005634532 ceph-osd[84309]: bdev(0x55d021deb800 /var/lib/ceph/osd/ceph-0/block) close
Mar  1 04:42:18 np0005634532 systemd[1]: libpod-conmon-607b54f685303cd80b837d7976e20bd7910bd77b44307418286da480c0f8a952.scope: Deactivated successfully.
Mar  1 04:42:18 np0005634532 ceph-mon[75825]: from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:42:18 np0005634532 ceph-mon[75825]: from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:42:18 np0005634532 ceph-mon[75825]: from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:42:18 np0005634532 ceph-mon[75825]: from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:42:18 np0005634532 podman[84465]: 2026-03-01 09:42:18.892353508 +0000 UTC m=+0.055115539 container create 0ed97e3be4727c1067e728202df8de8838edf6dfa498b96db317571fbb272dd4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_wing, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Mar  1 04:42:18 np0005634532 systemd[1]: Started libpod-conmon-0ed97e3be4727c1067e728202df8de8838edf6dfa498b96db317571fbb272dd4.scope.
Mar  1 04:42:18 np0005634532 systemd[1]: Started libcrun container.
Mar  1 04:42:18 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9233b2765f0b0dfbaa8c46e4423c77b183c662b115ae1a69eb6348414db3089/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Mar  1 04:42:18 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9233b2765f0b0dfbaa8c46e4423c77b183c662b115ae1a69eb6348414db3089/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Mar  1 04:42:18 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9233b2765f0b0dfbaa8c46e4423c77b183c662b115ae1a69eb6348414db3089/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Mar  1 04:42:18 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9233b2765f0b0dfbaa8c46e4423c77b183c662b115ae1a69eb6348414db3089/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Mar  1 04:42:18 np0005634532 podman[84465]: 2026-03-01 09:42:18.872607923 +0000 UTC m=+0.035370034 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Mar  1 04:42:18 np0005634532 podman[84465]: 2026-03-01 09:42:18.978566081 +0000 UTC m=+0.141328192 container init 0ed97e3be4727c1067e728202df8de8838edf6dfa498b96db317571fbb272dd4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_wing, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Mar  1 04:42:18 np0005634532 podman[84465]: 2026-03-01 09:42:18.985249115 +0000 UTC m=+0.148011186 container start 0ed97e3be4727c1067e728202df8de8838edf6dfa498b96db317571fbb272dd4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_wing, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Mar  1 04:42:18 np0005634532 podman[84465]: 2026-03-01 09:42:18.989686257 +0000 UTC m=+0.152448348 container attach 0ed97e3be4727c1067e728202df8de8838edf6dfa498b96db317571fbb272dd4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_wing, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: starting osd.0 osd_data /var/lib/ceph/osd/ceph-0 /var/lib/ceph/osd/ceph-0/journal
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: load: jerasure load: lrc 
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: bdev(0x55d022cbcc00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: bdev(0x55d022cbcc00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: bdev(0x55d022cbcc00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: bdev(0x55d022cbcc00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: bdev(0x55d022cbcc00 /var/lib/ceph/osd/ceph-0/block) close
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: bdev(0x55d022cbcc00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: bdev(0x55d022cbcc00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: bdev(0x55d022cbcc00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: bdev(0x55d022cbcc00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: bdev(0x55d022cbcc00 /var/lib/ceph/osd/ceph-0/block) close
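_set_cache_sizes is splitting the 1 GiB (1073741824-byte) BlueStore cache by the ratios shown above; the 0.45 kv share is what later appears as the ~483183820-byte block_cache capacity in the RocksDB options dump. Rough arithmetic:

    # Approximate split of the 1 GiB cache by the logged ratios:
    awk 'BEGIN {
      c = 1073741824
      printf "meta  %.0f\n", c * 0.45   # ~483 MiB
      printf "kv    %.0f\n", c * 0.45   # ~483 MiB (the RocksDB block_cache capacity)
      printf "onode %.0f\n", c * 0.04   # ~43 MiB
      printf "data  %.0f\n", c * 0.06   # ~64 MiB
    }'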
Mar  1 04:42:19 np0005634532 lvm[84567]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Mar  1 04:42:19 np0005634532 lvm[84567]: VG ceph_vg0 finished
Mar  1 04:42:19 np0005634532 busy_wing[84481]: {}
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: osd.0:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
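The two mClock figures are self-consistent and match the stock HDD profile: 157286400 bytes/second is exactly 150 MiB/s, and 157286400 / 499321.90 ≈ 315, the default osd_mclock_max_capacity_iops_hdd. To read back the inputs the OSD derived them from (assuming the HDD variants apply, since the device is reported rotational):

    ceph config show osd.0 osd_mclock_max_capacity_iops_hdd
    ceph config show osd.0 osd_mclock_max_sequential_bandwidth_hdd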
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: bdev(0x55d022cbcc00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: bdev(0x55d022cbcc00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: bdev(0x55d022cbcc00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: bdev(0x55d022cbcc00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: bdev(0x55d022cbcc00 /var/lib/ceph/osd/ceph-0/block) close
Mar  1 04:42:19 np0005634532 systemd[1]: libpod-0ed97e3be4727c1067e728202df8de8838edf6dfa498b96db317571fbb272dd4.scope: Deactivated successfully.
Mar  1 04:42:19 np0005634532 podman[84465]: 2026-03-01 09:42:19.578655565 +0000 UTC m=+0.741417676 container died 0ed97e3be4727c1067e728202df8de8838edf6dfa498b96db317571fbb272dd4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_wing, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Mar  1 04:42:19 np0005634532 systemd[1]: var-lib-containers-storage-overlay-d9233b2765f0b0dfbaa8c46e4423c77b183c662b115ae1a69eb6348414db3089-merged.mount: Deactivated successfully.
Mar  1 04:42:19 np0005634532 podman[84465]: 2026-03-01 09:42:19.618798919 +0000 UTC m=+0.781560950 container remove 0ed97e3be4727c1067e728202df8de8838edf6dfa498b96db317571fbb272dd4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=busy_wing, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Mar  1 04:42:19 np0005634532 systemd[1]: libpod-conmon-0ed97e3be4727c1067e728202df8de8838edf6dfa498b96db317571fbb272dd4.scope: Deactivated successfully.
Mar  1 04:42:19 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Mar  1 04:42:19 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:42:19 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Mar  1 04:42:19 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:42:19 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Mar  1 04:42:19 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:42:19 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Mar  1 04:42:19 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: bdev(0x55d022cbcc00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: bdev(0x55d022cbcc00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: bdev(0x55d022cbcc00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: bdev(0x55d022cbcc00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: bdev(0x55d022cbcc00 /var/lib/ceph/osd/ceph-0/block) close
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: bdev(0x55d022cbcc00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: bdev(0x55d022cbcc00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: bdev(0x55d022cbcc00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: bdev(0x55d022cbcc00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: bdev(0x55d022cbcc00 /var/lib/ceph/osd/ceph-0/block) close
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: bdev(0x55d022cbcc00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: bdev(0x55d022cbcc00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: bdev(0x55d022cbcc00 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: bdev(0x55d022cbcc00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: bdev(0x55d022cbd000 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: bdev(0x55d022cbd000 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: bdev(0x55d022cbd000 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: bdev(0x55d022cbd000 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 20 GiB
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: bluefs mount
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: bluefs mount shared_bdev_used = 0
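BlueFS has mounted sharing the single 20 GiB device (bdev 1) with no separate WAL or DB device, which is why the locked-allocation lines are all zero. Its space usage can be inspected offline, for instance:

    # With the OSD stopped:
    ceph-bluestore-tool bluefs-bdev-sizes --path /var/lib/ceph/osd/ceph-0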
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: bluestore(/var/lib/ceph/osd/ceph-0) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb: RocksDB version: 7.9.2
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb: Git sha 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb: Compile date 2025-07-17 03:12:14
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb: DB SUMMARY
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb: DB Session ID:  8Q5WDK5UFSY89Z1E3YYT
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb: CURRENT file:  CURRENT
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb: IDENTITY file:  IDENTITY
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                         Options.error_if_exists: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                       Options.create_if_missing: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                         Options.paranoid_checks: 1
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:             Options.flush_verify_memtable_count: 1
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                                     Options.env: 0x55d022c57dc0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                                      Options.fs: LegacyFileSystem
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                                Options.info_log: 0x55d022c5b7a0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                Options.max_file_opening_threads: 16
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                              Options.statistics: (nil)
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                               Options.use_fsync: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                       Options.max_log_file_size: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                   Options.log_file_time_to_roll: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                       Options.keep_log_file_num: 1000
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                    Options.recycle_log_file_num: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                         Options.allow_fallocate: 1
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                        Options.allow_mmap_reads: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                       Options.allow_mmap_writes: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                        Options.use_direct_reads: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:          Options.create_missing_column_families: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                              Options.db_log_dir: 
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                                 Options.wal_dir: db.wal
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                Options.table_cache_numshardbits: 6
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                         Options.WAL_ttl_seconds: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                       Options.WAL_size_limit_MB: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:             Options.manifest_preallocation_size: 4194304
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                     Options.is_fd_close_on_exec: 1
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                   Options.advise_random_on_open: 1
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                    Options.db_write_buffer_size: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                    Options.write_buffer_manager: 0x55d022d88a00
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:         Options.access_hint_on_compaction_start: 1
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                      Options.use_adaptive_mutex: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                            Options.rate_limiter: (nil)
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                       Options.wal_recovery_mode: 2
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                  Options.enable_thread_tracking: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                  Options.enable_pipelined_write: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                  Options.unordered_write: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:             Options.write_thread_max_yield_usec: 100
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                               Options.row_cache: None
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                              Options.wal_filter: None
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:             Options.avoid_flush_during_recovery: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:             Options.allow_ingest_behind: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:             Options.two_write_queues: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:             Options.manual_wal_flush: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:             Options.wal_compression: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:             Options.atomic_flush: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                 Options.persist_stats_to_disk: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                 Options.write_dbid_to_manifest: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                 Options.log_readahead_size: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                 Options.best_efforts_recovery: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:             Options.allow_data_in_errors: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:             Options.db_host_id: __hostname__
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:             Options.enforce_single_del_contracts: true
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:             Options.max_background_jobs: 4
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:             Options.max_background_compactions: -1
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:             Options.max_subcompactions: 1
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:           Options.writable_file_max_buffer_size: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:             Options.delayed_write_rate : 16777216
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:             Options.max_total_wal_size: 1073741824
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                   Options.stats_dump_period_sec: 600
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                 Options.stats_persist_period_sec: 600
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                          Options.max_open_files: -1
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                          Options.bytes_per_sync: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                      Options.wal_bytes_per_sync: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                   Options.strict_bytes_per_sync: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:       Options.compaction_readahead_size: 2097152
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                  Options.max_background_flushes: -1
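
The DBOptions dump above maps one-to-one onto fields of rocksdb::Options. A minimal C++ sketch, assuming the stock RocksDB API rather than anything Ceph-specific, of how the key values logged here would be set in code (the function name is illustrative):

    #include <rocksdb/options.h>

    // Sketch: reproduce the DBOptions values printed in the dump above.
    // Member names mirror the Options.* keys; values are the ones the OSD logged.
    rocksdb::Options MakeDbOptionsFromDump() {
      rocksdb::Options opts;
      opts.max_background_jobs = 4;              // Options.max_background_jobs
      opts.max_subcompactions = 1;               // Options.max_subcompactions
      opts.max_open_files = -1;                  // keep all SST files open
      opts.max_total_wal_size = 1073741824;      // 1 GiB WAL cap
      opts.delayed_write_rate = 16777216;        // 16 MiB/s write throttle
      opts.compaction_readahead_size = 2097152;  // 2 MiB compaction readahead
      // wal_recovery_mode: 2 in the dump corresponds to kPointInTimeRecovery.
      opts.wal_recovery_mode = rocksdb::WALRecoveryMode::kPointInTimeRecovery;
      return opts;
    }
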
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb: Compression algorithms supported:
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:         kZSTD supported: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:         kXpressCompression supported: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:         kBZip2Compression supported: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:         kZSTDNotFinalCompression supported: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:         kLZ4Compression supported: 1
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:         kZlibCompression supported: 1
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:         kLZ4HCCompression supported: 1
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:         kSnappyCompression supported: 1
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb: Fast CRC32 supported: Supported on x86
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb: DMutex implementation: pthread_mutex_t
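
The support matrix above is fixed when RocksDB is compiled. A small sketch, assuming the GetSupportedCompressions() helper from rocksdb/convenience.h in a stock build, that queries the same information at runtime:

    #include <iostream>
    #include <rocksdb/convenience.h>

    // Sketch: list the compression algorithms this RocksDB build supports.
    // On the build logged above this would cover LZ4, LZ4HC, Zlib and Snappy,
    // but not ZSTD.
    int main() {
      for (rocksdb::CompressionType t : rocksdb::GetSupportedCompressions()) {
        std::cout << "compression type " << static_cast<int>(t) << " supported\n";
      }
      return 0;
    }
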
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
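
A read-only open, as logged here, recovers LSM state from the newest MANIFEST (MANIFEST-000032 above) without creating or replaying into a new WAL. A minimal sketch of that call; the path is illustrative, not taken from this log:

    #include <cassert>
    #include <rocksdb/db.h>

    // Sketch: open an existing database read-only, as db_impl_readonly.cc
    // reports above. No background compactions or writes are started.
    void OpenReadOnly(const rocksdb::Options& opts) {
      rocksdb::DB* db = nullptr;
      rocksdb::Status s =
          rocksdb::DB::OpenForReadOnly(opts, "/var/lib/ceph/osd/db", &db);
      assert(s.ok());
      // ... read-only access only: Get(), iterators, property queries ...
      delete db;
    }
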
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:        Options.compaction_filter: None
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:        Options.compaction_filter_factory: None
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:  Options.sst_partitioner_factory: None
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:         Options.memtable_factory: SkipListFactory
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:            Options.table_factory: BlockBasedTable
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55d022c5bb60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55d021e81350
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:        Options.write_buffer_size: 16777216
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:  Options.max_write_buffer_number: 64
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:          Options.compression: LZ4
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                  Options.bottommost_compression: Disabled
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:       Options.prefix_extractor: nullptr
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:             Options.num_levels: 7
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:            Options.compression_opts.window_bits: -14
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                  Options.compression_opts.level: 32767
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:               Options.compression_opts.strategy: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:         Options.compression_opts.parallel_threads: 1
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                  Options.compression_opts.enabled: false
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:              Options.level0_stop_writes_trigger: 36
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                   Options.target_file_size_base: 67108864
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:             Options.target_file_size_multiplier: 1
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                        Options.arena_block_size: 1048576
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                Options.disable_auto_compactions: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                   Options.inplace_update_support: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                 Options.inplace_update_num_locks: 10000
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:               Options.memtable_whole_key_filtering: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:   Options.memtable_huge_page_size: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                           Options.bloom_locality: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                    Options.max_successive_merges: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                Options.optimize_filters_for_hits: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                Options.paranoid_file_checks: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                Options.force_consistency_checks: 1
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                Options.report_bg_io_stats: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                               Options.ttl: 2592000
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:          Options.periodic_compaction_seconds: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:    Options.preserve_internal_time_seconds: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                       Options.enable_blob_files: false
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                           Options.min_blob_size: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                          Options.blob_file_size: 268435456
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                   Options.blob_compression_type: NoCompression
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:          Options.enable_blob_garbage_collection: false
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:          Options.blob_compaction_readahead_size: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                Options.blob_file_starting_level: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
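
Each per-column-family block in this dump ([default] above, then the m-* and p-* families below, which repeat the same settings) prints fields of rocksdb::ColumnFamilyOptions plus the BlockBasedTableOptions behind the table_factory line. A sketch of how the logged values map onto stock RocksDB types; the dump's BinnedLRUCache is a Ceph-side cache, so plain NewLRUCache stands in for it here, and the bloom filter's bits-per-key is an assumed value the dump does not show:

    #include <rocksdb/cache.h>
    #include <rocksdb/filter_policy.h>
    #include <rocksdb/options.h>
    #include <rocksdb/table.h>

    // Sketch: column-family options matching the [default] dump above.
    rocksdb::ColumnFamilyOptions MakeCfOptionsFromDump() {
      rocksdb::ColumnFamilyOptions cf;
      cf.write_buffer_size = 16777216;           // 16 MiB memtables
      cf.max_write_buffer_number = 64;
      cf.min_write_buffer_number_to_merge = 6;
      cf.compression = rocksdb::kLZ4Compression;
      cf.num_levels = 7;
      cf.level0_file_num_compaction_trigger = 8;
      cf.level0_slowdown_writes_trigger = 20;
      cf.level0_stop_writes_trigger = 36;
      cf.target_file_size_base = 67108864;       // 64 MiB SSTs
      cf.max_bytes_for_level_base = 1073741824;  // 1 GiB at L1
      cf.max_bytes_for_level_multiplier = 8.0;

      // Block-based table settings from the table_factory dump: 4 KiB blocks,
      // a bloom filter, and a shared ~460 MiB block cache with 16 shards.
      rocksdb::BlockBasedTableOptions t;
      t.block_size = 4096;
      t.cache_index_and_filter_blocks = true;
      t.pin_top_level_index_and_filter = true;
      t.block_cache = rocksdb::NewLRUCache(483183820, /*num_shard_bits=*/4);
      t.filter_policy.reset(rocksdb::NewBloomFilterPolicy(10));  // assumed bits/key
      cf.table_factory.reset(rocksdb::NewBlockBasedTableFactory(t));
      return cf;
    }
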
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:           Options.merge_operator: None
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:        Options.compaction_filter: None
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:        Options.compaction_filter_factory: None
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:  Options.sst_partitioner_factory: None
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:         Options.memtable_factory: SkipListFactory
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:            Options.table_factory: BlockBasedTable
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55d022c5bb60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55d021e81350
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:        Options.write_buffer_size: 16777216
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:  Options.max_write_buffer_number: 64
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:          Options.compression: LZ4
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                  Options.bottommost_compression: Disabled
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:       Options.prefix_extractor: nullptr
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:             Options.num_levels: 7
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:            Options.compression_opts.window_bits: -14
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                  Options.compression_opts.level: 32767
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:               Options.compression_opts.strategy: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:         Options.compression_opts.parallel_threads: 1
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                  Options.compression_opts.enabled: false
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:              Options.level0_stop_writes_trigger: 36
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                   Options.target_file_size_base: 67108864
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:             Options.target_file_size_multiplier: 1
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                        Options.arena_block_size: 1048576
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                Options.disable_auto_compactions: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                   Options.inplace_update_support: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                 Options.inplace_update_num_locks: 10000
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:               Options.memtable_whole_key_filtering: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:   Options.memtable_huge_page_size: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                           Options.bloom_locality: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                    Options.max_successive_merges: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                Options.optimize_filters_for_hits: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                Options.paranoid_file_checks: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                Options.force_consistency_checks: 1
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                Options.report_bg_io_stats: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                               Options.ttl: 2592000
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:          Options.periodic_compaction_seconds: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:    Options.preserve_internal_time_seconds: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                       Options.enable_blob_files: false
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                           Options.min_blob_size: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                          Options.blob_file_size: 268435456
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                   Options.blob_compression_type: NoCompression
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:          Options.enable_blob_garbage_collection: false
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:          Options.blob_compaction_readahead_size: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                Options.blob_file_starting_level: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:           Options.merge_operator: None
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:        Options.compaction_filter: None
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:        Options.compaction_filter_factory: None
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:  Options.sst_partitioner_factory: None
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:         Options.memtable_factory: SkipListFactory
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:            Options.table_factory: BlockBasedTable
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55d022c5bb60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55d021e81350
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:        Options.write_buffer_size: 16777216
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:  Options.max_write_buffer_number: 64
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:          Options.compression: LZ4
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                  Options.bottommost_compression: Disabled
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:       Options.prefix_extractor: nullptr
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:             Options.num_levels: 7
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:            Options.compression_opts.window_bits: -14
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                  Options.compression_opts.level: 32767
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:               Options.compression_opts.strategy: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:         Options.compression_opts.parallel_threads: 1
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                  Options.compression_opts.enabled: false
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:              Options.level0_stop_writes_trigger: 36
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                   Options.target_file_size_base: 67108864
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:             Options.target_file_size_multiplier: 1
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                        Options.arena_block_size: 1048576
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                Options.disable_auto_compactions: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                   Options.inplace_update_support: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                 Options.inplace_update_num_locks: 10000
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:               Options.memtable_whole_key_filtering: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:   Options.memtable_huge_page_size: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                           Options.bloom_locality: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                    Options.max_successive_merges: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                Options.optimize_filters_for_hits: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                Options.paranoid_file_checks: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                Options.force_consistency_checks: 1
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                Options.report_bg_io_stats: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                               Options.ttl: 2592000
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:          Options.periodic_compaction_seconds: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:    Options.preserve_internal_time_seconds: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                       Options.enable_blob_files: false
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                           Options.min_blob_size: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                          Options.blob_file_size: 268435456
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                   Options.blob_compression_type: NoCompression
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:          Options.enable_blob_garbage_collection: false
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:          Options.blob_compaction_readahead_size: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                Options.blob_file_starting_level: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:           Options.merge_operator: None
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:        Options.compaction_filter: None
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:        Options.compaction_filter_factory: None
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:  Options.sst_partitioner_factory: None
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:         Options.memtable_factory: SkipListFactory
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:            Options.table_factory: BlockBasedTable
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55d022c5bb60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55d021e81350
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:        Options.write_buffer_size: 16777216
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:  Options.max_write_buffer_number: 64
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:          Options.compression: LZ4
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                  Options.bottommost_compression: Disabled
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:       Options.prefix_extractor: nullptr
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:             Options.num_levels: 7
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:            Options.compression_opts.window_bits: -14
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                  Options.compression_opts.level: 32767
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:               Options.compression_opts.strategy: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:         Options.compression_opts.parallel_threads: 1
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                  Options.compression_opts.enabled: false
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:              Options.level0_stop_writes_trigger: 36
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                   Options.target_file_size_base: 67108864
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:             Options.target_file_size_multiplier: 1
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                        Options.arena_block_size: 1048576
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                Options.disable_auto_compactions: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                   Options.inplace_update_support: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                 Options.inplace_update_num_locks: 10000
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:               Options.memtable_whole_key_filtering: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:   Options.memtable_huge_page_size: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                           Options.bloom_locality: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                    Options.max_successive_merges: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                Options.optimize_filters_for_hits: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                Options.paranoid_file_checks: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                Options.force_consistency_checks: 1
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                Options.report_bg_io_stats: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                               Options.ttl: 2592000
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:          Options.periodic_compaction_seconds: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:    Options.preserve_internal_time_seconds: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                       Options.enable_blob_files: false
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                           Options.min_blob_size: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                          Options.blob_file_size: 268435456
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                   Options.blob_compression_type: NoCompression
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:          Options.enable_blob_garbage_collection: false
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:          Options.blob_compaction_readahead_size: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                Options.blob_file_starting_level: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:           Options.merge_operator: None
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:        Options.compaction_filter: None
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:        Options.compaction_filter_factory: None
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:  Options.sst_partitioner_factory: None
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:         Options.memtable_factory: SkipListFactory
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:            Options.table_factory: BlockBasedTable
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55d022c5bb60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55d021e81350
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:        Options.write_buffer_size: 16777216
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:  Options.max_write_buffer_number: 64
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:          Options.compression: LZ4
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                  Options.bottommost_compression: Disabled
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:       Options.prefix_extractor: nullptr
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:             Options.num_levels: 7
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:            Options.compression_opts.window_bits: -14
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                  Options.compression_opts.level: 32767
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:               Options.compression_opts.strategy: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:         Options.compression_opts.parallel_threads: 1
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                  Options.compression_opts.enabled: false
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:              Options.level0_stop_writes_trigger: 36
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                   Options.target_file_size_base: 67108864
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:             Options.target_file_size_multiplier: 1
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                        Options.arena_block_size: 1048576
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                Options.disable_auto_compactions: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                   Options.inplace_update_support: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                 Options.inplace_update_num_locks: 10000
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:               Options.memtable_whole_key_filtering: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:   Options.memtable_huge_page_size: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                           Options.bloom_locality: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                    Options.max_successive_merges: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                Options.optimize_filters_for_hits: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                Options.paranoid_file_checks: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                Options.force_consistency_checks: 1
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                Options.report_bg_io_stats: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                               Options.ttl: 2592000
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:          Options.periodic_compaction_seconds: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:    Options.preserve_internal_time_seconds: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                       Options.enable_blob_files: false
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                           Options.min_blob_size: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                          Options.blob_file_size: 268435456
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                   Options.blob_compression_type: NoCompression
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:          Options.enable_blob_garbage_collection: false
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:          Options.blob_compaction_readahead_size: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                Options.blob_file_starting_level: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
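[Editor's note] With level_compaction_dynamic_level_bytes at 0, the level targets in the dump above are static: max_bytes_for_level_base (1 GiB) sizes L1, each deeper level grows by max_bytes_for_level_multiplier (8), and the addtl[] factors are all 1 here, so they change nothing. A minimal Python sketch of that arithmetic (mine, not part of the log; the addtl indexing is my reading of the per-level factors):

    # Reproduce the static level-target arithmetic implied by the dump above
    # (level_compaction_dynamic_level_bytes: 0, so targets are fixed).
    # Values are copied verbatim from the Options lines logged by ceph-osd.
    max_bytes_for_level_base = 1073741824          # 1 GiB (L1 target)
    max_bytes_for_level_multiplier = 8.0
    addtl = [1, 1, 1, 1, 1, 1, 1]                  # max_bytes_for_level_multiplier_addtl[0..6]
    num_levels = 7

    target = max_bytes_for_level_base
    for level in range(1, num_levels):
        print(f"L{level}: {target / 2**30:.0f} GiB")
        target *= max_bytes_for_level_multiplier * addtl[level - 1]  # all 1 here, a no-op

This prints 1, 8, 64, 512, 4096, 32768 GiB for L1..L6, so the deepest level tops out around 32 TiB, far above what a single OSD's DB will actually hold.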
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:           Options.merge_operator: None
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:        Options.compaction_filter: None
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:        Options.compaction_filter_factory: None
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:  Options.sst_partitioner_factory: None
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:         Options.memtable_factory: SkipListFactory
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:            Options.table_factory: BlockBasedTable
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55d022c5bb60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55d021e81350
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:        Options.write_buffer_size: 16777216
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:  Options.max_write_buffer_number: 64
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:          Options.compression: LZ4
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                  Options.bottommost_compression: Disabled
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:       Options.prefix_extractor: nullptr
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:             Options.num_levels: 7
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:            Options.compression_opts.window_bits: -14
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                  Options.compression_opts.level: 32767
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:               Options.compression_opts.strategy: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:         Options.compression_opts.parallel_threads: 1
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                  Options.compression_opts.enabled: false
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:              Options.level0_stop_writes_trigger: 36
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                   Options.target_file_size_base: 67108864
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:             Options.target_file_size_multiplier: 1
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                        Options.arena_block_size: 1048576
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                Options.disable_auto_compactions: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                   Options.inplace_update_support: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                 Options.inplace_update_num_locks: 10000
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:               Options.memtable_whole_key_filtering: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:   Options.memtable_huge_page_size: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                           Options.bloom_locality: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                    Options.max_successive_merges: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                Options.optimize_filters_for_hits: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                Options.paranoid_file_checks: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                Options.force_consistency_checks: 1
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                Options.report_bg_io_stats: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                               Options.ttl: 2592000
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:          Options.periodic_compaction_seconds: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:    Options.preserve_internal_time_seconds: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                       Options.enable_blob_files: false
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                           Options.min_blob_size: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                          Options.blob_file_size: 268435456
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                   Options.blob_compression_type: NoCompression
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:          Options.enable_blob_garbage_collection: false
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:          Options.blob_compaction_readahead_size: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                Options.blob_file_starting_level: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
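[Editor's note] The same ~90-line option set repeats verbatim for every BlueStore shard ([p-1] above, [p-2] and the O-* families below), which makes manual comparison error-prone. A small parser (a sketch of mine; the messages.log path is hypothetical) folds each Options.key: value line into a per-column-family dict so shards can be diffed mechanically:

    import re
    from collections import defaultdict

    # Fold the "Options for column family [X]" dumps from a syslog capture
    # into per-CF dicts keyed by option name.
    cf_header = re.compile(r"Options for column family \[(?P<cf>[^\]]+)\]")
    option_kv = re.compile(r"rocksdb:\s+Options\.(?P<key>\S+):\s+(?P<val>.+?)\s*$")

    options = defaultdict(dict)
    current_cf = None
    with open("messages.log") as log:          # hypothetical capture of this log
        for line in log:
            if (m := cf_header.search(line)):
                current_cf = m["cf"]
            elif current_cf and (m := option_kv.search(line)):
                options[current_cf][m["key"]] = m["val"]

    # Report any option whose value differs between two shards.
    for key in options["p-1"]:
        if options["p-2"].get(key) != options["p-1"][key]:
            print(key, options["p-1"][key], options["p-2"].get(key))

For this capture the shards are configured identically, so the diff comes back empty; the parser earns its keep when someone has overridden options on a subset of shards.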
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:           Options.merge_operator: None
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:        Options.compaction_filter: None
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:        Options.compaction_filter_factory: None
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:  Options.sst_partitioner_factory: None
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:         Options.memtable_factory: SkipListFactory
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:            Options.table_factory: BlockBasedTable
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55d022c5bb60)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55d021e81350
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:        Options.write_buffer_size: 16777216
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:  Options.max_write_buffer_number: 64
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:          Options.compression: LZ4
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                  Options.bottommost_compression: Disabled
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:       Options.prefix_extractor: nullptr
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:             Options.num_levels: 7
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:            Options.compression_opts.window_bits: -14
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                  Options.compression_opts.level: 32767
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:               Options.compression_opts.strategy: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:         Options.compression_opts.parallel_threads: 1
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                  Options.compression_opts.enabled: false
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:              Options.level0_stop_writes_trigger: 36
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                   Options.target_file_size_base: 67108864
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:             Options.target_file_size_multiplier: 1
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                        Options.arena_block_size: 1048576
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                Options.disable_auto_compactions: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                   Options.inplace_update_support: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                 Options.inplace_update_num_locks: 10000
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:               Options.memtable_whole_key_filtering: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:   Options.memtable_huge_page_size: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                           Options.bloom_locality: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                    Options.max_successive_merges: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                Options.optimize_filters_for_hits: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                Options.paranoid_file_checks: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                Options.force_consistency_checks: 1
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                Options.report_bg_io_stats: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                               Options.ttl: 2592000
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:          Options.periodic_compaction_seconds: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:    Options.preserve_internal_time_seconds: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                       Options.enable_blob_files: false
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                           Options.min_blob_size: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                          Options.blob_file_size: 268435456
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                   Options.blob_compression_type: NoCompression
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:          Options.enable_blob_garbage_collection: false
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:          Options.blob_compaction_readahead_size: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                Options.blob_file_starting_level: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
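[Editor's note] The memtable settings repeated in each dump bound the per-shard write path: write_buffer_size is 16 MiB, up to max_write_buffer_number = 64 buffers may exist, and min_write_buffer_number_to_merge = 6 buffers are merged into each flush. Rough arithmetic (mine; the on-disk L0 files will come out smaller once LZ4 compression is applied):

    # Back-of-the-envelope memtable math from the dumped options (per column family).
    write_buffer_size = 16777216            # 16 MiB per memtable
    max_write_buffer_number = 64            # active + immutable memtables allowed
    min_write_buffer_number_to_merge = 6    # buffers merged into one flush/L0 file

    worst_case = write_buffer_size * max_write_buffer_number
    flush_unit = write_buffer_size * min_write_buffer_number_to_merge
    print(f"worst-case memtable RAM: {worst_case / 2**20:.0f} MiB")   # 1024 MiB
    print(f"typical flush input:    {flush_unit / 2**20:.0f} MiB")    # 96 MiB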
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:           Options.merge_operator: None
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:        Options.compaction_filter: None
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:        Options.compaction_filter_factory: None
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:  Options.sst_partitioner_factory: None
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:         Options.memtable_factory: SkipListFactory
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:            Options.table_factory: BlockBasedTable
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55d022c5bb80)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55d021e809b0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:        Options.write_buffer_size: 16777216
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:  Options.max_write_buffer_number: 64
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:          Options.compression: LZ4
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                  Options.bottommost_compression: Disabled
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:       Options.prefix_extractor: nullptr
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:             Options.num_levels: 7
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:            Options.compression_opts.window_bits: -14
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                  Options.compression_opts.level: 32767
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:               Options.compression_opts.strategy: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:         Options.compression_opts.parallel_threads: 1
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                  Options.compression_opts.enabled: false
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:              Options.level0_stop_writes_trigger: 36
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                   Options.target_file_size_base: 67108864
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:             Options.target_file_size_multiplier: 1
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                        Options.arena_block_size: 1048576
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                Options.disable_auto_compactions: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                   Options.inplace_update_support: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                 Options.inplace_update_num_locks: 10000
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:               Options.memtable_whole_key_filtering: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:   Options.memtable_huge_page_size: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                           Options.bloom_locality: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                    Options.max_successive_merges: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                Options.optimize_filters_for_hits: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                Options.paranoid_file_checks: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                Options.force_consistency_checks: 1
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                Options.report_bg_io_stats: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                               Options.ttl: 2592000
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:          Options.periodic_compaction_seconds: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:    Options.preserve_internal_time_seconds: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                       Options.enable_blob_files: false
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                           Options.min_blob_size: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                          Options.blob_file_size: 268435456
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                   Options.blob_compression_type: NoCompression
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:          Options.enable_blob_garbage_collection: false
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:          Options.blob_compaction_readahead_size: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                Options.blob_file_starting_level: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
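[Editor's note] The table_factory dumps show two shared BinnedLRUCache instances: 0x55d021e81350 (capacity 483183820) for the p-* families and 0x55d021e809b0 (capacity 536870912) for the O-* families, each with num_shard_bits = 4; and since cache_index_and_filter_blocks is 1, index and filter blocks compete with data blocks inside those budgets. The shard arithmetic (my sketch):

    # Shard arithmetic for the block caches named in the table_factory dumps above.
    caches = {
        "p-family cache (0x55d021e81350)": 483_183_820,
        "O-family cache (0x55d021e809b0)": 536_870_912,
    }
    num_shard_bits = 4
    for name, capacity in caches.items():
        shards = 2 ** num_shard_bits                      # 16 LRU shards per cache
        print(f"{name}: {shards} shards x {capacity / shards / 2**20:.1f} MiB")

That works out to 16 shards of roughly 28.8 MiB and 32.0 MiB respectively, which is the granularity at which the cache's LRU eviction and locking actually operate.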
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:           Options.merge_operator: None
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:        Options.compaction_filter: None
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:        Options.compaction_filter_factory: None
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:  Options.sst_partitioner_factory: None
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:         Options.memtable_factory: SkipListFactory
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:            Options.table_factory: BlockBasedTable
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55d022c5bb80)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55d021e809b0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:        Options.write_buffer_size: 16777216
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:  Options.max_write_buffer_number: 64
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:          Options.compression: LZ4
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                  Options.bottommost_compression: Disabled
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:       Options.prefix_extractor: nullptr
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:             Options.num_levels: 7
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:            Options.compression_opts.window_bits: -14
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                  Options.compression_opts.level: 32767
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:               Options.compression_opts.strategy: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:         Options.compression_opts.parallel_threads: 1
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                  Options.compression_opts.enabled: false
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:              Options.level0_stop_writes_trigger: 36
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                   Options.target_file_size_base: 67108864
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:             Options.target_file_size_multiplier: 1
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                        Options.arena_block_size: 1048576
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                Options.disable_auto_compactions: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                   Options.inplace_update_support: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                 Options.inplace_update_num_locks: 10000
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:               Options.memtable_whole_key_filtering: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:   Options.memtable_huge_page_size: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                           Options.bloom_locality: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                    Options.max_successive_merges: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                Options.optimize_filters_for_hits: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                Options.paranoid_file_checks: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                Options.force_consistency_checks: 1
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                Options.report_bg_io_stats: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                               Options.ttl: 2592000
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:          Options.periodic_compaction_seconds: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:    Options.preserve_internal_time_seconds: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                       Options.enable_blob_files: false
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                           Options.min_blob_size: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                          Options.blob_file_size: 268435456
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                   Options.blob_compression_type: NoCompression
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:          Options.enable_blob_garbage_collection: false
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:          Options.blob_compaction_readahead_size: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                Options.blob_file_starting_level: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
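[Editor's note] The L0 back-pressure settings (level0_file_num_compaction_trigger 8, level0_slowdown_writes_trigger 20, level0_stop_writes_trigger 36), combined with the ~96 MiB flush unit computed earlier, give rough thresholds for when this OSD starts throttling writes. An estimate (mine; these are pre-compression upper bounds, since LZ4 shrinks the on-disk files):

    # Rough L0 back-pressure thresholds implied by the dumped options.
    flush_unit_mib = 16 * 6    # write_buffer_size x min_write_buffer_number_to_merge
    for label, files in [("compaction trigger", 8),
                         ("slowdown writes", 20),
                         ("stop writes", 36)]:
        print(f"{label}: {files} L0 files ~= {files * flush_unit_mib / 1024:.1f} GiB")

So L0 compaction kicks in around 0.75 GiB of flushed data, writes are slowed near 1.9 GiB, and stall entirely near 3.4 GiB per column family.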
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:           Options.merge_operator: None
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:        Options.compaction_filter: None
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:        Options.compaction_filter_factory: None
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:  Options.sst_partitioner_factory: None
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:         Options.memtable_factory: SkipListFactory
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:            Options.table_factory: BlockBasedTable
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55d022c5bb80)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55d021e809b0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:        Options.write_buffer_size: 16777216
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:  Options.max_write_buffer_number: 64
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:          Options.compression: LZ4
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                  Options.bottommost_compression: Disabled
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:       Options.prefix_extractor: nullptr
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:             Options.num_levels: 7
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:            Options.compression_opts.window_bits: -14
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                  Options.compression_opts.level: 32767
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:               Options.compression_opts.strategy: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:         Options.compression_opts.parallel_threads: 1
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                  Options.compression_opts.enabled: false
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:              Options.level0_stop_writes_trigger: 36
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                   Options.target_file_size_base: 67108864
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:             Options.target_file_size_multiplier: 1
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
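With level-style compaction, level_compaction_dynamic_level_bytes disabled, and every addtl factor equal to 1, the target size of level n is max_bytes_for_level_base * max_bytes_for_level_multiplier^(n-1). A sketch of the per-level budgets these options imply:

    # Target level sizes under the options above (the addtl factors are
    # all 1 here, so they drop out of the product).
    base = 1_073_741_824   # Options.max_bytes_for_level_base (1 GiB)
    multiplier = 8         # Options.max_bytes_for_level_multiplier
    num_levels = 7         # Options.num_levels

    size = base
    for level in range(1, num_levels):
        print(f"L{level} target: {size / 2**30:g} GiB")   # 1, 8, 64, 512, ...
        size *= multiplier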
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                        Options.arena_block_size: 1048576
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                Options.disable_auto_compactions: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
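The CompactOnDeletionCollector registered above flags an SST file for compaction as soon as some sliding window of 32768 consecutive entries contains at least 16384 deletions (Deletion ratio = 0 suggests the ratio-based variant of the rule is unused here). A toy Python model of the windowed rule, as a sketch:

    from collections import deque

    # Toy model of the deletion-triggered compaction rule: scan entries
    # (True = tombstone) and report whether any 32768-entry window holds
    # at least 16384 deletes.
    WINDOW, TRIGGER = 32768, 16384

    def needs_compaction(entries) -> bool:
        window = deque(maxlen=WINDOW)
        deletes = 0
        for is_delete in entries:
            if len(window) == WINDOW:
                deletes -= window[0]   # evicted by the append below
            window.append(is_delete)
            deletes += is_delete
            if deletes >= TRIGGER:
                return True
        return False

    print(needs_compaction([True] * 16384))          # True: solid run of deletes
    print(needs_compaction([False, True] * 16000))   # False: never reaches 16384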
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                   Options.inplace_update_support: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                 Options.inplace_update_num_locks: 10000
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:               Options.memtable_whole_key_filtering: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:   Options.memtable_huge_page_size: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                           Options.bloom_locality: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                    Options.max_successive_merges: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                Options.optimize_filters_for_hits: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                Options.paranoid_file_checks: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                Options.force_consistency_checks: 1
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                Options.report_bg_io_stats: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                               Options.ttl: 2592000
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:          Options.periodic_compaction_seconds: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:    Options.preserve_internal_time_seconds: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                       Options.enable_blob_files: false
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                           Options.min_blob_size: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                          Options.blob_file_size: 268435456
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                   Options.blob_compression_type: NoCompression
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:          Options.enable_blob_garbage_collection: false
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:          Options.blob_compaction_readahead_size: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb:                Options.blob_file_starting_level: 0
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb: [db/column_family.cc:635] (skipping printing options)
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb: [db/column_family.cc:635] (skipping printing options)
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file: db/MANIFEST-000032 succeeded, manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5, prev_log_number is 0, max_column_family is 11, min_log_number_to_keep is 5
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
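The eleven named column families plus default are BlueStore's sharded RocksDB layout (set via the bluestore_rocksdb_cfs option): three shards each for the m, p and O key prefixes, plus the unsharded L and P families, so different key classes get their own memtables and SSTs. A sketch grouping the recovered names by shard prefix:

    from itertools import groupby

    # Group the column families from the recovery lines above by their
    # shard prefix ("m-0".."m-2" are shards of one logical family).
    cfs = ["default", "m-0", "m-1", "m-2", "p-0", "p-1", "p-2",
           "O-0", "O-1", "O-2", "L", "P"]

    for prefix, names in groupby(cfs, key=lambda n: n.split("-")[0]):
        print(prefix, list(names))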
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 0c4d6be4-79a1-4456-9644-a01466f8fa1e
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772358139912105, "job": 1, "event": "recovery_started", "wal_files": [31]}
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772358139912331, "job": 1, "event": "recovery_finished"}
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: bluestore(/var/lib/ceph/osd/ceph-0) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
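The option string _open_db reports is a flat comma-separated key=value list (Ceph assembles it from the bluestore_rocksdb_options setting; values such as compaction_readahead_size=2MB stay in RocksDB's own unit syntax). A sketch that splits it for comparison against the per-column-family dumps:

    # Parse the "_open_db opened rocksdb path db options ..." string,
    # copied verbatim from the line above, into a dict of strings.
    opts = ("compression=kLZ4Compression,max_write_buffer_number=64,"
            "min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,"
            "write_buffer_size=16777216,max_background_jobs=4,"
            "level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,"
            "max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,"
            "max_total_wal_size=1073741824,writable_file_max_buffer_size=0")

    parsed = dict(kv.split("=", 1) for kv in opts.split(","))
    assert parsed["write_buffer_size"] == "16777216"
    print(parsed["compression"], parsed["compaction_readahead_size"])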
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta old nid_max 1025
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta old blobid_max 10240
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta min_alloc_size 0x1000
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: freelist init
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: freelist _read_cfg
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: bluestore(/var/lib/ceph/osd/ceph-0) _init_alloc loaded 20 GiB in 2 extents, allocator type hybrid, capacity 0x4ffc00000, block size 0x1000, free 0x4ffbfd000, fragmentation 1.9e-07
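The allocator line reports hex byte counts: capacity 0x4ffc00000 is the 21470642176-byte (20 GiB) block device, and free 0x4ffbfd000 means only 0x3000 bytes (12 KiB) are allocated at this point. Checking the arithmetic:

    # Decode the hex sizes from the _init_alloc line above.
    capacity = 0x4FFC00000
    free = 0x4FFBFD000

    print(capacity)             # 21470642176, matching the bdev open size
    print(capacity / 2**30)     # ~20.0 GiB
    print(capacity - free)      # 12288 bytes (0x3000) in use so far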
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: bluefs umount
Mar  1 04:42:19 np0005634532 ceph-osd[84309]: bdev(0x55d022cbd000 /var/lib/ceph/osd/ceph-0/block) close
Mar  1 04:42:20 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e5 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: bdev(0x55d022cbd000 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: bdev(0x55d022cbd000 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: bdev(0x55d022cbd000 /var/lib/ceph/osd/ceph-0/block) open backing device/file reports st_blksize 512, using bdev_block_size 4096 anyway
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: bdev(0x55d022cbd000 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 20 GiB
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: bluefs mount
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: bluefs mount final locked allocations 0 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: bluefs mount final locked allocations 1 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: bluefs mount final locked allocations 2 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: bluefs mount final locked allocations 3 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: bluefs mount final locked allocations 4 <0x0~0, [0x0~0], 0x0~0> => <0x0~0, [0x0~0], 0x0~0>
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: bluefs mount shared_bdev_used = 4718592
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: bluestore(/var/lib/ceph/osd/ceph-0) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
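Both db and db.slow are budgeted at 20397110067 bytes, exactly 95% of the 21470642176-byte shared device; presumably BlueFS holds back the remaining 5% as headroom. The ratio is easy to confirm:

    # The db/db.slow budget is 95% of the shared block device.
    device = 21470642176   # bdev open size reported earlier
    budget = 20397110067   # _prepare_db_environment db_paths value

    print(budget / device)        # ~0.95
    print(int(device * 0.95))     # 20397110067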
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb: RocksDB version: 7.9.2
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb: Git sha 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb: Compile date 2025-07-17 03:12:14
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb: DB SUMMARY
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb: DB Session ID:  8Q5WDK5UFSY89Z1E3YYS
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb: CURRENT file:  CURRENT
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb: IDENTITY file:  IDENTITY
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5097 ; 
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                         Options.error_if_exists: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                       Options.create_if_missing: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                         Options.paranoid_checks: 1
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:             Options.flush_verify_memtable_count: 1
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                                     Options.env: 0x55d022e2c2a0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                                      Options.fs: LegacyFileSystem
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                                Options.info_log: 0x55d022c5b920
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                Options.max_file_opening_threads: 16
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                              Options.statistics: (nil)
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                               Options.use_fsync: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                       Options.max_log_file_size: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                   Options.log_file_time_to_roll: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                       Options.keep_log_file_num: 1000
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                    Options.recycle_log_file_num: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                         Options.allow_fallocate: 1
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                        Options.allow_mmap_reads: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                       Options.allow_mmap_writes: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                        Options.use_direct_reads: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:          Options.create_missing_column_families: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                              Options.db_log_dir: 
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                                 Options.wal_dir: db.wal
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                Options.table_cache_numshardbits: 6
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                         Options.WAL_ttl_seconds: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                       Options.WAL_size_limit_MB: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:             Options.manifest_preallocation_size: 4194304
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                     Options.is_fd_close_on_exec: 1
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                   Options.advise_random_on_open: 1
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                    Options.db_write_buffer_size: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                    Options.write_buffer_manager: 0x55d022d88a00
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:         Options.access_hint_on_compaction_start: 1
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                      Options.use_adaptive_mutex: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                            Options.rate_limiter: (nil)
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                       Options.wal_recovery_mode: 2
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                  Options.enable_thread_tracking: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                  Options.enable_pipelined_write: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                  Options.unordered_write: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:             Options.write_thread_max_yield_usec: 100
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                               Options.row_cache: None
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                              Options.wal_filter: None
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:             Options.avoid_flush_during_recovery: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:             Options.allow_ingest_behind: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:             Options.two_write_queues: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:             Options.manual_wal_flush: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:             Options.wal_compression: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:             Options.atomic_flush: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                 Options.persist_stats_to_disk: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                 Options.write_dbid_to_manifest: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                 Options.log_readahead_size: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                 Options.best_efforts_recovery: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:             Options.allow_data_in_errors: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:             Options.db_host_id: __hostname__
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:             Options.enforce_single_del_contracts: true
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:             Options.max_background_jobs: 4
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:             Options.max_background_compactions: -1
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:             Options.max_subcompactions: 1
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:           Options.writable_file_max_buffer_size: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:             Options.delayed_write_rate : 16777216
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:             Options.max_total_wal_size: 1073741824
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                   Options.stats_dump_period_sec: 600
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                 Options.stats_persist_period_sec: 600
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                          Options.max_open_files: -1
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                          Options.bytes_per_sync: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                      Options.wal_bytes_per_sync: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                   Options.strict_bytes_per_sync: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:       Options.compaction_readahead_size: 2097152
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                  Options.max_background_flushes: -1
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb: Compression algorithms supported:
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:   kZSTD supported: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:   kXpressCompression supported: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:   kBZip2Compression supported: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:   kZSTDNotFinalCompression supported: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:   kLZ4Compression supported: 1
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:   kZlibCompression supported: 1
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:   kLZ4HCCompression supported: 1
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:   kSnappyCompression supported: 1
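This build reports LZ4 (plus Zlib, LZ4HC and Snappy) but no ZSTD support, consistent with Options.compression: LZ4 in the column-family dumps; a codec absent from this list could not have been configured. A trivial cross-check, as a sketch:

    # Cross-check the configured codec against the support flags above.
    supported = {
        "kZSTD": False, "kXpressCompression": False,
        "kBZip2Compression": False, "kZSTDNotFinalCompression": False,
        "kLZ4Compression": True, "kZlibCompression": True,
        "kLZ4HCCompression": True, "kSnappyCompression": True,
    }
    configured = "kLZ4Compression"   # Options.compression: LZ4
    assert supported[configured], "configured codec not compiled in"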
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb: Fast CRC32 supported: Supported on x86
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb: DMutex implementation: pthread_mutex_t
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:        Options.compaction_filter: None
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:        Options.compaction_filter_factory: None
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:  Options.sst_partitioner_factory: None
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:         Options.memtable_factory: SkipListFactory
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:            Options.table_factory: BlockBasedTable
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55d022c5b680)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55d021e81350
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:        Options.write_buffer_size: 16777216
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:  Options.max_write_buffer_number: 64
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:          Options.compression: LZ4
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                  Options.bottommost_compression: Disabled
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:       Options.prefix_extractor: nullptr
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:             Options.num_levels: 7
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:            Options.compression_opts.window_bits: -14
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                  Options.compression_opts.level: 32767
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:               Options.compression_opts.strategy: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:         Options.compression_opts.parallel_threads: 1
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                  Options.compression_opts.enabled: false
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:              Options.level0_stop_writes_trigger: 36
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                   Options.target_file_size_base: 67108864
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:             Options.target_file_size_multiplier: 1
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                        Options.arena_block_size: 1048576
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                Options.disable_auto_compactions: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                   Options.inplace_update_support: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                 Options.inplace_update_num_locks: 10000
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:               Options.memtable_whole_key_filtering: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:   Options.memtable_huge_page_size: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                           Options.bloom_locality: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                    Options.max_successive_merges: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                Options.optimize_filters_for_hits: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                Options.paranoid_file_checks: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                Options.force_consistency_checks: 1
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                Options.report_bg_io_stats: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                               Options.ttl: 2592000
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:          Options.periodic_compaction_seconds: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:    Options.preserve_internal_time_seconds: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                       Options.enable_blob_files: false
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                           Options.min_blob_size: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                          Options.blob_file_size: 268435456
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                   Options.blob_compression_type: NoCompression
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:          Options.enable_blob_garbage_collection: false
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:          Options.blob_compaction_readahead_size: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                Options.blob_file_starting_level: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:           Options.merge_operator: None
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:        Options.compaction_filter: None
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:        Options.compaction_filter_factory: None
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:  Options.sst_partitioner_factory: None
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:         Options.memtable_factory: SkipListFactory
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:            Options.table_factory: BlockBasedTable
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55d022c5b680)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55d021e81350
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:        Options.write_buffer_size: 16777216
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:  Options.max_write_buffer_number: 64
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:          Options.compression: LZ4
Mar  1 04:42:20 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v30: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                  Options.bottommost_compression: Disabled
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:       Options.prefix_extractor: nullptr
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:             Options.num_levels: 7
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:            Options.compression_opts.window_bits: -14
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                  Options.compression_opts.level: 32767
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:               Options.compression_opts.strategy: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:         Options.compression_opts.parallel_threads: 1
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                  Options.compression_opts.enabled: false
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:              Options.level0_stop_writes_trigger: 36
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                   Options.target_file_size_base: 67108864
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:             Options.target_file_size_multiplier: 1
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                        Options.arena_block_size: 1048576
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                Options.disable_auto_compactions: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                   Options.inplace_update_support: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                 Options.inplace_update_num_locks: 10000
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:               Options.memtable_whole_key_filtering: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:   Options.memtable_huge_page_size: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                           Options.bloom_locality: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                    Options.max_successive_merges: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                Options.optimize_filters_for_hits: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                Options.paranoid_file_checks: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                Options.force_consistency_checks: 1
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                Options.report_bg_io_stats: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                               Options.ttl: 2592000
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:          Options.periodic_compaction_seconds: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:    Options.preserve_internal_time_seconds: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                       Options.enable_blob_files: false
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                           Options.min_blob_size: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                          Options.blob_file_size: 268435456
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                   Options.blob_compression_type: NoCompression
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:          Options.enable_blob_garbage_collection: false
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:          Options.blob_compaction_readahead_size: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                Options.blob_file_starting_level: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:           Options.merge_operator: None
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:        Options.compaction_filter: None
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:        Options.compaction_filter_factory: None
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:  Options.sst_partitioner_factory: None
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:         Options.memtable_factory: SkipListFactory
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:            Options.table_factory: BlockBasedTable
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55d022c5b680)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55d021e81350
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:        Options.write_buffer_size: 16777216
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:  Options.max_write_buffer_number: 64
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:          Options.compression: LZ4
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                  Options.bottommost_compression: Disabled
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:       Options.prefix_extractor: nullptr
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:             Options.num_levels: 7
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:            Options.compression_opts.window_bits: -14
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                  Options.compression_opts.level: 32767
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:               Options.compression_opts.strategy: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:         Options.compression_opts.parallel_threads: 1
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                  Options.compression_opts.enabled: false
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:              Options.level0_stop_writes_trigger: 36
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                   Options.target_file_size_base: 67108864
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:             Options.target_file_size_multiplier: 1
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                        Options.arena_block_size: 1048576
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                Options.disable_auto_compactions: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                   Options.inplace_update_support: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                 Options.inplace_update_num_locks: 10000
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:               Options.memtable_whole_key_filtering: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:   Options.memtable_huge_page_size: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                           Options.bloom_locality: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                    Options.max_successive_merges: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                Options.optimize_filters_for_hits: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                Options.paranoid_file_checks: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                Options.force_consistency_checks: 1
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                Options.report_bg_io_stats: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                               Options.ttl: 2592000
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:          Options.periodic_compaction_seconds: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:    Options.preserve_internal_time_seconds: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                       Options.enable_blob_files: false
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                           Options.min_blob_size: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                          Options.blob_file_size: 268435456
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                   Options.blob_compression_type: NoCompression
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:          Options.enable_blob_garbage_collection: false
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:          Options.blob_compaction_readahead_size: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                Options.blob_file_starting_level: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
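
Each of these dumps describes one RocksDB column family; the m-*/p-*/O-* names match BlueStore's sharded-RocksDB layout (bluestore_rocksdb_cfs), so the same option set repeats once per shard. With level_compaction_dynamic_level_bytes: 0, the level capacities follow statically from max_bytes_for_level_base and max_bytes_for_level_multiplier. A minimal Python sketch of that arithmetic, using the values logged above (all max_bytes_for_level_multiplier_addtl factors are 1; the helper name is illustrative, not a RocksDB API):

    # Static level sizing: L1 = max_bytes_for_level_base, each deeper level
    # multiplied by max_bytes_for_level_multiplier (addtl factors are all 1 here).
    def level_capacity(level, base=1073741824, multiplier=8.0):
        return int(base * multiplier ** (level - 1))

    for lvl in range(1, 7):  # num_levels is 7; L0 is sized by file count, not bytes
        print(f"L{lvl}: {level_capacity(lvl) / 2**30:g} GiB")
    # -> L1: 1 GiB, L2: 8 GiB, L3: 64 GiB, L4: 512 GiB, L5: 4096 GiB, L6: 32768 GiB
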
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:           Options.merge_operator: None
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:        Options.compaction_filter: None
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:        Options.compaction_filter_factory: None
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:  Options.sst_partitioner_factory: None
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:         Options.memtable_factory: SkipListFactory
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:            Options.table_factory: BlockBasedTable
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55d022c5b680)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55d021e81350
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:        Options.write_buffer_size: 16777216
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:  Options.max_write_buffer_number: 64
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:          Options.compression: LZ4
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                  Options.bottommost_compression: Disabled
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:       Options.prefix_extractor: nullptr
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:             Options.num_levels: 7
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:            Options.compression_opts.window_bits: -14
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                  Options.compression_opts.level: 32767
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:               Options.compression_opts.strategy: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:         Options.compression_opts.parallel_threads: 1
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                  Options.compression_opts.enabled: false
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:              Options.level0_stop_writes_trigger: 36
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                   Options.target_file_size_base: 67108864
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:             Options.target_file_size_multiplier: 1
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                        Options.arena_block_size: 1048576
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                Options.disable_auto_compactions: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                   Options.inplace_update_support: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                 Options.inplace_update_num_locks: 10000
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:               Options.memtable_whole_key_filtering: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:   Options.memtable_huge_page_size: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                           Options.bloom_locality: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                    Options.max_successive_merges: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                Options.optimize_filters_for_hits: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                Options.paranoid_file_checks: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                Options.force_consistency_checks: 1
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                Options.report_bg_io_stats: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                               Options.ttl: 2592000
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:          Options.periodic_compaction_seconds: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:    Options.preserve_internal_time_seconds: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                       Options.enable_blob_files: false
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                           Options.min_blob_size: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                          Options.blob_file_size: 268435456
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                   Options.blob_compression_type: NoCompression
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:          Options.enable_blob_garbage_collection: false
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:          Options.blob_compaction_readahead_size: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                Options.blob_file_starting_level: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
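
The memtable settings above bound per-shard write-buffer memory: write_buffer_size is 16 MiB, up to max_write_buffer_number (64) memtables may exist at once, and a flush merges min_write_buffer_number_to_merge (6) immutable memtables into a single L0 file. A quick check of those ceilings (plain arithmetic on the logged values, not an API call):

    write_buffer_size = 16 * 2**20            # 16777216, as logged
    print(64 * write_buffer_size / 2**30)     # 1.0  -> 1 GiB worst-case memtable memory per shard
    print(6 * write_buffer_size / 2**20)      # 96.0 -> ~96 MiB of memtable input per flush/L0 file
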
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:           Options.merge_operator: None
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:        Options.compaction_filter: None
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:        Options.compaction_filter_factory: None
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:  Options.sst_partitioner_factory: None
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:         Options.memtable_factory: SkipListFactory
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:            Options.table_factory: BlockBasedTable
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55d022c5b680)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55d021e81350
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:        Options.write_buffer_size: 16777216
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:  Options.max_write_buffer_number: 64
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:          Options.compression: LZ4
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                  Options.bottommost_compression: Disabled
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:       Options.prefix_extractor: nullptr
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:             Options.num_levels: 7
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:            Options.compression_opts.window_bits: -14
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                  Options.compression_opts.level: 32767
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:               Options.compression_opts.strategy: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:         Options.compression_opts.parallel_threads: 1
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                  Options.compression_opts.enabled: false
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:              Options.level0_stop_writes_trigger: 36
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                   Options.target_file_size_base: 67108864
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:             Options.target_file_size_multiplier: 1
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                        Options.arena_block_size: 1048576
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                Options.disable_auto_compactions: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                   Options.inplace_update_support: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                 Options.inplace_update_num_locks: 10000
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:               Options.memtable_whole_key_filtering: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:   Options.memtable_huge_page_size: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                           Options.bloom_locality: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                    Options.max_successive_merges: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                Options.optimize_filters_for_hits: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                Options.paranoid_file_checks: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                Options.force_consistency_checks: 1
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                Options.report_bg_io_stats: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                               Options.ttl: 2592000
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:          Options.periodic_compaction_seconds: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:    Options.preserve_internal_time_seconds: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                       Options.enable_blob_files: false
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                           Options.min_blob_size: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                          Options.blob_file_size: 268435456
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                   Options.blob_compression_type: NoCompression
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:          Options.enable_blob_garbage_collection: false
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:          Options.blob_compaction_readahead_size: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                Options.blob_file_starting_level: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
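
The level-0 triggers above drive RocksDB's write throttling: compaction is scheduled once 8 L0 files accumulate, writes are slowed at 20, and stopped at 36 (the pending-compaction byte limits of 64 GiB soft / 256 GiB hard throttle on compaction debt the same way). An illustrative classifier of those tiers, not RocksDB's actual code:

    def l0_write_state(l0_files, trigger=8, slowdown=20, stop=36):
        """Map an L0 file count onto RocksDB's throttle tiers (illustrative only)."""
        if l0_files >= stop:
            return "stopped"   # level0_stop_writes_trigger
        if l0_files >= slowdown:
            return "slowed"    # level0_slowdown_writes_trigger
        if l0_files >= trigger:
            return "compact"   # level0_file_num_compaction_trigger
        return "ok"

    print(l0_write_state(25))  # -> "slowed"
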
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:           Options.merge_operator: None
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:        Options.compaction_filter: None
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:        Options.compaction_filter_factory: None
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:  Options.sst_partitioner_factory: None
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:         Options.memtable_factory: SkipListFactory
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:            Options.table_factory: BlockBasedTable
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55d022c5b680)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55d021e81350
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:        Options.write_buffer_size: 16777216
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:  Options.max_write_buffer_number: 64
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:          Options.compression: LZ4
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                  Options.bottommost_compression: Disabled
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:       Options.prefix_extractor: nullptr
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:             Options.num_levels: 7
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:            Options.compression_opts.window_bits: -14
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                  Options.compression_opts.level: 32767
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:               Options.compression_opts.strategy: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:         Options.compression_opts.parallel_threads: 1
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                  Options.compression_opts.enabled: false
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:              Options.level0_stop_writes_trigger: 36
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                   Options.target_file_size_base: 67108864
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:             Options.target_file_size_multiplier: 1
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                        Options.arena_block_size: 1048576
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                Options.disable_auto_compactions: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                   Options.inplace_update_support: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                 Options.inplace_update_num_locks: 10000
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:               Options.memtable_whole_key_filtering: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:   Options.memtable_huge_page_size: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                           Options.bloom_locality: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                    Options.max_successive_merges: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                Options.optimize_filters_for_hits: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                Options.paranoid_file_checks: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                Options.force_consistency_checks: 1
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                Options.report_bg_io_stats: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                               Options.ttl: 2592000
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:          Options.periodic_compaction_seconds: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:    Options.preserve_internal_time_seconds: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                       Options.enable_blob_files: false
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                           Options.min_blob_size: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                          Options.blob_file_size: 268435456
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                   Options.blob_compression_type: NoCompression
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:          Options.enable_blob_garbage_collection: false
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:          Options.blob_compaction_readahead_size: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                Options.blob_file_starting_level: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
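
The BinnedLRUCache capacity logged for the m-*/p-* shards, 483183820 bytes, is exactly 0.45 x 2^30, and the O-0 shard below gets 536870912 bytes (0.50 x 2^30); this looks like a ratio-based split of a 1 GiB cache budget, plausibly BlueStore's cache carve-out between kv and onode data, though the log itself does not say so. The arithmetic behind that reading:

    print(int(0.45 * 2**30))   # 483183820  -> capacity of the m-*/p-* shard caches above
    print(int(0.50 * 2**30))   # 536870912  -> capacity of the O-0 shard cache below
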
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:           Options.merge_operator: None
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:        Options.compaction_filter: None
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:        Options.compaction_filter_factory: None
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:  Options.sst_partitioner_factory: None
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:         Options.memtable_factory: SkipListFactory
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:            Options.table_factory: BlockBasedTable
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55d022c5b680)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55d021e81350
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:        Options.write_buffer_size: 16777216
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:  Options.max_write_buffer_number: 64
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:          Options.compression: LZ4
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                  Options.bottommost_compression: Disabled
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:       Options.prefix_extractor: nullptr
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:             Options.num_levels: 7
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:            Options.compression_opts.window_bits: -14
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                  Options.compression_opts.level: 32767
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:               Options.compression_opts.strategy: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:         Options.compression_opts.parallel_threads: 1
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                  Options.compression_opts.enabled: false
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:              Options.level0_stop_writes_trigger: 36
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                   Options.target_file_size_base: 67108864
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:             Options.target_file_size_multiplier: 1
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                        Options.arena_block_size: 1048576
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                Options.disable_auto_compactions: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                   Options.inplace_update_support: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                 Options.inplace_update_num_locks: 10000
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:               Options.memtable_whole_key_filtering: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:   Options.memtable_huge_page_size: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                           Options.bloom_locality: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                    Options.max_successive_merges: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                Options.optimize_filters_for_hits: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                Options.paranoid_file_checks: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                Options.force_consistency_checks: 1
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                Options.report_bg_io_stats: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                               Options.ttl: 2592000
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:          Options.periodic_compaction_seconds: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:    Options.preserve_internal_time_seconds: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                       Options.enable_blob_files: false
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                           Options.min_blob_size: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                          Options.blob_file_size: 268435456
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                   Options.blob_compression_type: NoCompression
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:          Options.enable_blob_garbage_collection: false
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:          Options.blob_compaction_readahead_size: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                Options.blob_file_starting_level: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
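
A couple of the remaining values are simple derivations of others: max_compaction_bytes (1677721600) is 25 x target_file_size_base (67108864, i.e. 64 MiB), which matches RocksDB's default coupling between the two, and Options.ttl (2592000 seconds) works out to 30 days. Checking both:

    target_file_size_base = 67108864
    print(25 * target_file_size_base)   # 1677721600 == max_compaction_bytes, as logged
    print(2592000 // 86400)             # 30 -> Options.ttl expressed in days
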
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:           Options.merge_operator: None
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:        Options.compaction_filter: None
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:        Options.compaction_filter_factory: None
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:  Options.sst_partitioner_factory: None
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:         Options.memtable_factory: SkipListFactory
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:            Options.table_factory: BlockBasedTable
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55d022c5bac0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55d021e809b0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:        Options.write_buffer_size: 16777216
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:  Options.max_write_buffer_number: 64
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:          Options.compression: LZ4
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                  Options.bottommost_compression: Disabled
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:       Options.prefix_extractor: nullptr
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:             Options.num_levels: 7
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:            Options.compression_opts.window_bits: -14
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                  Options.compression_opts.level: 32767
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:               Options.compression_opts.strategy: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:         Options.compression_opts.parallel_threads: 1
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                  Options.compression_opts.enabled: false
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:              Options.level0_stop_writes_trigger: 36
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                   Options.target_file_size_base: 67108864
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:             Options.target_file_size_multiplier: 1
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                        Options.arena_block_size: 1048576
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                Options.disable_auto_compactions: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                   Options.inplace_update_support: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                 Options.inplace_update_num_locks: 10000
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:               Options.memtable_whole_key_filtering: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:   Options.memtable_huge_page_size: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                           Options.bloom_locality: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                    Options.max_successive_merges: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                Options.optimize_filters_for_hits: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                Options.paranoid_file_checks: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                Options.force_consistency_checks: 1
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                Options.report_bg_io_stats: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                               Options.ttl: 2592000
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:          Options.periodic_compaction_seconds: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:    Options.preserve_internal_time_seconds: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                       Options.enable_blob_files: false
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                           Options.min_blob_size: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                          Options.blob_file_size: 268435456
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                   Options.blob_compression_type: NoCompression
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:          Options.enable_blob_garbage_collection: false
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:          Options.blob_compaction_readahead_size: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                Options.blob_file_starting_level: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
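
The option dump ending above is repeated verbatim for column families [O-1] and [O-2] below; RocksDB only prints the full set for the first few families and then logs "(skipping printing options)" for the rest. To confirm the dumps really are identical, a minimal parsing sketch in Python (the helper and regexes are hypothetical, not part of Ceph or RocksDB; they only assume the "rocksdb: Options.<name>: <value>" line shape seen in this log):

    import re

    # Option lines look like: "... rocksdb:   Options.write_buffer_size: 16777216"
    OPT_RE = re.compile(r"rocksdb:\s+Options\.(\S+):\s+(.*)$")
    # Column-family headers look like: "... Options for column family [O-1]:"
    CF_RE = re.compile(r"Options for column family \[([^\]]+)\]")

    def parse_cf_options(lines):
        """Fold RocksDB option-dump lines into {column_family: {option: raw_value}}."""
        cfs = {}
        current = cfs.setdefault("_unscoped", {})  # options logged before any CF header
        for line in lines:
            header = CF_RE.search(line)
            if header:
                current = cfs.setdefault(header.group(1), {})
                continue
            opt = OPT_RE.search(line)
            if opt:
                current[opt.group(1)] = opt.group(2).strip()
        return cfs

    # Usage: cfs = parse_cf_options(open("/var/log/messages"))
    #        cfs["O-1"] == cfs["O-2"]   # -> True for the dumps in this capture
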
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:           Options.merge_operator: None
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:        Options.compaction_filter: None
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:        Options.compaction_filter_factory: None
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:  Options.sst_partitioner_factory: None
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:         Options.memtable_factory: SkipListFactory
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:            Options.table_factory: BlockBasedTable
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55d022c5bac0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55d021e809b0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:        Options.write_buffer_size: 16777216
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:  Options.max_write_buffer_number: 64
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:          Options.compression: LZ4
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                  Options.bottommost_compression: Disabled
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:       Options.prefix_extractor: nullptr
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:             Options.num_levels: 7
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:            Options.compression_opts.window_bits: -14
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                  Options.compression_opts.level: 32767
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:               Options.compression_opts.strategy: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:         Options.compression_opts.parallel_threads: 1
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                  Options.compression_opts.enabled: false
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:              Options.level0_stop_writes_trigger: 36
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                   Options.target_file_size_base: 67108864
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:             Options.target_file_size_multiplier: 1
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                        Options.arena_block_size: 1048576
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                Options.disable_auto_compactions: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                   Options.inplace_update_support: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                 Options.inplace_update_num_locks: 10000
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:               Options.memtable_whole_key_filtering: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:   Options.memtable_huge_page_size: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                           Options.bloom_locality: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                    Options.max_successive_merges: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                Options.optimize_filters_for_hits: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                Options.paranoid_file_checks: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                Options.force_consistency_checks: 1
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                Options.report_bg_io_stats: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                               Options.ttl: 2592000
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:          Options.periodic_compaction_seconds: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:    Options.preserve_internal_time_seconds: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                       Options.enable_blob_files: false
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                           Options.min_blob_size: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                          Options.blob_file_size: 268435456
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                   Options.blob_compression_type: NoCompression
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:          Options.enable_blob_garbage_collection: false
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:          Options.blob_compaction_readahead_size: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                Options.blob_file_starting_level: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:           Options.merge_operator: None
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:        Options.compaction_filter: None
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:        Options.compaction_filter_factory: None
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:  Options.sst_partitioner_factory: None
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:         Options.memtable_factory: SkipListFactory
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:            Options.table_factory: BlockBasedTable
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55d022c5bac0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55d021e809b0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:        Options.write_buffer_size: 16777216
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:  Options.max_write_buffer_number: 64
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:          Options.compression: LZ4
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                  Options.bottommost_compression: Disabled
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:       Options.prefix_extractor: nullptr
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:             Options.num_levels: 7
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:            Options.compression_opts.window_bits: -14
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                  Options.compression_opts.level: 32767
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:               Options.compression_opts.strategy: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:         Options.compression_opts.parallel_threads: 1
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                  Options.compression_opts.enabled: false
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:              Options.level0_stop_writes_trigger: 36
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                   Options.target_file_size_base: 67108864
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:             Options.target_file_size_multiplier: 1
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                        Options.arena_block_size: 1048576
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                Options.disable_auto_compactions: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                   Options.inplace_update_support: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                 Options.inplace_update_num_locks: 10000
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:               Options.memtable_whole_key_filtering: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:   Options.memtable_huge_page_size: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                           Options.bloom_locality: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                    Options.max_successive_merges: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                Options.optimize_filters_for_hits: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                Options.paranoid_file_checks: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                Options.force_consistency_checks: 1
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                Options.report_bg_io_stats: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                               Options.ttl: 2592000
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:          Options.periodic_compaction_seconds: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:    Options.preserve_internal_time_seconds: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                       Options.enable_blob_files: false
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                           Options.min_blob_size: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                          Options.blob_file_size: 268435456
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                   Options.blob_compression_type: NoCompression
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:          Options.enable_blob_garbage_collection: false
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:          Options.blob_compaction_readahead_size: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb:                Options.blob_file_starting_level: 0
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Mar  1 04:42:20 np0005634532 python3[84850]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 437b1e74-f995-5d64-af1d-257ce01d77ab -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 0c4d6be4-79a1-4456-9644-a01466f8fa1e
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772358140201970, "job": 1, "event": "recovery_started", "wal_files": [31]}
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772358140205868, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1272, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 128, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772358140, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0c4d6be4-79a1-4456-9644-a01466f8fa1e", "db_session_id": "8Q5WDK5UFSY89Z1E3YYS", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772358140211821, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1595, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 469, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 571, "raw_average_value_size": 285, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772358140, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0c4d6be4-79a1-4456-9644-a01466f8fa1e", "db_session_id": "8Q5WDK5UFSY89Z1E3YYS", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772358140216358, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772358140, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "0c4d6be4-79a1-4456-9644-a01466f8fa1e", "db_session_id": "8Q5WDK5UFSY89Z1E3YYS", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772358140218573, "job": 1, "event": "recovery_finished"}
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Mar  1 04:42:20 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]} v 0)
Mar  1 04:42:20 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.101:6800/743937274,v1:192.168.122.101:6801/743937274]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55d022e7c000
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb: DB pointer 0x55d022e38000
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: bluestore(/var/lib/ceph/osd/ceph-0) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
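
The _open_db line above echoes the comma-separated key=value option string BlueStore handed to RocksDB (the format used for Ceph's bluestore_rocksdb_options setting), and its values match the per-column-family dumps earlier (write_buffer_size=16777216, max_write_buffer_number=64, and so on). A quick way to inspect it, assuming the string contains no quoting or escaping:

    # Full option string copied from the _open_db log line above.
    opts_str = ("compression=kLZ4Compression,max_write_buffer_number=64,"
                "min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,"
                "write_buffer_size=16777216,max_background_jobs=4,"
                "level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,"
                "max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,"
                "max_total_wal_size=1073741824,writable_file_max_buffer_size=0")

    # Plain comma-separated key=value pairs; split each pair once on "=".
    opts = dict(kv.split("=", 1) for kv in opts_str.split(","))
    print(opts["write_buffer_size"])   # -> 16777216, matching the CF dumps above
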
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: bluestore(/var/lib/ceph/osd/ceph-0) _upgrade_super from 4, latest 4
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: bluestore(/var/lib/ceph/osd/ceph-0) _upgrade_super done
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 0.1 total, 0.1 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55d021e81350#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.3e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(1,0.27 KB,5.62933e-05%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55d021e81350#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 3.3e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(1,0.27 KB,5.62933e-05%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012
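
The #012 sequences in the stats dump above are octal control-character escapes added by the syslog collector (#012 is LF; #011, visible in the "(skipping printing options)" lines earlier, is TAB), so each multi-line RocksDB stats table arrives flattened into one record. Reversing the escapes restores the tables; a minimal standalone filter, assuming this #NNN octal convention:

    import re
    import sys

    # Syslog-style octal escapes for control characters: #011 = TAB, #012 = LF, ...
    CTRL = re.compile(r"#(\d{3})")

    def unescape(line: str) -> str:
        """Turn #NNN octal escapes back into the control characters they encode."""
        return CTRL.sub(lambda m: chr(int(m.group(1), 8)), line)

    if __name__ == "__main__":
        for line in sys.stdin:
            sys.stdout.write(unescape(line))
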
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/19.2.3/rpm/el9/BUILD/ceph-19.2.3/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/19.2.3/rpm/el9/BUILD/ceph-19.2.3/src/cls/hello/cls_hello.cc:316: loading cls_hello
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: _get_class not permitted to load lua
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: _get_class not permitted to load sdk
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: osd.0 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: osd.0 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: osd.0 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: osd.0 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: osd.0 0 load_pgs
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: osd.0 0 load_pgs opened 0 pgs
Mar  1 04:42:20 np0005634532 podman[85065]: 2026-03-01 09:42:20.253569861 +0000 UTC m=+0.036718026 container create 7398b7080ea5b3391b6749cc413cde706324690bad3f3a1970cc0255ff7de6ef (image=quay.io/ceph/ceph:v19, name=zen_borg, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Mar  1 04:42:20 np0005634532 ceph-osd[84309]: osd.0 0 log_to_monitors true
Mar  1 04:42:20 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-osd-0[84305]: 2026-03-01T09:42:20.254+0000 7ffaff02c740 -1 osd.0 0 log_to_monitors true
Mar  1 04:42:20 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]} v 0)
Mar  1 04:42:20 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/2503527526,v1:192.168.122.100:6803/2503527526]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch
Mar  1 04:42:20 np0005634532 systemd[1]: Started libpod-conmon-7398b7080ea5b3391b6749cc413cde706324690bad3f3a1970cc0255ff7de6ef.scope.
Mar  1 04:42:20 np0005634532 systemd[1]: Started libcrun container.
Mar  1 04:42:20 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dde8333f6ec64cda2406926aa8ad18be45352c6e9130c92f1e3db3688a4996ab/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Mar  1 04:42:20 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dde8333f6ec64cda2406926aa8ad18be45352c6e9130c92f1e3db3688a4996ab/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Mar  1 04:42:20 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dde8333f6ec64cda2406926aa8ad18be45352c6e9130c92f1e3db3688a4996ab/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Mar  1 04:42:20 np0005634532 podman[85065]: 2026-03-01 09:42:20.32135718 +0000 UTC m=+0.104505355 container init 7398b7080ea5b3391b6749cc413cde706324690bad3f3a1970cc0255ff7de6ef (image=quay.io/ceph/ceph:v19, name=zen_borg, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Mar  1 04:42:20 np0005634532 podman[85065]: 2026-03-01 09:42:20.326645702 +0000 UTC m=+0.109793857 container start 7398b7080ea5b3391b6749cc413cde706324690bad3f3a1970cc0255ff7de6ef (image=quay.io/ceph/ceph:v19, name=zen_borg, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Mar  1 04:42:20 np0005634532 podman[85065]: 2026-03-01 09:42:20.330122382 +0000 UTC m=+0.113270547 container attach 7398b7080ea5b3391b6749cc413cde706324690bad3f3a1970cc0255ff7de6ef (image=quay.io/ceph/ceph:v19, name=zen_borg, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Mar  1 04:42:20 np0005634532 podman[85065]: 2026-03-01 09:42:20.237640265 +0000 UTC m=+0.020788440 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Mar  1 04:42:20 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Mar  1 04:42:20 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:42:20 np0005634532 ceph-mon[75825]: from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:42:20 np0005634532 ceph-mon[75825]: from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:42:20 np0005634532 ceph-mon[75825]: from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:42:20 np0005634532 ceph-mon[75825]: from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:42:20 np0005634532 ceph-mon[75825]: from='osd.1 [v2:192.168.122.101:6800/743937274,v1:192.168.122.101:6801/743937274]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch
Mar  1 04:42:20 np0005634532 ceph-mon[75825]: from='osd.0 [v2:192.168.122.100:6802/2503527526,v1:192.168.122.100:6803/2503527526]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch
Mar  1 04:42:20 np0005634532 ceph-mon[75825]: from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:42:20 np0005634532 podman[85208]: 2026-03-01 09:42:20.73039093 +0000 UTC m=+0.063180274 container exec 6664049ace048b4adddae1365cde7c16773d892ae197c1350f58a5e5b1183392 (image=quay.io/ceph/ceph:v19, name=ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mon-compute-0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Mar  1 04:42:20 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Mar  1 04:42:20 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3623632949' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Mar  1 04:42:20 np0005634532 zen_borg[85114]: 
Mar  1 04:42:20 np0005634532 zen_borg[85114]: {"fsid":"437b1e74-f995-5d64-af1d-257ce01d77ab","health":{"status":"HEALTH_WARN","checks":{"CEPHADM_APPLY_SPEC_FAIL":{"severity":"HEALTH_WARN","summary":{"message":"Failed to apply 2 service(s): mon,mgr","count":2},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":85,"monmap":{"epoch":1,"min_mon_release_name":"squid","num_mons":1},"osdmap":{"epoch":5,"num_osds":2,"num_up_osds":0,"osd_up_since":0,"num_in_osds":2,"osd_in_since":1772358128,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[],"num_pgs":0,"num_pools":0,"num_objects":0,"data_bytes":0,"bytes_used":0,"bytes_avail":0,"bytes_total":0},"fsmap":{"epoch":1,"btime":"2026-03-01T09:40:52:961395+0000","by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":2,"modified":"2026-03-01T09:42:14.169238+0000","services":{}},"progress_events":{}}
Mar  1 04:42:20 np0005634532 systemd[1]: libpod-7398b7080ea5b3391b6749cc413cde706324690bad3f3a1970cc0255ff7de6ef.scope: Deactivated successfully.
Mar  1 04:42:20 np0005634532 podman[85065]: 2026-03-01 09:42:20.778713491 +0000 UTC m=+0.561861657 container died 7398b7080ea5b3391b6749cc413cde706324690bad3f3a1970cc0255ff7de6ef (image=quay.io/ceph/ceph:v19, name=zen_borg, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Mar  1 04:42:20 np0005634532 systemd[1]: var-lib-containers-storage-overlay-dde8333f6ec64cda2406926aa8ad18be45352c6e9130c92f1e3db3688a4996ab-merged.mount: Deactivated successfully.
Mar  1 04:42:20 np0005634532 podman[85065]: 2026-03-01 09:42:20.814425793 +0000 UTC m=+0.597573948 container remove 7398b7080ea5b3391b6749cc413cde706324690bad3f3a1970cc0255ff7de6ef (image=quay.io/ceph/ceph:v19, name=zen_borg, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Mar  1 04:42:20 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e5 do_prune osdmap full prune enabled
Mar  1 04:42:20 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e5 encode_pending skipping prime_pg_temp; mapping job did not start
Mar  1 04:42:20 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.101:6800/743937274,v1:192.168.122.101:6801/743937274]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
Mar  1 04:42:20 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/2503527526,v1:192.168.122.100:6803/2503527526]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
Mar  1 04:42:20 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e6 e6: 2 total, 0 up, 2 in
Mar  1 04:42:20 np0005634532 systemd[1]: libpod-conmon-7398b7080ea5b3391b6749cc413cde706324690bad3f3a1970cc0255ff7de6ef.scope: Deactivated successfully.
Mar  1 04:42:20 np0005634532 ceph-mon[75825]: log_channel(cluster) log [DBG] : osdmap e6: 2 total, 0 up, 2 in
Mar  1 04:42:20 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]} v 0)
Mar  1 04:42:20 np0005634532 ceph-mgr[76134]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Mar  1 04:42:20 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/2503527526,v1:192.168.122.100:6803/2503527526]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Mar  1 04:42:20 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e6 create-or-move crush item name 'osd.0' initial_weight 0.0195 at location {host=compute-0,root=default}
Mar  1 04:42:20 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Mar  1 04:42:20 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Mar  1 04:42:20 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-1", "root=default"]} v 0)
Mar  1 04:42:20 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.101:6800/743937274,v1:192.168.122.101:6801/743937274]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-1", "root=default"]}]: dispatch
Mar  1 04:42:20 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e6 create-or-move crush item name 'osd.1' initial_weight 0.0195 at location {host=compute-1,root=default}
Mar  1 04:42:20 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Mar  1 04:42:20 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Mar  1 04:42:20 np0005634532 ceph-mgr[76134]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
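The `create-or-move` initial_weight of 0.0195 above is the device capacity expressed in TiB, the conventional CRUSH unit: each OSD here appears to sit on a roughly 20 GiB device, consistent with the 40 GiB total the pgmap reports later for the two OSDs. A quick check of that arithmetic (the 20 GiB size is inferred, not logged directly):

    # CRUSH weights are conventionally capacity in TiB.
    GiB = 2 ** 30
    TiB = 2 ** 40
    size_bytes = 20 * GiB               # assumed per-OSD device size
    print(round(size_bytes / TiB, 4))   # 0.0195 -- the initial_weight above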
Mar  1 04:42:20 np0005634532 podman[85208]: 2026-03-01 09:42:20.838955757 +0000 UTC m=+0.171745091 container exec_died 6664049ace048b4adddae1365cde7c16773d892ae197c1350f58a5e5b1183392 (image=quay.io/ceph/ceph:v19, name=ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mon-compute-0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Mar  1 04:42:20 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Mar  1 04:42:20 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:42:20 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Mar  1 04:42:20 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:42:20 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Mar  1 04:42:20 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:42:21 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Mar  1 04:42:21 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:42:21 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Mar  1 04:42:21 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:42:21 np0005634532 ceph-osd[84309]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Mar  1 04:42:21 np0005634532 ceph-osd[84309]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Mar  1 04:42:21 np0005634532 podman[85399]: 2026-03-01 09:42:21.655231434 +0000 UTC m=+0.053144424 container create 08606abe58fd8a6584f54fdafef93ba6301c756dc24c8ee9f162583b889d8144 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_heyrovsky, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Mar  1 04:42:21 np0005634532 ceph-mon[75825]: from='osd.1 [v2:192.168.122.101:6800/743937274,v1:192.168.122.101:6801/743937274]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
Mar  1 04:42:21 np0005634532 ceph-mon[75825]: from='osd.0 [v2:192.168.122.100:6802/2503527526,v1:192.168.122.100:6803/2503527526]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
Mar  1 04:42:21 np0005634532 ceph-mon[75825]: from='osd.0 [v2:192.168.122.100:6802/2503527526,v1:192.168.122.100:6803/2503527526]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Mar  1 04:42:21 np0005634532 ceph-mon[75825]: from='osd.1 [v2:192.168.122.101:6800/743937274,v1:192.168.122.101:6801/743937274]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-1", "root=default"]}]: dispatch
Mar  1 04:42:21 np0005634532 ceph-mon[75825]: from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:42:21 np0005634532 ceph-mon[75825]: from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:42:21 np0005634532 ceph-mon[75825]: from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:42:21 np0005634532 ceph-mon[75825]: from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:42:21 np0005634532 ceph-mon[75825]: from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:42:21 np0005634532 systemd[1]: Started libpod-conmon-08606abe58fd8a6584f54fdafef93ba6301c756dc24c8ee9f162583b889d8144.scope.
Mar  1 04:42:21 np0005634532 podman[85399]: 2026-03-01 09:42:21.630753511 +0000 UTC m=+0.028666481 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Mar  1 04:42:21 np0005634532 systemd[1]: Started libcrun container.
Mar  1 04:42:21 np0005634532 podman[85399]: 2026-03-01 09:42:21.749361579 +0000 UTC m=+0.147274559 container init 08606abe58fd8a6584f54fdafef93ba6301c756dc24c8ee9f162583b889d8144 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_heyrovsky, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Mar  1 04:42:21 np0005634532 podman[85399]: 2026-03-01 09:42:21.757075137 +0000 UTC m=+0.154988137 container start 08606abe58fd8a6584f54fdafef93ba6301c756dc24c8ee9f162583b889d8144 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_heyrovsky, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Mar  1 04:42:21 np0005634532 lucid_heyrovsky[85416]: 167 167
Mar  1 04:42:21 np0005634532 systemd[1]: libpod-08606abe58fd8a6584f54fdafef93ba6301c756dc24c8ee9f162583b889d8144.scope: Deactivated successfully.
Mar  1 04:42:21 np0005634532 podman[85399]: 2026-03-01 09:42:21.762969652 +0000 UTC m=+0.160882612 container attach 08606abe58fd8a6584f54fdafef93ba6301c756dc24c8ee9f162583b889d8144 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_heyrovsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True)
Mar  1 04:42:21 np0005634532 podman[85399]: 2026-03-01 09:42:21.763931455 +0000 UTC m=+0.161844455 container died 08606abe58fd8a6584f54fdafef93ba6301c756dc24c8ee9f162583b889d8144 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_heyrovsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1)
Mar  1 04:42:21 np0005634532 systemd[1]: var-lib-containers-storage-overlay-ed1ad23d25372df07ba3d5f50beb950d710add79a50ff5b0bd744aa42432eedb-merged.mount: Deactivated successfully.
Mar  1 04:42:21 np0005634532 podman[85399]: 2026-03-01 09:42:21.820770812 +0000 UTC m=+0.218683782 container remove 08606abe58fd8a6584f54fdafef93ba6301c756dc24c8ee9f162583b889d8144 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_heyrovsky, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Mar  1 04:42:21 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e6 do_prune osdmap full prune enabled
Mar  1 04:42:21 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e6 encode_pending skipping prime_pg_temp; mapping job did not start
Mar  1 04:42:21 np0005634532 systemd[1]: libpod-conmon-08606abe58fd8a6584f54fdafef93ba6301c756dc24c8ee9f162583b889d8144.scope: Deactivated successfully.
Mar  1 04:42:21 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/2503527526,v1:192.168.122.100:6803/2503527526]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Mar  1 04:42:21 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.101:6800/743937274,v1:192.168.122.101:6801/743937274]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-1", "root=default"]}]': finished
Mar  1 04:42:21 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e7 e7: 2 total, 0 up, 2 in
Mar  1 04:42:21 np0005634532 ceph-osd[84309]: osd.0 0 done with init, starting boot process
Mar  1 04:42:21 np0005634532 ceph-osd[84309]: osd.0 0 start_boot
Mar  1 04:42:21 np0005634532 ceph-osd[84309]: osd.0 0 maybe_override_options_for_qos osd_max_backfills set to 1
Mar  1 04:42:21 np0005634532 ceph-osd[84309]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Mar  1 04:42:21 np0005634532 ceph-osd[84309]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Mar  1 04:42:21 np0005634532 ceph-osd[84309]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Mar  1 04:42:21 np0005634532 ceph-osd[84309]: osd.0 0  bench count 12288000 bsize 4 KiB
Mar  1 04:42:21 np0005634532 ceph-mon[75825]: log_channel(cluster) log [DBG] : osdmap e7: 2 total, 0 up, 2 in
Mar  1 04:42:21 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Mar  1 04:42:21 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Mar  1 04:42:21 np0005634532 ceph-mgr[76134]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Mar  1 04:42:21 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Mar  1 04:42:21 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Mar  1 04:42:21 np0005634532 ceph-mgr[76134]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Mar  1 04:42:21 np0005634532 ceph-mgr[76134]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/2503527526; not ready for session (expect reconnect)
Mar  1 04:42:21 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Mar  1 04:42:21 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Mar  1 04:42:21 np0005634532 ceph-mgr[76134]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Mar  1 04:42:21 np0005634532 ceph-mgr[76134]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.101:6800/743937274; not ready for session (expect reconnect)
Mar  1 04:42:21 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Mar  1 04:42:21 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Mar  1 04:42:21 np0005634532 ceph-mgr[76134]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Mar  1 04:42:21 np0005634532 podman[85440]: 2026-03-01 09:42:21.988585393 +0000 UTC m=+0.065536679 container create 714b51d01a553b5ce418d526cebf466b7c69d162905657d851589ebe27e6b9fe (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_rhodes, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Mar  1 04:42:22 np0005634532 podman[85440]: 2026-03-01 09:42:21.949523324 +0000 UTC m=+0.026474580 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Mar  1 04:42:22 np0005634532 systemd[1]: Started libpod-conmon-714b51d01a553b5ce418d526cebf466b7c69d162905657d851589ebe27e6b9fe.scope.
Mar  1 04:42:22 np0005634532 systemd[1]: Started libcrun container.
Mar  1 04:42:22 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Mar  1 04:42:22 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4de13e863b6c35a7a6237c1c4223b93cefef8c92c418a97c801579042c94216d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Mar  1 04:42:22 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4de13e863b6c35a7a6237c1c4223b93cefef8c92c418a97c801579042c94216d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Mar  1 04:42:22 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4de13e863b6c35a7a6237c1c4223b93cefef8c92c418a97c801579042c94216d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Mar  1 04:42:22 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4de13e863b6c35a7a6237c1c4223b93cefef8c92c418a97c801579042c94216d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Mar  1 04:42:22 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:42:22 np0005634532 podman[85440]: 2026-03-01 09:42:22.129804921 +0000 UTC m=+0.206756167 container init 714b51d01a553b5ce418d526cebf466b7c69d162905657d851589ebe27e6b9fe (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_rhodes, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Mar  1 04:42:22 np0005634532 podman[85440]: 2026-03-01 09:42:22.137285363 +0000 UTC m=+0.214236609 container start 714b51d01a553b5ce418d526cebf466b7c69d162905657d851589ebe27e6b9fe (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_rhodes, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Mar  1 04:42:22 np0005634532 podman[85440]: 2026-03-01 09:42:22.157657992 +0000 UTC m=+0.234609278 container attach 714b51d01a553b5ce418d526cebf466b7c69d162905657d851589ebe27e6b9fe (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_rhodes, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Mar  1 04:42:22 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v33: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Mar  1 04:42:22 np0005634532 modest_rhodes[85456]: [
Mar  1 04:42:22 np0005634532 modest_rhodes[85456]:    {
Mar  1 04:42:22 np0005634532 modest_rhodes[85456]:        "available": false,
Mar  1 04:42:22 np0005634532 modest_rhodes[85456]:        "being_replaced": false,
Mar  1 04:42:22 np0005634532 modest_rhodes[85456]:        "ceph_device_lvm": false,
Mar  1 04:42:22 np0005634532 modest_rhodes[85456]:        "device_id": "QEMU_DVD-ROM_QM00001",
Mar  1 04:42:22 np0005634532 modest_rhodes[85456]:        "lsm_data": {},
Mar  1 04:42:22 np0005634532 modest_rhodes[85456]:        "lvs": [],
Mar  1 04:42:22 np0005634532 modest_rhodes[85456]:        "path": "/dev/sr0",
Mar  1 04:42:22 np0005634532 modest_rhodes[85456]:        "rejected_reasons": [
Mar  1 04:42:22 np0005634532 modest_rhodes[85456]:            "Has a FileSystem",
Mar  1 04:42:22 np0005634532 modest_rhodes[85456]:            "Insufficient space (<5GB)"
Mar  1 04:42:22 np0005634532 modest_rhodes[85456]:        ],
Mar  1 04:42:22 np0005634532 modest_rhodes[85456]:        "sys_api": {
Mar  1 04:42:22 np0005634532 modest_rhodes[85456]:            "actuators": null,
Mar  1 04:42:22 np0005634532 modest_rhodes[85456]:            "device_nodes": [
Mar  1 04:42:22 np0005634532 modest_rhodes[85456]:                "sr0"
Mar  1 04:42:22 np0005634532 modest_rhodes[85456]:            ],
Mar  1 04:42:22 np0005634532 modest_rhodes[85456]:            "devname": "sr0",
Mar  1 04:42:22 np0005634532 modest_rhodes[85456]:            "human_readable_size": "482.00 KB",
Mar  1 04:42:22 np0005634532 modest_rhodes[85456]:            "id_bus": "ata",
Mar  1 04:42:22 np0005634532 modest_rhodes[85456]:            "model": "QEMU DVD-ROM",
Mar  1 04:42:22 np0005634532 modest_rhodes[85456]:            "nr_requests": "2",
Mar  1 04:42:22 np0005634532 modest_rhodes[85456]:            "parent": "/dev/sr0",
Mar  1 04:42:22 np0005634532 modest_rhodes[85456]:            "partitions": {},
Mar  1 04:42:22 np0005634532 modest_rhodes[85456]:            "path": "/dev/sr0",
Mar  1 04:42:22 np0005634532 modest_rhodes[85456]:            "removable": "1",
Mar  1 04:42:22 np0005634532 modest_rhodes[85456]:            "rev": "2.5+",
Mar  1 04:42:22 np0005634532 modest_rhodes[85456]:            "ro": "0",
Mar  1 04:42:22 np0005634532 modest_rhodes[85456]:            "rotational": "1",
Mar  1 04:42:22 np0005634532 modest_rhodes[85456]:            "sas_address": "",
Mar  1 04:42:22 np0005634532 modest_rhodes[85456]:            "sas_device_handle": "",
Mar  1 04:42:22 np0005634532 modest_rhodes[85456]:            "scheduler_mode": "mq-deadline",
Mar  1 04:42:22 np0005634532 modest_rhodes[85456]:            "sectors": 0,
Mar  1 04:42:22 np0005634532 modest_rhodes[85456]:            "sectorsize": "2048",
Mar  1 04:42:22 np0005634532 modest_rhodes[85456]:            "size": 493568.0,
Mar  1 04:42:22 np0005634532 modest_rhodes[85456]:            "support_discard": "2048",
Mar  1 04:42:22 np0005634532 modest_rhodes[85456]:            "type": "disk",
Mar  1 04:42:22 np0005634532 modest_rhodes[85456]:            "vendor": "QEMU"
Mar  1 04:42:22 np0005634532 modest_rhodes[85456]:        }
Mar  1 04:42:22 np0005634532 modest_rhodes[85456]:    }
Mar  1 04:42:22 np0005634532 modest_rhodes[85456]: ]
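The JSON block above is ceph-volume inventory-style output from the modest_rhodes container; the only device it sees, /dev/sr0, is unusable. A small sketch for separating usable from rejected devices in such a dump (the inventory.json filename is hypothetical):

    import json

    # Dump of the JSON block above (filename is an assumption).
    with open("inventory.json") as f:
        devices = json.load(f)

    usable = [d["path"] for d in devices if d["available"]]
    rejected = {d["path"]: d["rejected_reasons"]
                for d in devices if not d["available"]}
    print("usable:", usable)      # [] -- nothing eligible on this host
    print("rejected:", rejected)  # {'/dev/sr0': ['Has a FileSystem', 'Insufficient space (<5GB)']}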
Mar  1 04:42:22 np0005634532 ceph-mgr[76134]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/2503527526; not ready for session (expect reconnect)
Mar  1 04:42:22 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Mar  1 04:42:22 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Mar  1 04:42:22 np0005634532 ceph-mgr[76134]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Mar  1 04:42:22 np0005634532 systemd[1]: libpod-714b51d01a553b5ce418d526cebf466b7c69d162905657d851589ebe27e6b9fe.scope: Deactivated successfully.
Mar  1 04:42:22 np0005634532 podman[85440]: 2026-03-01 09:42:22.846288283 +0000 UTC m=+0.923239569 container died 714b51d01a553b5ce418d526cebf466b7c69d162905657d851589ebe27e6b9fe (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_rhodes, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Mar  1 04:42:22 np0005634532 ceph-mgr[76134]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.101:6800/743937274; not ready for session (expect reconnect)
Mar  1 04:42:22 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Mar  1 04:42:22 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Mar  1 04:42:22 np0005634532 ceph-mgr[76134]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Mar  1 04:42:22 np0005634532 ceph-mon[75825]: from='osd.0 [v2:192.168.122.100:6802/2503527526,v1:192.168.122.100:6803/2503527526]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Mar  1 04:42:22 np0005634532 ceph-mon[75825]: from='osd.1 [v2:192.168.122.101:6800/743937274,v1:192.168.122.101:6801/743937274]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-1", "root=default"]}]': finished
Mar  1 04:42:22 np0005634532 ceph-mon[75825]: from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:42:22 np0005634532 systemd[1]: var-lib-containers-storage-overlay-4de13e863b6c35a7a6237c1c4223b93cefef8c92c418a97c801579042c94216d-merged.mount: Deactivated successfully.
Mar  1 04:42:22 np0005634532 podman[85440]: 2026-03-01 09:42:22.966152071 +0000 UTC m=+1.043103327 container remove 714b51d01a553b5ce418d526cebf466b7c69d162905657d851589ebe27e6b9fe (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_rhodes, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Mar  1 04:42:22 np0005634532 systemd[1]: libpod-conmon-714b51d01a553b5ce418d526cebf466b7c69d162905657d851589ebe27e6b9fe.scope: Deactivated successfully.
Mar  1 04:42:23 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Mar  1 04:42:23 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:42:23 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Mar  1 04:42:23 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:42:23 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Mar  1 04:42:23 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:42:23 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Mar  1 04:42:23 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:42:23 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} v 0)
Mar  1 04:42:23 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
Mar  1 04:42:23 np0005634532 ceph-mgr[76134]: [cephadm INFO root] Adjusting osd_memory_target on compute-0 to 127.9M
Mar  1 04:42:23 np0005634532 ceph-mgr[76134]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on compute-0 to 127.9M
Mar  1 04:42:23 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0)
Mar  1 04:42:23 np0005634532 ceph-mgr[76134]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on compute-0 to 134189056: error parsing value: Value '134189056' is below minimum 939524096
Mar  1 04:42:23 np0005634532 ceph-mgr[76134]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on compute-0 to 134189056: error parsing value: Value '134189056' is below minimum 939524096
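The failed set above is just arithmetic: cephadm's autotuner derived a per-OSD memory target from this small VM's RAM that falls below the option's hard floor, so the value is rejected and the OSD keeps its default. Checking the two figures from the log:

    MiB = 2 ** 20
    print(134189056 / MiB)   # ~127.97 -- logged as "127.9M"
    print(939524096 / MiB)   # 896.0   -- the enforced osd_memory_target minimum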
Mar  1 04:42:23 np0005634532 ceph-mgr[76134]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/2503527526; not ready for session (expect reconnect)
Mar  1 04:42:23 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Mar  1 04:42:23 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Mar  1 04:42:23 np0005634532 ceph-mgr[76134]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Mar  1 04:42:23 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Mar  1 04:42:23 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Mar  1 04:42:23 np0005634532 ceph-mgr[76134]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.101:6800/743937274; not ready for session (expect reconnect)
Mar  1 04:42:23 np0005634532 ceph-mgr[76134]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Mar  1 04:42:23 np0005634532 ceph-mon[75825]: from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:42:23 np0005634532 ceph-mon[75825]: from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:42:23 np0005634532 ceph-mon[75825]: from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:42:23 np0005634532 ceph-mon[75825]: from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:42:23 np0005634532 ceph-mon[75825]: from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
Mar  1 04:42:23 np0005634532 ceph-mon[75825]: Adjusting osd_memory_target on compute-0 to 127.9M
Mar  1 04:42:23 np0005634532 ceph-mon[75825]: Unable to set osd_memory_target on compute-0 to 134189056: error parsing value: Value '134189056' is below minimum 939524096
Mar  1 04:42:24 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v34: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Mar  1 04:42:24 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Mar  1 04:42:24 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:42:24 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Mar  1 04:42:24 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:42:24 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Mar  1 04:42:24 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:42:24 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Mar  1 04:42:24 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:42:24 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} v 0)
Mar  1 04:42:24 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
Mar  1 04:42:24 np0005634532 ceph-mgr[76134]: [cephadm INFO root] Adjusting osd_memory_target on compute-1 to  5247M
Mar  1 04:42:24 np0005634532 ceph-mgr[76134]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on compute-1 to  5247M
Mar  1 04:42:24 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0)
Mar  1 04:42:24 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:42:24 np0005634532 ceph-mgr[76134]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/2503527526; not ready for session (expect reconnect)
Mar  1 04:42:24 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Mar  1 04:42:24 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Mar  1 04:42:24 np0005634532 ceph-mgr[76134]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Mar  1 04:42:24 np0005634532 ceph-mgr[76134]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.101:6800/743937274; not ready for session (expect reconnect)
Mar  1 04:42:24 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Mar  1 04:42:24 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Mar  1 04:42:24 np0005634532 ceph-mgr[76134]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Mar  1 04:42:24 np0005634532 ceph-mon[75825]: from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:42:24 np0005634532 ceph-mon[75825]: from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:42:24 np0005634532 ceph-mon[75825]: from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:42:24 np0005634532 ceph-mon[75825]: from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:42:24 np0005634532 ceph-mon[75825]: from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
Mar  1 04:42:24 np0005634532 ceph-mon[75825]: from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:42:24 np0005634532 ceph-osd[84309]: osd.0 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 23.778 iops: 6087.257 elapsed_sec: 0.493
Mar  1 04:42:24 np0005634532 ceph-osd[84309]: log_channel(cluster) log [WRN] : OSD bench result of 6087.256957 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.0. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
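Acting on the recommendation in the warning above would mean benchmarking the device with an external tool (e.g. fio) and pinning the measured capacity. A hedged sketch of such an override, assuming the ceph CLI is available on the host; the 6000 IOPS figure is purely illustrative, not a measured value:

    import subprocess

    # Pin osd.0's mclock IOPS capacity to an externally measured value
    # (value shown is illustrative only).
    subprocess.run(
        ["ceph", "config", "set", "osd.0",
         "osd_mclock_max_capacity_iops_hdd", "6000"],
        check=True,
    )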
Mar  1 04:42:24 np0005634532 ceph-osd[84309]: osd.0 0 waiting for initial osdmap
Mar  1 04:42:24 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-osd-0[84305]: 2026-03-01T09:42:24.888+0000 7ffafb7c2640 -1 osd.0 0 waiting for initial osdmap
Mar  1 04:42:24 np0005634532 ceph-osd[84309]: osd.0 7 crush map has features 288514050185494528, adjusting msgr requires for clients
Mar  1 04:42:24 np0005634532 ceph-osd[84309]: osd.0 7 crush map has features 288514050185494528 was 288232575208792577, adjusting msgr requires for mons
Mar  1 04:42:24 np0005634532 ceph-osd[84309]: osd.0 7 crush map has features 3314932999778484224, adjusting msgr requires for osds
Mar  1 04:42:24 np0005634532 ceph-osd[84309]: osd.0 7 check_osdmap_features require_osd_release unknown -> squid
Mar  1 04:42:24 np0005634532 ceph-osd[84309]: osd.0 7 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Mar  1 04:42:24 np0005634532 ceph-osd[84309]: osd.0 7 set_numa_affinity not setting numa affinity
Mar  1 04:42:24 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-osd-0[84305]: 2026-03-01T09:42:24.957+0000 7ffaf65d7640 -1 osd.0 7 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Mar  1 04:42:24 np0005634532 ceph-osd[84309]: osd.0 7 _collect_metadata loop3:  no unique device id for loop3: fallback method has no model nor serial no unique device path for loop3: no symlink to loop3 in /dev/disk/by-path
Mar  1 04:42:25 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e7 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Mar  1 04:42:25 np0005634532 ceph-mgr[76134]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/2503527526; not ready for session (expect reconnect)
Mar  1 04:42:25 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Mar  1 04:42:25 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Mar  1 04:42:25 np0005634532 ceph-mgr[76134]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Mar  1 04:42:25 np0005634532 ceph-osd[84309]: osd.0 7 tick checking mon for new map
Mar  1 04:42:25 np0005634532 ceph-mgr[76134]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.101:6800/743937274; not ready for session (expect reconnect)
Mar  1 04:42:25 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Mar  1 04:42:25 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Mar  1 04:42:25 np0005634532 ceph-mgr[76134]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Mar  1 04:42:25 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e7 do_prune osdmap full prune enabled
Mar  1 04:42:25 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e7 encode_pending skipping prime_pg_temp; mapping job did not start
Mar  1 04:42:25 np0005634532 ceph-mon[75825]: Adjusting osd_memory_target on compute-1 to  5247M
Mar  1 04:42:25 np0005634532 ceph-mon[75825]: OSD bench result of 6087.256957 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.0. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Mar  1 04:42:25 np0005634532 ceph-mon[75825]: OSD bench result of 10461.171870 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.1. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Mar  1 04:42:25 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e8 e8: 2 total, 2 up, 2 in
Mar  1 04:42:25 np0005634532 ceph-mon[75825]: log_channel(cluster) log [INF] : osd.1 [v2:192.168.122.101:6800/743937274,v1:192.168.122.101:6801/743937274] boot
Mar  1 04:42:25 np0005634532 ceph-mon[75825]: log_channel(cluster) log [INF] : osd.0 [v2:192.168.122.100:6802/2503527526,v1:192.168.122.100:6803/2503527526] boot
Mar  1 04:42:25 np0005634532 ceph-mon[75825]: log_channel(cluster) log [DBG] : osdmap e8: 2 total, 2 up, 2 in
Mar  1 04:42:25 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Mar  1 04:42:25 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Mar  1 04:42:25 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Mar  1 04:42:25 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Mar  1 04:42:25 np0005634532 ceph-osd[84309]: osd.0 8 state: booting -> active
Mar  1 04:42:26 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v36: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Mar  1 04:42:26 np0005634532 ceph-mgr[76134]: [devicehealth INFO root] creating mgr pool
Mar  1 04:42:26 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true} v 0)
Mar  1 04:42:26 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch
Mar  1 04:42:26 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e8 do_prune osdmap full prune enabled
Mar  1 04:42:26 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e8 encode_pending skipping prime_pg_temp; mapping job did not start
Mar  1 04:42:26 np0005634532 ceph-mon[75825]: osd.1 [v2:192.168.122.101:6800/743937274,v1:192.168.122.101:6801/743937274] boot
Mar  1 04:42:26 np0005634532 ceph-mon[75825]: osd.0 [v2:192.168.122.100:6802/2503527526,v1:192.168.122.100:6803/2503527526] boot
Mar  1 04:42:26 np0005634532 ceph-mon[75825]: from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch
Mar  1 04:42:26 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
Mar  1 04:42:26 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e9 e9: 2 total, 2 up, 2 in
Mar  1 04:42:26 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e9 crush map has features 3314933000852226048, adjusting msgr requires
Mar  1 04:42:26 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e9 crush map has features 288514051259236352, adjusting msgr requires
Mar  1 04:42:26 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e9 crush map has features 288514051259236352, adjusting msgr requires
Mar  1 04:42:26 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e9 crush map has features 288514051259236352, adjusting msgr requires
Mar  1 04:42:26 np0005634532 ceph-mon[75825]: log_channel(cluster) log [DBG] : osdmap e9: 2 total, 2 up, 2 in
Mar  1 04:42:26 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true} v 0)
Mar  1 04:42:26 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch
Mar  1 04:42:26 np0005634532 ceph-osd[84309]: osd.0 9 crush map has features 288514051259236352, adjusting msgr requires for clients
Mar  1 04:42:26 np0005634532 ceph-osd[84309]: osd.0 9 crush map has features 288514051259236352 was 288514050185503233, adjusting msgr requires for mons
Mar  1 04:42:26 np0005634532 ceph-osd[84309]: osd.0 9 crush map has features 3314933000852226048, adjusting msgr requires for osds
Mar  1 04:42:27 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e9 do_prune osdmap full prune enabled
Mar  1 04:42:27 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
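The two pool operations above arrive at the mon as mon_command JSON from the devicehealth mgr module. The same request can be reproduced verbatim through the librados Python binding, as a sketch (assumes python3-rados, a reachable cluster, and admin credentials under /etc/ceph):

    import json
    import rados

    # The exact command payload seen in the audit log above.
    cmd = {"prefix": "osd pool create", "format": "json", "pool": ".mgr",
           "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32,
           "yes_i_really_mean_it": True}

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
    cluster.connect()
    try:
        ret, out, errs = cluster.mon_command(json.dumps(cmd), b"")
        print(ret, errs)
    finally:
        cluster.shutdown()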
Mar  1 04:42:27 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e10 e10: 2 total, 2 up, 2 in
Mar  1 04:42:27 np0005634532 ceph-mon[75825]: log_channel(cluster) log [DBG] : osdmap e10: 2 total, 2 up, 2 in
Mar  1 04:42:27 np0005634532 ceph-mon[75825]: from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
Mar  1 04:42:27 np0005634532 ceph-mon[75825]: from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch
Mar  1 04:42:28 np0005634532 ceph-mgr[76134]: [devicehealth INFO root] creating main.db for devicehealth
Mar  1 04:42:28 np0005634532 ceph-mgr[76134]: [devicehealth INFO root] Check health
Mar  1 04:42:28 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
Mar  1 04:42:28 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
Mar  1 04:42:28 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Mar  1 04:42:28 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Mar  1 04:42:28 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v39: 1 pgs: 1 unknown; 0 B data, 853 MiB used, 39 GiB / 40 GiB avail
Mar  1 04:42:28 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e10 do_prune osdmap full prune enabled
Mar  1 04:42:28 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e11 e11: 2 total, 2 up, 2 in
Mar  1 04:42:28 np0005634532 ceph-mon[75825]: log_channel(cluster) log [DBG] : osdmap e11: 2 total, 2 up, 2 in
Mar  1 04:42:28 np0005634532 ceph-mon[75825]: from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
Mar  1 04:42:28 np0005634532 ceph-mon[75825]: from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
Mar  1 04:42:28 np0005634532 ceph-mon[75825]: from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
Mar  1 04:42:29 np0005634532 ceph-mon[75825]: log_channel(cluster) log [DBG] : mgrmap e9: compute-0.ebwufc(active, since 76s)
Mar  1 04:42:30 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e11 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Mar  1 04:42:30 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v41: 1 pgs: 1 unknown; 0 B data, 853 MiB used, 39 GiB / 40 GiB avail
Mar  1 04:42:32 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v42: 1 pgs: 1 active+clean; 449 KiB data, 853 MiB used, 39 GiB / 40 GiB avail
Mar  1 04:42:34 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v43: 1 pgs: 1 active+clean; 449 KiB data, 853 MiB used, 39 GiB / 40 GiB avail
Mar  1 04:42:35 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e11 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Mar  1 04:42:36 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v44: 1 pgs: 1 active+clean; 449 KiB data, 853 MiB used, 39 GiB / 40 GiB avail
Mar  1 04:42:38 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v45: 1 pgs: 1 active+clean; 449 KiB data, 853 MiB used, 39 GiB / 40 GiB avail
Mar  1 04:42:39 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Mar  1 04:42:39 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:42:39 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Mar  1 04:42:39 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:42:39 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Mar  1 04:42:39 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:42:39 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Mar  1 04:42:39 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:42:39 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0)
Mar  1 04:42:39 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Mar  1 04:42:39 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Mar  1 04:42:39 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Mar  1 04:42:39 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Mar  1 04:42:39 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Mar  1 04:42:39 np0005634532 ceph-mgr[76134]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.conf
Mar  1 04:42:39 np0005634532 ceph-mgr[76134]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.conf
Mar  1 04:42:39 np0005634532 ceph-mgr[76134]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/437b1e74-f995-5d64-af1d-257ce01d77ab/config/ceph.conf
Mar  1 04:42:39 np0005634532 ceph-mgr[76134]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/437b1e74-f995-5d64-af1d-257ce01d77ab/config/ceph.conf
Mar  1 04:42:40 np0005634532 ceph-mgr[76134]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Mar  1 04:42:40 np0005634532 ceph-mgr[76134]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.client.admin.keyring
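cephadm is refreshing the client config it distributes to managed hosts. The two files being written can be reproduced with the same mon commands seen dispatched in the audit entries above (a sketch; output goes to stdout rather than the host paths cephadm writes):
    # Minimal ceph.conf (fsid + mon addresses), as pushed to /etc/ceph/ceph.conf:
    ceph config generate-minimal-conf
    # Admin keyring distributed alongside it:
    ceph auth get client.admin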
Mar  1 04:42:40 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e11 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Mar  1 04:42:40 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v46: 1 pgs: 1 active+clean; 449 KiB data, 853 MiB used, 39 GiB / 40 GiB avail
Mar  1 04:42:40 np0005634532 ceph-mon[75825]: from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:42:40 np0005634532 ceph-mon[75825]: from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:42:40 np0005634532 ceph-mon[75825]: from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:42:40 np0005634532 ceph-mon[75825]: from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:42:40 np0005634532 ceph-mon[75825]: from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Mar  1 04:42:40 np0005634532 ceph-mon[75825]: from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Mar  1 04:42:40 np0005634532 ceph-mon[75825]: Updating compute-2:/etc/ceph/ceph.conf
Mar  1 04:42:40 np0005634532 ceph-mon[75825]: Updating compute-2:/var/lib/ceph/437b1e74-f995-5d64-af1d-257ce01d77ab/config/ceph.conf
Mar  1 04:42:40 np0005634532 ceph-mon[75825]: Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Mar  1 04:42:40 np0005634532 ceph-mgr[76134]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/437b1e74-f995-5d64-af1d-257ce01d77ab/config/ceph.client.admin.keyring
Mar  1 04:42:40 np0005634532 ceph-mgr[76134]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/437b1e74-f995-5d64-af1d-257ce01d77ab/config/ceph.client.admin.keyring
Mar  1 04:42:40 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Mar  1 04:42:40 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:42:40 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Mar  1 04:42:40 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:42:40 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Mar  1 04:42:40 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:42:40 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v47: 1 pgs: 1 active+clean; 449 KiB data, 853 MiB used, 39 GiB / 40 GiB avail
Mar  1 04:42:40 np0005634532 ceph-mgr[76134]: [progress INFO root] update: starting ev b068c79f-5f80-4d7e-9198-036970fee4a8 (Updating mon deployment (+2 -> 3))
Mar  1 04:42:40 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0)
Mar  1 04:42:40 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Mar  1 04:42:40 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0)
Mar  1 04:42:40 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Mar  1 04:42:40 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Mar  1 04:42:40 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Mar  1 04:42:40 np0005634532 ceph-mgr[76134]: [cephadm INFO cephadm.serve] Deploying daemon mon.compute-2 on compute-2
Mar  1 04:42:40 np0005634532 ceph-mgr[76134]: log_channel(cephadm) log [INF] : Deploying daemon mon.compute-2 on compute-2
Mar  1 04:42:41 np0005634532 ceph-mon[75825]: Updating compute-2:/var/lib/ceph/437b1e74-f995-5d64-af1d-257ce01d77ab/config/ceph.client.admin.keyring
Mar  1 04:42:41 np0005634532 ceph-mon[75825]: from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:42:41 np0005634532 ceph-mon[75825]: from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:42:41 np0005634532 ceph-mon[75825]: from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:42:41 np0005634532 ceph-mon[75825]: from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Mar  1 04:42:41 np0005634532 ceph-mon[75825]: Deploying daemon mon.compute-2 on compute-2
Mar  1 04:42:41 np0005634532 ceph-mon[75825]: log_channel(cluster) log [INF] : Health check cleared: CEPHADM_APPLY_SPEC_FAIL (was: Failed to apply 2 service(s): mon,mgr)
Mar  1 04:42:41 np0005634532 ceph-mon[75825]: log_channel(cluster) log [INF] : Cluster is now healthy
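With CEPHADM_APPLY_SPEC_FAIL cleared, the cluster transitions to healthy. A quick manual cross-check at this point would be (sketch):
    ceph health detail   # enumerates active health checks; none expected here
    ceph -s              # summary view: HEALTH_OK plus mon/mgr/osd/pg counts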
Mar  1 04:42:42 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Mar  1 04:42:42 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Mar  1 04:42:42 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Mar  1 04:42:42 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Mar  1 04:42:42 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Mar  1 04:42:42 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Mar  1 04:42:42 np0005634532 ceph-mon[75825]: Health check cleared: CEPHADM_APPLY_SPEC_FAIL (was: Failed to apply 2 service(s): mon,mgr)
Mar  1 04:42:42 np0005634532 ceph-mon[75825]: Cluster is now healthy
Mar  1 04:42:42 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v48: 1 pgs: 1 active+clean; 449 KiB data, 853 MiB used, 39 GiB / 40 GiB avail
Mar  1 04:42:43 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Mar  1 04:42:43 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:42:43 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e1  adding peer [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] to list of hints
Mar  1 04:42:43 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e1  adding peer [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] to list of hints
Mar  1 04:42:43 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Mar  1 04:42:43 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:42:43 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Mar  1 04:42:43 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:42:43 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0)
Mar  1 04:42:43 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Mar  1 04:42:43 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0)
Mar  1 04:42:43 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Mar  1 04:42:43 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Mar  1 04:42:43 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Mar  1 04:42:43 np0005634532 ceph-mgr[76134]: [cephadm INFO cephadm.serve] Deploying daemon mon.compute-1 on compute-1
Mar  1 04:42:43 np0005634532 ceph-mgr[76134]: log_channel(cephadm) log [INF] : Deploying daemon mon.compute-1 on compute-1
Mar  1 04:42:43 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e1  adding peer [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] to list of hints
Mar  1 04:42:43 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).monmap v1 adding/updating compute-2 at [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] to monitor cluster
Mar  1 04:42:43 np0005634532 ceph-mgr[76134]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/3041926517; not ready for session (expect reconnect)
Mar  1 04:42:43 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Mar  1 04:42:43 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Mar  1 04:42:43 np0005634532 ceph-mgr[76134]: mgr finish mon failed to return metadata for mon.compute-2: (2) No such file or directory
Mar  1 04:42:43 np0005634532 ceph-mon[75825]: mon.compute-0@0(probing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Mar  1 04:42:43 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Mar  1 04:42:43 np0005634532 ceph-mon[75825]: mon.compute-0@0(probing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Mar  1 04:42:43 np0005634532 ceph-mgr[76134]: mgr finish mon failed to return metadata for mon.compute-2: (22) Invalid argument
Mar  1 04:42:43 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Mar  1 04:42:43 np0005634532 ceph-mon[75825]: log_channel(cluster) log [INF] : mon.compute-0 calling monitor election
Mar  1 04:42:43 np0005634532 ceph-mon[75825]: paxos.0).electionLogic(5) init, last seen epoch 5, mid-election, bumping
Mar  1 04:42:43 np0005634532 ceph-mon[75825]: mon.compute-0@0(electing) e2 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Mar  1 04:42:44 np0005634532 ceph-mgr[76134]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/3041926517; not ready for session (expect reconnect)
Mar  1 04:42:44 np0005634532 ceph-mon[75825]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Mar  1 04:42:44 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Mar  1 04:42:44 np0005634532 ceph-mgr[76134]: mgr finish mon failed to return metadata for mon.compute-2: (22) Invalid argument
Mar  1 04:42:44 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v49: 1 pgs: 1 active+clean; 449 KiB data, 853 MiB used, 39 GiB / 40 GiB avail
Mar  1 04:42:45 np0005634532 ceph-mgr[76134]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/3041926517; not ready for session (expect reconnect)
Mar  1 04:42:45 np0005634532 ceph-mon[75825]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Mar  1 04:42:45 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Mar  1 04:42:45 np0005634532 ceph-mgr[76134]: mgr finish mon failed to return metadata for mon.compute-2: (22) Invalid argument
Mar  1 04:42:45 np0005634532 ceph-mon[75825]: mon.compute-0@0(electing) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Mar  1 04:42:45 np0005634532 ceph-mon[75825]: mon.compute-0@0(electing) e2  adding peer [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to list of hints
Mar  1 04:42:45 np0005634532 ceph-mon[75825]: mon.compute-0@0(electing) e2  adding peer [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to list of hints
Mar  1 04:42:45 np0005634532 ceph-mon[75825]: mon.compute-0@0(electing) e2  adding peer [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to list of hints
Mar  1 04:42:45 np0005634532 ceph-mgr[76134]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/3372533452; not ready for session (expect reconnect)
Mar  1 04:42:45 np0005634532 ceph-mon[75825]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Mar  1 04:42:45 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Mar  1 04:42:45 np0005634532 ceph-mgr[76134]: mgr finish mon failed to return metadata for mon.compute-1: (2) No such file or directory
Mar  1 04:42:46 np0005634532 ceph-mgr[76134]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/3041926517; not ready for session (expect reconnect)
Mar  1 04:42:46 np0005634532 ceph-mon[75825]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Mar  1 04:42:46 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Mar  1 04:42:46 np0005634532 ceph-mgr[76134]: mgr finish mon failed to return metadata for mon.compute-2: (22) Invalid argument
Mar  1 04:42:46 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v50: 1 pgs: 1 active+clean; 449 KiB data, 853 MiB used, 39 GiB / 40 GiB avail
Mar  1 04:42:46 np0005634532 ceph-mgr[76134]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/3372533452; not ready for session (expect reconnect)
Mar  1 04:42:46 np0005634532 ceph-mon[75825]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Mar  1 04:42:46 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Mar  1 04:42:46 np0005634532 ceph-mgr[76134]: mgr finish mon failed to return metadata for mon.compute-1: (2) No such file or directory
Mar  1 04:42:47 np0005634532 ceph-mgr[76134]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/3041926517; not ready for session (expect reconnect)
Mar  1 04:42:47 np0005634532 ceph-mon[75825]: mon.compute-0@0(electing) e2  adding peer [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to list of hints
Mar  1 04:42:47 np0005634532 ceph-mgr[76134]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/3372533452; not ready for session (expect reconnect)
Mar  1 04:42:48 np0005634532 ceph-mon[75825]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Mar  1 04:42:48 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Mar  1 04:42:48 np0005634532 ceph-mon[75825]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Mar  1 04:42:48 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Mar  1 04:42:48 np0005634532 ceph-mgr[76134]: mgr finish mon failed to return metadata for mon.compute-2: (22) Invalid argument
Mar  1 04:42:48 np0005634532 ceph-mgr[76134]: mgr finish mon failed to return metadata for mon.compute-1: (2) No such file or directory
Mar  1 04:42:48 np0005634532 ceph-mgr[76134]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/3041926517; not ready for session (expect reconnect)
Mar  1 04:42:48 np0005634532 ceph-mon[75825]: mon.compute-0@0(electing) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Mar  1 04:42:48 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Mar  1 04:42:48 np0005634532 ceph-mgr[76134]: mgr finish mon failed to return metadata for mon.compute-2: (22) Invalid argument
Mar  1 04:42:48 np0005634532 ceph-mon[75825]: paxos.0).electionLogic(7) init, last seen epoch 7, mid-election, bumping
Mar  1 04:42:48 np0005634532 ceph-mon[75825]: mon.compute-0@0(electing) e2 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Mar  1 04:42:48 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v51: 1 pgs: 1 active+clean; 449 KiB data, 853 MiB used, 39 GiB / 40 GiB avail
Mar  1 04:42:48 np0005634532 ceph-mgr[76134]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/3372533452; not ready for session (expect reconnect)
Mar  1 04:42:48 np0005634532 ceph-mon[75825]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0,compute-2 in quorum (ranks 0,1)
Mar  1 04:42:49 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Mar  1 04:42:49 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Mar  1 04:42:49 np0005634532 ceph-mgr[76134]: mgr finish mon failed to return metadata for mon.compute-1: (2) No such file or directory
Mar  1 04:42:49 np0005634532 ceph-mon[75825]: log_channel(cluster) log [DBG] : monmap epoch 2
Mar  1 04:42:49 np0005634532 ceph-mon[75825]: log_channel(cluster) log [DBG] : fsid 437b1e74-f995-5d64-af1d-257ce01d77ab
Mar  1 04:42:49 np0005634532 ceph-mon[75825]: log_channel(cluster) log [DBG] : last_changed 2026-03-01T09:42:43.822120+0000
Mar  1 04:42:49 np0005634532 ceph-mon[75825]: log_channel(cluster) log [DBG] : created 2026-03-01T09:40:50.920361+0000
Mar  1 04:42:49 np0005634532 ceph-mon[75825]: log_channel(cluster) log [DBG] : min_mon_release 19 (squid)
Mar  1 04:42:49 np0005634532 ceph-mon[75825]: log_channel(cluster) log [DBG] : election_strategy: 1
Mar  1 04:42:49 np0005634532 ceph-mon[75825]: log_channel(cluster) log [DBG] : 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Mar  1 04:42:49 np0005634532 ceph-mon[75825]: log_channel(cluster) log [DBG] : 1: [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] mon.compute-2
Mar  1 04:42:49 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e2 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Mar  1 04:42:49 np0005634532 ceph-mon[75825]: log_channel(cluster) log [DBG] : fsmap 
Mar  1 04:42:49 np0005634532 ceph-mon[75825]: log_channel(cluster) log [DBG] : osdmap e11: 2 total, 2 up, 2 in
Mar  1 04:42:49 np0005634532 ceph-mon[75825]: log_channel(cluster) log [DBG] : mgrmap e9: compute-0.ebwufc(active, since 96s)
Mar  1 04:42:49 np0005634532 ceph-mon[75825]: log_channel(cluster) log [INF] : overall HEALTH_OK
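The monmap epoch 2 dump above (fsid, election strategy, and the two mon endpoints) is the same information `ceph mon dump` prints on demand; a sketch of the expected shape, with values taken from the entries above:
    ceph mon dump
    # epoch 2
    # fsid 437b1e74-f995-5d64-af1d-257ce01d77ab
    # min_mon_release 19 (squid)
    # election_strategy: 1
    # 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
    # 1: [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] mon.compute-2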
Mar  1 04:42:49 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:42:49 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Mar  1 04:42:49 np0005634532 ceph-mon[75825]: Deploying daemon mon.compute-1 on compute-1
Mar  1 04:42:49 np0005634532 ceph-mon[75825]: mon.compute-0 calling monitor election
Mar  1 04:42:49 np0005634532 ceph-mon[75825]: mon.compute-2 calling monitor election
Mar  1 04:42:49 np0005634532 ceph-mon[75825]: mon.compute-0 is new leader, mons compute-0,compute-2 in quorum (ranks 0,1)
Mar  1 04:42:49 np0005634532 ceph-mon[75825]: overall HEALTH_OK
Mar  1 04:42:49 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:42:49 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Mar  1 04:42:49 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:42:49 np0005634532 ceph-mgr[76134]: [progress INFO root] complete: finished ev b068c79f-5f80-4d7e-9198-036970fee4a8 (Updating mon deployment (+2 -> 3))
Mar  1 04:42:49 np0005634532 ceph-mgr[76134]: [progress INFO root] Completed event b068c79f-5f80-4d7e-9198-036970fee4a8 (Updating mon deployment (+2 -> 3)) in 8 seconds
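The progress-module events bracketing the mon deployment ('Updating mon deployment (+2 -> 3)', started and completed in 8 seconds) can also be inspected interactively; a sketch, assuming the progress module's standard commands:
    ceph progress        # human-readable list of recent/in-flight events
    ceph progress json   # raw form; the same data appears under progress_events
                         # in 'ceph status --format json' later in this log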
Mar  1 04:42:49 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e2 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0)
Mar  1 04:42:49 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:42:49 np0005634532 ceph-mgr[76134]: [progress INFO root] update: starting ev 1e930eeb-a8dc-465d-9135-0418f5d61ffe (Updating mgr deployment (+2 -> 3))
Mar  1 04:42:49 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-2.dikzlj", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Mar  1 04:42:49 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.dikzlj", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Mar  1 04:42:49 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.dikzlj", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Mar  1 04:42:49 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mgr services"} v 0)
Mar  1 04:42:49 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "mgr services"}]: dispatch
Mar  1 04:42:49 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Mar  1 04:42:49 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Mar  1 04:42:49 np0005634532 ceph-mgr[76134]: [cephadm INFO cephadm.serve] Deploying daemon mgr.compute-2.dikzlj on compute-2
Mar  1 04:42:49 np0005634532 ceph-mgr[76134]: log_channel(cephadm) log [INF] : Deploying daemon mgr.compute-2.dikzlj on compute-2
Mar  1 04:42:49 np0005634532 ceph-mgr[76134]: mgr.server handle_open ignoring open from mon.compute-2 192.168.122.102:0/3041926517; not ready for session (expect reconnect)
Mar  1 04:42:49 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Mar  1 04:42:49 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Mar  1 04:42:49 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e2  adding peer [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to list of hints
Mar  1 04:42:49 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).monmap v2 adding/updating compute-1 at [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] to monitor cluster
Mar  1 04:42:49 np0005634532 ceph-mgr[76134]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/3372533452; not ready for session (expect reconnect)
Mar  1 04:42:49 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e2 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Mar  1 04:42:49 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Mar  1 04:42:49 np0005634532 ceph-mgr[76134]: mgr finish mon failed to return metadata for mon.compute-1: (2) No such file or directory
Mar  1 04:42:50 np0005634532 ceph-mon[75825]: mon.compute-0@0(probing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Mar  1 04:42:50 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Mar  1 04:42:50 np0005634532 ceph-mon[75825]: mon.compute-0@0(probing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Mar  1 04:42:50 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Mar  1 04:42:50 np0005634532 ceph-mon[75825]: log_channel(cluster) log [INF] : mon.compute-0 calling monitor election
Mar  1 04:42:50 np0005634532 ceph-mon[75825]: paxos.0).electionLogic(10) init, last seen epoch 10
Mar  1 04:42:50 np0005634532 ceph-mgr[76134]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Mar  1 04:42:50 np0005634532 ceph-mon[75825]: mon.compute-0@0(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Mar  1 04:42:50 np0005634532 ceph-mon[75825]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Mar  1 04:42:50 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Mar  1 04:42:50 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: 2026-03-01T09:42:50.829+0000 7f30d58a3640 -1 mgr.server handle_report got status from non-daemon mon.compute-2
Mar  1 04:42:50 np0005634532 ceph-mgr[76134]: mgr.server handle_report got status from non-daemon mon.compute-2
Mar  1 04:42:50 np0005634532 ceph-mon[75825]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Mar  1 04:42:50 np0005634532 ceph-mon[75825]: mon.compute-0@0(electing) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Mar  1 04:42:50 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v52: 1 pgs: 1 active+clean; 449 KiB data, 853 MiB used, 39 GiB / 40 GiB avail
Mar  1 04:42:50 np0005634532 ceph-mgr[76134]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/3372533452; not ready for session (expect reconnect)
Mar  1 04:42:50 np0005634532 ceph-mon[75825]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Mar  1 04:42:50 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Mar  1 04:42:50 np0005634532 ceph-mgr[76134]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Mar  1 04:42:51 np0005634532 ceph-mon[75825]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Mar  1 04:42:51 np0005634532 python3[86560]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 437b1e74-f995-5d64-af1d-257ce01d77ab -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
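Unrolled, the one-line ansible task above runs the ceph CLI in a throwaway container and extracts the count of up OSDs (flags exactly as logged):
    podman run --rm --net=host --ipc=host \
      --volume /etc/ceph:/etc/ceph:z \
      --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z \
      --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z \
      --entrypoint ceph quay.io/ceph/ceph:v19 \
      --fsid 437b1e74-f995-5d64-af1d-257ce01d77ab \
      -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring \
      status --format json | jq .osdmap.num_up_osds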
Mar  1 04:42:51 np0005634532 podman[86562]: 2026-03-01 09:42:51.222354327 +0000 UTC m=+0.065189632 container create 998faed06741ad941290d1706be466287876e276494032ac1af0201f397b7733 (image=quay.io/ceph/ceph:v19, name=compassionate_beaver, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Mar  1 04:42:51 np0005634532 podman[86562]: 2026-03-01 09:42:51.194980966 +0000 UTC m=+0.037816331 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Mar  1 04:42:51 np0005634532 systemd[1]: Started libpod-conmon-998faed06741ad941290d1706be466287876e276494032ac1af0201f397b7733.scope.
Mar  1 04:42:51 np0005634532 systemd[1]: Started libcrun container.
Mar  1 04:42:51 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a2955f19e90059a786ec8726d79a2d92ca974aff57d62c6c684a22ff3678872/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Mar  1 04:42:51 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a2955f19e90059a786ec8726d79a2d92ca974aff57d62c6c684a22ff3678872/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Mar  1 04:42:51 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a2955f19e90059a786ec8726d79a2d92ca974aff57d62c6c684a22ff3678872/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Mar  1 04:42:51 np0005634532 podman[86562]: 2026-03-01 09:42:51.369633438 +0000 UTC m=+0.212468693 container init 998faed06741ad941290d1706be466287876e276494032ac1af0201f397b7733 (image=quay.io/ceph/ceph:v19, name=compassionate_beaver, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Mar  1 04:42:51 np0005634532 podman[86562]: 2026-03-01 09:42:51.375385421 +0000 UTC m=+0.218220696 container start 998faed06741ad941290d1706be466287876e276494032ac1af0201f397b7733 (image=quay.io/ceph/ceph:v19, name=compassionate_beaver, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Mar  1 04:42:51 np0005634532 podman[86562]: 2026-03-01 09:42:51.420102313 +0000 UTC m=+0.262937618 container attach 998faed06741ad941290d1706be466287876e276494032ac1af0201f397b7733 (image=quay.io/ceph/ceph:v19, name=compassionate_beaver, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Mar  1 04:42:51 np0005634532 ceph-mon[75825]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Mar  1 04:42:51 np0005634532 ceph-mon[75825]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Mar  1 04:42:51 np0005634532 ceph-mon[75825]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Mar  1 04:42:51 np0005634532 ceph-mgr[76134]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/3372533452; not ready for session (expect reconnect)
Mar  1 04:42:51 np0005634532 ceph-mon[75825]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Mar  1 04:42:51 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Mar  1 04:42:51 np0005634532 ceph-mgr[76134]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Mar  1 04:42:52 np0005634532 ceph-mon[75825]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Mar  1 04:42:52 np0005634532 ceph-mon[75825]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Mar  1 04:42:52 np0005634532 ceph-mgr[76134]: [progress INFO root] Writing back 3 completed events
Mar  1 04:42:52 np0005634532 ceph-mon[75825]: mon.compute-0@0(electing) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Mar  1 04:42:52 np0005634532 ceph-mon[75825]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Mar  1 04:42:52 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v53: 1 pgs: 1 active+clean; 449 KiB data, 853 MiB used, 39 GiB / 40 GiB avail
Mar  1 04:42:52 np0005634532 ceph-mgr[76134]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/3372533452; not ready for session (expect reconnect)
Mar  1 04:42:52 np0005634532 ceph-mon[75825]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Mar  1 04:42:52 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Mar  1 04:42:52 np0005634532 ceph-mgr[76134]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Mar  1 04:42:53 np0005634532 ceph-mon[75825]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Mar  1 04:42:53 np0005634532 ceph-mgr[76134]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/3372533452; not ready for session (expect reconnect)
Mar  1 04:42:53 np0005634532 ceph-mon[75825]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Mar  1 04:42:53 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Mar  1 04:42:53 np0005634532 ceph-mgr[76134]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Mar  1 04:42:54 np0005634532 ceph-mon[75825]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Mar  1 04:42:54 np0005634532 ceph-mon[75825]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Mar  1 04:42:54 np0005634532 ceph-mon[75825]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Mar  1 04:42:54 np0005634532 ceph-mon[75825]: mon.compute-0@0(electing) e3 handle_auth_request failed to assign global_id
Mar  1 04:42:54 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v54: 1 pgs: 1 active+clean; 449 KiB data, 853 MiB used, 39 GiB / 40 GiB avail
Mar  1 04:42:54 np0005634532 ceph-mgr[76134]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/3372533452; not ready for session (expect reconnect)
Mar  1 04:42:54 np0005634532 ceph-mon[75825]: mon.compute-0@0(electing) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Mar  1 04:42:54 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Mar  1 04:42:54 np0005634532 ceph-mgr[76134]: mgr finish mon failed to return metadata for mon.compute-1: (22) Invalid argument
Mar  1 04:42:55 np0005634532 ceph-mon[75825]: paxos.0).electionLogic(11) init, last seen epoch 11, mid-election, bumping
Mar  1 04:42:55 np0005634532 ceph-mon[75825]: mon.compute-0@0(electing) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Mar  1 04:42:55 np0005634532 ceph-mon[75825]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Mar  1 04:42:55 np0005634532 ceph-mon[75825]: log_channel(cluster) log [DBG] : monmap epoch 3
Mar  1 04:42:55 np0005634532 ceph-mon[75825]: log_channel(cluster) log [DBG] : fsid 437b1e74-f995-5d64-af1d-257ce01d77ab
Mar  1 04:42:55 np0005634532 ceph-mon[75825]: log_channel(cluster) log [DBG] : last_changed 2026-03-01T09:42:49.953069+0000
Mar  1 04:42:55 np0005634532 ceph-mon[75825]: log_channel(cluster) log [DBG] : created 2026-03-01T09:40:50.920361+0000
Mar  1 04:42:55 np0005634532 ceph-mon[75825]: log_channel(cluster) log [DBG] : min_mon_release 19 (squid)
Mar  1 04:42:55 np0005634532 ceph-mon[75825]: log_channel(cluster) log [DBG] : election_strategy: 1
Mar  1 04:42:55 np0005634532 ceph-mon[75825]: log_channel(cluster) log [DBG] : 0: [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon.compute-0
Mar  1 04:42:55 np0005634532 ceph-mon[75825]: log_channel(cluster) log [DBG] : 1: [v2:192.168.122.102:3300/0,v1:192.168.122.102:6789/0] mon.compute-2
Mar  1 04:42:55 np0005634532 ceph-mon[75825]: log_channel(cluster) log [DBG] : 2: [v2:192.168.122.101:3300/0,v1:192.168.122.101:6789/0] mon.compute-1
Mar  1 04:42:55 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Mar  1 04:42:55 np0005634532 ceph-mon[75825]: log_channel(cluster) log [DBG] : fsmap 
Mar  1 04:42:55 np0005634532 ceph-mon[75825]: log_channel(cluster) log [DBG] : osdmap e11: 2 total, 2 up, 2 in
Mar  1 04:42:55 np0005634532 ceph-mon[75825]: log_channel(cluster) log [DBG] : mgrmap e9: compute-0.ebwufc(active, since 102s)
Mar  1 04:42:55 np0005634532 ceph-mon[75825]: log_channel(cluster) log [INF] : overall HEALTH_OK
Mar  1 04:42:55 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:42:55 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Mar  1 04:42:55 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:42:55 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:42:55 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Mar  1 04:42:55 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:42:55 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-1.uyojxx", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Mar  1 04:42:55 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-1.uyojxx", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Mar  1 04:42:55 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-1.uyojxx", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Mar  1 04:42:55 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services"} v 0)
Mar  1 04:42:55 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "mgr services"}]: dispatch
Mar  1 04:42:55 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e11 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Mar  1 04:42:55 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Mar  1 04:42:55 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Mar  1 04:42:55 np0005634532 ceph-mgr[76134]: [cephadm INFO cephadm.serve] Deploying daemon mgr.compute-1.uyojxx on compute-1
Mar  1 04:42:55 np0005634532 ceph-mgr[76134]: log_channel(cephadm) log [INF] : Deploying daemon mgr.compute-1.uyojxx on compute-1
Mar  1 04:42:55 np0005634532 ceph-mon[75825]: Deploying daemon mgr.compute-2.dikzlj on compute-2
Mar  1 04:42:55 np0005634532 ceph-mon[75825]: mon.compute-0 calling monitor election
Mar  1 04:42:55 np0005634532 ceph-mon[75825]: mon.compute-2 calling monitor election
Mar  1 04:42:55 np0005634532 ceph-mon[75825]: mon.compute-1 calling monitor election
Mar  1 04:42:55 np0005634532 ceph-mon[75825]: mon.compute-0 is new leader, mons compute-0,compute-2,compute-1 in quorum (ranks 0,1,2)
Mar  1 04:42:55 np0005634532 ceph-mon[75825]: overall HEALTH_OK
Mar  1 04:42:55 np0005634532 ceph-mon[75825]: from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:42:55 np0005634532 ceph-mon[75825]: from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:42:55 np0005634532 ceph-mon[75825]: from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:42:55 np0005634532 ceph-mon[75825]: from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:42:55 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Mar  1 04:42:55 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3581466692' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Mar  1 04:42:55 np0005634532 compassionate_beaver[86578]: 
Mar  1 04:42:55 np0005634532 compassionate_beaver[86578]: {"fsid":"437b1e74-f995-5d64-af1d-257ce01d77ab","health":{"status":"HEALTH_OK","checks":{},"mutes":[]},"election_epoch":14,"quorum":[0,1,2],"quorum_names":["compute-0","compute-2","compute-1"],"quorum_age":0,"monmap":{"epoch":3,"min_mon_release_name":"squid","num_mons":3},"osdmap":{"epoch":11,"num_osds":2,"num_up_osds":2,"osd_up_since":1772358145,"num_in_osds":2,"osd_in_since":1772358128,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":1}],"num_pgs":1,"num_pools":1,"num_objects":2,"data_bytes":459280,"bytes_used":894631936,"bytes_avail":42046652416,"bytes_total":42941284352},"fsmap":{"epoch":1,"btime":"2026-03-01T09:40:52:961395+0000","by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":2,"modified":"2026-03-01T09:42:14.169238+0000","services":{}},"progress_events":{"1e930eeb-a8dc-465d-9135-0418f5d61ffe":{"message":"Updating mgr deployment (+2 -> 3) (0s)\n      [............................] ","progress":0,"add_to_ceph_s":true}}}
Mar  1 04:42:55 np0005634532 systemd[1]: libpod-998faed06741ad941290d1706be466287876e276494032ac1af0201f397b7733.scope: Deactivated successfully.
Mar  1 04:42:55 np0005634532 podman[86562]: 2026-03-01 09:42:55.469178865 +0000 UTC m=+4.312014130 container died 998faed06741ad941290d1706be466287876e276494032ac1af0201f397b7733 (image=quay.io/ceph/ceph:v19, name=compassionate_beaver, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Mar  1 04:42:55 np0005634532 systemd[1]: var-lib-containers-storage-overlay-5a2955f19e90059a786ec8726d79a2d92ca974aff57d62c6c684a22ff3678872-merged.mount: Deactivated successfully.
Mar  1 04:42:55 np0005634532 podman[86562]: 2026-03-01 09:42:55.513729163 +0000 UTC m=+4.356564468 container remove 998faed06741ad941290d1706be466287876e276494032ac1af0201f397b7733 (image=quay.io/ceph/ceph:v19, name=compassionate_beaver, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Mar  1 04:42:55 np0005634532 systemd[1]: libpod-conmon-998faed06741ad941290d1706be466287876e276494032ac1af0201f397b7733.scope: Deactivated successfully.
Mar  1 04:42:55 np0005634532 ceph-mgr[76134]: mgr.server handle_open ignoring open from mon.compute-1 192.168.122.101:0/3372533452; not ready for session (expect reconnect)
Mar  1 04:42:55 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Mar  1 04:42:55 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Mar  1 04:42:56 np0005634532 python3[86642]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 437b1e74-f995-5d64-af1d-257ce01d77ab -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create vms  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
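The next task creates the 'vms' pool, again via a throwaway container. Unrolled (flags exactly as logged); note in the mon's handle_command entry shortly below that the positional 'replicated_rule' argument is recorded in the erasure_code_profile field of the parsed command:
    podman run --rm --net=host --ipc=host \
      --volume /etc/ceph:/etc/ceph:z \
      --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z \
      --entrypoint ceph quay.io/ceph/ceph:v19 \
      --fsid 437b1e74-f995-5d64-af1d-257ce01d77ab \
      -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring \
      osd pool create vms replicated_rule --autoscale-mode on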
Mar  1 04:42:56 np0005634532 podman[86643]: 2026-03-01 09:42:56.079463508 +0000 UTC m=+0.048798114 container create c035f5f766b1800d41bb9a04f2f4496e75b6170352d7e21ee5c0c90fb1ce808e (image=quay.io/ceph/ceph:v19, name=suspicious_napier, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Mar  1 04:42:56 np0005634532 systemd[1]: Started libpod-conmon-c035f5f766b1800d41bb9a04f2f4496e75b6170352d7e21ee5c0c90fb1ce808e.scope.
Mar  1 04:42:56 np0005634532 systemd[1]: Started libcrun container.
Mar  1 04:42:56 np0005634532 podman[86643]: 2026-03-01 09:42:56.055432281 +0000 UTC m=+0.024766917 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Mar  1 04:42:56 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed806ef9a2f3f13a1e99e312c8b2f6243ffcb6d434ebf24b177032aee5903340/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Mar  1 04:42:56 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed806ef9a2f3f13a1e99e312c8b2f6243ffcb6d434ebf24b177032aee5903340/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Mar  1 04:42:56 np0005634532 ceph-mon[75825]: from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-1.uyojxx", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Mar  1 04:42:56 np0005634532 ceph-mon[75825]: from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-1.uyojxx", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Mar  1 04:42:56 np0005634532 ceph-mon[75825]: Deploying daemon mgr.compute-1.uyojxx on compute-1
Mar  1 04:42:56 np0005634532 podman[86643]: 2026-03-01 09:42:56.174681646 +0000 UTC m=+0.144016312 container init c035f5f766b1800d41bb9a04f2f4496e75b6170352d7e21ee5c0c90fb1ce808e (image=quay.io/ceph/ceph:v19, name=suspicious_napier, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Mar  1 04:42:56 np0005634532 podman[86643]: 2026-03-01 09:42:56.184355626 +0000 UTC m=+0.153690222 container start c035f5f766b1800d41bb9a04f2f4496e75b6170352d7e21ee5c0c90fb1ce808e (image=quay.io/ceph/ceph:v19, name=suspicious_napier, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0)
Mar  1 04:42:56 np0005634532 podman[86643]: 2026-03-01 09:42:56.188043168 +0000 UTC m=+0.157377764 container attach c035f5f766b1800d41bb9a04f2f4496e75b6170352d7e21ee5c0c90fb1ce808e (image=quay.io/ceph/ceph:v19, name=suspicious_napier, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Mar  1 04:42:56 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Mar  1 04:42:56 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1897505897' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Mar  1 04:42:56 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Mar  1 04:42:56 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:42:56 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Mar  1 04:42:56 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:42:56 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Mar  1 04:42:56 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:42:56 np0005634532 ceph-mgr[76134]: [progress INFO root] complete: finished ev 1e930eeb-a8dc-465d-9135-0418f5d61ffe (Updating mgr deployment (+2 -> 3))
Mar  1 04:42:56 np0005634532 ceph-mgr[76134]: [progress INFO root] Completed event 1e930eeb-a8dc-465d-9135-0418f5d61ffe (Updating mgr deployment (+2 -> 3)) in 8 seconds
Mar  1 04:42:56 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0)
Mar  1 04:42:56 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:42:56 np0005634532 ceph-mgr[76134]: [progress INFO root] update: starting ev 1ed3c721-60e2-4685-bf0d-399ee1770a09 (Updating crash deployment (+1 -> 3))
Mar  1 04:42:56 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0)
Mar  1 04:42:56 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Mar  1 04:42:56 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Mar  1 04:42:56 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Mar  1 04:42:56 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Mar  1 04:42:56 np0005634532 ceph-mgr[76134]: [cephadm INFO cephadm.serve] Deploying daemon crash.compute-2 on compute-2
Mar  1 04:42:56 np0005634532 ceph-mgr[76134]: log_channel(cephadm) log [INF] : Deploying daemon crash.compute-2 on compute-2
Mar  1 04:42:56 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v55: 1 pgs: 1 active+clean; 449 KiB data, 853 MiB used, 39 GiB / 40 GiB avail
Mar  1 04:42:56 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: 2026-03-01T09:42:56.955+0000 7f30d58a3640 -1 mgr.server handle_report got status from non-daemon mon.compute-1
Mar  1 04:42:56 np0005634532 ceph-mgr[76134]: mgr.server handle_report got status from non-daemon mon.compute-1
Mar  1 04:42:57 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e11 do_prune osdmap full prune enabled
Mar  1 04:42:57 np0005634532 ceph-mon[75825]: from='client.? 192.168.122.100:0/1897505897' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Mar  1 04:42:57 np0005634532 ceph-mon[75825]: from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:42:57 np0005634532 ceph-mon[75825]: from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:42:57 np0005634532 ceph-mon[75825]: from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:42:57 np0005634532 ceph-mon[75825]: from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:42:57 np0005634532 ceph-mon[75825]: from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Mar  1 04:42:57 np0005634532 ceph-mon[75825]: from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-2", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Mar  1 04:42:57 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1897505897' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Mar  1 04:42:57 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e12 e12: 2 total, 2 up, 2 in
Mar  1 04:42:57 np0005634532 suspicious_napier[86658]: pool 'vms' created
Mar  1 04:42:57 np0005634532 ceph-mon[75825]: log_channel(cluster) log [DBG] : osdmap e12: 2 total, 2 up, 2 in
Mar  1 04:42:57 np0005634532 systemd[1]: libpod-c035f5f766b1800d41bb9a04f2f4496e75b6170352d7e21ee5c0c90fb1ce808e.scope: Deactivated successfully.
Mar  1 04:42:57 np0005634532 podman[86643]: 2026-03-01 09:42:57.210282893 +0000 UTC m=+1.179617539 container died c035f5f766b1800d41bb9a04f2f4496e75b6170352d7e21ee5c0c90fb1ce808e (image=quay.io/ceph/ceph:v19, name=suspicious_napier, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Mar  1 04:42:57 np0005634532 systemd[1]: var-lib-containers-storage-overlay-ed806ef9a2f3f13a1e99e312c8b2f6243ffcb6d434ebf24b177032aee5903340-merged.mount: Deactivated successfully.
Mar  1 04:42:57 np0005634532 podman[86643]: 2026-03-01 09:42:57.258572283 +0000 UTC m=+1.227906849 container remove c035f5f766b1800d41bb9a04f2f4496e75b6170352d7e21ee5c0c90fb1ce808e (image=quay.io/ceph/ceph:v19, name=suspicious_napier, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Mar  1 04:42:57 np0005634532 systemd[1]: libpod-conmon-c035f5f766b1800d41bb9a04f2f4496e75b6170352d7e21ee5c0c90fb1ce808e.scope: Deactivated successfully.
Mar  1 04:42:57 np0005634532 python3[86721]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 437b1e74-f995-5d64-af1d-257ce01d77ab -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create volumes  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
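The ansible task above wraps a one-shot ceph CLI call in podman. A sketch of the equivalent call without the container wrapper, assuming the same /etc/ceph config and admin keyring exist on the host; the fsid and flags are copied from the logged command:

    # Sketch of the pool-create step without the podman wrapper.
    import subprocess

    def create_pool(name: str) -> None:
        subprocess.run(
            ["ceph", "--fsid", "437b1e74-f995-5d64-af1d-257ce01d77ab",
             "-c", "/etc/ceph/ceph.conf",
             "-k", "/etc/ceph/ceph.client.admin.keyring",
             "osd", "pool", "create", name, "replicated_rule",
             "--autoscale-mode", "on"],
            check=True,
        )

    create_pool("volumes")  # the mon logs "pool 'volumes' created" on success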
Mar  1 04:42:57 np0005634532 podman[86722]: 2026-03-01 09:42:57.681869998 +0000 UTC m=+0.065216172 container create 8bd3e40ae7f241b28ef19bb6186975924450e0664818109d0fef68fef3a5db06 (image=quay.io/ceph/ceph:v19, name=jolly_curie, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1)
Mar  1 04:42:57 np0005634532 systemd[1]: Started libpod-conmon-8bd3e40ae7f241b28ef19bb6186975924450e0664818109d0fef68fef3a5db06.scope.
Mar  1 04:42:57 np0005634532 podman[86722]: 2026-03-01 09:42:57.651326799 +0000 UTC m=+0.034673063 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Mar  1 04:42:57 np0005634532 systemd[1]: Started libcrun container.
Mar  1 04:42:57 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4dbffe138c2755166576ae9689ad8d58122a3b45f40796b19bdda931a4dcd63f/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Mar  1 04:42:57 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4dbffe138c2755166576ae9689ad8d58122a3b45f40796b19bdda931a4dcd63f/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Mar  1 04:42:57 np0005634532 podman[86722]: 2026-03-01 09:42:57.776912371 +0000 UTC m=+0.160258545 container init 8bd3e40ae7f241b28ef19bb6186975924450e0664818109d0fef68fef3a5db06 (image=quay.io/ceph/ceph:v19, name=jolly_curie, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Mar  1 04:42:57 np0005634532 podman[86722]: 2026-03-01 09:42:57.781103035 +0000 UTC m=+0.164449219 container start 8bd3e40ae7f241b28ef19bb6186975924450e0664818109d0fef68fef3a5db06 (image=quay.io/ceph/ceph:v19, name=jolly_curie, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Mar  1 04:42:57 np0005634532 podman[86722]: 2026-03-01 09:42:57.785056924 +0000 UTC m=+0.168403108 container attach 8bd3e40ae7f241b28ef19bb6186975924450e0664818109d0fef68fef3a5db06 (image=quay.io/ceph/ceph:v19, name=jolly_curie, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Mar  1 04:42:58 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Mar  1 04:42:58 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3454050570' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Mar  1 04:42:58 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e12 do_prune osdmap full prune enabled
Mar  1 04:42:58 np0005634532 ceph-mon[75825]: Deploying daemon crash.compute-2 on compute-2
Mar  1 04:42:58 np0005634532 ceph-mon[75825]: from='client.? 192.168.122.100:0/1897505897' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Mar  1 04:42:58 np0005634532 ceph-mon[75825]: from='client.? 192.168.122.100:0/3454050570' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Mar  1 04:42:58 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3454050570' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Mar  1 04:42:58 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e13 e13: 2 total, 2 up, 2 in
Mar  1 04:42:58 np0005634532 jolly_curie[86737]: pool 'volumes' created
Mar  1 04:42:58 np0005634532 ceph-mon[75825]: log_channel(cluster) log [DBG] : osdmap e13: 2 total, 2 up, 2 in
Mar  1 04:42:58 np0005634532 systemd[1]: libpod-8bd3e40ae7f241b28ef19bb6186975924450e0664818109d0fef68fef3a5db06.scope: Deactivated successfully.
Mar  1 04:42:58 np0005634532 podman[86722]: 2026-03-01 09:42:58.234814036 +0000 UTC m=+0.618160210 container died 8bd3e40ae7f241b28ef19bb6186975924450e0664818109d0fef68fef3a5db06 (image=quay.io/ceph/ceph:v19, name=jolly_curie, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Mar  1 04:42:58 np0005634532 systemd[1]: var-lib-containers-storage-overlay-4dbffe138c2755166576ae9689ad8d58122a3b45f40796b19bdda931a4dcd63f-merged.mount: Deactivated successfully.
Mar  1 04:42:58 np0005634532 podman[86722]: 2026-03-01 09:42:58.279533558 +0000 UTC m=+0.662879732 container remove 8bd3e40ae7f241b28ef19bb6186975924450e0664818109d0fef68fef3a5db06 (image=quay.io/ceph/ceph:v19, name=jolly_curie, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Mar  1 04:42:58 np0005634532 systemd[1]: libpod-conmon-8bd3e40ae7f241b28ef19bb6186975924450e0664818109d0fef68fef3a5db06.scope: Deactivated successfully.
Mar  1 04:42:58 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 13 pg[3.0( empty local-lis/les=0/0 n=0 ec=13/13 lis/c=0/0 les/c/f=0/0/0 sis=13) [0] r=0 lpr=13 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Mar  1 04:42:58 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Mar  1 04:42:58 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:42:58 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Mar  1 04:42:58 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:42:58 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Mar  1 04:42:58 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:42:58 np0005634532 ceph-mgr[76134]: [progress INFO root] complete: finished ev 1ed3c721-60e2-4685-bf0d-399ee1770a09 (Updating crash deployment (+1 -> 3))
Mar  1 04:42:58 np0005634532 ceph-mgr[76134]: [progress INFO root] Completed event 1ed3c721-60e2-4685-bf0d-399ee1770a09 (Updating crash deployment (+1 -> 3)) in 2 seconds
Mar  1 04:42:58 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0)
Mar  1 04:42:58 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:42:58 np0005634532 python3[86803]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 437b1e74-f995-5d64-af1d-257ce01d77ab -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create backups  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Mar  1 04:42:58 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Mar  1 04:42:58 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Mar  1 04:42:58 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Mar  1 04:42:58 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Mar  1 04:42:58 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Mar  1 04:42:58 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Mar  1 04:42:58 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Mar  1 04:42:58 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Mar  1 04:42:58 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Mar  1 04:42:58 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
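Before placing an OSD daemon, the mgr reads the bootstrap-osd key and renders a minimal config, per the audit entries above. A sketch of the same two reads, assuming a local ceph CLI with admin credentials:

    # Sketch of the two reads audited above: the bootstrap-osd keyring and a
    # minimal ceph.conf, as gathered before deploying an OSD to a host.
    import subprocess

    def ceph(*args: str) -> str:
        return subprocess.run(["ceph", *args], check=True,
                              capture_output=True, text=True).stdout

    bootstrap_keyring = ceph("auth", "get", "client.bootstrap-osd")
    minimal_conf = ceph("config", "generate-minimal-conf")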
Mar  1 04:42:58 np0005634532 podman[86804]: 2026-03-01 09:42:58.700293059 +0000 UTC m=+0.055040589 container create ba682cd57d47a6f9307653f61c61fb1ccfd6b062b5bff31baff940028b45ff2c (image=quay.io/ceph/ceph:v19, name=hardcore_bhabha, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Mar  1 04:42:58 np0005634532 systemd[1]: Started libpod-conmon-ba682cd57d47a6f9307653f61c61fb1ccfd6b062b5bff31baff940028b45ff2c.scope.
Mar  1 04:42:58 np0005634532 systemd[1]: Started libcrun container.
Mar  1 04:42:58 np0005634532 podman[86804]: 2026-03-01 09:42:58.676878017 +0000 UTC m=+0.031625617 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Mar  1 04:42:58 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0782f860c7dce05a97ace563c23efb9d646181fcc76e071a26142f2a4f6584fa/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Mar  1 04:42:58 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0782f860c7dce05a97ace563c23efb9d646181fcc76e071a26142f2a4f6584fa/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Mar  1 04:42:58 np0005634532 podman[86804]: 2026-03-01 09:42:58.78879804 +0000 UTC m=+0.143545660 container init ba682cd57d47a6f9307653f61c61fb1ccfd6b062b5bff31baff940028b45ff2c (image=quay.io/ceph/ceph:v19, name=hardcore_bhabha, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Mar  1 04:42:58 np0005634532 podman[86804]: 2026-03-01 09:42:58.795772113 +0000 UTC m=+0.150519673 container start ba682cd57d47a6f9307653f61c61fb1ccfd6b062b5bff31baff940028b45ff2c (image=quay.io/ceph/ceph:v19, name=hardcore_bhabha, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Mar  1 04:42:58 np0005634532 podman[86804]: 2026-03-01 09:42:58.799761142 +0000 UTC m=+0.154508682 container attach ba682cd57d47a6f9307653f61c61fb1ccfd6b062b5bff31baff940028b45ff2c (image=quay.io/ceph/ceph:v19, name=hardcore_bhabha, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Mar  1 04:42:58 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v58: 3 pgs: 2 unknown, 1 active+clean; 449 KiB data, 853 MiB used, 39 GiB / 40 GiB avail
Mar  1 04:42:59 np0005634532 podman[86932]: 2026-03-01 09:42:59.148393131 +0000 UTC m=+0.042565670 container create f57b6ad88b38a42c5994471ab55cff0db77719c22def70221276a340357bc9b8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_bose, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Mar  1 04:42:59 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Mar  1 04:42:59 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3412920439' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Mar  1 04:42:59 np0005634532 systemd[1]: Started libpod-conmon-f57b6ad88b38a42c5994471ab55cff0db77719c22def70221276a340357bc9b8.scope.
Mar  1 04:42:59 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e13 do_prune osdmap full prune enabled
Mar  1 04:42:59 np0005634532 systemd[1]: Started libcrun container.
Mar  1 04:42:59 np0005634532 ceph-mon[75825]: from='client.? 192.168.122.100:0/3454050570' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Mar  1 04:42:59 np0005634532 ceph-mon[75825]: from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:42:59 np0005634532 ceph-mon[75825]: from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:42:59 np0005634532 ceph-mon[75825]: from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:42:59 np0005634532 ceph-mon[75825]: from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:42:59 np0005634532 ceph-mon[75825]: from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Mar  1 04:42:59 np0005634532 ceph-mon[75825]: from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Mar  1 04:42:59 np0005634532 ceph-mon[75825]: from='client.? 192.168.122.100:0/3412920439' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Mar  1 04:42:59 np0005634532 podman[86932]: 2026-03-01 09:42:59.13026481 +0000 UTC m=+0.024437449 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Mar  1 04:42:59 np0005634532 podman[86932]: 2026-03-01 09:42:59.227409245 +0000 UTC m=+0.121581784 container init f57b6ad88b38a42c5994471ab55cff0db77719c22def70221276a340357bc9b8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_bose, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Mar  1 04:42:59 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3412920439' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Mar  1 04:42:59 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e14 e14: 2 total, 2 up, 2 in
Mar  1 04:42:59 np0005634532 hardcore_bhabha[86851]: pool 'backups' created
Mar  1 04:42:59 np0005634532 podman[86932]: 2026-03-01 09:42:59.240207273 +0000 UTC m=+0.134379852 container start f57b6ad88b38a42c5994471ab55cff0db77719c22def70221276a340357bc9b8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_bose, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Mar  1 04:42:59 np0005634532 ceph-mon[75825]: log_channel(cluster) log [DBG] : osdmap e14: 2 total, 2 up, 2 in
Mar  1 04:42:59 np0005634532 gifted_bose[86951]: 167 167
Mar  1 04:42:59 np0005634532 systemd[1]: libpod-f57b6ad88b38a42c5994471ab55cff0db77719c22def70221276a340357bc9b8.scope: Deactivated successfully.
Mar  1 04:42:59 np0005634532 podman[86932]: 2026-03-01 09:42:59.246064349 +0000 UTC m=+0.140236928 container attach f57b6ad88b38a42c5994471ab55cff0db77719c22def70221276a340357bc9b8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_bose, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Mar  1 04:42:59 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 14 pg[4.0( empty local-lis/les=0/0 n=0 ec=14/14 lis/c=0/0 les/c/f=0/0/0 sis=14) [0] r=0 lpr=14 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Mar  1 04:42:59 np0005634532 podman[86932]: 2026-03-01 09:42:59.246859749 +0000 UTC m=+0.141032308 container died f57b6ad88b38a42c5994471ab55cff0db77719c22def70221276a340357bc9b8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_bose, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Mar  1 04:42:59 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 14 pg[3.0( empty local-lis/les=13/14 n=0 ec=13/13 lis/c=0/0 les/c/f=0/0/0 sis=13) [0] r=0 lpr=13 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Mar  1 04:42:59 np0005634532 systemd[1]: libpod-ba682cd57d47a6f9307653f61c61fb1ccfd6b062b5bff31baff940028b45ff2c.scope: Deactivated successfully.
Mar  1 04:42:59 np0005634532 podman[86804]: 2026-03-01 09:42:59.263277517 +0000 UTC m=+0.618025087 container died ba682cd57d47a6f9307653f61c61fb1ccfd6b062b5bff31baff940028b45ff2c (image=quay.io/ceph/ceph:v19, name=hardcore_bhabha, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Mar  1 04:42:59 np0005634532 systemd[1]: var-lib-containers-storage-overlay-0782f860c7dce05a97ace563c23efb9d646181fcc76e071a26142f2a4f6584fa-merged.mount: Deactivated successfully.
Mar  1 04:42:59 np0005634532 systemd[1]: var-lib-containers-storage-overlay-aac02bd46f174c82927f143673926c8e9bc5b8ac7d2046ec970f4e08b049b88f-merged.mount: Deactivated successfully.
Mar  1 04:42:59 np0005634532 podman[86804]: 2026-03-01 09:42:59.313799973 +0000 UTC m=+0.668547513 container remove ba682cd57d47a6f9307653f61c61fb1ccfd6b062b5bff31baff940028b45ff2c (image=quay.io/ceph/ceph:v19, name=hardcore_bhabha, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Mar  1 04:42:59 np0005634532 podman[86932]: 2026-03-01 09:42:59.320069589 +0000 UTC m=+0.214242128 container remove f57b6ad88b38a42c5994471ab55cff0db77719c22def70221276a340357bc9b8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_bose, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Mar  1 04:42:59 np0005634532 systemd[1]: libpod-conmon-f57b6ad88b38a42c5994471ab55cff0db77719c22def70221276a340357bc9b8.scope: Deactivated successfully.
Mar  1 04:42:59 np0005634532 systemd[1]: libpod-conmon-ba682cd57d47a6f9307653f61c61fb1ccfd6b062b5bff31baff940028b45ff2c.scope: Deactivated successfully.
Mar  1 04:42:59 np0005634532 podman[86986]: 2026-03-01 09:42:59.467543526 +0000 UTC m=+0.050399335 container create f3b2b6d83cbb1a42b50b2f73e9bc8b55162bca3bf205c4dabf31432c7394b777 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_hoover, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Mar  1 04:42:59 np0005634532 systemd[1]: Started libpod-conmon-f3b2b6d83cbb1a42b50b2f73e9bc8b55162bca3bf205c4dabf31432c7394b777.scope.
Mar  1 04:42:59 np0005634532 podman[86986]: 2026-03-01 09:42:59.447521118 +0000 UTC m=+0.030376887 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Mar  1 04:42:59 np0005634532 systemd[1]: Started libcrun container.
Mar  1 04:42:59 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ef0b65c43b5c724457532354ebebe643eb1a95f3b4bd8b45c7415be48a04f0e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Mar  1 04:42:59 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ef0b65c43b5c724457532354ebebe643eb1a95f3b4bd8b45c7415be48a04f0e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Mar  1 04:42:59 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ef0b65c43b5c724457532354ebebe643eb1a95f3b4bd8b45c7415be48a04f0e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Mar  1 04:42:59 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ef0b65c43b5c724457532354ebebe643eb1a95f3b4bd8b45c7415be48a04f0e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Mar  1 04:42:59 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ef0b65c43b5c724457532354ebebe643eb1a95f3b4bd8b45c7415be48a04f0e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Mar  1 04:42:59 np0005634532 podman[86986]: 2026-03-01 09:42:59.571742346 +0000 UTC m=+0.154598165 container init f3b2b6d83cbb1a42b50b2f73e9bc8b55162bca3bf205c4dabf31432c7394b777 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_hoover, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Mar  1 04:42:59 np0005634532 podman[86986]: 2026-03-01 09:42:59.585939209 +0000 UTC m=+0.168794998 container start f3b2b6d83cbb1a42b50b2f73e9bc8b55162bca3bf205c4dabf31432c7394b777 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_hoover, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Mar  1 04:42:59 np0005634532 podman[86986]: 2026-03-01 09:42:59.59118198 +0000 UTC m=+0.174037819 container attach f3b2b6d83cbb1a42b50b2f73e9bc8b55162bca3bf205c4dabf31432c7394b777 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_hoover, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Mar  1 04:42:59 np0005634532 ceph-mon[75825]: log_channel(cluster) log [WRN] : Health check failed: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
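POOL_APP_NOT_ENABLED fires here because the freshly created pools carry no application tag yet. A sketch of the standard fix, assuming the pools created in this log are destined for RBD as is usual for vms/volumes/backups/images (the deployment presumably tags them in a later step not shown here):

    # Sketch: tag the new pools so POOL_APP_NOT_ENABLED clears; the pool
    # list mirrors the pools created in this log.
    import subprocess

    for pool in ("vms", "volumes", "backups", "images"):
        subprocess.run(
            ["ceph", "osd", "pool", "application", "enable", pool, "rbd"],
            check=True,
        )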
Mar  1 04:42:59 np0005634532 python3[87025]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 437b1e74-f995-5d64-af1d-257ce01d77ab -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create images  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Mar  1 04:42:59 np0005634532 podman[87033]: 2026-03-01 09:42:59.71427672 +0000 UTC m=+0.054793303 container create e201ffaa7412331e99436825098a078227a3be8ac3aa6f38080149074ead461e (image=quay.io/ceph/ceph:v19, name=inspiring_archimedes, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Mar  1 04:42:59 np0005634532 systemd[1]: Started libpod-conmon-e201ffaa7412331e99436825098a078227a3be8ac3aa6f38080149074ead461e.scope.
Mar  1 04:42:59 np0005634532 podman[87033]: 2026-03-01 09:42:59.685247298 +0000 UTC m=+0.025763931 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Mar  1 04:42:59 np0005634532 systemd[1]: Started libcrun container.
Mar  1 04:42:59 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6bd52083505dae3a3f5d0af2209ce84704692f56a608a7e7181cd9048aad421f/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Mar  1 04:42:59 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6bd52083505dae3a3f5d0af2209ce84704692f56a608a7e7181cd9048aad421f/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Mar  1 04:42:59 np0005634532 podman[87033]: 2026-03-01 09:42:59.806301818 +0000 UTC m=+0.146818421 container init e201ffaa7412331e99436825098a078227a3be8ac3aa6f38080149074ead461e (image=quay.io/ceph/ceph:v19, name=inspiring_archimedes, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Mar  1 04:42:59 np0005634532 podman[87033]: 2026-03-01 09:42:59.810681377 +0000 UTC m=+0.151197960 container start e201ffaa7412331e99436825098a078227a3be8ac3aa6f38080149074ead461e (image=quay.io/ceph/ceph:v19, name=inspiring_archimedes, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Mar  1 04:42:59 np0005634532 podman[87033]: 2026-03-01 09:42:59.814638305 +0000 UTC m=+0.155154948 container attach e201ffaa7412331e99436825098a078227a3be8ac3aa6f38080149074ead461e (image=quay.io/ceph/ceph:v19, name=inspiring_archimedes, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Mar  1 04:42:59 np0005634532 happy_hoover[87028]: --> passed data devices: 0 physical, 1 LVM
Mar  1 04:42:59 np0005634532 happy_hoover[87028]: --> All data devices are unavailable
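The one-shot container above is cephadm running ceph-volume against the host, which finds no usable data device. A sketch of querying the orchestrator's device inventory to see why; the JSON field names here are assumed from recent releases and may vary:

    # Sketch: list which devices the orchestrator considers usable, to see
    # why ceph-volume reported "All data devices are unavailable".
    import json, subprocess

    out = subprocess.run(["ceph", "orch", "device", "ls", "--format", "json"],
                         check=True, capture_output=True, text=True).stdout
    for host in json.loads(out):
        for dev in host.get("devices", []):
            print(host.get("addr"), dev.get("path"), dev.get("available"),
                  "; ".join(dev.get("rejected_reasons", [])))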
Mar  1 04:42:59 np0005634532 systemd[1]: libpod-f3b2b6d83cbb1a42b50b2f73e9bc8b55162bca3bf205c4dabf31432c7394b777.scope: Deactivated successfully.
Mar  1 04:42:59 np0005634532 podman[86986]: 2026-03-01 09:42:59.906732015 +0000 UTC m=+0.489587804 container died f3b2b6d83cbb1a42b50b2f73e9bc8b55162bca3bf205c4dabf31432c7394b777 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_hoover, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Mar  1 04:42:59 np0005634532 systemd[1]: var-lib-containers-storage-overlay-2ef0b65c43b5c724457532354ebebe643eb1a95f3b4bd8b45c7415be48a04f0e-merged.mount: Deactivated successfully.
Mar  1 04:42:59 np0005634532 podman[86986]: 2026-03-01 09:42:59.958906162 +0000 UTC m=+0.541761911 container remove f3b2b6d83cbb1a42b50b2f73e9bc8b55162bca3bf205c4dabf31432c7394b777 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_hoover, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Mar  1 04:42:59 np0005634532 systemd[1]: libpod-conmon-f3b2b6d83cbb1a42b50b2f73e9bc8b55162bca3bf205c4dabf31432c7394b777.scope: Deactivated successfully.
Mar  1 04:43:00 np0005634532 ceph-mgr[76134]: [progress INFO root] Writing back 5 completed events
Mar  1 04:43:00 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Mar  1 04:43:00 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:43:00 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e14 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Mar  1 04:43:00 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Mar  1 04:43:00 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/919029259' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Mar  1 04:43:00 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e14 do_prune osdmap full prune enabled
Mar  1 04:43:00 np0005634532 ceph-mon[75825]: from='client.? 192.168.122.100:0/3412920439' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Mar  1 04:43:00 np0005634532 ceph-mon[75825]: Health check failed: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Mar  1 04:43:00 np0005634532 ceph-mon[75825]: from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:43:00 np0005634532 ceph-mon[75825]: from='client.? 192.168.122.100:0/919029259' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Mar  1 04:43:00 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/919029259' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Mar  1 04:43:00 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e15 e15: 2 total, 2 up, 2 in
Mar  1 04:43:00 np0005634532 inspiring_archimedes[87049]: pool 'images' created
Mar  1 04:43:00 np0005634532 ceph-mon[75825]: log_channel(cluster) log [DBG] : osdmap e15: 2 total, 2 up, 2 in
Mar  1 04:43:00 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 15 pg[5.0( empty local-lis/les=0/0 n=0 ec=15/15 lis/c=0/0 les/c/f=0/0/0 sis=15) [0] r=0 lpr=15 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Mar  1 04:43:00 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 15 pg[4.0( empty local-lis/les=14/15 n=0 ec=14/14 lis/c=0/0 les/c/f=0/0/0 sis=14) [0] r=0 lpr=14 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Mar  1 04:43:00 np0005634532 systemd[1]: libpod-e201ffaa7412331e99436825098a078227a3be8ac3aa6f38080149074ead461e.scope: Deactivated successfully.
Mar  1 04:43:00 np0005634532 podman[87033]: 2026-03-01 09:43:00.287339998 +0000 UTC m=+0.627856571 container died e201ffaa7412331e99436825098a078227a3be8ac3aa6f38080149074ead461e (image=quay.io/ceph/ceph:v19, name=inspiring_archimedes, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Mar  1 04:43:00 np0005634532 systemd[1]: var-lib-containers-storage-overlay-6bd52083505dae3a3f5d0af2209ce84704692f56a608a7e7181cd9048aad421f-merged.mount: Deactivated successfully.
Mar  1 04:43:00 np0005634532 podman[87033]: 2026-03-01 09:43:00.339642919 +0000 UTC m=+0.680159502 container remove e201ffaa7412331e99436825098a078227a3be8ac3aa6f38080149074ead461e (image=quay.io/ceph/ceph:v19, name=inspiring_archimedes, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Mar  1 04:43:00 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd new", "uuid": "744a32b2-e29a-4a4a-aa21-93a26beb3b80"} v 0)
Mar  1 04:43:00 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "744a32b2-e29a-4a4a-aa21-93a26beb3b80"}]: dispatch
Mar  1 04:43:00 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e15 do_prune osdmap full prune enabled
Mar  1 04:43:00 np0005634532 systemd[1]: libpod-conmon-e201ffaa7412331e99436825098a078227a3be8ac3aa6f38080149074ead461e.scope: Deactivated successfully.
Mar  1 04:43:00 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "744a32b2-e29a-4a4a-aa21-93a26beb3b80"}]': finished
Mar  1 04:43:00 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e16 e16: 3 total, 2 up, 3 in
Mar  1 04:43:00 np0005634532 ceph-mon[75825]: log_channel(cluster) log [DBG] : osdmap e16: 3 total, 2 up, 3 in
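This osd new exchange is the monitor side of ceph-volume preparing a third OSD on another host: the client.bootstrap-osd identity registers the OSD fsid, the monitors allocate the next free id, and the map jumps to 3 total / 3 in while the daemon itself is still down (2 up). As a sketch, the underlying call ceph-volume makes looks like this (the keyring path is the conventional bootstrap location, an assumption here):

    # Allocate an OSD id for a prepared OSD fsid via the bootstrap identity:
    ceph --name client.bootstrap-osd \
         --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring \
         osd new 744a32b2-e29a-4a4a-aa21-93a26beb3b80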
Mar  1 04:43:00 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Mar  1 04:43:00 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Mar  1 04:43:00 np0005634532 ceph-mgr[76134]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
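The mgr error above is benign at this stage: osd.2 exists in the osdmap from the osd new at e16 but has not booted yet, so the monitor has no daemon metadata to hand back and returns ENOENT. The same query can be rerun by hand once the daemon starts; a sketch:

    # Returns ENOENT until osd.2 boots and reports its metadata:
    ceph osd metadata 2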
Mar  1 04:43:00 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 16 pg[5.0( empty local-lis/les=15/16 n=0 ec=15/15 lis/c=0/0 les/c/f=0/0/0 sis=15) [0] r=0 lpr=15 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Mar  1 04:43:00 np0005634532 podman[87224]: 2026-03-01 09:43:00.517960832 +0000 UTC m=+0.033616157 container create 4c2acd0d7b165df0d00b8bfbf0205a25c68515ff68f168d74d03bd374a8b2636 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_engelbart, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Mar  1 04:43:00 np0005634532 systemd[1]: Started libpod-conmon-4c2acd0d7b165df0d00b8bfbf0205a25c68515ff68f168d74d03bd374a8b2636.scope.
Mar  1 04:43:00 np0005634532 systemd[1]: Started libcrun container.
Mar  1 04:43:00 np0005634532 podman[87224]: 2026-03-01 09:43:00.501836741 +0000 UTC m=+0.017492116 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Mar  1 04:43:00 np0005634532 podman[87224]: 2026-03-01 09:43:00.600916685 +0000 UTC m=+0.116572050 container init 4c2acd0d7b165df0d00b8bfbf0205a25c68515ff68f168d74d03bd374a8b2636 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_engelbart, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Mar  1 04:43:00 np0005634532 podman[87224]: 2026-03-01 09:43:00.607132679 +0000 UTC m=+0.122788024 container start 4c2acd0d7b165df0d00b8bfbf0205a25c68515ff68f168d74d03bd374a8b2636 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_engelbart, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Mar  1 04:43:00 np0005634532 podman[87224]: 2026-03-01 09:43:00.610432951 +0000 UTC m=+0.126088356 container attach 4c2acd0d7b165df0d00b8bfbf0205a25c68515ff68f168d74d03bd374a8b2636 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_engelbart, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Mar  1 04:43:00 np0005634532 jovial_engelbart[87242]: 167 167
Mar  1 04:43:00 np0005634532 systemd[1]: libpod-4c2acd0d7b165df0d00b8bfbf0205a25c68515ff68f168d74d03bd374a8b2636.scope: Deactivated successfully.
Mar  1 04:43:00 np0005634532 podman[87224]: 2026-03-01 09:43:00.613857146 +0000 UTC m=+0.129512501 container died 4c2acd0d7b165df0d00b8bfbf0205a25c68515ff68f168d74d03bd374a8b2636 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_engelbart, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Mar  1 04:43:00 np0005634532 python3[87226]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 437b1e74-f995-5d64-af1d-257ce01d77ab -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create cephfs.cephfs.meta  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
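The ansible task above is the whole pool-create mechanism in one line: a --rm podman container wrapping the ceph CLI. Reflowed for readability, with every flag exactly as logged:

    podman run --rm --net=host --ipc=host \
        --volume /etc/ceph:/etc/ceph:z \
        --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z \
        --entrypoint ceph quay.io/ceph/ceph:v19 \
        --fsid 437b1e74-f995-5d64-af1d-257ce01d77ab \
        -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring \
        osd pool create cephfs.cephfs.meta replicated_rule --autoscale-mode on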
Mar  1 04:43:00 np0005634532 systemd[1]: var-lib-containers-storage-overlay-1b54a827ac38114ce17f957922860fe1bdb272178369966cbc8b89d1bd760b39-merged.mount: Deactivated successfully.
Mar  1 04:43:00 np0005634532 podman[87224]: 2026-03-01 09:43:00.653754498 +0000 UTC m=+0.169409833 container remove 4c2acd0d7b165df0d00b8bfbf0205a25c68515ff68f168d74d03bd374a8b2636 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_engelbart, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Mar  1 04:43:00 np0005634532 systemd[1]: libpod-conmon-4c2acd0d7b165df0d00b8bfbf0205a25c68515ff68f168d74d03bd374a8b2636.scope: Deactivated successfully.
Mar  1 04:43:00 np0005634532 podman[87254]: 2026-03-01 09:43:00.711101614 +0000 UTC m=+0.057556022 container create 94c706800462e7cb59b92a9293ab1a64ec4c460019234350d9aebcea27b48cea (image=quay.io/ceph/ceph:v19, name=pensive_moser, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Mar  1 04:43:00 np0005634532 ceph-mon[75825]: log_channel(cluster) log [DBG] : Standby manager daemon compute-2.dikzlj started
Mar  1 04:43:00 np0005634532 systemd[1]: Started libpod-conmon-94c706800462e7cb59b92a9293ab1a64ec4c460019234350d9aebcea27b48cea.scope.
Mar  1 04:43:00 np0005634532 ceph-mgr[76134]: mgr.server handle_open ignoring open from mgr.compute-2.dikzlj 192.168.122.102:0/3081575608; not ready for session (expect reconnect)
Mar  1 04:43:00 np0005634532 systemd[1]: Started libcrun container.
Mar  1 04:43:00 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ebd004f688105813cfab5bfa6f59235df62c47602edcf5cce55b7809d99b96a5/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Mar  1 04:43:00 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ebd004f688105813cfab5bfa6f59235df62c47602edcf5cce55b7809d99b96a5/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
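These xfs notices appear whenever podman bind-mounts host paths (the :z-labelled volumes above) from a filesystem created without the xfs bigtime feature; they warn about the 2038 timestamp limit and are not errors. Whether a given filesystem has bigtime can be checked directly, assuming an xfsprogs recent enough to report the flag:

    # bigtime=1 means the filesystem stores timestamps beyond 2038:
    xfs_info / | grep -o 'bigtime=[01]'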
Mar  1 04:43:00 np0005634532 podman[87254]: 2026-03-01 09:43:00.687033526 +0000 UTC m=+0.033488014 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Mar  1 04:43:00 np0005634532 podman[87281]: 2026-03-01 09:43:00.795744668 +0000 UTC m=+0.048356933 container create 97ab83c2ec0dffdf9bd048bb9aa1ff8297536ed7b0cc74fa8b0bcd18e827fe3a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_agnesi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Mar  1 04:43:00 np0005634532 podman[87254]: 2026-03-01 09:43:00.8255855 +0000 UTC m=+0.172039928 container init 94c706800462e7cb59b92a9293ab1a64ec4c460019234350d9aebcea27b48cea (image=quay.io/ceph/ceph:v19, name=pensive_moser, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Mar  1 04:43:00 np0005634532 podman[87254]: 2026-03-01 09:43:00.831945368 +0000 UTC m=+0.178399806 container start 94c706800462e7cb59b92a9293ab1a64ec4c460019234350d9aebcea27b48cea (image=quay.io/ceph/ceph:v19, name=pensive_moser, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Mar  1 04:43:00 np0005634532 podman[87254]: 2026-03-01 09:43:00.835886576 +0000 UTC m=+0.182340974 container attach 94c706800462e7cb59b92a9293ab1a64ec4c460019234350d9aebcea27b48cea (image=quay.io/ceph/ceph:v19, name=pensive_moser, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Mar  1 04:43:00 np0005634532 systemd[1]: Started libpod-conmon-97ab83c2ec0dffdf9bd048bb9aa1ff8297536ed7b0cc74fa8b0bcd18e827fe3a.scope.
Mar  1 04:43:00 np0005634532 systemd[1]: Started libcrun container.
Mar  1 04:43:00 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa85a2d3364450d91cb0134ede3473ba4203bb1d81ec5043d79db31157304d74/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Mar  1 04:43:00 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa85a2d3364450d91cb0134ede3473ba4203bb1d81ec5043d79db31157304d74/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Mar  1 04:43:00 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa85a2d3364450d91cb0134ede3473ba4203bb1d81ec5043d79db31157304d74/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Mar  1 04:43:00 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa85a2d3364450d91cb0134ede3473ba4203bb1d81ec5043d79db31157304d74/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Mar  1 04:43:00 np0005634532 podman[87281]: 2026-03-01 09:43:00.772391117 +0000 UTC m=+0.025003422 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Mar  1 04:43:00 np0005634532 podman[87281]: 2026-03-01 09:43:00.876355752 +0000 UTC m=+0.128968037 container init 97ab83c2ec0dffdf9bd048bb9aa1ff8297536ed7b0cc74fa8b0bcd18e827fe3a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_agnesi, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Mar  1 04:43:00 np0005634532 podman[87281]: 2026-03-01 09:43:00.883223583 +0000 UTC m=+0.135835858 container start 97ab83c2ec0dffdf9bd048bb9aa1ff8297536ed7b0cc74fa8b0bcd18e827fe3a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_agnesi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Mar  1 04:43:00 np0005634532 podman[87281]: 2026-03-01 09:43:00.886944565 +0000 UTC m=+0.139556850 container attach 97ab83c2ec0dffdf9bd048bb9aa1ff8297536ed7b0cc74fa8b0bcd18e827fe3a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_agnesi, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Mar  1 04:43:00 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v62: 5 pgs: 4 unknown, 1 active+clean; 449 KiB data, 853 MiB used, 39 GiB / 40 GiB avail
Mar  1 04:43:01 np0005634532 naughty_agnesi[87301]: {
Mar  1 04:43:01 np0005634532 naughty_agnesi[87301]:    "0": [
Mar  1 04:43:01 np0005634532 naughty_agnesi[87301]:        {
Mar  1 04:43:01 np0005634532 naughty_agnesi[87301]:            "devices": [
Mar  1 04:43:01 np0005634532 naughty_agnesi[87301]:                "/dev/loop3"
Mar  1 04:43:01 np0005634532 naughty_agnesi[87301]:            ],
Mar  1 04:43:01 np0005634532 naughty_agnesi[87301]:            "lv_name": "ceph_lv0",
Mar  1 04:43:01 np0005634532 naughty_agnesi[87301]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Mar  1 04:43:01 np0005634532 naughty_agnesi[87301]:            "lv_size": "21470642176",
Mar  1 04:43:01 np0005634532 naughty_agnesi[87301]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=0RN1y3-WDwf-vmRn-3Uec-9cdU-WGcX-8Z6LEG,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=437b1e74-f995-5d64-af1d-257ce01d77ab,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=e5da778e-73b7-4ea1-8a91-750fe3f6aa68,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Mar  1 04:43:01 np0005634532 naughty_agnesi[87301]:            "lv_uuid": "0RN1y3-WDwf-vmRn-3Uec-9cdU-WGcX-8Z6LEG",
Mar  1 04:43:01 np0005634532 naughty_agnesi[87301]:            "name": "ceph_lv0",
Mar  1 04:43:01 np0005634532 naughty_agnesi[87301]:            "path": "/dev/ceph_vg0/ceph_lv0",
Mar  1 04:43:01 np0005634532 naughty_agnesi[87301]:            "tags": {
Mar  1 04:43:01 np0005634532 naughty_agnesi[87301]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Mar  1 04:43:01 np0005634532 naughty_agnesi[87301]:                "ceph.block_uuid": "0RN1y3-WDwf-vmRn-3Uec-9cdU-WGcX-8Z6LEG",
Mar  1 04:43:01 np0005634532 naughty_agnesi[87301]:                "ceph.cephx_lockbox_secret": "",
Mar  1 04:43:01 np0005634532 naughty_agnesi[87301]:                "ceph.cluster_fsid": "437b1e74-f995-5d64-af1d-257ce01d77ab",
Mar  1 04:43:01 np0005634532 naughty_agnesi[87301]:                "ceph.cluster_name": "ceph",
Mar  1 04:43:01 np0005634532 naughty_agnesi[87301]:                "ceph.crush_device_class": "",
Mar  1 04:43:01 np0005634532 naughty_agnesi[87301]:                "ceph.encrypted": "0",
Mar  1 04:43:01 np0005634532 naughty_agnesi[87301]:                "ceph.osd_fsid": "e5da778e-73b7-4ea1-8a91-750fe3f6aa68",
Mar  1 04:43:01 np0005634532 naughty_agnesi[87301]:                "ceph.osd_id": "0",
Mar  1 04:43:01 np0005634532 naughty_agnesi[87301]:                "ceph.osdspec_affinity": "default_drive_group",
Mar  1 04:43:01 np0005634532 naughty_agnesi[87301]:                "ceph.type": "block",
Mar  1 04:43:01 np0005634532 naughty_agnesi[87301]:                "ceph.vdo": "0",
Mar  1 04:43:01 np0005634532 naughty_agnesi[87301]:                "ceph.with_tpm": "0"
Mar  1 04:43:01 np0005634532 naughty_agnesi[87301]:            },
Mar  1 04:43:01 np0005634532 naughty_agnesi[87301]:            "type": "block",
Mar  1 04:43:01 np0005634532 naughty_agnesi[87301]:            "vg_name": "ceph_vg0"
Mar  1 04:43:01 np0005634532 naughty_agnesi[87301]:        }
Mar  1 04:43:01 np0005634532 naughty_agnesi[87301]:    ]
Mar  1 04:43:01 np0005634532 naughty_agnesi[87301]: }
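The JSON above is the stdout of the naughty_agnesi container and has the shape of ceph-volume lvm list --format json output: one entry for osd.0 on LV /dev/ceph_vg0/ceph_lv0 backed by /dev/loop3, with the cluster fsid and osdspec affinity carried as LV tags. If that output were captured to a file, the interesting fields could be pulled back out with jq; a hypothetical sketch (the lvm-list.json filename is an assumption):

    # osd id, LV path, and backing device from the captured listing:
    jq -r '.["0"][] | "\(.tags["ceph.osd_id"]) \(.lv_path) \(.devices[0])"' lvm-list.json
    # -> 0 /dev/ceph_vg0/ceph_lv0 /dev/loop3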
Mar  1 04:43:01 np0005634532 systemd[1]: libpod-97ab83c2ec0dffdf9bd048bb9aa1ff8297536ed7b0cc74fa8b0bcd18e827fe3a.scope: Deactivated successfully.
Mar  1 04:43:01 np0005634532 podman[87281]: 2026-03-01 09:43:01.140598922 +0000 UTC m=+0.393211197 container died 97ab83c2ec0dffdf9bd048bb9aa1ff8297536ed7b0cc74fa8b0bcd18e827fe3a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_agnesi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Mar  1 04:43:01 np0005634532 podman[87281]: 2026-03-01 09:43:01.180645878 +0000 UTC m=+0.433258143 container remove 97ab83c2ec0dffdf9bd048bb9aa1ff8297536ed7b0cc74fa8b0bcd18e827fe3a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_agnesi, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Mar  1 04:43:01 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Mar  1 04:43:01 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4083798944' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Mar  1 04:43:01 np0005634532 systemd[1]: libpod-conmon-97ab83c2ec0dffdf9bd048bb9aa1ff8297536ed7b0cc74fa8b0bcd18e827fe3a.scope: Deactivated successfully.
Mar  1 04:43:01 np0005634532 ceph-mon[75825]: from='client.? 192.168.122.100:0/919029259' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Mar  1 04:43:01 np0005634532 ceph-mon[75825]: from='client.? 192.168.122.102:0/3236949622' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "744a32b2-e29a-4a4a-aa21-93a26beb3b80"}]: dispatch
Mar  1 04:43:01 np0005634532 ceph-mon[75825]: from='client.? ' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "744a32b2-e29a-4a4a-aa21-93a26beb3b80"}]: dispatch
Mar  1 04:43:01 np0005634532 ceph-mon[75825]: from='client.? ' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "744a32b2-e29a-4a4a-aa21-93a26beb3b80"}]': finished
Mar  1 04:43:01 np0005634532 ceph-mon[75825]: from='client.? 192.168.122.100:0/4083798944' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Mar  1 04:43:01 np0005634532 systemd[1]: var-lib-containers-storage-overlay-fa85a2d3364450d91cb0134ede3473ba4203bb1d81ec5043d79db31157304d74-merged.mount: Deactivated successfully.
Mar  1 04:43:01 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e16 do_prune osdmap full prune enabled
Mar  1 04:43:01 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4083798944' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Mar  1 04:43:01 np0005634532 pensive_moser[87287]: pool 'cephfs.cephfs.meta' created
Mar  1 04:43:01 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e17 e17: 3 total, 2 up, 3 in
Mar  1 04:43:01 np0005634532 ceph-mon[75825]: log_channel(cluster) log [DBG] : osdmap e17: 3 total, 2 up, 3 in
Mar  1 04:43:01 np0005634532 ceph-mon[75825]: log_channel(cluster) log [DBG] : mgrmap e10: compute-0.ebwufc(active, since 108s), standbys: compute-2.dikzlj
Mar  1 04:43:01 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Mar  1 04:43:01 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Mar  1 04:43:01 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-2.dikzlj", "id": "compute-2.dikzlj"} v 0)
Mar  1 04:43:01 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "mgr metadata", "who": "compute-2.dikzlj", "id": "compute-2.dikzlj"}]: dispatch
Mar  1 04:43:01 np0005634532 ceph-mgr[76134]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Mar  1 04:43:01 np0005634532 systemd[1]: libpod-94c706800462e7cb59b92a9293ab1a64ec4c460019234350d9aebcea27b48cea.scope: Deactivated successfully.
Mar  1 04:43:01 np0005634532 podman[87254]: 2026-03-01 09:43:01.410642856 +0000 UTC m=+0.757097264 container died 94c706800462e7cb59b92a9293ab1a64ec4c460019234350d9aebcea27b48cea (image=quay.io/ceph/ceph:v19, name=pensive_moser, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True)
Mar  1 04:43:01 np0005634532 systemd[1]: var-lib-containers-storage-overlay-ebd004f688105813cfab5bfa6f59235df62c47602edcf5cce55b7809d99b96a5-merged.mount: Deactivated successfully.
Mar  1 04:43:01 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 17 pg[6.0( empty local-lis/les=0/0 n=0 ec=17/17 lis/c=0/0 les/c/f=0/0/0 sis=17) [0] r=0 lpr=17 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Mar  1 04:43:01 np0005634532 podman[87254]: 2026-03-01 09:43:01.457103501 +0000 UTC m=+0.803557929 container remove 94c706800462e7cb59b92a9293ab1a64ec4c460019234350d9aebcea27b48cea (image=quay.io/ceph/ceph:v19, name=pensive_moser, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Mar  1 04:43:01 np0005634532 systemd[1]: libpod-conmon-94c706800462e7cb59b92a9293ab1a64ec4c460019234350d9aebcea27b48cea.scope: Deactivated successfully.
Mar  1 04:43:01 np0005634532 python3[87452]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 437b1e74-f995-5d64-af1d-257ce01d77ab -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create cephfs.cephfs.data  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
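The same container-wrapped pattern now creates cephfs.cephfs.data. Once these tasks settle, the resulting pools and their autoscale settings can be checked against the monitors directly; a sketch:

    # Pools, replication rules, and application tags:
    ceph osd pool ls detail
    # Autoscaler view of target ratios and pg counts:
    ceph osd pool autoscale-status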
Mar  1 04:43:01 np0005634532 podman[87472]: 2026-03-01 09:43:01.772820151 +0000 UTC m=+0.057530731 container create 9c28b31e9887e5b7671bd2448d057da9bb97d61d1ab991e9ec7e85e53d4ab5f2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_bose, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Mar  1 04:43:01 np0005634532 systemd[1]: Started libpod-conmon-9c28b31e9887e5b7671bd2448d057da9bb97d61d1ab991e9ec7e85e53d4ab5f2.scope.
Mar  1 04:43:01 np0005634532 podman[87484]: 2026-03-01 09:43:01.811896233 +0000 UTC m=+0.059995443 container create 9696b48329575386e095e2cb67db4cf196a6ba37e9b53e295adb76abe3fb1aaf (image=quay.io/ceph/ceph:v19, name=dreamy_wright, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Mar  1 04:43:01 np0005634532 systemd[1]: Started libcrun container.
Mar  1 04:43:01 np0005634532 systemd[1]: Started libpod-conmon-9696b48329575386e095e2cb67db4cf196a6ba37e9b53e295adb76abe3fb1aaf.scope.
Mar  1 04:43:01 np0005634532 podman[87472]: 2026-03-01 09:43:01.750107436 +0000 UTC m=+0.034818036 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Mar  1 04:43:01 np0005634532 podman[87472]: 2026-03-01 09:43:01.847709793 +0000 UTC m=+0.132420313 container init 9c28b31e9887e5b7671bd2448d057da9bb97d61d1ab991e9ec7e85e53d4ab5f2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_bose, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1)
Mar  1 04:43:01 np0005634532 systemd[1]: Started libcrun container.
Mar  1 04:43:01 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6b88012729d23fee40eb08c85a2dc32758ab8bb6b700f71c52de900aeb922aa/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Mar  1 04:43:01 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6b88012729d23fee40eb08c85a2dc32758ab8bb6b700f71c52de900aeb922aa/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Mar  1 04:43:01 np0005634532 podman[87472]: 2026-03-01 09:43:01.859007784 +0000 UTC m=+0.143718284 container start 9c28b31e9887e5b7671bd2448d057da9bb97d61d1ab991e9ec7e85e53d4ab5f2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_bose, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Mar  1 04:43:01 np0005634532 podman[87472]: 2026-03-01 09:43:01.862717716 +0000 UTC m=+0.147428306 container attach 9c28b31e9887e5b7671bd2448d057da9bb97d61d1ab991e9ec7e85e53d4ab5f2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_bose, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Mar  1 04:43:01 np0005634532 boring_bose[87502]: 167 167
Mar  1 04:43:01 np0005634532 systemd[1]: libpod-9c28b31e9887e5b7671bd2448d057da9bb97d61d1ab991e9ec7e85e53d4ab5f2.scope: Deactivated successfully.
Mar  1 04:43:01 np0005634532 podman[87472]: 2026-03-01 09:43:01.866276365 +0000 UTC m=+0.150986865 container died 9c28b31e9887e5b7671bd2448d057da9bb97d61d1ab991e9ec7e85e53d4ab5f2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_bose, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Mar  1 04:43:01 np0005634532 podman[87484]: 2026-03-01 09:43:01.866457119 +0000 UTC m=+0.114556369 container init 9696b48329575386e095e2cb67db4cf196a6ba37e9b53e295adb76abe3fb1aaf (image=quay.io/ceph/ceph:v19, name=dreamy_wright, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Mar  1 04:43:01 np0005634532 podman[87484]: 2026-03-01 09:43:01.871233098 +0000 UTC m=+0.119332298 container start 9696b48329575386e095e2cb67db4cf196a6ba37e9b53e295adb76abe3fb1aaf (image=quay.io/ceph/ceph:v19, name=dreamy_wright, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Mar  1 04:43:01 np0005634532 podman[87484]: 2026-03-01 09:43:01.876038317 +0000 UTC m=+0.124137547 container attach 9696b48329575386e095e2cb67db4cf196a6ba37e9b53e295adb76abe3fb1aaf (image=quay.io/ceph/ceph:v19, name=dreamy_wright, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Mar  1 04:43:01 np0005634532 podman[87484]: 2026-03-01 09:43:01.790609953 +0000 UTC m=+0.038709213 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Mar  1 04:43:01 np0005634532 systemd[1]: var-lib-containers-storage-overlay-1b02545f733481a93f1527845d68e11d7f7dc4959337a64519f774c1d0cfed3d-merged.mount: Deactivated successfully.
Mar  1 04:43:01 np0005634532 podman[87472]: 2026-03-01 09:43:01.898503416 +0000 UTC m=+0.183213916 container remove 9c28b31e9887e5b7671bd2448d057da9bb97d61d1ab991e9ec7e85e53d4ab5f2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_bose, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Mar  1 04:43:01 np0005634532 ceph-mon[75825]: log_channel(cluster) log [DBG] : Standby manager daemon compute-1.uyojxx started
Mar  1 04:43:01 np0005634532 systemd[1]: libpod-conmon-9c28b31e9887e5b7671bd2448d057da9bb97d61d1ab991e9ec7e85e53d4ab5f2.scope: Deactivated successfully.
Mar  1 04:43:01 np0005634532 ceph-mgr[76134]: mgr.server handle_open ignoring open from mgr.compute-1.uyojxx 192.168.122.101:0/1704983407; not ready for session (expect reconnect)
Mar  1 04:43:02 np0005634532 podman[87542]: 2026-03-01 09:43:02.035174774 +0000 UTC m=+0.048366853 container create 39dadbac52ebf45913808adce2a620fba13fc955f49c1d466f9fbd9b8b8967a9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_thompson, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Mar  1 04:43:02 np0005634532 systemd[1]: Started libpod-conmon-39dadbac52ebf45913808adce2a620fba13fc955f49c1d466f9fbd9b8b8967a9.scope.
Mar  1 04:43:02 np0005634532 systemd[1]: Started libcrun container.
Mar  1 04:43:02 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/87d27c3db938d3e10ef9cf95d414dbc8fd112b94e93dce8f167e7d52f16e9ee3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Mar  1 04:43:02 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/87d27c3db938d3e10ef9cf95d414dbc8fd112b94e93dce8f167e7d52f16e9ee3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Mar  1 04:43:02 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/87d27c3db938d3e10ef9cf95d414dbc8fd112b94e93dce8f167e7d52f16e9ee3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Mar  1 04:43:02 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/87d27c3db938d3e10ef9cf95d414dbc8fd112b94e93dce8f167e7d52f16e9ee3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Mar  1 04:43:02 np0005634532 podman[87542]: 2026-03-01 09:43:02.011407813 +0000 UTC m=+0.024599922 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Mar  1 04:43:02 np0005634532 podman[87542]: 2026-03-01 09:43:02.116896096 +0000 UTC m=+0.130088285 container init 39dadbac52ebf45913808adce2a620fba13fc955f49c1d466f9fbd9b8b8967a9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_thompson, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True)
Mar  1 04:43:02 np0005634532 podman[87542]: 2026-03-01 09:43:02.122389713 +0000 UTC m=+0.135581832 container start 39dadbac52ebf45913808adce2a620fba13fc955f49c1d466f9fbd9b8b8967a9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_thompson, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Mar  1 04:43:02 np0005634532 podman[87542]: 2026-03-01 09:43:02.125858109 +0000 UTC m=+0.139050238 container attach 39dadbac52ebf45913808adce2a620fba13fc955f49c1d466f9fbd9b8b8967a9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_thompson, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Mar  1 04:43:02 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0)
Mar  1 04:43:02 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/450763826' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Mar  1 04:43:02 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e17 do_prune osdmap full prune enabled
Mar  1 04:43:02 np0005634532 ceph-mon[75825]: from='client.? 192.168.122.100:0/4083798944' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Mar  1 04:43:02 np0005634532 ceph-mon[75825]: from='client.? 192.168.122.100:0/450763826' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Mar  1 04:43:02 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/450763826' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Mar  1 04:43:02 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e18 e18: 3 total, 2 up, 3 in
Mar  1 04:43:02 np0005634532 dreamy_wright[87507]: pool 'cephfs.cephfs.data' created
Mar  1 04:43:02 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 18 pg[6.0( empty local-lis/les=17/18 n=0 ec=17/17 lis/c=0/0 les/c/f=0/0/0 sis=17) [0] r=0 lpr=17 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Mar  1 04:43:02 np0005634532 ceph-mon[75825]: log_channel(cluster) log [DBG] : osdmap e18: 3 total, 2 up, 3 in
Mar  1 04:43:02 np0005634532 ceph-mon[75825]: log_channel(cluster) log [DBG] : mgrmap e11: compute-0.ebwufc(active, since 109s), standbys: compute-2.dikzlj, compute-1.uyojxx
Mar  1 04:43:02 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Mar  1 04:43:02 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Mar  1 04:43:02 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-1.uyojxx", "id": "compute-1.uyojxx"} v 0)
Mar  1 04:43:02 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "mgr metadata", "who": "compute-1.uyojxx", "id": "compute-1.uyojxx"}]: dispatch
Mar  1 04:43:02 np0005634532 ceph-mgr[76134]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Mar  1 04:43:02 np0005634532 systemd[1]: libpod-9696b48329575386e095e2cb67db4cf196a6ba37e9b53e295adb76abe3fb1aaf.scope: Deactivated successfully.
Mar  1 04:43:02 np0005634532 podman[87484]: 2026-03-01 09:43:02.461959215 +0000 UTC m=+0.710058455 container died 9696b48329575386e095e2cb67db4cf196a6ba37e9b53e295adb76abe3fb1aaf (image=quay.io/ceph/ceph:v19, name=dreamy_wright, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Mar  1 04:43:02 np0005634532 systemd[1]: var-lib-containers-storage-overlay-c6b88012729d23fee40eb08c85a2dc32758ab8bb6b700f71c52de900aeb922aa-merged.mount: Deactivated successfully.
Mar  1 04:43:02 np0005634532 podman[87484]: 2026-03-01 09:43:02.507117928 +0000 UTC m=+0.755217168 container remove 9696b48329575386e095e2cb67db4cf196a6ba37e9b53e295adb76abe3fb1aaf (image=quay.io/ceph/ceph:v19, name=dreamy_wright, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid)
Mar  1 04:43:02 np0005634532 systemd[1]: libpod-conmon-9696b48329575386e095e2cb67db4cf196a6ba37e9b53e295adb76abe3fb1aaf.scope: Deactivated successfully.
Mar  1 04:43:02 np0005634532 lvm[87683]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Mar  1 04:43:02 np0005634532 lvm[87683]: VG ceph_vg0 finished
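The two lvm lines confirm event-based activation of ceph_vg0 once /dev/loop3 appeared, matching the LV inventoried in the JSON listing earlier. The VG and its ceph tags can be inspected on the host; a sketch:

    # Physical volume and the ceph-tagged LV inside ceph_vg0:
    pvs /dev/loop3
    lvs -o lv_name,vg_name,lv_tags ceph_vg0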
Mar  1 04:43:02 np0005634532 python3[87675]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 437b1e74-f995-5d64-af1d-257ce01d77ab -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable vms rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
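This task tags the existing vms pool with the rbd application, which keeps cluster health checks from flagging an application-less pool. Stripped of the podman wrapper, the call and a verification look like this, as a sketch:

    ceph osd pool application enable vms rbd
    # Confirm the tag is recorded:
    ceph osd pool application get vms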
Mar  1 04:43:02 np0005634532 gracious_thompson[87568]: {}
Mar  1 04:43:02 np0005634532 systemd[1]: libpod-39dadbac52ebf45913808adce2a620fba13fc955f49c1d466f9fbd9b8b8967a9.scope: Deactivated successfully.
Mar  1 04:43:02 np0005634532 systemd[1]: libpod-39dadbac52ebf45913808adce2a620fba13fc955f49c1d466f9fbd9b8b8967a9.scope: Consumed 1.016s CPU time.
Mar  1 04:43:02 np0005634532 podman[87687]: 2026-03-01 09:43:02.893549686 +0000 UTC m=+0.060002303 container create b24a855086559dc5856704fa7748df6fd66de30a51ff727e9b879e5d1feb086f (image=quay.io/ceph/ceph:v19, name=hopeful_hawking, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Mar  1 04:43:02 np0005634532 podman[87700]: 2026-03-01 09:43:02.920583608 +0000 UTC m=+0.029435953 container died 39dadbac52ebf45913808adce2a620fba13fc955f49c1d466f9fbd9b8b8967a9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_thompson, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Mar  1 04:43:02 np0005634532 systemd[1]: Started libpod-conmon-b24a855086559dc5856704fa7748df6fd66de30a51ff727e9b879e5d1feb086f.scope.
Mar  1 04:43:02 np0005634532 systemd[1]: var-lib-containers-storage-overlay-87d27c3db938d3e10ef9cf95d414dbc8fd112b94e93dce8f167e7d52f16e9ee3-merged.mount: Deactivated successfully.
Mar  1 04:43:02 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v65: 7 pgs: 1 unknown, 1 creating+peering, 5 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Mar  1 04:43:02 np0005634532 podman[87700]: 2026-03-01 09:43:02.960011448 +0000 UTC m=+0.068863793 container remove 39dadbac52ebf45913808adce2a620fba13fc955f49c1d466f9fbd9b8b8967a9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gracious_thompson, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Mar  1 04:43:02 np0005634532 podman[87687]: 2026-03-01 09:43:02.871463407 +0000 UTC m=+0.037916044 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Mar  1 04:43:02 np0005634532 systemd[1]: Started libcrun container.
Mar  1 04:43:02 np0005634532 systemd[1]: libpod-conmon-39dadbac52ebf45913808adce2a620fba13fc955f49c1d466f9fbd9b8b8967a9.scope: Deactivated successfully.
Mar  1 04:43:02 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a79be295c1a7970d5e1b6c2e3e9cf49903d5143573fa622e064c5de0612802a6/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Mar  1 04:43:02 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a79be295c1a7970d5e1b6c2e3e9cf49903d5143573fa622e064c5de0612802a6/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Mar  1 04:43:02 np0005634532 podman[87687]: 2026-03-01 09:43:02.994367993 +0000 UTC m=+0.160820690 container init b24a855086559dc5856704fa7748df6fd66de30a51ff727e9b879e5d1feb086f (image=quay.io/ceph/ceph:v19, name=hopeful_hawking, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Mar  1 04:43:03 np0005634532 podman[87687]: 2026-03-01 09:43:03.001765827 +0000 UTC m=+0.168218434 container start b24a855086559dc5856704fa7748df6fd66de30a51ff727e9b879e5d1feb086f (image=quay.io/ceph/ceph:v19, name=hopeful_hawking, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Mar  1 04:43:03 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Mar  1 04:43:03 np0005634532 podman[87687]: 2026-03-01 09:43:03.005988132 +0000 UTC m=+0.172440779 container attach b24a855086559dc5856704fa7748df6fd66de30a51ff727e9b879e5d1feb086f (image=quay.io/ceph/ceph:v19, name=hopeful_hawking, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Mar  1 04:43:03 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:43:03 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Mar  1 04:43:03 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:43:03 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"} v 0)
Mar  1 04:43:03 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3605126821' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]: dispatch
Mar  1 04:43:03 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e18 do_prune osdmap full prune enabled
Mar  1 04:43:03 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3605126821' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]': finished
Mar  1 04:43:03 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e19 e19: 3 total, 2 up, 3 in
Mar  1 04:43:03 np0005634532 hopeful_hawking[87716]: enabled application 'rbd' on pool 'vms'
Mar  1 04:43:03 np0005634532 ceph-mon[75825]: log_channel(cluster) log [DBG] : osdmap e19: 3 total, 2 up, 3 in
Mar  1 04:43:03 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Mar  1 04:43:03 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Mar  1 04:43:03 np0005634532 ceph-mgr[76134]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Mar  1 04:43:03 np0005634532 ceph-mon[75825]: from='client.? 192.168.122.100:0/450763826' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Mar  1 04:43:03 np0005634532 ceph-mon[75825]: from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:43:03 np0005634532 ceph-mon[75825]: from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:43:03 np0005634532 ceph-mon[75825]: from='client.? 192.168.122.100:0/3605126821' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]: dispatch
Mar  1 04:43:03 np0005634532 systemd[1]: libpod-b24a855086559dc5856704fa7748df6fd66de30a51ff727e9b879e5d1feb086f.scope: Deactivated successfully.
Mar  1 04:43:03 np0005634532 podman[87687]: 2026-03-01 09:43:03.466084111 +0000 UTC m=+0.632536758 container died b24a855086559dc5856704fa7748df6fd66de30a51ff727e9b879e5d1feb086f (image=quay.io/ceph/ceph:v19, name=hopeful_hawking, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Mar  1 04:43:03 np0005634532 systemd[1]: var-lib-containers-storage-overlay-a79be295c1a7970d5e1b6c2e3e9cf49903d5143573fa622e064c5de0612802a6-merged.mount: Deactivated successfully.
Mar  1 04:43:03 np0005634532 podman[87687]: 2026-03-01 09:43:03.513026918 +0000 UTC m=+0.679479575 container remove b24a855086559dc5856704fa7748df6fd66de30a51ff727e9b879e5d1feb086f (image=quay.io/ceph/ceph:v19, name=hopeful_hawking, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Mar  1 04:43:03 np0005634532 systemd[1]: libpod-conmon-b24a855086559dc5856704fa7748df6fd66de30a51ff727e9b879e5d1feb086f.scope: Deactivated successfully.
Mar  1 04:43:03 np0005634532 python3[87780]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 437b1e74-f995-5d64-af1d-257ce01d77ab -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable volumes rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
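The ansible-invoked podman command above is logged on a single line; reflowed here for readability (same flags and arguments as the logged _raw_params, nothing added). The later invocations for the backups, images, cephfs.cephfs.meta, and cephfs.cephfs.data pools differ only in the final "osd pool application enable <pool> <app>" arguments:

    podman run --rm --net=host --ipc=host \
        --volume /etc/ceph:/etc/ceph:z \
        --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z \
        --entrypoint ceph quay.io/ceph/ceph:v19 \
        --fsid 437b1e74-f995-5d64-af1d-257ce01d77ab \
        -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring \
        osd pool application enable volumes rbd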
Mar  1 04:43:03 np0005634532 podman[87781]: 2026-03-01 09:43:03.875979382 +0000 UTC m=+0.053756047 container create 1e6fc9131c2ba124842281db56a41ad00b6ff4d2d9755c68156a276b3baccbc5 (image=quay.io/ceph/ceph:v19, name=inspiring_payne, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Mar  1 04:43:03 np0005634532 systemd[1]: Started libpod-conmon-1e6fc9131c2ba124842281db56a41ad00b6ff4d2d9755c68156a276b3baccbc5.scope.
Mar  1 04:43:03 np0005634532 systemd[1]: Started libcrun container.
Mar  1 04:43:03 np0005634532 podman[87781]: 2026-03-01 09:43:03.852033677 +0000 UTC m=+0.029810362 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Mar  1 04:43:03 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c4c91e4112215ab46043d1d9b9c5fa79f51259d631cc3693261cfa1601693668/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Mar  1 04:43:03 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c4c91e4112215ab46043d1d9b9c5fa79f51259d631cc3693261cfa1601693668/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Mar  1 04:43:03 np0005634532 podman[87781]: 2026-03-01 09:43:03.969870597 +0000 UTC m=+0.147647322 container init 1e6fc9131c2ba124842281db56a41ad00b6ff4d2d9755c68156a276b3baccbc5 (image=quay.io/ceph/ceph:v19, name=inspiring_payne, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Mar  1 04:43:03 np0005634532 podman[87781]: 2026-03-01 09:43:03.977600439 +0000 UTC m=+0.155377104 container start 1e6fc9131c2ba124842281db56a41ad00b6ff4d2d9755c68156a276b3baccbc5 (image=quay.io/ceph/ceph:v19, name=inspiring_payne, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Mar  1 04:43:03 np0005634532 podman[87781]: 2026-03-01 09:43:03.981283451 +0000 UTC m=+0.159060186 container attach 1e6fc9131c2ba124842281db56a41ad00b6ff4d2d9755c68156a276b3baccbc5 (image=quay.io/ceph/ceph:v19, name=inspiring_payne, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325)
Mar  1 04:43:04 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"} v 0)
Mar  1 04:43:04 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3663035535' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]: dispatch
Mar  1 04:43:04 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e19 do_prune osdmap full prune enabled
Mar  1 04:43:04 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3663035535' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]': finished
Mar  1 04:43:04 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e20 e20: 3 total, 2 up, 3 in
Mar  1 04:43:04 np0005634532 inspiring_payne[87796]: enabled application 'rbd' on pool 'volumes'
Mar  1 04:43:04 np0005634532 ceph-mon[75825]: log_channel(cluster) log [DBG] : osdmap e20: 3 total, 2 up, 3 in
Mar  1 04:43:04 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Mar  1 04:43:04 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Mar  1 04:43:04 np0005634532 ceph-mgr[76134]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Mar  1 04:43:04 np0005634532 systemd[1]: libpod-1e6fc9131c2ba124842281db56a41ad00b6ff4d2d9755c68156a276b3baccbc5.scope: Deactivated successfully.
Mar  1 04:43:04 np0005634532 podman[87781]: 2026-03-01 09:43:04.505842262 +0000 UTC m=+0.683618897 container died 1e6fc9131c2ba124842281db56a41ad00b6ff4d2d9755c68156a276b3baccbc5 (image=quay.io/ceph/ceph:v19, name=inspiring_payne, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Mar  1 04:43:04 np0005634532 systemd[1]: var-lib-containers-storage-overlay-c4c91e4112215ab46043d1d9b9c5fa79f51259d631cc3693261cfa1601693668-merged.mount: Deactivated successfully.
Mar  1 04:43:04 np0005634532 podman[87781]: 2026-03-01 09:43:04.537381726 +0000 UTC m=+0.715158371 container remove 1e6fc9131c2ba124842281db56a41ad00b6ff4d2d9755c68156a276b3baccbc5 (image=quay.io/ceph/ceph:v19, name=inspiring_payne, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Mar  1 04:43:04 np0005634532 systemd[1]: libpod-conmon-1e6fc9131c2ba124842281db56a41ad00b6ff4d2d9755c68156a276b3baccbc5.scope: Deactivated successfully.
Mar  1 04:43:04 np0005634532 ceph-mon[75825]: from='client.? 192.168.122.100:0/3605126821' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]': finished
Mar  1 04:43:04 np0005634532 ceph-mon[75825]: from='client.? 192.168.122.100:0/3663035535' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]: dispatch
Mar  1 04:43:04 np0005634532 python3[87857]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 437b1e74-f995-5d64-af1d-257ce01d77ab -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable backups rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Mar  1 04:43:04 np0005634532 podman[87858]: 2026-03-01 09:43:04.890541027 +0000 UTC m=+0.055076331 container create 9800aa55c7eeb75175d6fcd27726dda12d370180084a1e7091c104a76835565b (image=quay.io/ceph/ceph:v19, name=elastic_bhaskara, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Mar  1 04:43:04 np0005634532 systemd[1]: Started libpod-conmon-9800aa55c7eeb75175d6fcd27726dda12d370180084a1e7091c104a76835565b.scope.
Mar  1 04:43:04 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v68: 7 pgs: 1 unknown, 1 creating+peering, 5 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Mar  1 04:43:04 np0005634532 systemd[1]: Started libcrun container.
Mar  1 04:43:04 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a94665173edff0eaf28f9fe16a72f5a3766fc8498b50b66fc2af6331a285c31b/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Mar  1 04:43:04 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a94665173edff0eaf28f9fe16a72f5a3766fc8498b50b66fc2af6331a285c31b/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Mar  1 04:43:04 np0005634532 podman[87858]: 2026-03-01 09:43:04.866900019 +0000 UTC m=+0.031435383 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Mar  1 04:43:04 np0005634532 podman[87858]: 2026-03-01 09:43:04.974615897 +0000 UTC m=+0.139151211 container init 9800aa55c7eeb75175d6fcd27726dda12d370180084a1e7091c104a76835565b (image=quay.io/ceph/ceph:v19, name=elastic_bhaskara, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Mar  1 04:43:04 np0005634532 podman[87858]: 2026-03-01 09:43:04.981293403 +0000 UTC m=+0.145828707 container start 9800aa55c7eeb75175d6fcd27726dda12d370180084a1e7091c104a76835565b (image=quay.io/ceph/ceph:v19, name=elastic_bhaskara, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Mar  1 04:43:04 np0005634532 podman[87858]: 2026-03-01 09:43:04.985405605 +0000 UTC m=+0.149940939 container attach 9800aa55c7eeb75175d6fcd27726dda12d370180084a1e7091c104a76835565b (image=quay.io/ceph/ceph:v19, name=elastic_bhaskara, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Mar  1 04:43:05 np0005634532 ceph-mon[75825]: log_channel(cluster) log [WRN] : Health check update: 6 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Mar  1 04:43:05 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e20 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Mar  1 04:43:05 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"} v 0)
Mar  1 04:43:05 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4099643945' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]: dispatch
Mar  1 04:43:05 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e20 do_prune osdmap full prune enabled
Mar  1 04:43:05 np0005634532 ceph-mon[75825]: from='client.? 192.168.122.100:0/3663035535' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]': finished
Mar  1 04:43:05 np0005634532 ceph-mon[75825]: Health check update: 6 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Mar  1 04:43:05 np0005634532 ceph-mon[75825]: from='client.? 192.168.122.100:0/4099643945' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]: dispatch
Mar  1 04:43:05 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4099643945' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]': finished
Mar  1 04:43:05 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e21 e21: 3 total, 2 up, 3 in
Mar  1 04:43:05 np0005634532 elastic_bhaskara[87873]: enabled application 'rbd' on pool 'backups'
Mar  1 04:43:05 np0005634532 ceph-mon[75825]: log_channel(cluster) log [DBG] : osdmap e21: 3 total, 2 up, 3 in
Mar  1 04:43:05 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Mar  1 04:43:05 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Mar  1 04:43:05 np0005634532 ceph-mgr[76134]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Mar  1 04:43:05 np0005634532 systemd[1]: libpod-9800aa55c7eeb75175d6fcd27726dda12d370180084a1e7091c104a76835565b.scope: Deactivated successfully.
Mar  1 04:43:05 np0005634532 podman[87858]: 2026-03-01 09:43:05.691701606 +0000 UTC m=+0.856236960 container died 9800aa55c7eeb75175d6fcd27726dda12d370180084a1e7091c104a76835565b (image=quay.io/ceph/ceph:v19, name=elastic_bhaskara, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Mar  1 04:43:05 np0005634532 systemd[1]: var-lib-containers-storage-overlay-a94665173edff0eaf28f9fe16a72f5a3766fc8498b50b66fc2af6331a285c31b-merged.mount: Deactivated successfully.
Mar  1 04:43:05 np0005634532 podman[87858]: 2026-03-01 09:43:05.735222138 +0000 UTC m=+0.899757452 container remove 9800aa55c7eeb75175d6fcd27726dda12d370180084a1e7091c104a76835565b (image=quay.io/ceph/ceph:v19, name=elastic_bhaskara, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Mar  1 04:43:05 np0005634532 systemd[1]: libpod-conmon-9800aa55c7eeb75175d6fcd27726dda12d370180084a1e7091c104a76835565b.scope: Deactivated successfully.
Mar  1 04:43:05 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "osd.2"} v 0)
Mar  1 04:43:05 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch
Mar  1 04:43:05 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Mar  1 04:43:05 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Mar  1 04:43:05 np0005634532 ceph-mgr[76134]: [cephadm INFO cephadm.serve] Deploying daemon osd.2 on compute-2
Mar  1 04:43:05 np0005634532 ceph-mgr[76134]: log_channel(cephadm) log [INF] : Deploying daemon osd.2 on compute-2
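The repeated "failed to return metadata for osd.2" errors above come from the cephadm mgr polling an OSD daemon that has not started yet; the deployment on compute-2 announced here is what eventually satisfies those polls. A manual re-check could reuse the same containerized client as the pool commands (a sketch; the subcommand mirrors the mon_command {"prefix": "osd metadata", "id": 2} seen in the log):

    podman run --rm --net=host \
        --volume /etc/ceph:/etc/ceph:z \
        --entrypoint ceph quay.io/ceph/ceph:v19 \
        -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring \
        osd metadata 2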
Mar  1 04:43:06 np0005634532 python3[87935]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 437b1e74-f995-5d64-af1d-257ce01d77ab -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable images rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Mar  1 04:43:06 np0005634532 podman[87936]: 2026-03-01 09:43:06.141856868 +0000 UTC m=+0.054011414 container create a681f9782ffbb46f865e7866cf4590826201e427c270582bced488891b431024 (image=quay.io/ceph/ceph:v19, name=busy_darwin, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Mar  1 04:43:06 np0005634532 systemd[1]: Started libpod-conmon-a681f9782ffbb46f865e7866cf4590826201e427c270582bced488891b431024.scope.
Mar  1 04:43:06 np0005634532 systemd[1]: Started libcrun container.
Mar  1 04:43:06 np0005634532 podman[87936]: 2026-03-01 09:43:06.116404435 +0000 UTC m=+0.028558991 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Mar  1 04:43:06 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f6972619c936a8a95c56a9af752290e5e5ff61f9f18c37cac3606dbefd7c2e6/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Mar  1 04:43:06 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f6972619c936a8a95c56a9af752290e5e5ff61f9f18c37cac3606dbefd7c2e6/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Mar  1 04:43:06 np0005634532 podman[87936]: 2026-03-01 09:43:06.232920122 +0000 UTC m=+0.145074668 container init a681f9782ffbb46f865e7866cf4590826201e427c270582bced488891b431024 (image=quay.io/ceph/ceph:v19, name=busy_darwin, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Mar  1 04:43:06 np0005634532 podman[87936]: 2026-03-01 09:43:06.240943022 +0000 UTC m=+0.153097558 container start a681f9782ffbb46f865e7866cf4590826201e427c270582bced488891b431024 (image=quay.io/ceph/ceph:v19, name=busy_darwin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Mar  1 04:43:06 np0005634532 podman[87936]: 2026-03-01 09:43:06.246131031 +0000 UTC m=+0.158285637 container attach a681f9782ffbb46f865e7866cf4590826201e427c270582bced488891b431024 (image=quay.io/ceph/ceph:v19, name=busy_darwin, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Mar  1 04:43:06 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable", "pool": "images", "app": "rbd"} v 0)
Mar  1 04:43:06 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3840893851' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]: dispatch
Mar  1 04:43:06 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e21 do_prune osdmap full prune enabled
Mar  1 04:43:06 np0005634532 ceph-mon[75825]: from='client.? 192.168.122.100:0/4099643945' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]': finished
Mar  1 04:43:06 np0005634532 ceph-mon[75825]: from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch
Mar  1 04:43:06 np0005634532 ceph-mon[75825]: Deploying daemon osd.2 on compute-2
Mar  1 04:43:06 np0005634532 ceph-mon[75825]: from='client.? 192.168.122.100:0/3840893851' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]: dispatch
Mar  1 04:43:06 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3840893851' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]': finished
Mar  1 04:43:06 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e22 e22: 3 total, 2 up, 3 in
Mar  1 04:43:06 np0005634532 busy_darwin[87952]: enabled application 'rbd' on pool 'images'
Mar  1 04:43:06 np0005634532 ceph-mon[75825]: log_channel(cluster) log [DBG] : osdmap e22: 3 total, 2 up, 3 in
Mar  1 04:43:06 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Mar  1 04:43:06 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Mar  1 04:43:06 np0005634532 ceph-mgr[76134]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Mar  1 04:43:06 np0005634532 systemd[1]: libpod-a681f9782ffbb46f865e7866cf4590826201e427c270582bced488891b431024.scope: Deactivated successfully.
Mar  1 04:43:06 np0005634532 podman[87977]: 2026-03-01 09:43:06.750791508 +0000 UTC m=+0.028421007 container died a681f9782ffbb46f865e7866cf4590826201e427c270582bced488891b431024 (image=quay.io/ceph/ceph:v19, name=busy_darwin, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Mar  1 04:43:06 np0005634532 systemd[1]: var-lib-containers-storage-overlay-7f6972619c936a8a95c56a9af752290e5e5ff61f9f18c37cac3606dbefd7c2e6-merged.mount: Deactivated successfully.
Mar  1 04:43:06 np0005634532 podman[87977]: 2026-03-01 09:43:06.786291211 +0000 UTC m=+0.063920670 container remove a681f9782ffbb46f865e7866cf4590826201e427c270582bced488891b431024 (image=quay.io/ceph/ceph:v19, name=busy_darwin, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Mar  1 04:43:06 np0005634532 systemd[1]: libpod-conmon-a681f9782ffbb46f865e7866cf4590826201e427c270582bced488891b431024.scope: Deactivated successfully.
Mar  1 04:43:06 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v71: 7 pgs: 7 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Mar  1 04:43:07 np0005634532 python3[88017]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 437b1e74-f995-5d64-af1d-257ce01d77ab -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable cephfs.cephfs.meta cephfs _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Mar  1 04:43:07 np0005634532 podman[88018]: 2026-03-01 09:43:07.157605132 +0000 UTC m=+0.064222637 container create 44816c355d3727bc4cf14d7673acbbd84bcfd6c7d090333e0461c71efb30845e (image=quay.io/ceph/ceph:v19, name=sleepy_driscoll, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1)
Mar  1 04:43:07 np0005634532 systemd[1]: Started libpod-conmon-44816c355d3727bc4cf14d7673acbbd84bcfd6c7d090333e0461c71efb30845e.scope.
Mar  1 04:43:07 np0005634532 systemd[1]: Started libcrun container.
Mar  1 04:43:07 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ece1f0d031ca024b8e216835a81221e1102c52bf4d92458fe87b89e55a98c27/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Mar  1 04:43:07 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ece1f0d031ca024b8e216835a81221e1102c52bf4d92458fe87b89e55a98c27/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Mar  1 04:43:07 np0005634532 podman[88018]: 2026-03-01 09:43:07.135956824 +0000 UTC m=+0.042574309 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Mar  1 04:43:07 np0005634532 podman[88018]: 2026-03-01 09:43:07.241456358 +0000 UTC m=+0.148073863 container init 44816c355d3727bc4cf14d7673acbbd84bcfd6c7d090333e0461c71efb30845e (image=quay.io/ceph/ceph:v19, name=sleepy_driscoll, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Mar  1 04:43:07 np0005634532 podman[88018]: 2026-03-01 09:43:07.246012811 +0000 UTC m=+0.152630276 container start 44816c355d3727bc4cf14d7673acbbd84bcfd6c7d090333e0461c71efb30845e (image=quay.io/ceph/ceph:v19, name=sleepy_driscoll, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Mar  1 04:43:07 np0005634532 podman[88018]: 2026-03-01 09:43:07.24957326 +0000 UTC m=+0.156190885 container attach 44816c355d3727bc4cf14d7673acbbd84bcfd6c7d090333e0461c71efb30845e (image=quay.io/ceph/ceph:v19, name=sleepy_driscoll, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Mar  1 04:43:07 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"} v 0)
Mar  1 04:43:07 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2534263255' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]: dispatch
Mar  1 04:43:07 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e22 do_prune osdmap full prune enabled
Mar  1 04:43:07 np0005634532 ceph-mon[75825]: from='client.? 192.168.122.100:0/3840893851' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]': finished
Mar  1 04:43:07 np0005634532 ceph-mon[75825]: from='client.? 192.168.122.100:0/2534263255' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]: dispatch
Mar  1 04:43:07 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2534263255' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]': finished
Mar  1 04:43:07 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e23 e23: 3 total, 2 up, 3 in
Mar  1 04:43:07 np0005634532 sleepy_driscoll[88034]: enabled application 'cephfs' on pool 'cephfs.cephfs.meta'
Mar  1 04:43:07 np0005634532 ceph-mon[75825]: log_channel(cluster) log [DBG] : osdmap e23: 3 total, 2 up, 3 in
Mar  1 04:43:07 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Mar  1 04:43:07 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Mar  1 04:43:07 np0005634532 ceph-mgr[76134]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Mar  1 04:43:07 np0005634532 systemd[1]: libpod-44816c355d3727bc4cf14d7673acbbd84bcfd6c7d090333e0461c71efb30845e.scope: Deactivated successfully.
Mar  1 04:43:07 np0005634532 podman[88018]: 2026-03-01 09:43:07.748043152 +0000 UTC m=+0.654660647 container died 44816c355d3727bc4cf14d7673acbbd84bcfd6c7d090333e0461c71efb30845e (image=quay.io/ceph/ceph:v19, name=sleepy_driscoll, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Mar  1 04:43:07 np0005634532 systemd[1]: var-lib-containers-storage-overlay-2ece1f0d031ca024b8e216835a81221e1102c52bf4d92458fe87b89e55a98c27-merged.mount: Deactivated successfully.
Mar  1 04:43:07 np0005634532 podman[88018]: 2026-03-01 09:43:07.788745564 +0000 UTC m=+0.695363069 container remove 44816c355d3727bc4cf14d7673acbbd84bcfd6c7d090333e0461c71efb30845e (image=quay.io/ceph/ceph:v19, name=sleepy_driscoll, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Mar  1 04:43:07 np0005634532 systemd[1]: libpod-conmon-44816c355d3727bc4cf14d7673acbbd84bcfd6c7d090333e0461c71efb30845e.scope: Deactivated successfully.
Mar  1 04:43:08 np0005634532 python3[88097]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 437b1e74-f995-5d64-af1d-257ce01d77ab -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable cephfs.cephfs.data cephfs _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Mar  1 04:43:08 np0005634532 podman[88098]: 2026-03-01 09:43:08.169878189 +0000 UTC m=+0.037916863 container create 230d1d085b9ebfd6bc0e08063ca0f5760c4416cc76a6491d3c9527c17b720acf (image=quay.io/ceph/ceph:v19, name=jolly_euler, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Mar  1 04:43:08 np0005634532 systemd[1]: Started libpod-conmon-230d1d085b9ebfd6bc0e08063ca0f5760c4416cc76a6491d3c9527c17b720acf.scope.
Mar  1 04:43:08 np0005634532 systemd[1]: Started libcrun container.
Mar  1 04:43:08 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/522b50c0086c64e129fddc853dbe1397600a2bf0eda6eb16471c973b863c8bf6/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Mar  1 04:43:08 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/522b50c0086c64e129fddc853dbe1397600a2bf0eda6eb16471c973b863c8bf6/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Mar  1 04:43:08 np0005634532 podman[88098]: 2026-03-01 09:43:08.238436884 +0000 UTC m=+0.106475538 container init 230d1d085b9ebfd6bc0e08063ca0f5760c4416cc76a6491d3c9527c17b720acf (image=quay.io/ceph/ceph:v19, name=jolly_euler, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Mar  1 04:43:08 np0005634532 podman[88098]: 2026-03-01 09:43:08.242949316 +0000 UTC m=+0.110987980 container start 230d1d085b9ebfd6bc0e08063ca0f5760c4416cc76a6491d3c9527c17b720acf (image=quay.io/ceph/ceph:v19, name=jolly_euler, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Mar  1 04:43:08 np0005634532 podman[88098]: 2026-03-01 09:43:08.246405292 +0000 UTC m=+0.114444176 container attach 230d1d085b9ebfd6bc0e08063ca0f5760c4416cc76a6491d3c9527c17b720acf (image=quay.io/ceph/ceph:v19, name=jolly_euler, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True)
Mar  1 04:43:08 np0005634532 podman[88098]: 2026-03-01 09:43:08.151129063 +0000 UTC m=+0.019167727 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Mar  1 04:43:08 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"} v 0)
Mar  1 04:43:08 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/603338138' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]: dispatch
Mar  1 04:43:08 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e23 do_prune osdmap full prune enabled
Mar  1 04:43:08 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/603338138' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]': finished
Mar  1 04:43:08 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e24 e24: 3 total, 2 up, 3 in
Mar  1 04:43:08 np0005634532 jolly_euler[88113]: enabled application 'cephfs' on pool 'cephfs.cephfs.data'
Mar  1 04:43:08 np0005634532 ceph-mon[75825]: log_channel(cluster) log [DBG] : osdmap e24: 3 total, 2 up, 3 in
Mar  1 04:43:08 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Mar  1 04:43:08 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Mar  1 04:43:08 np0005634532 ceph-mgr[76134]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Mar  1 04:43:08 np0005634532 ceph-mon[75825]: from='client.? 192.168.122.100:0/2534263255' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]': finished
Mar  1 04:43:08 np0005634532 ceph-mon[75825]: from='client.? 192.168.122.100:0/603338138' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]: dispatch
Mar  1 04:43:08 np0005634532 systemd[1]: libpod-230d1d085b9ebfd6bc0e08063ca0f5760c4416cc76a6491d3c9527c17b720acf.scope: Deactivated successfully.
Mar  1 04:43:08 np0005634532 podman[88098]: 2026-03-01 09:43:08.753890659 +0000 UTC m=+0.621929293 container died 230d1d085b9ebfd6bc0e08063ca0f5760c4416cc76a6491d3c9527c17b720acf (image=quay.io/ceph/ceph:v19, name=jolly_euler, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Mar  1 04:43:08 np0005634532 systemd[1]: var-lib-containers-storage-overlay-522b50c0086c64e129fddc853dbe1397600a2bf0eda6eb16471c973b863c8bf6-merged.mount: Deactivated successfully.
Mar  1 04:43:08 np0005634532 podman[88098]: 2026-03-01 09:43:08.790831138 +0000 UTC m=+0.658869772 container remove 230d1d085b9ebfd6bc0e08063ca0f5760c4416cc76a6491d3c9527c17b720acf (image=quay.io/ceph/ceph:v19, name=jolly_euler, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Mar  1 04:43:08 np0005634532 systemd[1]: libpod-conmon-230d1d085b9ebfd6bc0e08063ca0f5760c4416cc76a6491d3c9527c17b720acf.scope: Deactivated successfully.
Mar  1 04:43:08 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v74: 7 pgs: 7 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Mar  1 04:43:09 np0005634532 python3[88223]: ansible-ansible.legacy.stat Invoked with path=/tmp/ceph_rgw.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Mar  1 04:43:09 np0005634532 ceph-mon[75825]: log_channel(cluster) log [INF] : Health check cleared: POOL_APP_NOT_ENABLED (was: 2 pool(s) do not have an application enabled)
Mar  1 04:43:09 np0005634532 ceph-mon[75825]: log_channel(cluster) log [INF] : Cluster is now healthy
Mar  1 04:43:09 np0005634532 ceph-mon[75825]: from='client.? 192.168.122.100:0/603338138' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]': finished
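The INF lines above record the POOL_APP_NOT_ENABLED health check clearing once both CephFS pools had an application tag set. A minimal sketch of confirming that state from an admin host (assumes the default /etc/ceph/ceph.conf and client.admin keyring; the pool name is taken from the log):

    ceph health detail                                  # HEALTH_OK once the check clears
    ceph osd pool application get cephfs.cephfs.data    # expect {"cephfs": {}}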
Mar  1 04:43:09 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Mar  1 04:43:09 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:43:09 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Mar  1 04:43:09 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:43:10 np0005634532 python3[88296]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1772358189.3926935-37964-142043457707156/source dest=/tmp/ceph_rgw.yml mode=0644 force=True follow=False _original_basename=ceph_rgw.yml.j2 checksum=ad866aa1f51f395809dd7ac5cb7a56d43c167b49 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:43:10 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e24 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Mar  1 04:43:10 np0005634532 python3[88398]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Mar  1 04:43:10 np0005634532 ceph-mon[75825]: Health check cleared: POOL_APP_NOT_ENABLED (was: 2 pool(s) do not have an application enabled)
Mar  1 04:43:10 np0005634532 ceph-mon[75825]: Cluster is now healthy
Mar  1 04:43:10 np0005634532 ceph-mon[75825]: from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:43:10 np0005634532 ceph-mon[75825]: from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:43:10 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v75: 7 pgs: 7 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Mar  1 04:43:10 np0005634532 python3[88473]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1772358190.3279574-37978-70500555748920/source dest=/home/ceph-admin/assimilate_ceph.conf owner=167 group=167 mode=0644 follow=False _original_basename=ceph_rgw.conf.j2 checksum=be717b39ca685bd7030014c0afc0eaf83fb6c393 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:43:11 np0005634532 python3[88523]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 437b1e74-f995-5d64-af1d-257ce01d77ab -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config assimilate-conf -i /home/assimilate_ceph.conf _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Mar  1 04:43:11 np0005634532 podman[88524]: 2026-03-01 09:43:11.414035619 +0000 UTC m=+0.048628260 container create 1b2800a06fa736a99756618d41fc40ae97a20518bb79dadec2f351ce2b2cbb40 (image=quay.io/ceph/ceph:v19, name=romantic_fermat, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Mar  1 04:43:11 np0005634532 systemd[1]: Started libpod-conmon-1b2800a06fa736a99756618d41fc40ae97a20518bb79dadec2f351ce2b2cbb40.scope.
Mar  1 04:43:11 np0005634532 systemd[1]: Started libcrun container.
Mar  1 04:43:11 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed980d51fbac4988c9505692c9704df85aff251ceed0b37d17f2df3b279bcfd4/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Mar  1 04:43:11 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed980d51fbac4988c9505692c9704df85aff251ceed0b37d17f2df3b279bcfd4/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Mar  1 04:43:11 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ed980d51fbac4988c9505692c9704df85aff251ceed0b37d17f2df3b279bcfd4/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Mar  1 04:43:11 np0005634532 podman[88524]: 2026-03-01 09:43:11.388171875 +0000 UTC m=+0.022764566 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Mar  1 04:43:11 np0005634532 podman[88524]: 2026-03-01 09:43:11.486766906 +0000 UTC m=+0.121359537 container init 1b2800a06fa736a99756618d41fc40ae97a20518bb79dadec2f351ce2b2cbb40 (image=quay.io/ceph/ceph:v19, name=romantic_fermat, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Mar  1 04:43:11 np0005634532 podman[88524]: 2026-03-01 09:43:11.492265853 +0000 UTC m=+0.126858494 container start 1b2800a06fa736a99756618d41fc40ae97a20518bb79dadec2f351ce2b2cbb40 (image=quay.io/ceph/ceph:v19, name=romantic_fermat, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Mar  1 04:43:11 np0005634532 podman[88524]: 2026-03-01 09:43:11.49619003 +0000 UTC m=+0.130782741 container attach 1b2800a06fa736a99756618d41fc40ae97a20518bb79dadec2f351ce2b2cbb40 (image=quay.io/ceph/ceph:v19, name=romantic_fermat, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Mar  1 04:43:11 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config assimilate-conf"} v 0)
Mar  1 04:43:11 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1392857042' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Mar  1 04:43:12 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1392857042' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Mar  1 04:43:12 np0005634532 romantic_fermat[88539]: 
Mar  1 04:43:12 np0005634532 romantic_fermat[88539]: [global]
Mar  1 04:43:12 np0005634532 romantic_fermat[88539]: 	fsid = 437b1e74-f995-5d64-af1d-257ce01d77ab
Mar  1 04:43:12 np0005634532 romantic_fermat[88539]: 	mon_host = 192.168.122.100
Mar  1 04:43:12 np0005634532 systemd[1]: libpod-1b2800a06fa736a99756618d41fc40ae97a20518bb79dadec2f351ce2b2cbb40.scope: Deactivated successfully.
Mar  1 04:43:12 np0005634532 podman[88524]: 2026-03-01 09:43:12.105051598 +0000 UTC m=+0.739644199 container died 1b2800a06fa736a99756618d41fc40ae97a20518bb79dadec2f351ce2b2cbb40 (image=quay.io/ceph/ceph:v19, name=romantic_fermat, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Mar  1 04:43:12 np0005634532 systemd[1]: var-lib-containers-storage-overlay-ed980d51fbac4988c9505692c9704df85aff251ceed0b37d17f2df3b279bcfd4-merged.mount: Deactivated successfully.
Mar  1 04:43:12 np0005634532 podman[88524]: 2026-03-01 09:43:12.140732155 +0000 UTC m=+0.775324756 container remove 1b2800a06fa736a99756618d41fc40ae97a20518bb79dadec2f351ce2b2cbb40 (image=quay.io/ceph/ceph:v19, name=romantic_fermat, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Mar  1 04:43:12 np0005634532 systemd[1]: libpod-conmon-1b2800a06fa736a99756618d41fc40ae97a20518bb79dadec2f351ce2b2cbb40.scope: Deactivated successfully.
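The romantic_fermat container above runs `ceph config assimilate-conf -i /home/assimilate_ceph.conf`: options from the file are moved into the monitors' central configuration database, and whatever cannot be assimilated (here the bootstrap-only fsid and mon_host) is printed back as a residual config file, which is exactly the [global] block logged above. A minimal sketch of the same step run directly on an admin host (assumes the default ceph.conf and client.admin keyring):

    ceph config assimilate-conf -i assimilate_ceph.conf   # merge file options into the mon config db
    ceph config dump                                      # list the options now held centrally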
Mar  1 04:43:12 np0005634532 python3[88601]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 437b1e74-f995-5d64-af1d-257ce01d77ab -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config-key set ssl_option no_sslv2:sslv3:no_tlsv1:no_tlsv1_1 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Mar  1 04:43:12 np0005634532 podman[88602]: 2026-03-01 09:43:12.52745648 +0000 UTC m=+0.051769598 container create 43065a3759460320a8975c8f3f74af28fc7aaf63aab3ed286d2d87dd6c8c3741 (image=quay.io/ceph/ceph:v19, name=great_murdock, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True, CEPH_REF=squid)
Mar  1 04:43:12 np0005634532 systemd[1]: Started libpod-conmon-43065a3759460320a8975c8f3f74af28fc7aaf63aab3ed286d2d87dd6c8c3741.scope.
Mar  1 04:43:12 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]} v 0)
Mar  1 04:43:12 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
Mar  1 04:43:12 np0005634532 podman[88602]: 2026-03-01 09:43:12.500287295 +0000 UTC m=+0.024600443 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Mar  1 04:43:12 np0005634532 systemd[1]: Started libcrun container.
Mar  1 04:43:12 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19c6fcd8d81956e2a55b5212a62aaa72296401896c0c5d11fc74fee585c1de96/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Mar  1 04:43:12 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19c6fcd8d81956e2a55b5212a62aaa72296401896c0c5d11fc74fee585c1de96/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Mar  1 04:43:12 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/19c6fcd8d81956e2a55b5212a62aaa72296401896c0c5d11fc74fee585c1de96/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Mar  1 04:43:12 np0005634532 podman[88602]: 2026-03-01 09:43:12.629571869 +0000 UTC m=+0.153885037 container init 43065a3759460320a8975c8f3f74af28fc7aaf63aab3ed286d2d87dd6c8c3741 (image=quay.io/ceph/ceph:v19, name=great_murdock, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Mar  1 04:43:12 np0005634532 podman[88602]: 2026-03-01 09:43:12.638257045 +0000 UTC m=+0.162570113 container start 43065a3759460320a8975c8f3f74af28fc7aaf63aab3ed286d2d87dd6c8c3741 (image=quay.io/ceph/ceph:v19, name=great_murdock, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Mar  1 04:43:12 np0005634532 podman[88602]: 2026-03-01 09:43:12.642362767 +0000 UTC m=+0.166675865 container attach 43065a3759460320a8975c8f3f74af28fc7aaf63aab3ed286d2d87dd6c8c3741 (image=quay.io/ceph/ceph:v19, name=great_murdock, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Mar  1 04:43:12 np0005634532 ceph-mgr[76134]: [balancer INFO root] Optimize plan auto_2026-03-01_09:43:12
Mar  1 04:43:12 np0005634532 ceph-mgr[76134]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Mar  1 04:43:12 np0005634532 ceph-mgr[76134]: [balancer INFO root] do_upmap
Mar  1 04:43:12 np0005634532 ceph-mgr[76134]: [balancer INFO root] pools ['.mgr', 'cephfs.cephfs.data', 'volumes', 'images', 'cephfs.cephfs.meta', 'backups', 'vms']
Mar  1 04:43:12 np0005634532 ceph-mgr[76134]: [balancer INFO root] prepared 0/10 upmap changes
Mar  1 04:43:12 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] _maybe_adjust
Mar  1 04:43:12 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Mar  1 04:43:12 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 1.0778624975581169e-05 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Mar  1 04:43:12 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Mar  1 04:43:12 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Mar  1 04:43:12 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Mar  1 04:43:12 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Mar  1 04:43:12 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Mar  1 04:43:12 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Mar  1 04:43:12 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Mar  1 04:43:12 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Mar  1 04:43:12 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Mar  1 04:43:12 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Mar  1 04:43:12 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 42941284352
Mar  1 04:43:12 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
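In the pg_autoscaler pass above every data pool is still essentially empty, so each computed pg target of 0.0 is quantized up to the 32-PG default floor, and the autoscaler begins raising the 1-PG pools accordingly (only the tiny .mgr pool stays at 1). A sketch of inspecting the autoscaler's view directly (assumes admin credentials; the command is part of the standard CLI):

    ceph osd pool autoscale-status    # per-pool SIZE, TARGET SIZE, PG_NUM, NEW PG_NUM, AUTOSCALE mode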
Mar  1 04:43:12 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"} v 0)
Mar  1 04:43:12 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]: dispatch
Mar  1 04:43:12 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Mar  1 04:43:12 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Mar  1 04:43:12 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Mar  1 04:43:12 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: vms, start_after=
Mar  1 04:43:12 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Mar  1 04:43:12 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: vms, start_after=
Mar  1 04:43:12 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: volumes, start_after=
Mar  1 04:43:12 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: volumes, start_after=
Mar  1 04:43:12 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: backups, start_after=
Mar  1 04:43:12 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: backups, start_after=
Mar  1 04:43:12 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: images, start_after=
Mar  1 04:43:12 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: images, start_after=
Mar  1 04:43:12 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Mar  1 04:43:12 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Mar  1 04:43:12 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Mar  1 04:43:12 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Mar  1 04:43:12 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v76: 7 pgs: 7 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Mar  1 04:43:12 np0005634532 ceph-mon[75825]: from='client.? 192.168.122.100:0/1392857042' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Mar  1 04:43:12 np0005634532 ceph-mon[75825]: from='client.? 192.168.122.100:0/1392857042' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Mar  1 04:43:12 np0005634532 ceph-mon[75825]: from='osd.2 [v2:192.168.122.102:6800/1868868591,v1:192.168.122.102:6801/1868868591]' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
Mar  1 04:43:12 np0005634532 ceph-mon[75825]: from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
Mar  1 04:43:12 np0005634532 ceph-mon[75825]: from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]: dispatch
Mar  1 04:43:13 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=ssl_option}] v 0)
Mar  1 04:43:13 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3836839872' entity='client.admin' 
Mar  1 04:43:13 np0005634532 great_murdock[88617]: set ssl_option
Mar  1 04:43:13 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e24 do_prune osdmap full prune enabled
Mar  1 04:43:13 np0005634532 systemd[1]: libpod-43065a3759460320a8975c8f3f74af28fc7aaf63aab3ed286d2d87dd6c8c3741.scope: Deactivated successfully.
Mar  1 04:43:13 np0005634532 podman[88602]: 2026-03-01 09:43:13.101714848 +0000 UTC m=+0.626027996 container died 43065a3759460320a8975c8f3f74af28fc7aaf63aab3ed286d2d87dd6c8c3741 (image=quay.io/ceph/ceph:v19, name=great_murdock, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Mar  1 04:43:13 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
Mar  1 04:43:13 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]': finished
Mar  1 04:43:13 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e25 e25: 3 total, 2 up, 3 in
Mar  1 04:43:13 np0005634532 ceph-mon[75825]: log_channel(cluster) log [DBG] : osdmap e25: 3 total, 2 up, 3 in
Mar  1 04:43:13 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Mar  1 04:43:13 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Mar  1 04:43:13 np0005634532 ceph-mgr[76134]: [progress INFO root] update: starting ev 1466a821-111b-48ca-8735-ba02166ea088 (PG autoscaler increasing pool 2 PGs from 1 to 32)
Mar  1 04:43:13 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-2", "root=default"]} v 0)
Mar  1 04:43:13 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-2", "root=default"]}]: dispatch
Mar  1 04:43:13 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e25 create-or-move crush item name 'osd.2' initial_weight 0.0195 at location {host=compute-2,root=default}
Mar  1 04:43:13 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"} v 0)
Mar  1 04:43:13 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]: dispatch
Mar  1 04:43:13 np0005634532 ceph-mgr[76134]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Mar  1 04:43:13 np0005634532 systemd[1]: var-lib-containers-storage-overlay-19c6fcd8d81956e2a55b5212a62aaa72296401896c0c5d11fc74fee585c1de96-merged.mount: Deactivated successfully.
Mar  1 04:43:13 np0005634532 podman[88602]: 2026-03-01 09:43:13.148736727 +0000 UTC m=+0.673049835 container remove 43065a3759460320a8975c8f3f74af28fc7aaf63aab3ed286d2d87dd6c8c3741 (image=quay.io/ceph/ceph:v19, name=great_murdock, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Mar  1 04:43:13 np0005634532 systemd[1]: libpod-conmon-43065a3759460320a8975c8f3f74af28fc7aaf63aab3ed286d2d87dd6c8c3741.scope: Deactivated successfully.
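The great_murdock container above stores a plain key/value pair via `ceph config-key set ssl_option no_sslv2:sslv3:no_tlsv1:no_tlsv1_1`; the monitor's audit channel redacts config-key payloads, which is why the surrounding audit entries end at entity= with no cmd shown. A sketch of reading the key back (assumes admin credentials):

    ceph config-key get ssl_option    # prints no_sslv2:sslv3:no_tlsv1:no_tlsv1_1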
Mar  1 04:43:13 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Mar  1 04:43:13 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:43:13 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Mar  1 04:43:13 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:43:13 np0005634532 python3[88705]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 437b1e74-f995-5d64-af1d-257ce01d77ab -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Mar  1 04:43:13 np0005634532 podman[88708]: 2026-03-01 09:43:13.507559229 +0000 UTC m=+0.042032446 container create 07e4e114b4117af60640f98c229fda55b8079b17aa20bed89975940a52d4b037 (image=quay.io/ceph/ceph:v19, name=sharp_nobel, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Mar  1 04:43:13 np0005634532 systemd[1]: Started libpod-conmon-07e4e114b4117af60640f98c229fda55b8079b17aa20bed89975940a52d4b037.scope.
Mar  1 04:43:13 np0005634532 systemd[1]: Started libcrun container.
Mar  1 04:43:13 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69150cc9f2fb405d7bc04289ee0fd217a165fce58f9453406d47ff8b0ec8449b/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Mar  1 04:43:13 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69150cc9f2fb405d7bc04289ee0fd217a165fce58f9453406d47ff8b0ec8449b/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Mar  1 04:43:13 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/69150cc9f2fb405d7bc04289ee0fd217a165fce58f9453406d47ff8b0ec8449b/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Mar  1 04:43:13 np0005634532 podman[88708]: 2026-03-01 09:43:13.488265709 +0000 UTC m=+0.022738976 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Mar  1 04:43:13 np0005634532 podman[88708]: 2026-03-01 09:43:13.586735997 +0000 UTC m=+0.121209274 container init 07e4e114b4117af60640f98c229fda55b8079b17aa20bed89975940a52d4b037 (image=quay.io/ceph/ceph:v19, name=sharp_nobel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Mar  1 04:43:13 np0005634532 podman[88708]: 2026-03-01 09:43:13.591581198 +0000 UTC m=+0.126054415 container start 07e4e114b4117af60640f98c229fda55b8079b17aa20bed89975940a52d4b037 (image=quay.io/ceph/ceph:v19, name=sharp_nobel, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2)
Mar  1 04:43:13 np0005634532 podman[88708]: 2026-03-01 09:43:13.594596853 +0000 UTC m=+0.129070120 container attach 07e4e114b4117af60640f98c229fda55b8079b17aa20bed89975940a52d4b037 (image=quay.io/ceph/ceph:v19, name=sharp_nobel, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default)
Mar  1 04:43:13 np0005634532 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14298 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Mar  1 04:43:13 np0005634532 ceph-mgr[76134]: [cephadm INFO root] Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Mar  1 04:43:13 np0005634532 ceph-mgr[76134]: log_channel(cephadm) log [INF] : Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Mar  1 04:43:13 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Mar  1 04:43:13 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:43:13 np0005634532 ceph-mgr[76134]: [cephadm INFO root] Saving service ingress.rgw.default spec with placement count:2
Mar  1 04:43:13 np0005634532 ceph-mgr[76134]: log_channel(cephadm) log [INF] : Saving service ingress.rgw.default spec with placement count:2
Mar  1 04:43:13 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.rgw.default}] v 0)
Mar  1 04:43:13 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:43:13 np0005634532 sharp_nobel[88747]: Scheduled rgw.rgw update...
Mar  1 04:43:13 np0005634532 sharp_nobel[88747]: Scheduled ingress.rgw.default update...
Mar  1 04:43:13 np0005634532 systemd[1]: libpod-07e4e114b4117af60640f98c229fda55b8079b17aa20bed89975940a52d4b037.scope: Deactivated successfully.
Mar  1 04:43:13 np0005634532 podman[88708]: 2026-03-01 09:43:13.971158295 +0000 UTC m=+0.505631522 container died 07e4e114b4117af60640f98c229fda55b8079b17aa20bed89975940a52d4b037 (image=quay.io/ceph/ceph:v19, name=sharp_nobel, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Mar  1 04:43:13 np0005634532 systemd[1]: var-lib-containers-storage-overlay-69150cc9f2fb405d7bc04289ee0fd217a165fce58f9453406d47ff8b0ec8449b-merged.mount: Deactivated successfully.
Mar  1 04:43:14 np0005634532 podman[88708]: 2026-03-01 09:43:14.006943435 +0000 UTC m=+0.541416642 container remove 07e4e114b4117af60640f98c229fda55b8079b17aa20bed89975940a52d4b037 (image=quay.io/ceph/ceph:v19, name=sharp_nobel, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Mar  1 04:43:14 np0005634532 systemd[1]: libpod-conmon-07e4e114b4117af60640f98c229fda55b8079b17aa20bed89975940a52d4b037.scope: Deactivated successfully.
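The sharp_nobel run above hands /home/ceph_spec.yaml to `ceph orch apply --in-file`, and cephadm acknowledges by scheduling the rgw.rgw and ingress.rgw.default updates. A plausible reconstruction of that spec, sketched only from the placements the mgr logged (hosts compute-0;compute-1;compute-2 and count:2); every other field, including the frontend/backend settings a real ingress spec also needs, is an illustrative assumption:

    # sketch: re-create and apply an equivalent spec from an admin host
    cat > ceph_spec.yaml <<'EOF'
    service_type: rgw
    service_id: rgw
    placement:
      hosts:
        - compute-0
        - compute-1
        - compute-2
    ---
    service_type: ingress
    service_id: rgw.default
    placement:
      count: 2
    EOF
    ceph orch apply -i ceph_spec.yaml
    ceph orch ls --export    # echo back the specs cephadm stored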
Mar  1 04:43:14 np0005634532 ceph-mon[75825]: from='client.? 192.168.122.100:0/3836839872' entity='client.admin' 
Mar  1 04:43:14 np0005634532 ceph-mon[75825]: from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
Mar  1 04:43:14 np0005634532 ceph-mon[75825]: from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]': finished
Mar  1 04:43:14 np0005634532 ceph-mon[75825]: from='osd.2 [v2:192.168.122.102:6800/1868868591,v1:192.168.122.102:6801/1868868591]' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-2", "root=default"]}]: dispatch
Mar  1 04:43:14 np0005634532 ceph-mon[75825]: from='osd.2 ' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-2", "root=default"]}]: dispatch
Mar  1 04:43:14 np0005634532 ceph-mon[75825]: from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]: dispatch
Mar  1 04:43:14 np0005634532 ceph-mon[75825]: from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:43:14 np0005634532 ceph-mon[75825]: from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:43:14 np0005634532 ceph-mon[75825]: from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:43:14 np0005634532 ceph-mon[75825]: from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:43:14 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e25 do_prune osdmap full prune enabled
Mar  1 04:43:14 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-2", "root=default"]}]': finished
Mar  1 04:43:14 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]': finished
Mar  1 04:43:14 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e26 e26: 3 total, 2 up, 3 in
Mar  1 04:43:14 np0005634532 ceph-mon[75825]: log_channel(cluster) log [DBG] : osdmap e26: 3 total, 2 up, 3 in
Mar  1 04:43:14 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Mar  1 04:43:14 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Mar  1 04:43:14 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 26 pg[3.0( empty local-lis/les=13/14 n=0 ec=13/13 lis/c=13/13 les/c/f=14/14/0 sis=26 pruub=9.129649162s) [] r=-1 lpr=26 pi=[13,26)/1 crt=0'0 mlcod 0'0 active pruub 63.004920959s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Mar  1 04:43:14 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 26 pg[5.0( empty local-lis/les=15/16 n=0 ec=15/15 lis/c=15/15 les/c/f=16/16/0 sis=26 pruub=10.246270180s) [] r=-1 lpr=26 pi=[15,26)/1 crt=0'0 mlcod 0'0 active pruub 64.121566772s@ mbc={}] PeeringState::start_peering_interval up [0] -> [], acting [0] -> [], acting_primary 0 -> -1, up_primary 0 -> -1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Mar  1 04:43:14 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 26 pg[3.0( empty local-lis/les=13/14 n=0 ec=13/13 lis/c=13/13 les/c/f=14/14/0 sis=26 pruub=9.129649162s) [] r=-1 lpr=26 pi=[13,26)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 63.004920959s@ mbc={}] state<Start>: transitioning to Stray
Mar  1 04:43:14 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 26 pg[5.0( empty local-lis/les=15/16 n=0 ec=15/15 lis/c=15/15 les/c/f=16/16/0 sis=26 pruub=10.246270180s) [] r=-1 lpr=26 pi=[15,26)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 64.121566772s@ mbc={}] state<Start>: transitioning to Stray
Mar  1 04:43:14 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"} v 0)
Mar  1 04:43:14 np0005634532 ceph-mgr[76134]: [progress INFO root] update: starting ev e0e45573-9762-4004-b8e5-ad7b70317d91 (PG autoscaler increasing pool 3 PGs from 1 to 32)
Mar  1 04:43:14 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]: dispatch
Mar  1 04:43:14 np0005634532 ceph-mgr[76134]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Mar  1 04:43:14 np0005634532 ceph-mgr[76134]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/1868868591; not ready for session (expect reconnect)
Mar  1 04:43:14 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Mar  1 04:43:14 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Mar  1 04:43:14 np0005634532 ceph-mgr[76134]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
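The recurring 'failed to return metadata for osd.2: (2) No such file or directory' errors are the mgr asking the monitors for osd.2's metadata before the freshly created daemon has finished its first boot (the osdmaps above still show 2 up, 3 in), so the mons answer ENOENT; the messages stop once osd.2 comes up and registers. A sketch of the manual equivalent (assumes admin credentials):

    ceph osd metadata 2    # ENOENT until osd.2 registers, then a JSON blob of host and device facts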
Mar  1 04:43:14 np0005634532 python3[88917]: ansible-ansible.legacy.stat Invoked with path=/tmp/ceph_dashboard.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Mar  1 04:43:14 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Mar  1 04:43:14 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:43:14 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Mar  1 04:43:14 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:43:14 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Mar  1 04:43:14 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:43:14 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Mar  1 04:43:14 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:43:14 np0005634532 python3[88988]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1772358194.189441-37997-212213146484879/source dest=/tmp/ceph_dashboard.yml mode=0644 force=True follow=False _original_basename=ceph_monitoring_stack.yml.j2 checksum=2701faaa92cae31b5bbad92984c27e2af7a44b84 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:43:14 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v79: 7 pgs: 7 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Mar  1 04:43:14 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"} v 0)
Mar  1 04:43:14 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]: dispatch
Mar  1 04:43:14 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"} v 0)
Mar  1 04:43:14 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]: dispatch
Mar  1 04:43:15 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e26 do_prune osdmap full prune enabled
Mar  1 04:43:15 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]': finished
Mar  1 04:43:15 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Mar  1 04:43:15 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Mar  1 04:43:15 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e27 e27: 3 total, 2 up, 3 in
Mar  1 04:43:15 np0005634532 ceph-mgr[76134]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/1868868591; not ready for session (expect reconnect)
Mar  1 04:43:15 np0005634532 ceph-mon[75825]: log_channel(cluster) log [DBG] : osdmap e27: 3 total, 2 up, 3 in
Mar  1 04:43:15 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Mar  1 04:43:15 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Mar  1 04:43:15 np0005634532 ceph-mgr[76134]: [progress INFO root] update: starting ev 16107e02-f66d-4168-a215-2da2ef93dcef (PG autoscaler increasing pool 4 PGs from 1 to 32)
Mar  1 04:43:15 np0005634532 ceph-mgr[76134]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Mar  1 04:43:15 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"} v 0)
Mar  1 04:43:15 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]: dispatch
Mar  1 04:43:15 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 27 pg[3.0( empty local-lis/les=13/14 n=0 ec=13/13 lis/c=13/13 les/c/f=14/14/0 sis=27 pruub=8.109045982s) [] r=-1 lpr=27 pi=[13,27)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 63.004920959s@ mbc={}] PeeringState::start_peering_interval up [] -> [], acting [] -> [], acting_primary ? -> -1, up_primary ? -> -1, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Mar  1 04:43:15 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 27 pg[3.0( empty local-lis/les=13/14 n=0 ec=13/13 lis/c=13/13 les/c/f=14/14/0 sis=27 pruub=8.109045982s) [] r=-1 lpr=27 pi=[13,27)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 63.004920959s@ mbc={}] state<Start>: transitioning to Stray
Mar  1 04:43:15 np0005634532 ceph-mon[75825]: Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Mar  1 04:43:15 np0005634532 ceph-mon[75825]: Saving service ingress.rgw.default spec with placement count:2
Mar  1 04:43:15 np0005634532 ceph-mon[75825]: from='osd.2 ' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-2", "root=default"]}]': finished
Mar  1 04:43:15 np0005634532 ceph-mon[75825]: from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]': finished
Mar  1 04:43:15 np0005634532 ceph-mon[75825]: from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]: dispatch
Mar  1 04:43:15 np0005634532 ceph-mon[75825]: from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:43:15 np0005634532 ceph-mon[75825]: from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:43:15 np0005634532 ceph-mon[75825]: from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:43:15 np0005634532 ceph-mon[75825]: from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:43:15 np0005634532 ceph-mon[75825]: from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]: dispatch
Mar  1 04:43:15 np0005634532 ceph-mon[75825]: from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]: dispatch
Mar  1 04:43:15 np0005634532 python3[89038]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_dashboard.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 437b1e74-f995-5d64-af1d-257ce01d77ab -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Mar  1 04:43:15 np0005634532 podman[89039]: 2026-03-01 09:43:15.358216261 +0000 UTC m=+0.034780196 container create 8e546da0bd4bf980ab930443921ca6ca536fe8168322da449574a487a77ba7b6 (image=quay.io/ceph/ceph:v19, name=musing_bardeen, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Mar  1 04:43:15 np0005634532 systemd[1]: Started libpod-conmon-8e546da0bd4bf980ab930443921ca6ca536fe8168322da449574a487a77ba7b6.scope.
Mar  1 04:43:15 np0005634532 systemd[1]: Started libcrun container.
Mar  1 04:43:15 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a98daed3588499afa864956fbb610ff083ae1cbfec262e8a4e42dab1dbee4aea/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Mar  1 04:43:15 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a98daed3588499afa864956fbb610ff083ae1cbfec262e8a4e42dab1dbee4aea/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Mar  1 04:43:15 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a98daed3588499afa864956fbb610ff083ae1cbfec262e8a4e42dab1dbee4aea/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Mar  1 04:43:15 np0005634532 podman[89039]: 2026-03-01 09:43:15.431301138 +0000 UTC m=+0.107865093 container init 8e546da0bd4bf980ab930443921ca6ca536fe8168322da449574a487a77ba7b6 (image=quay.io/ceph/ceph:v19, name=musing_bardeen, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Mar  1 04:43:15 np0005634532 podman[89039]: 2026-03-01 09:43:15.436228371 +0000 UTC m=+0.112792286 container start 8e546da0bd4bf980ab930443921ca6ca536fe8168322da449574a487a77ba7b6 (image=quay.io/ceph/ceph:v19, name=musing_bardeen, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Mar  1 04:43:15 np0005634532 podman[89039]: 2026-03-01 09:43:15.341775602 +0000 UTC m=+0.018339547 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Mar  1 04:43:15 np0005634532 podman[89039]: 2026-03-01 09:43:15.440095927 +0000 UTC m=+0.116659882 container attach 8e546da0bd4bf980ab930443921ca6ca536fe8168322da449574a487a77ba7b6 (image=quay.io/ceph/ceph:v19, name=musing_bardeen, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Mar  1 04:43:15 np0005634532 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14304 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Mar  1 04:43:15 np0005634532 ceph-mgr[76134]: [cephadm INFO root] Saving service node-exporter spec with placement *
Mar  1 04:43:15 np0005634532 ceph-mgr[76134]: log_channel(cephadm) log [INF] : Saving service node-exporter spec with placement *
Mar  1 04:43:15 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.node-exporter}] v 0)
Mar  1 04:43:15 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:43:15 np0005634532 ceph-mgr[76134]: [cephadm INFO root] Saving service grafana spec with placement compute-0;count:1
Mar  1 04:43:15 np0005634532 ceph-mgr[76134]: log_channel(cephadm) log [INF] : Saving service grafana spec with placement compute-0;count:1
Mar  1 04:43:15 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.grafana}] v 0)
Mar  1 04:43:15 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:43:15 np0005634532 ceph-mgr[76134]: [cephadm INFO root] Saving service prometheus spec with placement compute-0;count:1
Mar  1 04:43:15 np0005634532 ceph-mgr[76134]: log_channel(cephadm) log [INF] : Saving service prometheus spec with placement compute-0;count:1
Mar  1 04:43:15 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.prometheus}] v 0)
Mar  1 04:43:15 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:43:15 np0005634532 ceph-mgr[76134]: [cephadm INFO root] Saving service alertmanager spec with placement compute-0;count:1
Mar  1 04:43:15 np0005634532 ceph-mgr[76134]: log_channel(cephadm) log [INF] : Saving service alertmanager spec with placement compute-0;count:1
Mar  1 04:43:15 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.alertmanager}] v 0)
Mar  1 04:43:15 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:43:15 np0005634532 musing_bardeen[89054]: Scheduled node-exporter update...
Mar  1 04:43:15 np0005634532 musing_bardeen[89054]: Scheduled grafana update...
Mar  1 04:43:15 np0005634532 musing_bardeen[89054]: Scheduled prometheus update...
Mar  1 04:43:15 np0005634532 musing_bardeen[89054]: Scheduled alertmanager update...
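[annotation] The four "Saving service ... spec" entries record cephadm persisting the monitoring-stack service specifications dispatched by the "orch apply" command above: node-exporter placed on every host (*), and grafana, prometheus and alertmanager pinned to compute-0 with count:1. A minimal sketch of equivalent orchestrator commands from an admin shell; the --placement strings reuse the placement text printed in this log, not a vetted form:
    # node-exporter on every host
    ceph orch apply node-exporter --placement='*'
    # one instance each, pinned to compute-0
    ceph orch apply grafana      --placement='compute-0;count:1'
    ceph orch apply prometheus   --placement='compute-0;count:1'
    ceph orch apply alertmanager --placement='compute-0;count:1'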
Mar  1 04:43:15 np0005634532 systemd[1]: libpod-8e546da0bd4bf980ab930443921ca6ca536fe8168322da449574a487a77ba7b6.scope: Deactivated successfully.
Mar  1 04:43:15 np0005634532 podman[89039]: 2026-03-01 09:43:15.896970366 +0000 UTC m=+0.573534301 container died 8e546da0bd4bf980ab930443921ca6ca536fe8168322da449574a487a77ba7b6 (image=quay.io/ceph/ceph:v19, name=musing_bardeen, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Mar  1 04:43:15 np0005634532 systemd[1]: var-lib-containers-storage-overlay-a98daed3588499afa864956fbb610ff083ae1cbfec262e8a4e42dab1dbee4aea-merged.mount: Deactivated successfully.
Mar  1 04:43:15 np0005634532 podman[89039]: 2026-03-01 09:43:15.97032435 +0000 UTC m=+0.646888275 container remove 8e546da0bd4bf980ab930443921ca6ca536fe8168322da449574a487a77ba7b6 (image=quay.io/ceph/ceph:v19, name=musing_bardeen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Mar  1 04:43:15 np0005634532 systemd[1]: libpod-conmon-8e546da0bd4bf980ab930443921ca6ca536fe8168322da449574a487a77ba7b6.scope: Deactivated successfully.
Mar  1 04:43:16 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e27 do_prune osdmap full prune enabled
Mar  1 04:43:16 np0005634532 ceph-mgr[76134]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/1868868591; not ready for session (expect reconnect)
Mar  1 04:43:16 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Mar  1 04:43:16 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Mar  1 04:43:16 np0005634532 ceph-mgr[76134]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Mar  1 04:43:16 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]': finished
Mar  1 04:43:16 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e28 e28: 3 total, 2 up, 3 in
Mar  1 04:43:16 np0005634532 ceph-mon[75825]: log_channel(cluster) log [DBG] : osdmap e28: 3 total, 2 up, 3 in
Mar  1 04:43:16 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Mar  1 04:43:16 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Mar  1 04:43:16 np0005634532 ceph-mgr[76134]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Mar  1 04:43:16 np0005634532 ceph-mgr[76134]: [progress INFO root] update: starting ev 5274f7cf-cc9c-4489-bf94-3073e03cf196 (PG autoscaler increasing pool 5 PGs from 1 to 32)
Mar  1 04:43:16 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "32"} v 0)
Mar  1 04:43:16 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "32"}]: dispatch
Mar  1 04:43:16 np0005634532 ceph-mon[75825]: from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]': finished
Mar  1 04:43:16 np0005634532 ceph-mon[75825]: from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Mar  1 04:43:16 np0005634532 ceph-mon[75825]: from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Mar  1 04:43:16 np0005634532 ceph-mon[75825]: from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]: dispatch
Mar  1 04:43:16 np0005634532 ceph-mon[75825]: from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:43:16 np0005634532 ceph-mon[75825]: from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:43:16 np0005634532 ceph-mon[75825]: from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:43:16 np0005634532 ceph-mon[75825]: from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:43:16 np0005634532 ceph-mon[75825]: from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]': finished
Mar  1 04:43:16 np0005634532 ceph-mon[75825]: from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "32"}]: dispatch
Mar  1 04:43:16 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Mar  1 04:43:16 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:43:16 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Mar  1 04:43:16 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:43:16 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} v 0)
Mar  1 04:43:16 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch
Mar  1 04:43:16 np0005634532 ceph-mgr[76134]: [cephadm INFO root] Adjusting osd_memory_target on compute-2 to 127.9M
Mar  1 04:43:16 np0005634532 ceph-mgr[76134]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on compute-2 to 127.9M
Mar  1 04:43:16 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0)
Mar  1 04:43:16 np0005634532 ceph-mgr[76134]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on compute-2 to 134189056: error parsing value: Value '134189056' is below minimum 939524096
Mar  1 04:43:16 np0005634532 ceph-mgr[76134]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on compute-2 to 134189056: error parsing value: Value '134189056' is below minimum 939524096
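[annotation] The INFO/WRN pair above shows cephadm's memory autotuner computing an osd_memory_target of 134189056 bytes for compute-2 (134189056 / 2^20 ≈ 127.9 MiB, matching the "127.9M" in the INFO line) and the monitor rejecting it because it falls below the enforced minimum of 939524096 bytes (896 MiB); this is expected on hosts with very little RAM per OSD. A hedged sketch of two manual responses, using only the values from this log:
    # check the arithmetic from the log (integer MiB)
    echo $((134189056 / 1048576))   # 127 -> the rejected value
    echo $((939524096 / 1048576))   # 896 -> the enforced minimum
    # either pin an acceptable target for this OSD ...
    ceph config set osd.2 osd_memory_target 939524096
    # ... or stop cephadm from autotuning it
    ceph config set osd osd_memory_target_autotune false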
Mar  1 04:43:16 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Mar  1 04:43:16 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Mar  1 04:43:16 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Mar  1 04:43:16 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Mar  1 04:43:16 np0005634532 ceph-mgr[76134]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.conf
Mar  1 04:43:16 np0005634532 ceph-mgr[76134]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.conf
Mar  1 04:43:16 np0005634532 ceph-mgr[76134]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.conf
Mar  1 04:43:16 np0005634532 ceph-mgr[76134]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.conf
Mar  1 04:43:16 np0005634532 ceph-mgr[76134]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.conf
Mar  1 04:43:16 np0005634532 ceph-mgr[76134]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.conf
Mar  1 04:43:16 np0005634532 python3[89118]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 437b1e74-f995-5d64-af1d-257ce01d77ab -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/dashboard/server_port 8443 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
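[annotation] The ansible-invoked _raw_params above is a one-shot ceph client container. Reflowed for readability, the same invocation looks roughly like this (image, volumes, fsid and arguments copied verbatim from the log line):
    podman run --rm --net=host --ipc=host --interactive \
      --volume /etc/ceph:/etc/ceph:z \
      --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z \
      --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z \
      --entrypoint ceph quay.io/ceph/ceph:v19 \
      --fsid 437b1e74-f995-5d64-af1d-257ce01d77ab \
      -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring \
      config set mgr mgr/dashboard/server_port 8443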
Mar  1 04:43:16 np0005634532 podman[89144]: 2026-03-01 09:43:16.592216212 +0000 UTC m=+0.059317676 container create b6da3713767ff4688c467c81cac2335aacc69430fbc9445ca99058f1e041233b (image=quay.io/ceph/ceph:v19, name=agitated_ishizaka, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Mar  1 04:43:16 np0005634532 systemd[1]: Started libpod-conmon-b6da3713767ff4688c467c81cac2335aacc69430fbc9445ca99058f1e041233b.scope.
Mar  1 04:43:16 np0005634532 podman[89144]: 2026-03-01 09:43:16.5643967 +0000 UTC m=+0.031498234 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Mar  1 04:43:16 np0005634532 systemd[1]: Started libcrun container.
Mar  1 04:43:16 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c484fe2dc429407867ce7245a59212b7006294e9e01fa0a67a2c485d7a3677db/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Mar  1 04:43:16 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c484fe2dc429407867ce7245a59212b7006294e9e01fa0a67a2c485d7a3677db/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Mar  1 04:43:16 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c484fe2dc429407867ce7245a59212b7006294e9e01fa0a67a2c485d7a3677db/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Mar  1 04:43:16 np0005634532 podman[89144]: 2026-03-01 09:43:16.691590743 +0000 UTC m=+0.158692217 container init b6da3713767ff4688c467c81cac2335aacc69430fbc9445ca99058f1e041233b (image=quay.io/ceph/ceph:v19, name=agitated_ishizaka, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, ceph=True)
Mar  1 04:43:16 np0005634532 podman[89144]: 2026-03-01 09:43:16.700954176 +0000 UTC m=+0.168055610 container start b6da3713767ff4688c467c81cac2335aacc69430fbc9445ca99058f1e041233b (image=quay.io/ceph/ceph:v19, name=agitated_ishizaka, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Mar  1 04:43:16 np0005634532 podman[89144]: 2026-03-01 09:43:16.704603576 +0000 UTC m=+0.171705110 container attach b6da3713767ff4688c467c81cac2335aacc69430fbc9445ca99058f1e041233b (image=quay.io/ceph/ceph:v19, name=agitated_ishizaka, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Mar  1 04:43:16 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v82: 69 pgs: 31 unknown, 38 active+clean; 449 KiB data, 53 MiB used, 40 GiB / 40 GiB avail
Mar  1 04:43:16 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"} v 0)
Mar  1 04:43:16 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]: dispatch
Mar  1 04:43:16 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"} v 0)
Mar  1 04:43:16 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]: dispatch
Mar  1 04:43:16 np0005634532 ceph-mgr[76134]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/437b1e74-f995-5d64-af1d-257ce01d77ab/config/ceph.conf
Mar  1 04:43:16 np0005634532 ceph-mgr[76134]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/437b1e74-f995-5d64-af1d-257ce01d77ab/config/ceph.conf
Mar  1 04:43:17 np0005634532 ceph-mgr[76134]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/437b1e74-f995-5d64-af1d-257ce01d77ab/config/ceph.conf
Mar  1 04:43:17 np0005634532 ceph-mgr[76134]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/437b1e74-f995-5d64-af1d-257ce01d77ab/config/ceph.conf
Mar  1 04:43:17 np0005634532 ceph-mgr[76134]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/437b1e74-f995-5d64-af1d-257ce01d77ab/config/ceph.conf
Mar  1 04:43:17 np0005634532 ceph-mgr[76134]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/437b1e74-f995-5d64-af1d-257ce01d77ab/config/ceph.conf
Mar  1 04:43:17 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/server_port}] v 0)
Mar  1 04:43:17 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1739908411' entity='client.admin' 
Mar  1 04:43:17 np0005634532 systemd[1]: libpod-b6da3713767ff4688c467c81cac2335aacc69430fbc9445ca99058f1e041233b.scope: Deactivated successfully.
Mar  1 04:43:17 np0005634532 podman[89144]: 2026-03-01 09:43:17.114225501 +0000 UTC m=+0.581326965 container died b6da3713767ff4688c467c81cac2335aacc69430fbc9445ca99058f1e041233b (image=quay.io/ceph/ceph:v19, name=agitated_ishizaka, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Mar  1 04:43:17 np0005634532 ceph-mgr[76134]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.102:6800/1868868591; not ready for session (expect reconnect)
Mar  1 04:43:17 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Mar  1 04:43:17 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Mar  1 04:43:17 np0005634532 ceph-mgr[76134]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Mar  1 04:43:17 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e28 do_prune osdmap full prune enabled
Mar  1 04:43:17 np0005634532 systemd[1]: var-lib-containers-storage-overlay-c484fe2dc429407867ce7245a59212b7006294e9e01fa0a67a2c485d7a3677db-merged.mount: Deactivated successfully.
Mar  1 04:43:17 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "32"}]': finished
Mar  1 04:43:17 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Mar  1 04:43:17 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Mar  1 04:43:17 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e29 e29: 3 total, 3 up, 3 in
Mar  1 04:43:17 np0005634532 ceph-mon[75825]: log_channel(cluster) log [INF] : osd.2 [v2:192.168.122.102:6800/1868868591,v1:192.168.122.102:6801/1868868591] boot
Mar  1 04:43:17 np0005634532 ceph-mon[75825]: log_channel(cluster) log [DBG] : osdmap e29: 3 total, 3 up, 3 in
Mar  1 04:43:17 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Mar  1 04:43:17 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Mar  1 04:43:17 np0005634532 ceph-mgr[76134]: [progress INFO root] update: starting ev 514b8a61-7d3b-4b62-bb1d-991d05eb0c34 (PG autoscaler increasing pool 6 PGs from 1 to 32)
Mar  1 04:43:17 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"} v 0)
Mar  1 04:43:17 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]: dispatch
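[annotation] The two progress events in this window (pool 5 and pool 6) record the PG autoscaler stepping cephfs.cephfs.meta and cephfs.cephfs.data from 1 PG to 32, which is what drives the burst of "osd pool set ... pg_num / pg_num_actual 32" commands around them. A quick way to watch the same state from a client shell (standard commands, assumed available, not taken from this log):
    ceph osd pool autoscale-status          # current vs target PG counts per pool
    ceph osd pool get cephfs.cephfs.data pg_num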
Mar  1 04:43:17 np0005634532 podman[89144]: 2026-03-01 09:43:17.167380603 +0000 UTC m=+0.634482027 container remove b6da3713767ff4688c467c81cac2335aacc69430fbc9445ca99058f1e041233b (image=quay.io/ceph/ceph:v19, name=agitated_ishizaka, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Mar  1 04:43:17 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 29 pg[3.17( empty local-lis/les=13/14 n=0 ec=27/13 lis/c=13/13 les/c/f=14/14/0 sis=29) [2] r=-1 lpr=29 pi=[13,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Mar  1 04:43:17 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 29 pg[3.19( empty local-lis/les=13/14 n=0 ec=27/13 lis/c=13/13 les/c/f=14/14/0 sis=29) [2] r=-1 lpr=29 pi=[13,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Mar  1 04:43:17 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 29 pg[3.17( empty local-lis/les=13/14 n=0 ec=27/13 lis/c=13/13 les/c/f=14/14/0 sis=29) [2] r=-1 lpr=29 pi=[13,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Mar  1 04:43:17 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 29 pg[3.18( empty local-lis/les=13/14 n=0 ec=27/13 lis/c=13/13 les/c/f=14/14/0 sis=29) [2] r=-1 lpr=29 pi=[13,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Mar  1 04:43:17 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 29 pg[3.19( empty local-lis/les=13/14 n=0 ec=27/13 lis/c=13/13 les/c/f=14/14/0 sis=29) [2] r=-1 lpr=29 pi=[13,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Mar  1 04:43:17 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 29 pg[3.18( empty local-lis/les=13/14 n=0 ec=27/13 lis/c=13/13 les/c/f=14/14/0 sis=29) [2] r=-1 lpr=29 pi=[13,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Mar  1 04:43:17 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 29 pg[3.16( empty local-lis/les=13/14 n=0 ec=27/13 lis/c=13/13 les/c/f=14/14/0 sis=29) [2] r=-1 lpr=29 pi=[13,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Mar  1 04:43:17 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 29 pg[3.15( empty local-lis/les=13/14 n=0 ec=27/13 lis/c=13/13 les/c/f=14/14/0 sis=29) [2] r=-1 lpr=29 pi=[13,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Mar  1 04:43:17 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 29 pg[3.14( empty local-lis/les=13/14 n=0 ec=27/13 lis/c=13/13 les/c/f=14/14/0 sis=29) [2] r=-1 lpr=29 pi=[13,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Mar  1 04:43:17 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 29 pg[3.15( empty local-lis/les=13/14 n=0 ec=27/13 lis/c=13/13 les/c/f=14/14/0 sis=29) [2] r=-1 lpr=29 pi=[13,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Mar  1 04:43:17 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 29 pg[3.14( empty local-lis/les=13/14 n=0 ec=27/13 lis/c=13/13 les/c/f=14/14/0 sis=29) [2] r=-1 lpr=29 pi=[13,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Mar  1 04:43:17 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 29 pg[3.13( empty local-lis/les=13/14 n=0 ec=27/13 lis/c=13/13 les/c/f=14/14/0 sis=29) [2] r=-1 lpr=29 pi=[13,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Mar  1 04:43:17 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 29 pg[3.13( empty local-lis/les=13/14 n=0 ec=27/13 lis/c=13/13 les/c/f=14/14/0 sis=29) [2] r=-1 lpr=29 pi=[13,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Mar  1 04:43:17 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 29 pg[3.12( empty local-lis/les=13/14 n=0 ec=27/13 lis/c=13/13 les/c/f=14/14/0 sis=29) [2] r=-1 lpr=29 pi=[13,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Mar  1 04:43:17 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 29 pg[3.10( empty local-lis/les=13/14 n=0 ec=27/13 lis/c=13/13 les/c/f=14/14/0 sis=29) [2] r=-1 lpr=29 pi=[13,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Mar  1 04:43:17 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 29 pg[3.11( empty local-lis/les=13/14 n=0 ec=27/13 lis/c=13/13 les/c/f=14/14/0 sis=29) [2] r=-1 lpr=29 pi=[13,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Mar  1 04:43:17 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 29 pg[3.11( empty local-lis/les=13/14 n=0 ec=27/13 lis/c=13/13 les/c/f=14/14/0 sis=29) [2] r=-1 lpr=29 pi=[13,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Mar  1 04:43:17 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 29 pg[3.12( empty local-lis/les=13/14 n=0 ec=27/13 lis/c=13/13 les/c/f=14/14/0 sis=29) [2] r=-1 lpr=29 pi=[13,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Mar  1 04:43:17 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 29 pg[3.10( empty local-lis/les=13/14 n=0 ec=27/13 lis/c=13/13 les/c/f=14/14/0 sis=29) [2] r=-1 lpr=29 pi=[13,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Mar  1 04:43:17 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 29 pg[3.f( empty local-lis/les=13/14 n=0 ec=27/13 lis/c=13/13 les/c/f=14/14/0 sis=29) [2] r=-1 lpr=29 pi=[13,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Mar  1 04:43:17 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 29 pg[3.16( empty local-lis/les=13/14 n=0 ec=27/13 lis/c=13/13 les/c/f=14/14/0 sis=29) [2] r=-1 lpr=29 pi=[13,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Mar  1 04:43:17 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 29 pg[3.f( empty local-lis/les=13/14 n=0 ec=27/13 lis/c=13/13 les/c/f=14/14/0 sis=29) [2] r=-1 lpr=29 pi=[13,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Mar  1 04:43:17 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 29 pg[3.e( empty local-lis/les=13/14 n=0 ec=27/13 lis/c=13/13 les/c/f=14/14/0 sis=29) [2] r=-1 lpr=29 pi=[13,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Mar  1 04:43:17 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 29 pg[3.e( empty local-lis/les=13/14 n=0 ec=27/13 lis/c=13/13 les/c/f=14/14/0 sis=29) [2] r=-1 lpr=29 pi=[13,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Mar  1 04:43:17 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 29 pg[3.d( empty local-lis/les=13/14 n=0 ec=27/13 lis/c=13/13 les/c/f=14/14/0 sis=29) [2] r=-1 lpr=29 pi=[13,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Mar  1 04:43:17 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 29 pg[3.d( empty local-lis/les=13/14 n=0 ec=27/13 lis/c=13/13 les/c/f=14/14/0 sis=29) [2] r=-1 lpr=29 pi=[13,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Mar  1 04:43:17 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 29 pg[3.c( empty local-lis/les=13/14 n=0 ec=27/13 lis/c=13/13 les/c/f=14/14/0 sis=29) [2] r=-1 lpr=29 pi=[13,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Mar  1 04:43:17 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 29 pg[3.b( empty local-lis/les=13/14 n=0 ec=27/13 lis/c=13/13 les/c/f=14/14/0 sis=29) [2] r=-1 lpr=29 pi=[13,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Mar  1 04:43:17 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 29 pg[3.b( empty local-lis/les=13/14 n=0 ec=27/13 lis/c=13/13 les/c/f=14/14/0 sis=29) [2] r=-1 lpr=29 pi=[13,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Mar  1 04:43:17 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 29 pg[3.c( empty local-lis/les=13/14 n=0 ec=27/13 lis/c=13/13 les/c/f=14/14/0 sis=29) [2] r=-1 lpr=29 pi=[13,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Mar  1 04:43:17 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 29 pg[3.a( empty local-lis/les=13/14 n=0 ec=27/13 lis/c=13/13 les/c/f=14/14/0 sis=29) [2] r=-1 lpr=29 pi=[13,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Mar  1 04:43:17 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 29 pg[3.a( empty local-lis/les=13/14 n=0 ec=27/13 lis/c=13/13 les/c/f=14/14/0 sis=29) [2] r=-1 lpr=29 pi=[13,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Mar  1 04:43:17 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 29 pg[3.7( empty local-lis/les=13/14 n=0 ec=27/13 lis/c=13/13 les/c/f=14/14/0 sis=29) [2] r=-1 lpr=29 pi=[13,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Mar  1 04:43:17 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 29 pg[3.7( empty local-lis/les=13/14 n=0 ec=27/13 lis/c=13/13 les/c/f=14/14/0 sis=29) [2] r=-1 lpr=29 pi=[13,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Mar  1 04:43:17 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 29 pg[3.0( empty local-lis/les=13/14 n=0 ec=13/13 lis/c=13/13 les/c/f=14/14/0 sis=29 pruub=6.078636169s) [2] r=-1 lpr=29 pi=[13,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 63.004920959s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Mar  1 04:43:17 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 29 pg[4.0( empty local-lis/les=14/15 n=0 ec=14/14 lis/c=14/14 les/c/f=15/15/0 sis=29 pruub=15.103458405s) [0] r=0 lpr=29 pi=[14,29)/1 crt=0'0 mlcod 0'0 active pruub 72.029739380s@ mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Mar  1 04:43:17 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 29 pg[3.0( empty local-lis/les=13/14 n=0 ec=13/13 lis/c=13/13 les/c/f=14/14/0 sis=29 pruub=6.078620434s) [2] r=-1 lpr=29 pi=[13,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 63.004920959s@ mbc={}] state<Start>: transitioning to Stray
Mar  1 04:43:17 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 29 pg[5.0( empty local-lis/les=15/16 n=0 ec=15/15 lis/c=15/15 les/c/f=16/16/0 sis=29 pruub=7.195207119s) [2] r=-1 lpr=29 pi=[15,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 64.121566772s@ mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Mar  1 04:43:17 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 29 pg[3.6( empty local-lis/les=13/14 n=0 ec=27/13 lis/c=13/13 les/c/f=14/14/0 sis=29) [2] r=-1 lpr=29 pi=[13,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Mar  1 04:43:17 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 29 pg[3.6( empty local-lis/les=13/14 n=0 ec=27/13 lis/c=13/13 les/c/f=14/14/0 sis=29) [2] r=-1 lpr=29 pi=[13,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Mar  1 04:43:17 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 29 pg[3.5( empty local-lis/les=13/14 n=0 ec=27/13 lis/c=13/13 les/c/f=14/14/0 sis=29) [2] r=-1 lpr=29 pi=[13,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Mar  1 04:43:17 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 29 pg[3.5( empty local-lis/les=13/14 n=0 ec=27/13 lis/c=13/13 les/c/f=14/14/0 sis=29) [2] r=-1 lpr=29 pi=[13,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Mar  1 04:43:17 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 29 pg[3.1( empty local-lis/les=13/14 n=0 ec=27/13 lis/c=13/13 les/c/f=14/14/0 sis=29) [2] r=-1 lpr=29 pi=[13,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Mar  1 04:43:17 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 29 pg[3.2( empty local-lis/les=13/14 n=0 ec=27/13 lis/c=13/13 les/c/f=14/14/0 sis=29) [2] r=-1 lpr=29 pi=[13,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Mar  1 04:43:17 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 29 pg[3.1( empty local-lis/les=13/14 n=0 ec=27/13 lis/c=13/13 les/c/f=14/14/0 sis=29) [2] r=-1 lpr=29 pi=[13,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Mar  1 04:43:17 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 29 pg[3.2( empty local-lis/les=13/14 n=0 ec=27/13 lis/c=13/13 les/c/f=14/14/0 sis=29) [2] r=-1 lpr=29 pi=[13,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Mar  1 04:43:17 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 29 pg[3.3( empty local-lis/les=13/14 n=0 ec=27/13 lis/c=13/13 les/c/f=14/14/0 sis=29) [2] r=-1 lpr=29 pi=[13,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Mar  1 04:43:17 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 29 pg[3.3( empty local-lis/les=13/14 n=0 ec=27/13 lis/c=13/13 les/c/f=14/14/0 sis=29) [2] r=-1 lpr=29 pi=[13,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Mar  1 04:43:17 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 29 pg[3.4( empty local-lis/les=13/14 n=0 ec=27/13 lis/c=13/13 les/c/f=14/14/0 sis=29) [2] r=-1 lpr=29 pi=[13,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Mar  1 04:43:17 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 29 pg[3.8( empty local-lis/les=13/14 n=0 ec=27/13 lis/c=13/13 les/c/f=14/14/0 sis=29) [2] r=-1 lpr=29 pi=[13,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Mar  1 04:43:17 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 29 pg[3.9( empty local-lis/les=13/14 n=0 ec=27/13 lis/c=13/13 les/c/f=14/14/0 sis=29) [2] r=-1 lpr=29 pi=[13,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Mar  1 04:43:17 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 29 pg[3.8( empty local-lis/les=13/14 n=0 ec=27/13 lis/c=13/13 les/c/f=14/14/0 sis=29) [2] r=-1 lpr=29 pi=[13,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Mar  1 04:43:17 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 29 pg[3.1a( empty local-lis/les=13/14 n=0 ec=27/13 lis/c=13/13 les/c/f=14/14/0 sis=29) [2] r=-1 lpr=29 pi=[13,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Mar  1 04:43:17 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 29 pg[3.4( empty local-lis/les=13/14 n=0 ec=27/13 lis/c=13/13 les/c/f=14/14/0 sis=29) [2] r=-1 lpr=29 pi=[13,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Mar  1 04:43:17 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 29 pg[3.1a( empty local-lis/les=13/14 n=0 ec=27/13 lis/c=13/13 les/c/f=14/14/0 sis=29) [2] r=-1 lpr=29 pi=[13,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Mar  1 04:43:17 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 29 pg[3.1b( empty local-lis/les=13/14 n=0 ec=27/13 lis/c=13/13 les/c/f=14/14/0 sis=29) [2] r=-1 lpr=29 pi=[13,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Mar  1 04:43:17 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 29 pg[3.1b( empty local-lis/les=13/14 n=0 ec=27/13 lis/c=13/13 les/c/f=14/14/0 sis=29) [2] r=-1 lpr=29 pi=[13,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Mar  1 04:43:17 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 29 pg[3.1d( empty local-lis/les=13/14 n=0 ec=27/13 lis/c=13/13 les/c/f=14/14/0 sis=29) [2] r=-1 lpr=29 pi=[13,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Mar  1 04:43:17 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 29 pg[3.1d( empty local-lis/les=13/14 n=0 ec=27/13 lis/c=13/13 les/c/f=14/14/0 sis=29) [2] r=-1 lpr=29 pi=[13,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Mar  1 04:43:17 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 29 pg[3.1c( empty local-lis/les=13/14 n=0 ec=27/13 lis/c=13/13 les/c/f=14/14/0 sis=29) [2] r=-1 lpr=29 pi=[13,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Mar  1 04:43:17 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 29 pg[3.1e( empty local-lis/les=13/14 n=0 ec=27/13 lis/c=13/13 les/c/f=14/14/0 sis=29) [2] r=-1 lpr=29 pi=[13,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Mar  1 04:43:17 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 29 pg[3.1c( empty local-lis/les=13/14 n=0 ec=27/13 lis/c=13/13 les/c/f=14/14/0 sis=29) [2] r=-1 lpr=29 pi=[13,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Mar  1 04:43:17 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 29 pg[3.1e( empty local-lis/les=13/14 n=0 ec=27/13 lis/c=13/13 les/c/f=14/14/0 sis=29) [2] r=-1 lpr=29 pi=[13,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Mar  1 04:43:17 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 29 pg[3.1f( empty local-lis/les=13/14 n=0 ec=27/13 lis/c=13/13 les/c/f=14/14/0 sis=29) [2] r=-1 lpr=29 pi=[13,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] PeeringState::start_peering_interval up [] -> [2], acting [] -> [2], acting_primary ? -> 2, up_primary ? -> 2, role -1 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Mar  1 04:43:17 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 29 pg[3.1f( empty local-lis/les=13/14 n=0 ec=27/13 lis/c=13/13 les/c/f=14/14/0 sis=29) [2] r=-1 lpr=29 pi=[13,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Mar  1 04:43:17 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 29 pg[3.9( empty local-lis/les=13/14 n=0 ec=27/13 lis/c=13/13 les/c/f=14/14/0 sis=29) [2] r=-1 lpr=29 pi=[13,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Mar  1 04:43:17 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 29 pg[4.0( empty local-lis/les=14/15 n=0 ec=14/14 lis/c=14/14 les/c/f=15/15/0 sis=29 pruub=15.103458405s) [0] r=0 lpr=29 pi=[14,29)/1 crt=0'0 mlcod 0'0 unknown pruub 72.029739380s@ mbc={}] state<Start>: transitioning to Primary
Mar  1 04:43:17 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 29 pg[5.0( empty local-lis/les=15/16 n=0 ec=15/15 lis/c=15/15 les/c/f=16/16/0 sis=29 pruub=7.192754745s) [2] r=-1 lpr=29 pi=[15,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 64.121566772s@ mbc={}] state<Start>: transitioning to Stray
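[annotation] The osd.0 block above is routine peering churn from osdmap e29: with osd.2 back up (boot at e29), every pool-3 PG now maps to [2], so osd.0 drops to role -1 (no longer in the acting set) and each PG transitions Start -> Stray, while pg 4.0 stays mapped to [0] and transitions to Primary. A hedged way to inspect one of these PGs afterwards (pg id taken from the log):
    ceph pg 3.17 query                      # acting/up sets and peering state
    ceph pg dump pgs_brief | grep '^3\.'    # quick view of all pool-3 PG mappings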
Mar  1 04:43:17 np0005634532 ceph-mon[75825]: Saving service node-exporter spec with placement *
Mar  1 04:43:17 np0005634532 ceph-mon[75825]: Saving service grafana spec with placement compute-0;count:1
Mar  1 04:43:17 np0005634532 ceph-mon[75825]: Saving service prometheus spec with placement compute-0;count:1
Mar  1 04:43:17 np0005634532 ceph-mon[75825]: Saving service alertmanager spec with placement compute-0;count:1
Mar  1 04:43:17 np0005634532 ceph-mon[75825]: OSD bench result of 9630.898202 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.2. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
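[annotation] The mClock message above says the startup OSD bench for osd.2 measured 9630.9 IOPS, outside the sanity window of 50-500 IOPS, so Ceph kept the default capacity of 315 IOPS and recommends measuring with an external tool (e.g. fio) and overriding. A sketch of that override, assuming osd.2 is on an SSD and that an external benchmark produced a figure you trust; the 9630 below is only the bench number from this log, not a vetted measurement:
    ceph config set osd.2 osd_mclock_max_capacity_iops_ssd 9630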
Mar  1 04:43:17 np0005634532 ceph-mon[75825]: from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:43:17 np0005634532 ceph-mon[75825]: from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:43:17 np0005634532 ceph-mon[75825]: from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch
Mar  1 04:43:17 np0005634532 ceph-mon[75825]: from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Mar  1 04:43:17 np0005634532 ceph-mon[75825]: from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]: dispatch
Mar  1 04:43:17 np0005634532 ceph-mon[75825]: from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]: dispatch
Mar  1 04:43:17 np0005634532 ceph-mon[75825]: from='client.? 192.168.122.100:0/1739908411' entity='client.admin' 
Mar  1 04:43:17 np0005634532 ceph-mon[75825]: from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "32"}]': finished
Mar  1 04:43:17 np0005634532 ceph-mon[75825]: from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Mar  1 04:43:17 np0005634532 ceph-mon[75825]: from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Mar  1 04:43:17 np0005634532 ceph-mon[75825]: osd.2 [v2:192.168.122.102:6800/1868868591,v1:192.168.122.102:6801/1868868591] boot
Mar  1 04:43:17 np0005634532 ceph-mon[75825]: from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]: dispatch
Mar  1 04:43:17 np0005634532 systemd[1]: libpod-conmon-b6da3713767ff4688c467c81cac2335aacc69430fbc9445ca99058f1e041233b.scope: Deactivated successfully.
Mar  1 04:43:17 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Mar  1 04:43:17 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:43:17 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Mar  1 04:43:17 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:43:17 np0005634532 python3[89592]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 437b1e74-f995-5d64-af1d-257ce01d77ab -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/dashboard/ssl_server_port 8443 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Mar  1 04:43:17 np0005634532 podman[89644]: 2026-03-01 09:43:17.524452691 +0000 UTC m=+0.041264457 container create dabf5b4863bdff2b078f461c1bfc50e4eb40becd7c79c2b16ae954e584efe1a9 (image=quay.io/ceph/ceph:v19, name=brave_mccarthy, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Mar  1 04:43:17 np0005634532 systemd[1]: Started libpod-conmon-dabf5b4863bdff2b078f461c1bfc50e4eb40becd7c79c2b16ae954e584efe1a9.scope.
Mar  1 04:43:17 np0005634532 systemd[1]: Started libcrun container.
Mar  1 04:43:17 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90fa61f9557afd96ad0a25b5d7f9280ef56ee0390636ee0744530e7594d3ee7c/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Mar  1 04:43:17 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90fa61f9557afd96ad0a25b5d7f9280ef56ee0390636ee0744530e7594d3ee7c/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Mar  1 04:43:17 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90fa61f9557afd96ad0a25b5d7f9280ef56ee0390636ee0744530e7594d3ee7c/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Mar  1 04:43:17 np0005634532 podman[89644]: 2026-03-01 09:43:17.503415007 +0000 UTC m=+0.020226773 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Mar  1 04:43:17 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Mar  1 04:43:17 np0005634532 podman[89644]: 2026-03-01 09:43:17.621468053 +0000 UTC m=+0.138279859 container init dabf5b4863bdff2b078f461c1bfc50e4eb40becd7c79c2b16ae954e584efe1a9 (image=quay.io/ceph/ceph:v19, name=brave_mccarthy, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Mar  1 04:43:17 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:43:17 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Mar  1 04:43:17 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Mar  1 04:43:17 np0005634532 podman[89644]: 2026-03-01 09:43:17.630433255 +0000 UTC m=+0.147245021 container start dabf5b4863bdff2b078f461c1bfc50e4eb40becd7c79c2b16ae954e584efe1a9 (image=quay.io/ceph/ceph:v19, name=brave_mccarthy, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Mar  1 04:43:17 np0005634532 podman[89644]: 2026-03-01 09:43:17.63705652 +0000 UTC m=+0.153868286 container attach dabf5b4863bdff2b078f461c1bfc50e4eb40becd7c79c2b16ae954e584efe1a9 (image=quay.io/ceph/ceph:v19, name=brave_mccarthy, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid)
Mar  1 04:43:17 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:43:17 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:43:17 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Mar  1 04:43:17 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:43:17 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Mar  1 04:43:17 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:43:17 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Mar  1 04:43:17 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Mar  1 04:43:17 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Mar  1 04:43:17 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Mar  1 04:43:17 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Mar  1 04:43:17 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Mar  1 04:43:17 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/ssl_server_port}] v 0)
Mar  1 04:43:17 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2249581176' entity='client.admin' 
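[Annotation] The audit trail above shows the mgr driving the monitor through several mon_commands. For orientation, the same operations issued by hand from an admin node would look roughly like the following CLI sketch; the equivalents are inferred from the logged command prefixes, and the ssl_server_port value is not recorded in the log, so it is left as a placeholder:

    ceph osd tree destroyed --format json                       # {"prefix": "osd tree", "states": ["destroyed"]}
    ceph auth get client.bootstrap-osd                          # {"prefix": "auth get", "entity": "client.bootstrap-osd"}
    ceph config generate-minimal-conf                           # {"prefix": "config generate-minimal-conf"}
    ceph config set mgr mgr/dashboard/ssl_server_port <port>    # [{prefix=config set, name=mgr/dashboard/ssl_server_port}]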
Mar  1 04:43:18 np0005634532 systemd[1]: libpod-dabf5b4863bdff2b078f461c1bfc50e4eb40becd7c79c2b16ae954e584efe1a9.scope: Deactivated successfully.
Mar  1 04:43:18 np0005634532 podman[89644]: 2026-03-01 09:43:18.004205129 +0000 UTC m=+0.521016865 container died dabf5b4863bdff2b078f461c1bfc50e4eb40becd7c79c2b16ae954e584efe1a9 (image=quay.io/ceph/ceph:v19, name=brave_mccarthy, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Mar  1 04:43:18 np0005634532 systemd[1]: var-lib-containers-storage-overlay-90fa61f9557afd96ad0a25b5d7f9280ef56ee0390636ee0744530e7594d3ee7c-merged.mount: Deactivated successfully.
Mar  1 04:43:18 np0005634532 systemd[77178]: Starting Mark boot as successful...
Mar  1 04:43:18 np0005634532 systemd[77178]: Finished Mark boot as successful.
Mar  1 04:43:18 np0005634532 podman[89644]: 2026-03-01 09:43:18.080858444 +0000 UTC m=+0.597670180 container remove dabf5b4863bdff2b078f461c1bfc50e4eb40becd7c79c2b16ae954e584efe1a9 (image=quay.io/ceph/ceph:v19, name=brave_mccarthy, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Mar  1 04:43:18 np0005634532 systemd[1]: libpod-conmon-dabf5b4863bdff2b078f461c1bfc50e4eb40becd7c79c2b16ae954e584efe1a9.scope: Deactivated successfully.
Mar  1 04:43:18 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e29 do_prune osdmap full prune enabled
Mar  1 04:43:18 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]': finished
Mar  1 04:43:18 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e30 e30: 3 total, 3 up, 3 in
Mar  1 04:43:18 np0005634532 ceph-mon[75825]: log_channel(cluster) log [DBG] : osdmap e30: 3 total, 3 up, 3 in
Mar  1 04:43:18 np0005634532 ceph-mgr[76134]: [progress INFO root] update: starting ev 7003f203-2fec-4ca5-8295-33f9ad836130 (PG autoscaler increasing pool 7 PGs from 1 to 32)
Mar  1 04:43:18 np0005634532 ceph-mgr[76134]: [progress INFO root] complete: finished ev 1466a821-111b-48ca-8735-ba02166ea088 (PG autoscaler increasing pool 2 PGs from 1 to 32)
Mar  1 04:43:18 np0005634532 ceph-mgr[76134]: [progress INFO root] Completed event 1466a821-111b-48ca-8735-ba02166ea088 (PG autoscaler increasing pool 2 PGs from 1 to 32) in 5 seconds
Mar  1 04:43:18 np0005634532 ceph-mgr[76134]: [progress INFO root] complete: finished ev e0e45573-9762-4004-b8e5-ad7b70317d91 (PG autoscaler increasing pool 3 PGs from 1 to 32)
Mar  1 04:43:18 np0005634532 ceph-mgr[76134]: [progress INFO root] Completed event e0e45573-9762-4004-b8e5-ad7b70317d91 (PG autoscaler increasing pool 3 PGs from 1 to 32) in 4 seconds
Mar  1 04:43:18 np0005634532 ceph-mgr[76134]: [progress INFO root] complete: finished ev 16107e02-f66d-4168-a215-2da2ef93dcef (PG autoscaler increasing pool 4 PGs from 1 to 32)
Mar  1 04:43:18 np0005634532 ceph-mgr[76134]: [progress INFO root] Completed event 16107e02-f66d-4168-a215-2da2ef93dcef (PG autoscaler increasing pool 4 PGs from 1 to 32) in 3 seconds
Mar  1 04:43:18 np0005634532 ceph-mgr[76134]: [progress INFO root] complete: finished ev 5274f7cf-cc9c-4489-bf94-3073e03cf196 (PG autoscaler increasing pool 5 PGs from 1 to 32)
Mar  1 04:43:18 np0005634532 ceph-mgr[76134]: [progress INFO root] Completed event 5274f7cf-cc9c-4489-bf94-3073e03cf196 (PG autoscaler increasing pool 5 PGs from 1 to 32) in 2 seconds
Mar  1 04:43:18 np0005634532 ceph-mgr[76134]: [progress INFO root] complete: finished ev 514b8a61-7d3b-4b62-bb1d-991d05eb0c34 (PG autoscaler increasing pool 6 PGs from 1 to 32)
Mar  1 04:43:18 np0005634532 ceph-mgr[76134]: [progress INFO root] Completed event 514b8a61-7d3b-4b62-bb1d-991d05eb0c34 (PG autoscaler increasing pool 6 PGs from 1 to 32) in 1 seconds
Mar  1 04:43:18 np0005634532 ceph-mgr[76134]: [progress INFO root] complete: finished ev 7003f203-2fec-4ca5-8295-33f9ad836130 (PG autoscaler increasing pool 7 PGs from 1 to 32)
Mar  1 04:43:18 np0005634532 ceph-mgr[76134]: [progress INFO root] Completed event 7003f203-2fec-4ca5-8295-33f9ad836130 (PG autoscaler increasing pool 7 PGs from 1 to 32) in 0 seconds
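[Annotation] The ceph-mgr progress entries above track the pg_autoscaler raising pools 2 through 7 from 1 to 32 placement groups each; the "osd pool set ... pg_num ... 32" audit lines nearby are the corresponding mon commands. The autoscaler's current view of each pool can be inspected with the standard command (a usage sketch, not output captured from this cluster):

    ceph osd pool autoscale-status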
Mar  1 04:43:18 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 30 pg[5.1f( empty local-lis/les=15/16 n=0 ec=29/15 lis/c=15/15 les/c/f=16/16/0 sis=29) [2] r=-1 lpr=29 pi=[15,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Mar  1 04:43:18 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 30 pg[4.1e( empty local-lis/les=14/15 n=0 ec=29/14 lis/c=14/14 les/c/f=15/15/0 sis=29) [0] r=0 lpr=29 pi=[14,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Mar  1 04:43:18 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 30 pg[4.1f( empty local-lis/les=14/15 n=0 ec=29/14 lis/c=14/14 les/c/f=15/15/0 sis=29) [0] r=0 lpr=29 pi=[14,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Mar  1 04:43:18 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 30 pg[5.1e( empty local-lis/les=15/16 n=0 ec=29/15 lis/c=15/15 les/c/f=16/16/0 sis=29) [2] r=-1 lpr=29 pi=[15,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Mar  1 04:43:18 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 30 pg[5.11( empty local-lis/les=15/16 n=0 ec=29/15 lis/c=15/15 les/c/f=16/16/0 sis=29) [2] r=-1 lpr=29 pi=[15,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Mar  1 04:43:18 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 30 pg[5.10( empty local-lis/les=15/16 n=0 ec=29/15 lis/c=15/15 les/c/f=16/16/0 sis=29) [2] r=-1 lpr=29 pi=[15,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Mar  1 04:43:18 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 30 pg[4.10( empty local-lis/les=14/15 n=0 ec=29/14 lis/c=14/14 les/c/f=15/15/0 sis=29) [0] r=0 lpr=29 pi=[14,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Mar  1 04:43:18 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 30 pg[4.11( empty local-lis/les=14/15 n=0 ec=29/14 lis/c=14/14 les/c/f=15/15/0 sis=29) [0] r=0 lpr=29 pi=[14,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Mar  1 04:43:18 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 30 pg[5.13( empty local-lis/les=15/16 n=0 ec=29/15 lis/c=15/15 les/c/f=16/16/0 sis=29) [2] r=-1 lpr=29 pi=[15,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Mar  1 04:43:18 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 30 pg[4.12( empty local-lis/les=14/15 n=0 ec=29/14 lis/c=14/14 les/c/f=15/15/0 sis=29) [0] r=0 lpr=29 pi=[14,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Mar  1 04:43:18 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 30 pg[5.12( empty local-lis/les=15/16 n=0 ec=29/15 lis/c=15/15 les/c/f=16/16/0 sis=29) [2] r=-1 lpr=29 pi=[15,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Mar  1 04:43:18 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 30 pg[4.13( empty local-lis/les=14/15 n=0 ec=29/14 lis/c=14/14 les/c/f=15/15/0 sis=29) [0] r=0 lpr=29 pi=[14,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Mar  1 04:43:18 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 30 pg[5.15( empty local-lis/les=15/16 n=0 ec=29/15 lis/c=15/15 les/c/f=16/16/0 sis=29) [2] r=-1 lpr=29 pi=[15,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Mar  1 04:43:18 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 30 pg[4.14( empty local-lis/les=14/15 n=0 ec=29/14 lis/c=14/14 les/c/f=15/15/0 sis=29) [0] r=0 lpr=29 pi=[14,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Mar  1 04:43:18 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 30 pg[5.14( empty local-lis/les=15/16 n=0 ec=29/15 lis/c=15/15 les/c/f=16/16/0 sis=29) [2] r=-1 lpr=29 pi=[15,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Mar  1 04:43:18 np0005634532 ceph-mon[75825]: Adjusting osd_memory_target on compute-2 to 127.9M
Mar  1 04:43:18 np0005634532 ceph-mon[75825]: Unable to set osd_memory_target on compute-2 to 134189056: error parsing value: Value '134189056' is below minimum 939524096
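[Annotation] The two lines above record cephadm's OSD memory autotuner computing a per-host target of 134189056 bytes (about 128 MiB) for compute-2 and the monitor rejecting it because osd_memory_target enforces a floor of 939524096 bytes (896 MiB). On a host this small the autotuned value can never be applied, so the message will recur. A minimal workaround sketch, assuming the standard cephadm config options (only the host name compute-2 is taken from the log; the rest is illustrative):

    # stop cephadm from autotuning the target, then pin it at the enforced minimum
    ceph config set osd osd_memory_target_autotune false
    ceph config set osd/host:compute-2 osd_memory_target 939524096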
Mar  1 04:43:18 np0005634532 ceph-mon[75825]: Updating compute-0:/etc/ceph/ceph.conf
Mar  1 04:43:18 np0005634532 ceph-mon[75825]: Updating compute-1:/etc/ceph/ceph.conf
Mar  1 04:43:18 np0005634532 ceph-mon[75825]: Updating compute-2:/etc/ceph/ceph.conf
Mar  1 04:43:18 np0005634532 ceph-mon[75825]: Updating compute-0:/var/lib/ceph/437b1e74-f995-5d64-af1d-257ce01d77ab/config/ceph.conf
Mar  1 04:43:18 np0005634532 ceph-mon[75825]: Updating compute-1:/var/lib/ceph/437b1e74-f995-5d64-af1d-257ce01d77ab/config/ceph.conf
Mar  1 04:43:18 np0005634532 ceph-mon[75825]: Updating compute-2:/var/lib/ceph/437b1e74-f995-5d64-af1d-257ce01d77ab/config/ceph.conf
Mar  1 04:43:18 np0005634532 ceph-mon[75825]: from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:43:18 np0005634532 ceph-mon[75825]: from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:43:18 np0005634532 ceph-mon[75825]: from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:43:18 np0005634532 ceph-mon[75825]: from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:43:18 np0005634532 ceph-mon[75825]: from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:43:18 np0005634532 ceph-mon[75825]: from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:43:18 np0005634532 ceph-mon[75825]: from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:43:18 np0005634532 ceph-mon[75825]: from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Mar  1 04:43:18 np0005634532 ceph-mon[75825]: from='client.? 192.168.122.100:0/2249581176' entity='client.admin' 
Mar  1 04:43:18 np0005634532 ceph-mon[75825]: from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]': finished
Mar  1 04:43:18 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 30 pg[4.15( empty local-lis/les=14/15 n=0 ec=29/14 lis/c=14/14 les/c/f=15/15/0 sis=29) [0] r=0 lpr=29 pi=[14,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Mar  1 04:43:18 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 30 pg[4.16( empty local-lis/les=14/15 n=0 ec=29/14 lis/c=14/14 les/c/f=15/15/0 sis=29) [0] r=0 lpr=29 pi=[14,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Mar  1 04:43:18 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 30 pg[5.17( empty local-lis/les=15/16 n=0 ec=29/15 lis/c=15/15 les/c/f=16/16/0 sis=29) [2] r=-1 lpr=29 pi=[15,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Mar  1 04:43:18 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 30 pg[5.16( empty local-lis/les=15/16 n=0 ec=29/15 lis/c=15/15 les/c/f=16/16/0 sis=29) [2] r=-1 lpr=29 pi=[15,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Mar  1 04:43:18 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 30 pg[4.17( empty local-lis/les=14/15 n=0 ec=29/14 lis/c=14/14 les/c/f=15/15/0 sis=29) [0] r=0 lpr=29 pi=[14,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Mar  1 04:43:18 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 30 pg[5.9( empty local-lis/les=15/16 n=0 ec=29/15 lis/c=15/15 les/c/f=16/16/0 sis=29) [2] r=-1 lpr=29 pi=[15,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Mar  1 04:43:18 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 30 pg[4.8( empty local-lis/les=14/15 n=0 ec=29/14 lis/c=14/14 les/c/f=15/15/0 sis=29) [0] r=0 lpr=29 pi=[14,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Mar  1 04:43:18 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 30 pg[5.8( empty local-lis/les=15/16 n=0 ec=29/15 lis/c=15/15 les/c/f=16/16/0 sis=29) [2] r=-1 lpr=29 pi=[15,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Mar  1 04:43:18 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 30 pg[4.9( empty local-lis/les=14/15 n=0 ec=29/14 lis/c=14/14 les/c/f=15/15/0 sis=29) [0] r=0 lpr=29 pi=[14,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Mar  1 04:43:18 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 30 pg[5.b( empty local-lis/les=15/16 n=0 ec=29/15 lis/c=15/15 les/c/f=16/16/0 sis=29) [2] r=-1 lpr=29 pi=[15,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Mar  1 04:43:18 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 30 pg[4.a( empty local-lis/les=14/15 n=0 ec=29/14 lis/c=14/14 les/c/f=15/15/0 sis=29) [0] r=0 lpr=29 pi=[14,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Mar  1 04:43:18 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 30 pg[4.b( empty local-lis/les=14/15 n=0 ec=29/14 lis/c=14/14 les/c/f=15/15/0 sis=29) [0] r=0 lpr=29 pi=[14,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Mar  1 04:43:18 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 30 pg[5.a( empty local-lis/les=15/16 n=0 ec=29/15 lis/c=15/15 les/c/f=16/16/0 sis=29) [2] r=-1 lpr=29 pi=[15,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Mar  1 04:43:18 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 30 pg[4.c( empty local-lis/les=14/15 n=0 ec=29/14 lis/c=14/14 les/c/f=15/15/0 sis=29) [0] r=0 lpr=29 pi=[14,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Mar  1 04:43:18 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 30 pg[5.d( empty local-lis/les=15/16 n=0 ec=29/15 lis/c=15/15 les/c/f=16/16/0 sis=29) [2] r=-1 lpr=29 pi=[15,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Mar  1 04:43:18 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 30 pg[4.d( empty local-lis/les=14/15 n=0 ec=29/14 lis/c=14/14 les/c/f=15/15/0 sis=29) [0] r=0 lpr=29 pi=[14,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Mar  1 04:43:18 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 30 pg[5.c( empty local-lis/les=15/16 n=0 ec=29/15 lis/c=15/15 les/c/f=16/16/0 sis=29) [2] r=-1 lpr=29 pi=[15,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Mar  1 04:43:18 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 30 pg[5.6( empty local-lis/les=15/16 n=0 ec=29/15 lis/c=15/15 les/c/f=16/16/0 sis=29) [2] r=-1 lpr=29 pi=[15,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Mar  1 04:43:18 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 30 pg[5.1( empty local-lis/les=15/16 n=0 ec=29/15 lis/c=15/15 les/c/f=16/16/0 sis=29) [2] r=-1 lpr=29 pi=[15,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Mar  1 04:43:18 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 30 pg[4.7( empty local-lis/les=14/15 n=0 ec=29/14 lis/c=14/14 les/c/f=15/15/0 sis=29) [0] r=0 lpr=29 pi=[14,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Mar  1 04:43:18 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 30 pg[5.3( empty local-lis/les=15/16 n=0 ec=29/15 lis/c=15/15 les/c/f=16/16/0 sis=29) [2] r=-1 lpr=29 pi=[15,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Mar  1 04:43:18 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 30 pg[4.1( empty local-lis/les=14/15 n=0 ec=29/14 lis/c=14/14 les/c/f=15/15/0 sis=29) [0] r=0 lpr=29 pi=[14,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Mar  1 04:43:18 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 30 pg[4.2( empty local-lis/les=14/15 n=0 ec=29/14 lis/c=14/14 les/c/f=15/15/0 sis=29) [0] r=0 lpr=29 pi=[14,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Mar  1 04:43:18 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 30 pg[5.7( empty local-lis/les=15/16 n=0 ec=29/15 lis/c=15/15 les/c/f=16/16/0 sis=29) [2] r=-1 lpr=29 pi=[15,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Mar  1 04:43:18 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 30 pg[4.6( empty local-lis/les=14/15 n=0 ec=29/14 lis/c=14/14 les/c/f=15/15/0 sis=29) [0] r=0 lpr=29 pi=[14,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Mar  1 04:43:18 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 30 pg[5.4( empty local-lis/les=15/16 n=0 ec=29/15 lis/c=15/15 les/c/f=16/16/0 sis=29) [2] r=-1 lpr=29 pi=[15,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Mar  1 04:43:18 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 30 pg[4.5( empty local-lis/les=14/15 n=0 ec=29/14 lis/c=14/14 les/c/f=15/15/0 sis=29) [0] r=0 lpr=29 pi=[14,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Mar  1 04:43:18 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 30 pg[5.5( empty local-lis/les=15/16 n=0 ec=29/15 lis/c=15/15 les/c/f=16/16/0 sis=29) [2] r=-1 lpr=29 pi=[15,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Mar  1 04:43:18 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 30 pg[4.4( empty local-lis/les=14/15 n=0 ec=29/14 lis/c=14/14 les/c/f=15/15/0 sis=29) [0] r=0 lpr=29 pi=[14,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Mar  1 04:43:18 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 30 pg[4.3( empty local-lis/les=14/15 n=0 ec=29/14 lis/c=14/14 les/c/f=15/15/0 sis=29) [0] r=0 lpr=29 pi=[14,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Mar  1 04:43:18 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 30 pg[5.e( empty local-lis/les=15/16 n=0 ec=29/15 lis/c=15/15 les/c/f=16/16/0 sis=29) [2] r=-1 lpr=29 pi=[15,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Mar  1 04:43:18 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 30 pg[4.f( empty local-lis/les=14/15 n=0 ec=29/14 lis/c=14/14 les/c/f=15/15/0 sis=29) [0] r=0 lpr=29 pi=[14,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Mar  1 04:43:18 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 30 pg[5.f( empty local-lis/les=15/16 n=0 ec=29/15 lis/c=15/15 les/c/f=16/16/0 sis=29) [2] r=-1 lpr=29 pi=[15,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Mar  1 04:43:18 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 30 pg[4.e( empty local-lis/les=14/15 n=0 ec=29/14 lis/c=14/14 les/c/f=15/15/0 sis=29) [0] r=0 lpr=29 pi=[14,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Mar  1 04:43:18 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 30 pg[4.1d( empty local-lis/les=14/15 n=0 ec=29/14 lis/c=14/14 les/c/f=15/15/0 sis=29) [0] r=0 lpr=29 pi=[14,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Mar  1 04:43:18 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 30 pg[5.1c( empty local-lis/les=15/16 n=0 ec=29/15 lis/c=15/15 les/c/f=16/16/0 sis=29) [2] r=-1 lpr=29 pi=[15,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Mar  1 04:43:18 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 30 pg[4.1c( empty local-lis/les=14/15 n=0 ec=29/14 lis/c=14/14 les/c/f=15/15/0 sis=29) [0] r=0 lpr=29 pi=[14,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Mar  1 04:43:18 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 30 pg[5.1d( empty local-lis/les=15/16 n=0 ec=29/15 lis/c=15/15 les/c/f=16/16/0 sis=29) [2] r=-1 lpr=29 pi=[15,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Mar  1 04:43:18 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 30 pg[5.2( empty local-lis/les=15/16 n=0 ec=29/15 lis/c=15/15 les/c/f=16/16/0 sis=29) [2] r=-1 lpr=29 pi=[15,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Mar  1 04:43:18 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 30 pg[5.1a( empty local-lis/les=15/16 n=0 ec=29/15 lis/c=15/15 les/c/f=16/16/0 sis=29) [2] r=-1 lpr=29 pi=[15,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Mar  1 04:43:18 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 30 pg[4.1b( empty local-lis/les=14/15 n=0 ec=29/14 lis/c=14/14 les/c/f=15/15/0 sis=29) [0] r=0 lpr=29 pi=[14,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Mar  1 04:43:18 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 30 pg[5.1b( empty local-lis/les=15/16 n=0 ec=29/15 lis/c=15/15 les/c/f=16/16/0 sis=29) [2] r=-1 lpr=29 pi=[15,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Mar  1 04:43:18 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 30 pg[4.1a( empty local-lis/les=14/15 n=0 ec=29/14 lis/c=14/14 les/c/f=15/15/0 sis=29) [0] r=0 lpr=29 pi=[14,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Mar  1 04:43:18 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 30 pg[5.18( empty local-lis/les=15/16 n=0 ec=29/15 lis/c=15/15 les/c/f=16/16/0 sis=29) [2] r=-1 lpr=29 pi=[15,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Mar  1 04:43:18 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 30 pg[4.19( empty local-lis/les=14/15 n=0 ec=29/14 lis/c=14/14 les/c/f=15/15/0 sis=29) [0] r=0 lpr=29 pi=[14,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Mar  1 04:43:18 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 30 pg[5.19( empty local-lis/les=15/16 n=0 ec=29/15 lis/c=15/15 les/c/f=16/16/0 sis=29) [2] r=-1 lpr=29 pi=[15,29)/1 crt=0'0 mlcod 0'0 unknown NOTIFY mbc={}] state<Start>: transitioning to Stray
Mar  1 04:43:18 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 30 pg[4.18( empty local-lis/les=14/15 n=0 ec=29/14 lis/c=14/14 les/c/f=15/15/0 sis=29) [0] r=0 lpr=29 pi=[14,29)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Mar  1 04:43:18 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 30 pg[4.1f( empty local-lis/les=29/30 n=0 ec=29/14 lis/c=14/14 les/c/f=15/15/0 sis=29) [0] r=0 lpr=29 pi=[14,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Mar  1 04:43:18 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 30 pg[4.1e( empty local-lis/les=29/30 n=0 ec=29/14 lis/c=14/14 les/c/f=15/15/0 sis=29) [0] r=0 lpr=29 pi=[14,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Mar  1 04:43:18 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 30 pg[4.10( empty local-lis/les=29/30 n=0 ec=29/14 lis/c=14/14 les/c/f=15/15/0 sis=29) [0] r=0 lpr=29 pi=[14,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Mar  1 04:43:18 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 30 pg[4.11( empty local-lis/les=29/30 n=0 ec=29/14 lis/c=14/14 les/c/f=15/15/0 sis=29) [0] r=0 lpr=29 pi=[14,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Mar  1 04:43:18 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 30 pg[4.12( empty local-lis/les=29/30 n=0 ec=29/14 lis/c=14/14 les/c/f=15/15/0 sis=29) [0] r=0 lpr=29 pi=[14,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Mar  1 04:43:18 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 30 pg[4.14( empty local-lis/les=29/30 n=0 ec=29/14 lis/c=14/14 les/c/f=15/15/0 sis=29) [0] r=0 lpr=29 pi=[14,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Mar  1 04:43:18 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 30 pg[4.13( empty local-lis/les=29/30 n=0 ec=29/14 lis/c=14/14 les/c/f=15/15/0 sis=29) [0] r=0 lpr=29 pi=[14,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Mar  1 04:43:18 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 30 pg[4.15( empty local-lis/les=29/30 n=0 ec=29/14 lis/c=14/14 les/c/f=15/15/0 sis=29) [0] r=0 lpr=29 pi=[14,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Mar  1 04:43:18 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 30 pg[4.16( empty local-lis/les=29/30 n=0 ec=29/14 lis/c=14/14 les/c/f=15/15/0 sis=29) [0] r=0 lpr=29 pi=[14,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Mar  1 04:43:18 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 30 pg[4.17( empty local-lis/les=29/30 n=0 ec=29/14 lis/c=14/14 les/c/f=15/15/0 sis=29) [0] r=0 lpr=29 pi=[14,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Mar  1 04:43:18 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 30 pg[4.9( empty local-lis/les=29/30 n=0 ec=29/14 lis/c=14/14 les/c/f=15/15/0 sis=29) [0] r=0 lpr=29 pi=[14,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Mar  1 04:43:18 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 30 pg[4.8( empty local-lis/les=29/30 n=0 ec=29/14 lis/c=14/14 les/c/f=15/15/0 sis=29) [0] r=0 lpr=29 pi=[14,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Mar  1 04:43:18 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 30 pg[4.b( empty local-lis/les=29/30 n=0 ec=29/14 lis/c=14/14 les/c/f=15/15/0 sis=29) [0] r=0 lpr=29 pi=[14,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Mar  1 04:43:18 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 30 pg[4.d( empty local-lis/les=29/30 n=0 ec=29/14 lis/c=14/14 les/c/f=15/15/0 sis=29) [0] r=0 lpr=29 pi=[14,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Mar  1 04:43:18 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 30 pg[4.a( empty local-lis/les=29/30 n=0 ec=29/14 lis/c=14/14 les/c/f=15/15/0 sis=29) [0] r=0 lpr=29 pi=[14,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Mar  1 04:43:18 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 30 pg[4.c( empty local-lis/les=29/30 n=0 ec=29/14 lis/c=14/14 les/c/f=15/15/0 sis=29) [0] r=0 lpr=29 pi=[14,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Mar  1 04:43:18 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 30 pg[4.7( empty local-lis/les=29/30 n=0 ec=29/14 lis/c=14/14 les/c/f=15/15/0 sis=29) [0] r=0 lpr=29 pi=[14,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Mar  1 04:43:18 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 30 pg[4.2( empty local-lis/les=29/30 n=0 ec=29/14 lis/c=14/14 les/c/f=15/15/0 sis=29) [0] r=0 lpr=29 pi=[14,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Mar  1 04:43:18 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 30 pg[4.1( empty local-lis/les=29/30 n=0 ec=29/14 lis/c=14/14 les/c/f=15/15/0 sis=29) [0] r=0 lpr=29 pi=[14,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Mar  1 04:43:18 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 30 pg[4.0( empty local-lis/les=29/30 n=0 ec=14/14 lis/c=14/14 les/c/f=15/15/0 sis=29) [0] r=0 lpr=29 pi=[14,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Mar  1 04:43:18 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 30 pg[4.6( empty local-lis/les=29/30 n=0 ec=29/14 lis/c=14/14 les/c/f=15/15/0 sis=29) [0] r=0 lpr=29 pi=[14,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Mar  1 04:43:18 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 30 pg[4.4( empty local-lis/les=29/30 n=0 ec=29/14 lis/c=14/14 les/c/f=15/15/0 sis=29) [0] r=0 lpr=29 pi=[14,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Mar  1 04:43:18 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 30 pg[4.3( empty local-lis/les=29/30 n=0 ec=29/14 lis/c=14/14 les/c/f=15/15/0 sis=29) [0] r=0 lpr=29 pi=[14,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Mar  1 04:43:18 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 30 pg[4.5( empty local-lis/les=29/30 n=0 ec=29/14 lis/c=14/14 les/c/f=15/15/0 sis=29) [0] r=0 lpr=29 pi=[14,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Mar  1 04:43:18 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 30 pg[4.1d( empty local-lis/les=29/30 n=0 ec=29/14 lis/c=14/14 les/c/f=15/15/0 sis=29) [0] r=0 lpr=29 pi=[14,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Mar  1 04:43:18 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 30 pg[4.1c( empty local-lis/les=29/30 n=0 ec=29/14 lis/c=14/14 les/c/f=15/15/0 sis=29) [0] r=0 lpr=29 pi=[14,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Mar  1 04:43:18 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 30 pg[4.e( empty local-lis/les=29/30 n=0 ec=29/14 lis/c=14/14 les/c/f=15/15/0 sis=29) [0] r=0 lpr=29 pi=[14,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Mar  1 04:43:18 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 30 pg[4.1b( empty local-lis/les=29/30 n=0 ec=29/14 lis/c=14/14 les/c/f=15/15/0 sis=29) [0] r=0 lpr=29 pi=[14,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Mar  1 04:43:18 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 30 pg[4.1a( empty local-lis/les=29/30 n=0 ec=29/14 lis/c=14/14 les/c/f=15/15/0 sis=29) [0] r=0 lpr=29 pi=[14,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Mar  1 04:43:18 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 30 pg[4.19( empty local-lis/les=29/30 n=0 ec=29/14 lis/c=14/14 les/c/f=15/15/0 sis=29) [0] r=0 lpr=29 pi=[14,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Mar  1 04:43:18 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 30 pg[4.18( empty local-lis/les=29/30 n=0 ec=29/14 lis/c=14/14 les/c/f=15/15/0 sis=29) [0] r=0 lpr=29 pi=[14,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Mar  1 04:43:18 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 30 pg[4.f( empty local-lis/les=29/30 n=0 ec=29/14 lis/c=14/14 les/c/f=15/15/0 sis=29) [0] r=0 lpr=29 pi=[14,29)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Mar  1 04:43:18 np0005634532 podman[89789]: 2026-03-01 09:43:18.245787445 +0000 UTC m=+0.065546371 container create ba223c19820d85bc6362566f78a71b48b026846f06228b6fff55557b83ba6b4f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_euclid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Mar  1 04:43:18 np0005634532 systemd[1]: Started libpod-conmon-ba223c19820d85bc6362566f78a71b48b026846f06228b6fff55557b83ba6b4f.scope.
Mar  1 04:43:18 np0005634532 systemd[1]: Started libcrun container.
Mar  1 04:43:18 np0005634532 podman[89789]: 2026-03-01 09:43:18.224328562 +0000 UTC m=+0.044087538 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Mar  1 04:43:18 np0005634532 podman[89789]: 2026-03-01 09:43:18.318462692 +0000 UTC m=+0.138221668 container init ba223c19820d85bc6362566f78a71b48b026846f06228b6fff55557b83ba6b4f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_euclid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Mar  1 04:43:18 np0005634532 podman[89789]: 2026-03-01 09:43:18.324552473 +0000 UTC m=+0.144311409 container start ba223c19820d85bc6362566f78a71b48b026846f06228b6fff55557b83ba6b4f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_euclid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Mar  1 04:43:18 np0005634532 blissful_euclid[89831]: 167 167
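[Annotation] The "167 167" emitted by the blissful_euclid container is most likely the uid and gid of the ceph user inside the quay.io/ceph/ceph:v19 image; 167 is the fixed ceph uid/gid in RHEL-family packaging. The exact command the deployment ran to obtain it is not recorded in this log.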
Mar  1 04:43:18 np0005634532 systemd[1]: libpod-ba223c19820d85bc6362566f78a71b48b026846f06228b6fff55557b83ba6b4f.scope: Deactivated successfully.
Mar  1 04:43:18 np0005634532 podman[89789]: 2026-03-01 09:43:18.366991709 +0000 UTC m=+0.186750675 container attach ba223c19820d85bc6362566f78a71b48b026846f06228b6fff55557b83ba6b4f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_euclid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Mar  1 04:43:18 np0005634532 podman[89789]: 2026-03-01 09:43:18.367541922 +0000 UTC m=+0.187300888 container died ba223c19820d85bc6362566f78a71b48b026846f06228b6fff55557b83ba6b4f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_euclid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True)
Mar  1 04:43:18 np0005634532 python3[89833]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 437b1e74-f995-5d64-af1d-257ce01d77ab -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/dashboard/ssl false _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
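[Annotation] For readability, the podman invocation recorded by Ansible above reflows to the following multi-line form (the same command verbatim, with line breaks added):

    podman run --rm --net=host --ipc=host --interactive \
        --volume /etc/ceph:/etc/ceph:z \
        --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z \
        --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z \
        --entrypoint ceph quay.io/ceph/ceph:v19 \
        --fsid 437b1e74-f995-5d64-af1d-257ce01d77ab \
        -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring \
        config set mgr mgr/dashboard/ssl false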
Mar  1 04:43:18 np0005634532 systemd[1]: var-lib-containers-storage-overlay-d9b86aca70806273eca857f535a25ad1c352f65e1c901fa1aa26a19a8ec20796-merged.mount: Deactivated successfully.
Mar  1 04:43:18 np0005634532 podman[89789]: 2026-03-01 09:43:18.519599373 +0000 UTC m=+0.339358339 container remove ba223c19820d85bc6362566f78a71b48b026846f06228b6fff55557b83ba6b4f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_euclid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Mar  1 04:43:18 np0005634532 systemd[1]: libpod-conmon-ba223c19820d85bc6362566f78a71b48b026846f06228b6fff55557b83ba6b4f.scope: Deactivated successfully.
Mar  1 04:43:18 np0005634532 podman[89848]: 2026-03-01 09:43:18.538822291 +0000 UTC m=+0.100512330 container create 0ca6763adfba1b24fd964fd262c3afee0ace220e0b75c68e8145da0b3139f966 (image=quay.io/ceph/ceph:v19, name=zealous_easley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Mar  1 04:43:18 np0005634532 podman[89848]: 2026-03-01 09:43:18.481588048 +0000 UTC m=+0.043278197 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Mar  1 04:43:18 np0005634532 systemd[1]: Started libpod-conmon-0ca6763adfba1b24fd964fd262c3afee0ace220e0b75c68e8145da0b3139f966.scope.
Mar  1 04:43:18 np0005634532 systemd[1]: Started libcrun container.
Mar  1 04:43:18 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea62e51823bfbea05224f237ad16914332cd6b01b30a67bdce75a66edddee178/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Mar  1 04:43:18 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea62e51823bfbea05224f237ad16914332cd6b01b30a67bdce75a66edddee178/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Mar  1 04:43:18 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea62e51823bfbea05224f237ad16914332cd6b01b30a67bdce75a66edddee178/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Mar  1 04:43:18 np0005634532 podman[89848]: 2026-03-01 09:43:18.636237262 +0000 UTC m=+0.197927391 container init 0ca6763adfba1b24fd964fd262c3afee0ace220e0b75c68e8145da0b3139f966 (image=quay.io/ceph/ceph:v19, name=zealous_easley, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Mar  1 04:43:18 np0005634532 podman[89848]: 2026-03-01 09:43:18.642704063 +0000 UTC m=+0.204394152 container start 0ca6763adfba1b24fd964fd262c3afee0ace220e0b75c68e8145da0b3139f966 (image=quay.io/ceph/ceph:v19, name=zealous_easley, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Mar  1 04:43:18 np0005634532 podman[89848]: 2026-03-01 09:43:18.662080024 +0000 UTC m=+0.223770113 container attach 0ca6763adfba1b24fd964fd262c3afee0ace220e0b75c68e8145da0b3139f966 (image=quay.io/ceph/ceph:v19, name=zealous_easley, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Mar  1 04:43:18 np0005634532 podman[89877]: 2026-03-01 09:43:18.691520266 +0000 UTC m=+0.072048452 container create 8742d46742a350dbabf1e85aaa386f3dca7e565d7d6221299ed6b8c5f29138f1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_aryabhata, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Mar  1 04:43:18 np0005634532 systemd[1]: Started libpod-conmon-8742d46742a350dbabf1e85aaa386f3dca7e565d7d6221299ed6b8c5f29138f1.scope.
Mar  1 04:43:18 np0005634532 podman[89877]: 2026-03-01 09:43:18.645702957 +0000 UTC m=+0.026231233 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Mar  1 04:43:18 np0005634532 systemd[1]: Started libcrun container.
Mar  1 04:43:18 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b44e20f1ce39603b034b144fcf81154e5da2f592670b65b850a51114312becd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Mar  1 04:43:18 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b44e20f1ce39603b034b144fcf81154e5da2f592670b65b850a51114312becd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Mar  1 04:43:18 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b44e20f1ce39603b034b144fcf81154e5da2f592670b65b850a51114312becd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Mar  1 04:43:18 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b44e20f1ce39603b034b144fcf81154e5da2f592670b65b850a51114312becd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Mar  1 04:43:18 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b44e20f1ce39603b034b144fcf81154e5da2f592670b65b850a51114312becd/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Mar  1 04:43:18 np0005634532 podman[89877]: 2026-03-01 09:43:18.800428654 +0000 UTC m=+0.180956880 container init 8742d46742a350dbabf1e85aaa386f3dca7e565d7d6221299ed6b8c5f29138f1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_aryabhata, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True)
Mar  1 04:43:18 np0005634532 podman[89877]: 2026-03-01 09:43:18.808062034 +0000 UTC m=+0.188590220 container start 8742d46742a350dbabf1e85aaa386f3dca7e565d7d6221299ed6b8c5f29138f1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_aryabhata, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Mar  1 04:43:18 np0005634532 podman[89877]: 2026-03-01 09:43:18.813600402 +0000 UTC m=+0.194128628 container attach 8742d46742a350dbabf1e85aaa386f3dca7e565d7d6221299ed6b8c5f29138f1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_aryabhata, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Mar  1 04:43:18 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v85: 131 pgs: 33 peering, 62 unknown, 36 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Mar  1 04:43:18 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"} v 0)
Mar  1 04:43:18 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]: dispatch
Mar  1 04:43:18 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "32"} v 0)
Mar  1 04:43:18 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Mar  1 04:43:18 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/ssl}] v 0)
Mar  1 04:43:19 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1021148069' entity='client.admin' 
Mar  1 04:43:19 np0005634532 systemd[1]: libpod-0ca6763adfba1b24fd964fd262c3afee0ace220e0b75c68e8145da0b3139f966.scope: Deactivated successfully.
Mar  1 04:43:19 np0005634532 podman[89848]: 2026-03-01 09:43:19.03354984 +0000 UTC m=+0.595239899 container died 0ca6763adfba1b24fd964fd262c3afee0ace220e0b75c68e8145da0b3139f966 (image=quay.io/ceph/ceph:v19, name=zealous_easley, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Mar  1 04:43:19 np0005634532 systemd[1]: var-lib-containers-storage-overlay-ea62e51823bfbea05224f237ad16914332cd6b01b30a67bdce75a66edddee178-merged.mount: Deactivated successfully.
Mar  1 04:43:19 np0005634532 wonderful_aryabhata[89896]: --> passed data devices: 0 physical, 1 LVM
Mar  1 04:43:19 np0005634532 wonderful_aryabhata[89896]: --> All data devices are unavailable
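[Annotation] The two wonderful_aryabhata lines above are ceph-volume reporting on the OSD spec evaluation: one LVM-backed candidate device was passed in and rejected as unavailable, which on a host whose OSDs are already up and in usually means the device has already been consumed by an existing OSD (an inference; the log does not state the reason). Device availability as cephadm sees it can be checked with:

    ceph orch device ls --wide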
Mar  1 04:43:19 np0005634532 podman[89848]: 2026-03-01 09:43:19.091144812 +0000 UTC m=+0.652834861 container remove 0ca6763adfba1b24fd964fd262c3afee0ace220e0b75c68e8145da0b3139f966 (image=quay.io/ceph/ceph:v19, name=zealous_easley, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_REF=squid)
Mar  1 04:43:19 np0005634532 systemd[1]: libpod-conmon-0ca6763adfba1b24fd964fd262c3afee0ace220e0b75c68e8145da0b3139f966.scope: Deactivated successfully.
Mar  1 04:43:19 np0005634532 systemd[1]: libpod-8742d46742a350dbabf1e85aaa386f3dca7e565d7d6221299ed6b8c5f29138f1.scope: Deactivated successfully.
Mar  1 04:43:19 np0005634532 podman[89877]: 2026-03-01 09:43:19.108131425 +0000 UTC m=+0.488659611 container died 8742d46742a350dbabf1e85aaa386f3dca7e565d7d6221299ed6b8c5f29138f1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_aryabhata, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Mar  1 04:43:19 np0005634532 systemd[1]: var-lib-containers-storage-overlay-2b44e20f1ce39603b034b144fcf81154e5da2f592670b65b850a51114312becd-merged.mount: Deactivated successfully.
Mar  1 04:43:19 np0005634532 podman[89877]: 2026-03-01 09:43:19.180060543 +0000 UTC m=+0.560588769 container remove 8742d46742a350dbabf1e85aaa386f3dca7e565d7d6221299ed6b8c5f29138f1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_aryabhata, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Mar  1 04:43:19 np0005634532 systemd[1]: libpod-conmon-8742d46742a350dbabf1e85aaa386f3dca7e565d7d6221299ed6b8c5f29138f1.scope: Deactivated successfully.
Mar  1 04:43:19 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e30 do_prune osdmap full prune enabled
Mar  1 04:43:19 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Mar  1 04:43:19 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "32"}]': finished
Mar  1 04:43:19 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e31 e31: 3 total, 3 up, 3 in
Mar  1 04:43:19 np0005634532 ceph-mon[75825]: log_channel(cluster) log [DBG] : osdmap e31: 3 total, 3 up, 3 in
Mar  1 04:43:19 np0005634532 ceph-osd[84309]: log_channel(cluster) log [DBG] : 4.1f scrub starts
Mar  1 04:43:19 np0005634532 ceph-mon[75825]: from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]: dispatch
Mar  1 04:43:19 np0005634532 ceph-mon[75825]: from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Mar  1 04:43:19 np0005634532 ceph-mon[75825]: from='client.? 192.168.122.100:0/1021148069' entity='client.admin' 
Mar  1 04:43:19 np0005634532 ceph-osd[84309]: log_channel(cluster) log [DBG] : 4.1f scrub ok
Mar  1 04:43:19 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 31 pg[6.0( empty local-lis/les=17/18 n=0 ec=17/17 lis/c=17/17 les/c/f=18/18/0 sis=31 pruub=14.876777649s) [0] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 active pruub 74.208984375s@ mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Mar  1 04:43:19 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 31 pg[6.0( empty local-lis/les=17/18 n=0 ec=17/17 lis/c=17/17 les/c/f=18/18/0 sis=31 pruub=14.876777649s) [0] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 unknown pruub 74.208984375s@ mbc={}] state<Start>: transitioning to Primary
Mar  1 04:43:19 np0005634532 python3[90039]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a -f 'name=ceph-?(.*)-mgr.*' --format \{\{\.Command\}\} --no-trunc#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Mar  1 04:43:19 np0005634532 podman[90072]: 2026-03-01 09:43:19.719331371 +0000 UTC m=+0.052375823 container create d16cdecd7e6417e69df8e9b6d59cfa95b9af20f4a612a5ddb3213b63a536e966 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_jones, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Mar  1 04:43:19 np0005634532 systemd[1]: Started libpod-conmon-d16cdecd7e6417e69df8e9b6d59cfa95b9af20f4a612a5ddb3213b63a536e966.scope.
Mar  1 04:43:19 np0005634532 systemd[1]: Started libcrun container.
Mar  1 04:43:19 np0005634532 podman[90072]: 2026-03-01 09:43:19.790326586 +0000 UTC m=+0.123371018 container init d16cdecd7e6417e69df8e9b6d59cfa95b9af20f4a612a5ddb3213b63a536e966 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_jones, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Mar  1 04:43:19 np0005634532 podman[90072]: 2026-03-01 09:43:19.695463018 +0000 UTC m=+0.028507460 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Mar  1 04:43:19 np0005634532 podman[90072]: 2026-03-01 09:43:19.796222213 +0000 UTC m=+0.129266665 container start d16cdecd7e6417e69df8e9b6d59cfa95b9af20f4a612a5ddb3213b63a536e966 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_jones, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Mar  1 04:43:19 np0005634532 keen_jones[90101]: 167 167
Mar  1 04:43:19 np0005634532 systemd[1]: libpod-d16cdecd7e6417e69df8e9b6d59cfa95b9af20f4a612a5ddb3213b63a536e966.scope: Deactivated successfully.
Mar  1 04:43:19 np0005634532 podman[90072]: 2026-03-01 09:43:19.80213112 +0000 UTC m=+0.135175562 container attach d16cdecd7e6417e69df8e9b6d59cfa95b9af20f4a612a5ddb3213b63a536e966 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_jones, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True)
Mar  1 04:43:19 np0005634532 podman[90072]: 2026-03-01 09:43:19.802506739 +0000 UTC m=+0.135551151 container died d16cdecd7e6417e69df8e9b6d59cfa95b9af20f4a612a5ddb3213b63a536e966 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_jones, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325)
Mar  1 04:43:19 np0005634532 systemd[1]: var-lib-containers-storage-overlay-56c0a3641d81bcf4c20e166dc3018c62c2d5596d201521dd1490f8924a26b836-merged.mount: Deactivated successfully.
Mar  1 04:43:19 np0005634532 podman[90072]: 2026-03-01 09:43:19.920982915 +0000 UTC m=+0.254027327 container remove d16cdecd7e6417e69df8e9b6d59cfa95b9af20f4a612a5ddb3213b63a536e966 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_jones, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Mar  1 04:43:19 np0005634532 systemd[1]: libpod-conmon-d16cdecd7e6417e69df8e9b6d59cfa95b9af20f4a612a5ddb3213b63a536e966.scope: Deactivated successfully.
Mar  1 04:43:20 np0005634532 podman[90153]: 2026-03-01 09:43:20.088451039 +0000 UTC m=+0.051039540 container create a5649d1b01a94146b29d22cdbd98c21458ff2cbfca2ecff31f1b9013d268ba59 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_bhaskara, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Mar  1 04:43:20 np0005634532 ceph-mgr[76134]: [progress INFO root] Writing back 11 completed events
Mar  1 04:43:20 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Mar  1 04:43:20 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:43:20 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e31 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Mar  1 04:43:20 np0005634532 systemd[1]: Started libpod-conmon-a5649d1b01a94146b29d22cdbd98c21458ff2cbfca2ecff31f1b9013d268ba59.scope.
Mar  1 04:43:20 np0005634532 python3[90147]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 437b1e74-f995-5d64-af1d-257ce01d77ab -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/dashboard/compute-0.ebwufc/server_addr 192.168.122.100#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Mar  1 04:43:20 np0005634532 podman[90153]: 2026-03-01 09:43:20.057637343 +0000 UTC m=+0.020225824 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Mar  1 04:43:20 np0005634532 systemd[1]: Started libcrun container.
Mar  1 04:43:20 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f6c299279204b170b0515f53754948fbcd7f9a5bb66d694cf53ebcac7977f801/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Mar  1 04:43:20 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f6c299279204b170b0515f53754948fbcd7f9a5bb66d694cf53ebcac7977f801/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Mar  1 04:43:20 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f6c299279204b170b0515f53754948fbcd7f9a5bb66d694cf53ebcac7977f801/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Mar  1 04:43:20 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f6c299279204b170b0515f53754948fbcd7f9a5bb66d694cf53ebcac7977f801/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Mar  1 04:43:20 np0005634532 podman[90153]: 2026-03-01 09:43:20.203278364 +0000 UTC m=+0.165866855 container init a5649d1b01a94146b29d22cdbd98c21458ff2cbfca2ecff31f1b9013d268ba59 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_bhaskara, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Mar  1 04:43:20 np0005634532 podman[90153]: 2026-03-01 09:43:20.208327789 +0000 UTC m=+0.170916250 container start a5649d1b01a94146b29d22cdbd98c21458ff2cbfca2ecff31f1b9013d268ba59 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_bhaskara, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Mar  1 04:43:20 np0005634532 podman[90171]: 2026-03-01 09:43:20.218722068 +0000 UTC m=+0.055910721 container create 4860e5044470255e2c99d0f81c955f17392c545ee6124c90850af9455e7b9fa0 (image=quay.io/ceph/ceph:v19, name=modest_poincare, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Mar  1 04:43:20 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e31 do_prune osdmap full prune enabled
Mar  1 04:43:20 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e32 e32: 3 total, 3 up, 3 in
Mar  1 04:43:20 np0005634532 ceph-osd[84309]: log_channel(cluster) log [DBG] : 4.11 scrub starts
Mar  1 04:43:20 np0005634532 podman[90153]: 2026-03-01 09:43:20.250255802 +0000 UTC m=+0.212844283 container attach a5649d1b01a94146b29d22cdbd98c21458ff2cbfca2ecff31f1b9013d268ba59 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_bhaskara, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True)
Mar  1 04:43:20 np0005634532 ceph-mon[75825]: log_channel(cluster) log [DBG] : osdmap e32: 3 total, 3 up, 3 in
Mar  1 04:43:20 np0005634532 ceph-osd[84309]: log_channel(cluster) log [DBG] : 4.11 scrub ok
Mar  1 04:43:20 np0005634532 systemd[1]: Started libpod-conmon-4860e5044470255e2c99d0f81c955f17392c545ee6124c90850af9455e7b9fa0.scope.
Mar  1 04:43:20 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 32 pg[6.1a( empty local-lis/les=17/18 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [0] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Mar  1 04:43:20 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 32 pg[6.1b( empty local-lis/les=17/18 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [0] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Mar  1 04:43:20 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 32 pg[6.18( empty local-lis/les=17/18 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [0] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Mar  1 04:43:20 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 32 pg[6.19( empty local-lis/les=17/18 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [0] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Mar  1 04:43:20 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 32 pg[6.1e( empty local-lis/les=17/18 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [0] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Mar  1 04:43:20 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 32 pg[6.1f( empty local-lis/les=17/18 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [0] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Mar  1 04:43:20 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 32 pg[6.c( empty local-lis/les=17/18 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [0] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Mar  1 04:43:20 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 32 pg[6.d( empty local-lis/les=17/18 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [0] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Mar  1 04:43:20 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 32 pg[6.6( empty local-lis/les=17/18 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [0] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Mar  1 04:43:20 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 32 pg[6.7( empty local-lis/les=17/18 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [0] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Mar  1 04:43:20 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 32 pg[6.4( empty local-lis/les=17/18 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [0] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Mar  1 04:43:20 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 32 pg[6.3( empty local-lis/les=17/18 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [0] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Mar  1 04:43:20 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 32 pg[6.2( empty local-lis/les=17/18 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [0] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Mar  1 04:43:20 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 32 pg[6.5( empty local-lis/les=17/18 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [0] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Mar  1 04:43:20 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 32 pg[6.f( empty local-lis/les=17/18 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [0] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Mar  1 04:43:20 np0005634532 ceph-mon[75825]: from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Mar  1 04:43:20 np0005634532 ceph-mon[75825]: from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "32"}]': finished
Mar  1 04:43:20 np0005634532 ceph-mon[75825]: from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:43:20 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 32 pg[6.e( empty local-lis/les=17/18 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [0] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Mar  1 04:43:20 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 32 pg[6.9( empty local-lis/les=17/18 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [0] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Mar  1 04:43:20 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 32 pg[6.8( empty local-lis/les=17/18 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [0] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Mar  1 04:43:20 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 32 pg[6.b( empty local-lis/les=17/18 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [0] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Mar  1 04:43:20 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 32 pg[6.a( empty local-lis/les=17/18 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [0] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Mar  1 04:43:20 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 32 pg[6.15( empty local-lis/les=17/18 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [0] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Mar  1 04:43:20 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 32 pg[6.14( empty local-lis/les=17/18 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [0] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Mar  1 04:43:20 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 32 pg[6.17( empty local-lis/les=17/18 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [0] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Mar  1 04:43:20 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 32 pg[6.16( empty local-lis/les=17/18 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [0] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Mar  1 04:43:20 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 32 pg[6.11( empty local-lis/les=17/18 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [0] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Mar  1 04:43:20 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 32 pg[6.10( empty local-lis/les=17/18 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [0] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Mar  1 04:43:20 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 32 pg[6.13( empty local-lis/les=17/18 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [0] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Mar  1 04:43:20 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 32 pg[6.12( empty local-lis/les=17/18 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [0] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Mar  1 04:43:20 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 32 pg[6.1d( empty local-lis/les=17/18 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [0] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Mar  1 04:43:20 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 32 pg[6.1c( empty local-lis/les=17/18 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [0] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Mar  1 04:43:20 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 32 pg[6.1( empty local-lis/les=17/18 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [0] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Mar  1 04:43:20 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 32 pg[6.1a( empty local-lis/les=31/32 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [0] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Mar  1 04:43:20 np0005634532 podman[90171]: 2026-03-01 09:43:20.185457441 +0000 UTC m=+0.022646064 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Mar  1 04:43:20 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 32 pg[6.1b( empty local-lis/les=31/32 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [0] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Mar  1 04:43:20 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 32 pg[6.18( empty local-lis/les=31/32 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [0] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Mar  1 04:43:20 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 32 pg[6.1e( empty local-lis/les=31/32 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [0] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Mar  1 04:43:20 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 32 pg[6.19( empty local-lis/les=31/32 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [0] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Mar  1 04:43:20 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 32 pg[6.1f( empty local-lis/les=31/32 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [0] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Mar  1 04:43:20 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 32 pg[6.c( empty local-lis/les=31/32 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [0] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Mar  1 04:43:20 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 32 pg[6.d( empty local-lis/les=31/32 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [0] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Mar  1 04:43:20 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 32 pg[6.7( empty local-lis/les=31/32 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [0] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Mar  1 04:43:20 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 32 pg[6.6( empty local-lis/les=31/32 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [0] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Mar  1 04:43:20 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 32 pg[6.4( empty local-lis/les=31/32 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [0] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Mar  1 04:43:20 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 32 pg[6.0( empty local-lis/les=31/32 n=0 ec=17/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [0] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Mar  1 04:43:20 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 32 pg[6.3( empty local-lis/les=31/32 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [0] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Mar  1 04:43:20 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 32 pg[6.5( empty local-lis/les=31/32 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [0] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Mar  1 04:43:20 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 32 pg[6.f( empty local-lis/les=31/32 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [0] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Mar  1 04:43:20 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 32 pg[6.e( empty local-lis/les=31/32 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [0] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Mar  1 04:43:20 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 32 pg[6.2( empty local-lis/les=31/32 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [0] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Mar  1 04:43:20 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 32 pg[6.8( empty local-lis/les=31/32 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [0] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Mar  1 04:43:20 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 32 pg[6.9( empty local-lis/les=31/32 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [0] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Mar  1 04:43:20 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 32 pg[6.b( empty local-lis/les=31/32 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [0] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Mar  1 04:43:20 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 32 pg[6.a( empty local-lis/les=31/32 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [0] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Mar  1 04:43:20 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 32 pg[6.14( empty local-lis/les=31/32 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [0] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Mar  1 04:43:20 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 32 pg[6.17( empty local-lis/les=31/32 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [0] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Mar  1 04:43:20 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 32 pg[6.16( empty local-lis/les=31/32 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [0] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Mar  1 04:43:20 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 32 pg[6.15( empty local-lis/les=31/32 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [0] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Mar  1 04:43:20 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 32 pg[6.11( empty local-lis/les=31/32 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [0] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Mar  1 04:43:20 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 32 pg[6.13( empty local-lis/les=31/32 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [0] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Mar  1 04:43:20 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 32 pg[6.12( empty local-lis/les=31/32 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [0] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Mar  1 04:43:20 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 32 pg[6.1d( empty local-lis/les=31/32 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [0] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Mar  1 04:43:20 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 32 pg[6.1c( empty local-lis/les=31/32 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [0] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Mar  1 04:43:20 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 32 pg[6.1( empty local-lis/les=31/32 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [0] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Mar  1 04:43:20 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 32 pg[6.10( empty local-lis/les=31/32 n=0 ec=31/17 lis/c=17/17 les/c/f=18/18/0 sis=31) [0] r=0 lpr=31 pi=[17,31)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Mar  1 04:43:20 np0005634532 systemd[1]: Started libcrun container.
Mar  1 04:43:20 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0768f53bd4e9d79cccc3a2d4555759b4aa75b86b57bdedd739d456e3cee3684/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Mar  1 04:43:20 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0768f53bd4e9d79cccc3a2d4555759b4aa75b86b57bdedd739d456e3cee3684/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Mar  1 04:43:20 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0768f53bd4e9d79cccc3a2d4555759b4aa75b86b57bdedd739d456e3cee3684/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Mar  1 04:43:20 np0005634532 podman[90171]: 2026-03-01 09:43:20.348126975 +0000 UTC m=+0.185315648 container init 4860e5044470255e2c99d0f81c955f17392c545ee6124c90850af9455e7b9fa0 (image=quay.io/ceph/ceph:v19, name=modest_poincare, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Mar  1 04:43:20 np0005634532 podman[90171]: 2026-03-01 09:43:20.352711599 +0000 UTC m=+0.189900222 container start 4860e5044470255e2c99d0f81c955f17392c545ee6124c90850af9455e7b9fa0 (image=quay.io/ceph/ceph:v19, name=modest_poincare, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Mar  1 04:43:20 np0005634532 podman[90171]: 2026-03-01 09:43:20.363089897 +0000 UTC m=+0.200278620 container attach 4860e5044470255e2c99d0f81c955f17392c545ee6124c90850af9455e7b9fa0 (image=quay.io/ceph/ceph:v19, name=modest_poincare, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default)
Mar  1 04:43:20 np0005634532 friendly_bhaskara[90170]: {
Mar  1 04:43:20 np0005634532 friendly_bhaskara[90170]:    "0": [
Mar  1 04:43:20 np0005634532 friendly_bhaskara[90170]:        {
Mar  1 04:43:20 np0005634532 friendly_bhaskara[90170]:            "devices": [
Mar  1 04:43:20 np0005634532 friendly_bhaskara[90170]:                "/dev/loop3"
Mar  1 04:43:20 np0005634532 friendly_bhaskara[90170]:            ],
Mar  1 04:43:20 np0005634532 friendly_bhaskara[90170]:            "lv_name": "ceph_lv0",
Mar  1 04:43:20 np0005634532 friendly_bhaskara[90170]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Mar  1 04:43:20 np0005634532 friendly_bhaskara[90170]:            "lv_size": "21470642176",
Mar  1 04:43:20 np0005634532 friendly_bhaskara[90170]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=0RN1y3-WDwf-vmRn-3Uec-9cdU-WGcX-8Z6LEG,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=437b1e74-f995-5d64-af1d-257ce01d77ab,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=e5da778e-73b7-4ea1-8a91-750fe3f6aa68,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Mar  1 04:43:20 np0005634532 friendly_bhaskara[90170]:            "lv_uuid": "0RN1y3-WDwf-vmRn-3Uec-9cdU-WGcX-8Z6LEG",
Mar  1 04:43:20 np0005634532 friendly_bhaskara[90170]:            "name": "ceph_lv0",
Mar  1 04:43:20 np0005634532 friendly_bhaskara[90170]:            "path": "/dev/ceph_vg0/ceph_lv0",
Mar  1 04:43:20 np0005634532 friendly_bhaskara[90170]:            "tags": {
Mar  1 04:43:20 np0005634532 friendly_bhaskara[90170]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Mar  1 04:43:20 np0005634532 friendly_bhaskara[90170]:                "ceph.block_uuid": "0RN1y3-WDwf-vmRn-3Uec-9cdU-WGcX-8Z6LEG",
Mar  1 04:43:20 np0005634532 friendly_bhaskara[90170]:                "ceph.cephx_lockbox_secret": "",
Mar  1 04:43:20 np0005634532 friendly_bhaskara[90170]:                "ceph.cluster_fsid": "437b1e74-f995-5d64-af1d-257ce01d77ab",
Mar  1 04:43:20 np0005634532 friendly_bhaskara[90170]:                "ceph.cluster_name": "ceph",
Mar  1 04:43:20 np0005634532 friendly_bhaskara[90170]:                "ceph.crush_device_class": "",
Mar  1 04:43:20 np0005634532 friendly_bhaskara[90170]:                "ceph.encrypted": "0",
Mar  1 04:43:20 np0005634532 friendly_bhaskara[90170]:                "ceph.osd_fsid": "e5da778e-73b7-4ea1-8a91-750fe3f6aa68",
Mar  1 04:43:20 np0005634532 friendly_bhaskara[90170]:                "ceph.osd_id": "0",
Mar  1 04:43:20 np0005634532 friendly_bhaskara[90170]:                "ceph.osdspec_affinity": "default_drive_group",
Mar  1 04:43:20 np0005634532 friendly_bhaskara[90170]:                "ceph.type": "block",
Mar  1 04:43:20 np0005634532 friendly_bhaskara[90170]:                "ceph.vdo": "0",
Mar  1 04:43:20 np0005634532 friendly_bhaskara[90170]:                "ceph.with_tpm": "0"
Mar  1 04:43:20 np0005634532 friendly_bhaskara[90170]:            },
Mar  1 04:43:20 np0005634532 friendly_bhaskara[90170]:            "type": "block",
Mar  1 04:43:20 np0005634532 friendly_bhaskara[90170]:            "vg_name": "ceph_vg0"
Mar  1 04:43:20 np0005634532 friendly_bhaskara[90170]:        }
Mar  1 04:43:20 np0005634532 friendly_bhaskara[90170]:    ]
Mar  1 04:43:20 np0005634532 friendly_bhaskara[90170]: }
Mar  1 04:43:20 np0005634532 systemd[1]: libpod-a5649d1b01a94146b29d22cdbd98c21458ff2cbfca2ecff31f1b9013d268ba59.scope: Deactivated successfully.
Mar  1 04:43:20 np0005634532 podman[90218]: 2026-03-01 09:43:20.580936283 +0000 UTC m=+0.030051768 container died a5649d1b01a94146b29d22cdbd98c21458ff2cbfca2ecff31f1b9013d268ba59 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_bhaskara, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Mar  1 04:43:20 np0005634532 systemd[1]: var-lib-containers-storage-overlay-f6c299279204b170b0515f53754948fbcd7f9a5bb66d694cf53ebcac7977f801-merged.mount: Deactivated successfully.
Mar  1 04:43:20 np0005634532 podman[90218]: 2026-03-01 09:43:20.678435948 +0000 UTC m=+0.127551433 container remove a5649d1b01a94146b29d22cdbd98c21458ff2cbfca2ecff31f1b9013d268ba59 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_bhaskara, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Mar  1 04:43:20 np0005634532 systemd[1]: libpod-conmon-a5649d1b01a94146b29d22cdbd98c21458ff2cbfca2ecff31f1b9013d268ba59.scope: Deactivated successfully.
Mar  1 04:43:20 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/compute-0.ebwufc/server_addr}] v 0)
Mar  1 04:43:20 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1386695648' entity='client.admin' 
Mar  1 04:43:20 np0005634532 systemd[1]: libpod-4860e5044470255e2c99d0f81c955f17392c545ee6124c90850af9455e7b9fa0.scope: Deactivated successfully.
Mar  1 04:43:20 np0005634532 podman[90171]: 2026-03-01 09:43:20.820543071 +0000 UTC m=+0.657731724 container died 4860e5044470255e2c99d0f81c955f17392c545ee6124c90850af9455e7b9fa0 (image=quay.io/ceph/ceph:v19, name=modest_poincare, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Mar  1 04:43:20 np0005634532 systemd[1]: var-lib-containers-storage-overlay-d0768f53bd4e9d79cccc3a2d4555759b4aa75b86b57bdedd739d456e3cee3684-merged.mount: Deactivated successfully.
Mar  1 04:43:20 np0005634532 podman[90171]: 2026-03-01 09:43:20.894145751 +0000 UTC m=+0.731334364 container remove 4860e5044470255e2c99d0f81c955f17392c545ee6124c90850af9455e7b9fa0 (image=quay.io/ceph/ceph:v19, name=modest_poincare, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Mar  1 04:43:20 np0005634532 systemd[1]: libpod-conmon-4860e5044470255e2c99d0f81c955f17392c545ee6124c90850af9455e7b9fa0.scope: Deactivated successfully.
Mar  1 04:43:20 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v88: 193 pgs: 33 peering, 124 unknown, 36 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Mar  1 04:43:21 np0005634532 ceph-osd[84309]: log_channel(cluster) log [DBG] : 4.10 scrub starts
Mar  1 04:43:21 np0005634532 ceph-osd[84309]: log_channel(cluster) log [DBG] : 4.10 scrub ok
Mar  1 04:43:21 np0005634532 ceph-mon[75825]: from='client.? 192.168.122.100:0/1386695648' entity='client.admin' 
Mar  1 04:43:21 np0005634532 podman[90336]: 2026-03-01 09:43:21.326378627 +0000 UTC m=+0.121707797 container create f602c35eb83d72a18534189f3b5b6014db30e22adaa1233f16804dcd8c5bddcd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_pascal, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Mar  1 04:43:21 np0005634532 podman[90336]: 2026-03-01 09:43:21.245284461 +0000 UTC m=+0.040613711 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Mar  1 04:43:21 np0005634532 systemd[1]: Started libpod-conmon-f602c35eb83d72a18534189f3b5b6014db30e22adaa1233f16804dcd8c5bddcd.scope.
Mar  1 04:43:21 np0005634532 systemd[1]: Started libcrun container.
Mar  1 04:43:21 np0005634532 podman[90336]: 2026-03-01 09:43:21.391800144 +0000 UTC m=+0.187129314 container init f602c35eb83d72a18534189f3b5b6014db30e22adaa1233f16804dcd8c5bddcd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_pascal, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Mar  1 04:43:21 np0005634532 podman[90336]: 2026-03-01 09:43:21.40008552 +0000 UTC m=+0.195414670 container start f602c35eb83d72a18534189f3b5b6014db30e22adaa1233f16804dcd8c5bddcd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_pascal, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True)
Mar  1 04:43:21 np0005634532 podman[90336]: 2026-03-01 09:43:21.404410788 +0000 UTC m=+0.199739938 container attach f602c35eb83d72a18534189f3b5b6014db30e22adaa1233f16804dcd8c5bddcd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_pascal, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2)
Mar  1 04:43:21 np0005634532 gifted_pascal[90352]: 167 167
Mar  1 04:43:21 np0005634532 systemd[1]: libpod-f602c35eb83d72a18534189f3b5b6014db30e22adaa1233f16804dcd8c5bddcd.scope: Deactivated successfully.
Mar  1 04:43:21 np0005634532 podman[90336]: 2026-03-01 09:43:21.40612939 +0000 UTC m=+0.201458550 container died f602c35eb83d72a18534189f3b5b6014db30e22adaa1233f16804dcd8c5bddcd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_pascal, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Mar  1 04:43:21 np0005634532 systemd[1]: var-lib-containers-storage-overlay-79782a2ca8d86605927beb0b6f0fc397e4a3db61e3f87005bf46413aaedc7b73-merged.mount: Deactivated successfully.
Mar  1 04:43:21 np0005634532 podman[90336]: 2026-03-01 09:43:21.447098969 +0000 UTC m=+0.242428129 container remove f602c35eb83d72a18534189f3b5b6014db30e22adaa1233f16804dcd8c5bddcd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_pascal, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Mar  1 04:43:21 np0005634532 systemd[1]: libpod-conmon-f602c35eb83d72a18534189f3b5b6014db30e22adaa1233f16804dcd8c5bddcd.scope: Deactivated successfully.
Mar  1 04:43:21 np0005634532 podman[90375]: 2026-03-01 09:43:21.604166864 +0000 UTC m=+0.064092244 container create a45d314cc031f14458ee095313230f0746b27141d182edea4f3d5c28a6a2b423 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_heisenberg, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Mar  1 04:43:21 np0005634532 systemd[1]: Started libpod-conmon-a45d314cc031f14458ee095313230f0746b27141d182edea4f3d5c28a6a2b423.scope.
Mar  1 04:43:21 np0005634532 podman[90375]: 2026-03-01 09:43:21.576782863 +0000 UTC m=+0.036708293 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Mar  1 04:43:21 np0005634532 systemd[1]: Started libcrun container.
Mar  1 04:43:21 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62af6b36894aae4dfa25e3d396e6499cefd628ba4ff89171f203bf548ad8dba0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Mar  1 04:43:21 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62af6b36894aae4dfa25e3d396e6499cefd628ba4ff89171f203bf548ad8dba0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Mar  1 04:43:21 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62af6b36894aae4dfa25e3d396e6499cefd628ba4ff89171f203bf548ad8dba0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Mar  1 04:43:21 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62af6b36894aae4dfa25e3d396e6499cefd628ba4ff89171f203bf548ad8dba0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
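[annotation] The xfs remount notices above are the kernel flagging that these bind mounts inside the container rootfs use 32-bit inode timestamps: 0x7fffffff seconds past the Unix epoch is the classic year-2038 time_t ceiling. The message is informational and repeats for every mount podman sets up. A one-line check of the date (plain Python, nothing from this host assumed):

    # 0x7fffffff seconds after 1970-01-01T00:00:00Z is the 32-bit time_t limit.
    from datetime import datetime, timezone
    print(datetime.fromtimestamp(0x7FFFFFFF, tz=timezone.utc).isoformat())
    # -> 2038-01-19T03:14:07+00:00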
Mar  1 04:43:21 np0005634532 podman[90375]: 2026-03-01 09:43:21.701636078 +0000 UTC m=+0.161561488 container init a45d314cc031f14458ee095313230f0746b27141d182edea4f3d5c28a6a2b423 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_heisenberg, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Mar  1 04:43:21 np0005634532 podman[90375]: 2026-03-01 09:43:21.709337329 +0000 UTC m=+0.169262689 container start a45d314cc031f14458ee095313230f0746b27141d182edea4f3d5c28a6a2b423 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_heisenberg, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Mar  1 04:43:21 np0005634532 podman[90375]: 2026-03-01 09:43:21.713874582 +0000 UTC m=+0.173800072 container attach a45d314cc031f14458ee095313230f0746b27141d182edea4f3d5c28a6a2b423 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_heisenberg, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Mar  1 04:43:21 np0005634532 python3[90419]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 437b1e74-f995-5d64-af1d-257ce01d77ab -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/dashboard/compute-1.uyojxx/server_addr 192.168.122.101#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
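[annotation] The ansible-ansible.legacy.command record above carries the full podman invocation, with the trailing newline syslog-escaped as #012. A minimal sketch of the same call from Python, assuming only that podman is on PATH; the volumes, fsid, image tag and config key are copied verbatim from the log line:

    import subprocess

    # Mirror of the logged 'podman run ... --entrypoint ceph ... config set' call.
    subprocess.run([
        "podman", "run", "--rm", "--net=host", "--ipc=host", "--interactive",
        "--volume", "/etc/ceph:/etc/ceph:z",
        "--volume", "/home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z",
        "--volume", "/home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z",
        "--entrypoint", "ceph", "quay.io/ceph/ceph:v19",
        "--fsid", "437b1e74-f995-5d64-af1d-257ce01d77ab",
        "-c", "/etc/ceph/ceph.conf",
        "-k", "/etc/ceph/ceph.client.admin.keyring",
        "config", "set", "mgr",
        "mgr/dashboard/compute-1.uyojxx/server_addr", "192.168.122.101",
    ], check=True)

The unruffled_visvesvaraya container whose lifecycle follows (podman[90423], image quay.io/ceph/ceph:v19) appears to be this very call, and the matching mon-side handle_command for mgr/dashboard/compute-1.uyojxx/server_addr shows up a second later.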
Mar  1 04:43:21 np0005634532 podman[90423]: 2026-03-01 09:43:21.863205205 +0000 UTC m=+0.044534079 container create 2161abbffb293464c599b1f652da77268d3919d429294078a2bc66246f6823df (image=quay.io/ceph/ceph:v19, name=unruffled_visvesvaraya, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True)
Mar  1 04:43:21 np0005634532 systemd[1]: Started libpod-conmon-2161abbffb293464c599b1f652da77268d3919d429294078a2bc66246f6823df.scope.
Mar  1 04:43:21 np0005634532 systemd[1]: Started libcrun container.
Mar  1 04:43:21 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f4e69eeab9189c8ef7eb244973c792835729eb34f3560f36370e0694e538986/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Mar  1 04:43:21 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f4e69eeab9189c8ef7eb244973c792835729eb34f3560f36370e0694e538986/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Mar  1 04:43:21 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9f4e69eeab9189c8ef7eb244973c792835729eb34f3560f36370e0694e538986/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Mar  1 04:43:21 np0005634532 podman[90423]: 2026-03-01 09:43:21.933272327 +0000 UTC m=+0.114601241 container init 2161abbffb293464c599b1f652da77268d3919d429294078a2bc66246f6823df (image=quay.io/ceph/ceph:v19, name=unruffled_visvesvaraya, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True)
Mar  1 04:43:21 np0005634532 podman[90423]: 2026-03-01 09:43:21.84288922 +0000 UTC m=+0.024218144 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Mar  1 04:43:21 np0005634532 podman[90423]: 2026-03-01 09:43:21.940892136 +0000 UTC m=+0.122220990 container start 2161abbffb293464c599b1f652da77268d3919d429294078a2bc66246f6823df (image=quay.io/ceph/ceph:v19, name=unruffled_visvesvaraya, CEPH_REF=squid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Mar  1 04:43:21 np0005634532 podman[90423]: 2026-03-01 09:43:21.944949527 +0000 UTC m=+0.126278431 container attach 2161abbffb293464c599b1f652da77268d3919d429294078a2bc66246f6823df (image=quay.io/ceph/ceph:v19, name=unruffled_visvesvaraya, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Mar  1 04:43:22 np0005634532 ceph-osd[84309]: log_channel(cluster) log [DBG] : 4.12 scrub starts
Mar  1 04:43:22 np0005634532 ceph-osd[84309]: log_channel(cluster) log [DBG] : 4.12 scrub ok
Mar  1 04:43:22 np0005634532 lvm[90530]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Mar  1 04:43:22 np0005634532 lvm[90530]: VG ceph_vg0 finished
Mar  1 04:43:22 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/compute-1.uyojxx/server_addr}] v 0)
Mar  1 04:43:22 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3052871174' entity='client.admin' 
Mar  1 04:43:22 np0005634532 systemd[1]: libpod-2161abbffb293464c599b1f652da77268d3919d429294078a2bc66246f6823df.scope: Deactivated successfully.
Mar  1 04:43:22 np0005634532 ceph-mon[75825]: from='client.? 192.168.122.100:0/3052871174' entity='client.admin' 
Mar  1 04:43:22 np0005634532 reverent_heisenberg[90417]: {}
Mar  1 04:43:22 np0005634532 systemd[1]: libpod-a45d314cc031f14458ee095313230f0746b27141d182edea4f3d5c28a6a2b423.scope: Deactivated successfully.
Mar  1 04:43:22 np0005634532 podman[90375]: 2026-03-01 09:43:22.368504147 +0000 UTC m=+0.828429507 container died a45d314cc031f14458ee095313230f0746b27141d182edea4f3d5c28a6a2b423 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_heisenberg, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True)
Mar  1 04:43:22 np0005634532 podman[90535]: 2026-03-01 09:43:22.368929858 +0000 UTC m=+0.032720245 container died 2161abbffb293464c599b1f652da77268d3919d429294078a2bc66246f6823df (image=quay.io/ceph/ceph:v19, name=unruffled_visvesvaraya, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Mar  1 04:43:22 np0005634532 systemd[1]: var-lib-containers-storage-overlay-9f4e69eeab9189c8ef7eb244973c792835729eb34f3560f36370e0694e538986-merged.mount: Deactivated successfully.
Mar  1 04:43:22 np0005634532 systemd[1]: var-lib-containers-storage-overlay-62af6b36894aae4dfa25e3d396e6499cefd628ba4ff89171f203bf548ad8dba0-merged.mount: Deactivated successfully.
Mar  1 04:43:22 np0005634532 podman[90375]: 2026-03-01 09:43:22.413948087 +0000 UTC m=+0.873873437 container remove a45d314cc031f14458ee095313230f0746b27141d182edea4f3d5c28a6a2b423 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_heisenberg, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Mar  1 04:43:22 np0005634532 systemd[1]: libpod-conmon-a45d314cc031f14458ee095313230f0746b27141d182edea4f3d5c28a6a2b423.scope: Deactivated successfully.
Mar  1 04:43:22 np0005634532 podman[90535]: 2026-03-01 09:43:22.442915947 +0000 UTC m=+0.106706344 container remove 2161abbffb293464c599b1f652da77268d3919d429294078a2bc66246f6823df (image=quay.io/ceph/ceph:v19, name=unruffled_visvesvaraya, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1)
Mar  1 04:43:22 np0005634532 systemd[1]: libpod-conmon-2161abbffb293464c599b1f652da77268d3919d429294078a2bc66246f6823df.scope: Deactivated successfully.
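[annotation] Each short-lived ceph container above leaves the same journal fingerprint: image pull, create, init, start, attach, then died and remove once the entrypoint exits, with systemd tearing down the libpod-*.scope and libpod-conmon-*.scope units around it. Note that journal position is not event order: the image pull record for podman[90423] carries an earlier embedded offset (m=+0.024) than the init record it appears after, so sort on the embedded timestamps, not on line order. A minimal sketch for grouping these events per container, assuming only the "container <event> <64-hex-id>" wording used in the lines above:

    import re
    from collections import defaultdict

    # podman journal lines embed "container <event> <id> (image=..., name=...)".
    EVENT_RE = re.compile(r"container (\w+) ([0-9a-f]{64})")

    def lifecycles(lines):
        events = defaultdict(list)
        for line in lines:
            m = EVENT_RE.search(line)
            if m:
                event, cid = m.groups()
                events[cid[:12]].append(event)
        return dict(events)

    # e.g. {'2161abbffb29': ['create', 'init', 'start', 'attach', 'died', 'remove']}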
Mar  1 04:43:22 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Mar  1 04:43:22 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:43:22 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Mar  1 04:43:22 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:43:22 np0005634532 ceph-mgr[76134]: [progress INFO root] update: starting ev 8ff27a68-3bb4-4da6-8b2b-2fb7d83d6f37 (Updating rgw.rgw deployment (+3 -> 3))
Mar  1 04:43:22 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.zizzzn", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]} v 0)
Mar  1 04:43:22 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.zizzzn", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Mar  1 04:43:22 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.zizzzn", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Mar  1 04:43:22 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=rgw_frontends}] v 0)
Mar  1 04:43:22 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:43:22 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Mar  1 04:43:22 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Mar  1 04:43:22 np0005634532 ceph-mgr[76134]: [cephadm INFO cephadm.serve] Deploying daemon rgw.rgw.compute-2.zizzzn on compute-2
Mar  1 04:43:22 np0005634532 ceph-mgr[76134]: log_channel(cephadm) log [INF] : Deploying daemon rgw.rgw.compute-2.zizzzn on compute-2
Mar  1 04:43:22 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v89: 193 pgs: 33 peering, 31 unknown, 129 active+clean; 449 KiB data, 480 MiB used, 60 GiB / 60 GiB avail
Mar  1 04:43:23 np0005634532 ceph-osd[84309]: log_channel(cluster) log [DBG] : 4.1e scrub starts
Mar  1 04:43:23 np0005634532 ceph-osd[84309]: log_channel(cluster) log [DBG] : 4.1e scrub ok
Mar  1 04:43:23 np0005634532 ceph-mon[75825]: from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:43:23 np0005634532 ceph-mon[75825]: from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:43:23 np0005634532 ceph-mon[75825]: from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.zizzzn", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Mar  1 04:43:23 np0005634532 ceph-mon[75825]: from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-2.zizzzn", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Mar  1 04:43:23 np0005634532 ceph-mon[75825]: from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:43:23 np0005634532 ceph-mon[75825]: Deploying daemon rgw.rgw.compute-2.zizzzn on compute-2
Mar  1 04:43:23 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Mar  1 04:43:23 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:43:23 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Mar  1 04:43:23 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:43:23 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Mar  1 04:43:23 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:43:23 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.wbcorv", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]} v 0)
Mar  1 04:43:23 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.wbcorv", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Mar  1 04:43:24 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.wbcorv", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Mar  1 04:43:24 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=rgw_frontends}] v 0)
Mar  1 04:43:24 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:43:24 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Mar  1 04:43:24 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Mar  1 04:43:24 np0005634532 ceph-mgr[76134]: [cephadm INFO cephadm.serve] Deploying daemon rgw.rgw.compute-1.wbcorv on compute-1
Mar  1 04:43:24 np0005634532 ceph-mgr[76134]: log_channel(cephadm) log [INF] : Deploying daemon rgw.rgw.compute-1.wbcorv on compute-1
Mar  1 04:43:24 np0005634532 python3[90588]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 437b1e74-f995-5d64-af1d-257ce01d77ab -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/dashboard/compute-2.dikzlj/server_addr 192.168.122.102#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Mar  1 04:43:24 np0005634532 ceph-osd[84309]: log_channel(cluster) log [DBG] : 4.13 scrub starts
Mar  1 04:43:24 np0005634532 ceph-osd[84309]: log_channel(cluster) log [DBG] : 4.13 scrub ok
Mar  1 04:43:24 np0005634532 podman[90589]: 2026-03-01 09:43:24.37115416 +0000 UTC m=+0.051085641 container create 6ab69aad7bd431e26c92940ae98b80fe1b7e625b09dec3cd9024d7ca73c15708 (image=quay.io/ceph/ceph:v19, name=tender_vaughan, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Mar  1 04:43:24 np0005634532 systemd[1]: Started libpod-conmon-6ab69aad7bd431e26c92940ae98b80fe1b7e625b09dec3cd9024d7ca73c15708.scope.
Mar  1 04:43:24 np0005634532 podman[90589]: 2026-03-01 09:43:24.354515996 +0000 UTC m=+0.034447457 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Mar  1 04:43:24 np0005634532 systemd[1]: Started libcrun container.
Mar  1 04:43:24 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97d040efe648b714cf9498353dc6ac075046f6f3aa060ff5967f205661a9d41e/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Mar  1 04:43:24 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97d040efe648b714cf9498353dc6ac075046f6f3aa060ff5967f205661a9d41e/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Mar  1 04:43:24 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97d040efe648b714cf9498353dc6ac075046f6f3aa060ff5967f205661a9d41e/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Mar  1 04:43:24 np0005634532 podman[90589]: 2026-03-01 09:43:24.47653678 +0000 UTC m=+0.156468301 container init 6ab69aad7bd431e26c92940ae98b80fe1b7e625b09dec3cd9024d7ca73c15708 (image=quay.io/ceph/ceph:v19, name=tender_vaughan, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Mar  1 04:43:24 np0005634532 podman[90589]: 2026-03-01 09:43:24.484608371 +0000 UTC m=+0.164539852 container start 6ab69aad7bd431e26c92940ae98b80fe1b7e625b09dec3cd9024d7ca73c15708 (image=quay.io/ceph/ceph:v19, name=tender_vaughan, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid)
Mar  1 04:43:24 np0005634532 podman[90589]: 2026-03-01 09:43:24.488840516 +0000 UTC m=+0.168772047 container attach 6ab69aad7bd431e26c92940ae98b80fe1b7e625b09dec3cd9024d7ca73c15708 (image=quay.io/ceph/ceph:v19, name=tender_vaughan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Mar  1 04:43:24 np0005634532 ceph-mon[75825]: from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:43:24 np0005634532 ceph-mon[75825]: from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:43:24 np0005634532 ceph-mon[75825]: from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:43:24 np0005634532 ceph-mon[75825]: from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.wbcorv", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Mar  1 04:43:24 np0005634532 ceph-mon[75825]: from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-1.wbcorv", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Mar  1 04:43:24 np0005634532 ceph-mon[75825]: from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:43:24 np0005634532 ceph-mon[75825]: Deploying daemon rgw.rgw.compute-1.wbcorv on compute-1
Mar  1 04:43:24 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/compute-2.dikzlj/server_addr}] v 0)
Mar  1 04:43:24 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1614078276' entity='client.admin' 
Mar  1 04:43:24 np0005634532 systemd[1]: libpod-6ab69aad7bd431e26c92940ae98b80fe1b7e625b09dec3cd9024d7ca73c15708.scope: Deactivated successfully.
Mar  1 04:43:24 np0005634532 podman[90589]: 2026-03-01 09:43:24.892232315 +0000 UTC m=+0.572163766 container died 6ab69aad7bd431e26c92940ae98b80fe1b7e625b09dec3cd9024d7ca73c15708 (image=quay.io/ceph/ceph:v19, name=tender_vaughan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Mar  1 04:43:24 np0005634532 systemd[1]: var-lib-containers-storage-overlay-97d040efe648b714cf9498353dc6ac075046f6f3aa060ff5967f205661a9d41e-merged.mount: Deactivated successfully.
Mar  1 04:43:24 np0005634532 podman[90589]: 2026-03-01 09:43:24.937697386 +0000 UTC m=+0.617628827 container remove 6ab69aad7bd431e26c92940ae98b80fe1b7e625b09dec3cd9024d7ca73c15708 (image=quay.io/ceph/ceph:v19, name=tender_vaughan, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Mar  1 04:43:24 np0005634532 systemd[1]: libpod-conmon-6ab69aad7bd431e26c92940ae98b80fe1b7e625b09dec3cd9024d7ca73c15708.scope: Deactivated successfully.
Mar  1 04:43:24 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v90: 193 pgs: 193 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Mar  1 04:43:24 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"} v 0)
Mar  1 04:43:24 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]: dispatch
Mar  1 04:43:24 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"} v 0)
Mar  1 04:43:24 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]: dispatch
Mar  1 04:43:24 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "32"} v 0)
Mar  1 04:43:24 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Mar  1 04:43:24 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"} v 0)
Mar  1 04:43:24 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]: dispatch
Mar  1 04:43:24 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"} v 0)
Mar  1 04:43:24 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]: dispatch
Mar  1 04:43:24 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"} v 0)
Mar  1 04:43:24 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]: dispatch
Mar  1 04:43:24 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e32 do_prune osdmap full prune enabled
Mar  1 04:43:24 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Mar  1 04:43:24 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Mar  1 04:43:24 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Mar  1 04:43:24 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Mar  1 04:43:24 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Mar  1 04:43:24 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Mar  1 04:43:24 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e33 e33: 3 total, 3 up, 3 in
Mar  1 04:43:24 np0005634532 ceph-mon[75825]: log_channel(cluster) log [DBG] : osdmap e33: 3 total, 3 up, 3 in
Mar  1 04:43:24 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 33 pg[8.0( empty local-lis/les=0/0 n=0 ec=33/33 lis/c=0/0 les/c/f=0/0/0 sis=33) [0] r=0 lpr=33 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Mar  1 04:43:24 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 33 pg[4.18( empty local-lis/les=29/30 n=0 ec=29/14 lis/c=29/29 les/c/f=30/30/0 sis=33 pruub=9.229442596s) [1] r=-1 lpr=33 pi=[29,33)/1 crt=0'0 mlcod 0'0 active pruub 73.967750549s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Mar  1 04:43:24 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 33 pg[4.18( empty local-lis/les=29/30 n=0 ec=29/14 lis/c=29/29 les/c/f=30/30/0 sis=33 pruub=9.229403496s) [1] r=-1 lpr=33 pi=[29,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.967750549s@ mbc={}] state<Start>: transitioning to Stray
Mar  1 04:43:24 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 33 pg[6.1b( empty local-lis/les=31/32 n=0 ec=31/17 lis/c=31/31 les/c/f=32/32/0 sis=33 pruub=11.297338486s) [2] r=-1 lpr=33 pi=[31,33)/1 crt=0'0 mlcod 0'0 active pruub 76.035896301s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Mar  1 04:43:24 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 33 pg[4.19( empty local-lis/les=29/30 n=0 ec=29/14 lis/c=29/29 les/c/f=30/30/0 sis=33 pruub=9.229162216s) [2] r=-1 lpr=33 pi=[29,33)/1 crt=0'0 mlcod 0'0 active pruub 73.967712402s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Mar  1 04:43:24 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 33 pg[4.1a( empty local-lis/les=29/30 n=0 ec=29/14 lis/c=29/29 les/c/f=30/30/0 sis=33 pruub=9.229204178s) [1] r=-1 lpr=33 pi=[29,33)/1 crt=0'0 mlcod 0'0 active pruub 73.967712402s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Mar  1 04:43:24 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 33 pg[6.1b( empty local-lis/les=31/32 n=0 ec=31/17 lis/c=31/31 les/c/f=32/32/0 sis=33 pruub=11.297307968s) [2] r=-1 lpr=33 pi=[31,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 76.035896301s@ mbc={}] state<Start>: transitioning to Stray
Mar  1 04:43:24 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 33 pg[4.19( empty local-lis/les=29/30 n=0 ec=29/14 lis/c=29/29 les/c/f=30/30/0 sis=33 pruub=9.229085922s) [2] r=-1 lpr=33 pi=[29,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.967712402s@ mbc={}] state<Start>: transitioning to Stray
Mar  1 04:43:24 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 33 pg[4.1a( empty local-lis/les=29/30 n=0 ec=29/14 lis/c=29/29 les/c/f=30/30/0 sis=33 pruub=9.229099274s) [1] r=-1 lpr=33 pi=[29,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.967712402s@ mbc={}] state<Start>: transitioning to Stray
Mar  1 04:43:24 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 33 pg[4.1b( empty local-lis/les=29/30 n=0 ec=29/14 lis/c=29/29 les/c/f=30/30/0 sis=33 pruub=9.228911400s) [1] r=-1 lpr=33 pi=[29,33)/1 crt=0'0 mlcod 0'0 active pruub 73.967704773s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Mar  1 04:43:24 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 33 pg[4.1b( empty local-lis/les=29/30 n=0 ec=29/14 lis/c=29/29 les/c/f=30/30/0 sis=33 pruub=9.228888512s) [1] r=-1 lpr=33 pi=[29,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.967704773s@ mbc={}] state<Start>: transitioning to Stray
Mar  1 04:43:24 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 33 pg[6.19( empty local-lis/les=31/32 n=0 ec=31/17 lis/c=31/31 les/c/f=32/32/0 sis=33 pruub=11.297141075s) [1] r=-1 lpr=33 pi=[31,33)/1 crt=0'0 mlcod 0'0 active pruub 76.036048889s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Mar  1 04:43:24 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 33 pg[6.19( empty local-lis/les=31/32 n=0 ec=31/17 lis/c=31/31 les/c/f=32/32/0 sis=33 pruub=11.297118187s) [1] r=-1 lpr=33 pi=[31,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 76.036048889s@ mbc={}] state<Start>: transitioning to Stray
Mar  1 04:43:24 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 33 pg[4.1c( empty local-lis/les=29/30 n=0 ec=29/14 lis/c=29/29 les/c/f=30/30/0 sis=33 pruub=9.228519440s) [2] r=-1 lpr=33 pi=[29,33)/1 crt=0'0 mlcod 0'0 active pruub 73.967483521s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Mar  1 04:43:24 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 33 pg[4.1c( empty local-lis/les=29/30 n=0 ec=29/14 lis/c=29/29 les/c/f=30/30/0 sis=33 pruub=9.228505135s) [2] r=-1 lpr=33 pi=[29,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.967483521s@ mbc={}] state<Start>: transitioning to Stray
Mar  1 04:43:24 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 33 pg[6.1a( empty local-lis/les=31/32 n=0 ec=31/17 lis/c=31/31 les/c/f=32/32/0 sis=33 pruub=11.289897919s) [1] r=-1 lpr=33 pi=[31,33)/1 crt=0'0 mlcod 0'0 active pruub 76.028984070s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Mar  1 04:43:24 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 33 pg[6.1a( empty local-lis/les=31/32 n=0 ec=31/17 lis/c=31/31 les/c/f=32/32/0 sis=33 pruub=11.289880753s) [1] r=-1 lpr=33 pi=[31,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 76.028984070s@ mbc={}] state<Start>: transitioning to Stray
Mar  1 04:43:24 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 33 pg[4.e( empty local-lis/les=29/30 n=0 ec=29/14 lis/c=29/29 les/c/f=30/30/0 sis=33 pruub=9.227990150s) [1] r=-1 lpr=33 pi=[29,33)/1 crt=0'0 mlcod 0'0 active pruub 73.967483521s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Mar  1 04:43:24 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 33 pg[4.1d( empty local-lis/les=29/30 n=0 ec=29/14 lis/c=29/29 les/c/f=30/30/0 sis=33 pruub=9.227968216s) [2] r=-1 lpr=33 pi=[29,33)/1 crt=0'0 mlcod 0'0 active pruub 73.967460632s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Mar  1 04:43:24 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 33 pg[4.1d( empty local-lis/les=29/30 n=0 ec=29/14 lis/c=29/29 les/c/f=30/30/0 sis=33 pruub=9.227885246s) [2] r=-1 lpr=33 pi=[29,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.967460632s@ mbc={}] state<Start>: transitioning to Stray
Mar  1 04:43:24 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 33 pg[4.e( empty local-lis/les=29/30 n=0 ec=29/14 lis/c=29/29 les/c/f=30/30/0 sis=33 pruub=9.227960587s) [1] r=-1 lpr=33 pi=[29,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.967483521s@ mbc={}] state<Start>: transitioning to Stray
Mar  1 04:43:24 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 33 pg[6.1e( empty local-lis/les=31/32 n=0 ec=31/17 lis/c=31/31 les/c/f=32/32/0 sis=33 pruub=11.296473503s) [2] r=-1 lpr=33 pi=[31,33)/1 crt=0'0 mlcod 0'0 active pruub 76.035995483s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Mar  1 04:43:24 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 33 pg[6.1e( empty local-lis/les=31/32 n=0 ec=31/17 lis/c=31/31 les/c/f=32/32/0 sis=33 pruub=11.296201706s) [2] r=-1 lpr=33 pi=[31,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 76.035995483s@ mbc={}] state<Start>: transitioning to Stray
Mar  1 04:43:24 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 33 pg[4.5( empty local-lis/les=29/30 n=0 ec=29/14 lis/c=29/29 les/c/f=30/30/0 sis=33 pruub=9.227598190s) [1] r=-1 lpr=33 pi=[29,33)/1 crt=0'0 mlcod 0'0 active pruub 73.967437744s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Mar  1 04:43:24 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 33 pg[4.5( empty local-lis/les=29/30 n=0 ec=29/14 lis/c=29/29 les/c/f=30/30/0 sis=33 pruub=9.227580070s) [1] r=-1 lpr=33 pi=[29,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.967437744s@ mbc={}] state<Start>: transitioning to Stray
Mar  1 04:43:24 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 33 pg[6.1( empty local-lis/les=31/32 n=0 ec=31/17 lis/c=31/31 les/c/f=32/32/0 sis=33 pruub=11.296989441s) [2] r=-1 lpr=33 pi=[31,33)/1 crt=0'0 mlcod 0'0 active pruub 76.036865234s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Mar  1 04:43:24 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 33 pg[6.7( empty local-lis/les=31/32 n=0 ec=31/17 lis/c=31/31 les/c/f=32/32/0 sis=33 pruub=11.296215057s) [1] r=-1 lpr=33 pi=[31,33)/1 crt=0'0 mlcod 0'0 active pruub 76.036178589s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Mar  1 04:43:24 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 33 pg[6.1( empty local-lis/les=31/32 n=0 ec=31/17 lis/c=31/31 les/c/f=32/32/0 sis=33 pruub=11.296961784s) [2] r=-1 lpr=33 pi=[31,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 76.036865234s@ mbc={}] state<Start>: transitioning to Stray
Mar  1 04:43:24 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 33 pg[6.7( empty local-lis/les=31/32 n=0 ec=31/17 lis/c=31/31 les/c/f=32/32/0 sis=33 pruub=11.296203613s) [1] r=-1 lpr=33 pi=[31,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 76.036178589s@ mbc={}] state<Start>: transitioning to Stray
Mar  1 04:43:24 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 33 pg[4.2( empty local-lis/les=29/30 n=0 ec=29/14 lis/c=29/29 les/c/f=30/30/0 sis=33 pruub=9.227241516s) [2] r=-1 lpr=33 pi=[29,33)/1 crt=0'0 mlcod 0'0 active pruub 73.967361450s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Mar  1 04:43:24 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 33 pg[4.2( empty local-lis/les=29/30 n=0 ec=29/14 lis/c=29/29 les/c/f=30/30/0 sis=33 pruub=9.227203369s) [2] r=-1 lpr=33 pi=[29,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.967361450s@ mbc={}] state<Start>: transitioning to Stray
Mar  1 04:43:24 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 33 pg[6.d( empty local-lis/les=31/32 n=0 ec=31/17 lis/c=31/31 les/c/f=32/32/0 sis=33 pruub=11.295773506s) [1] r=-1 lpr=33 pi=[31,33)/1 crt=0'0 mlcod 0'0 active pruub 76.036155701s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Mar  1 04:43:24 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 33 pg[6.d( empty local-lis/les=31/32 n=0 ec=31/17 lis/c=31/31 les/c/f=32/32/0 sis=33 pruub=11.295757294s) [1] r=-1 lpr=33 pi=[31,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 76.036155701s@ mbc={}] state<Start>: transitioning to Stray
Mar  1 04:43:24 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 33 pg[4.1( empty local-lis/les=29/30 n=0 ec=29/14 lis/c=29/29 les/c/f=30/30/0 sis=33 pruub=9.226943016s) [2] r=-1 lpr=33 pi=[29,33)/1 crt=0'0 mlcod 0'0 active pruub 73.967353821s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Mar  1 04:43:24 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 33 pg[4.3( empty local-lis/les=29/30 n=0 ec=29/14 lis/c=29/29 les/c/f=30/30/0 sis=33 pruub=9.227016449s) [2] r=-1 lpr=33 pi=[29,33)/1 crt=0'0 mlcod 0'0 active pruub 73.967445374s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Mar  1 04:43:24 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 33 pg[4.1( empty local-lis/les=29/30 n=0 ec=29/14 lis/c=29/29 les/c/f=30/30/0 sis=33 pruub=9.226916313s) [2] r=-1 lpr=33 pi=[29,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.967353821s@ mbc={}] state<Start>: transitioning to Stray
Mar  1 04:43:24 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 33 pg[6.2( empty local-lis/les=31/32 n=0 ec=31/17 lis/c=31/31 les/c/f=32/32/0 sis=33 pruub=11.295922279s) [1] r=-1 lpr=33 pi=[31,33)/1 crt=0'0 mlcod 0'0 active pruub 76.036445618s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Mar  1 04:43:24 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 33 pg[6.2( empty local-lis/les=31/32 n=0 ec=31/17 lis/c=31/31 les/c/f=32/32/0 sis=33 pruub=11.295907974s) [1] r=-1 lpr=33 pi=[31,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 76.036445618s@ mbc={}] state<Start>: transitioning to Stray
Mar  1 04:43:24 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 33 pg[4.6( empty local-lis/les=29/30 n=0 ec=29/14 lis/c=29/29 les/c/f=30/30/0 sis=33 pruub=9.226801872s) [2] r=-1 lpr=33 pi=[29,33)/1 crt=0'0 mlcod 0'0 active pruub 73.967391968s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Mar  1 04:43:24 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 33 pg[6.3( empty local-lis/les=31/32 n=0 ec=31/17 lis/c=31/31 les/c/f=32/32/0 sis=33 pruub=11.295714378s) [1] r=-1 lpr=33 pi=[31,33)/1 crt=0'0 mlcod 0'0 active pruub 76.036346436s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Mar  1 04:43:24 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 33 pg[4.6( empty local-lis/les=29/30 n=0 ec=29/14 lis/c=29/29 les/c/f=30/30/0 sis=33 pruub=9.226770401s) [2] r=-1 lpr=33 pi=[29,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.967391968s@ mbc={}] state<Start>: transitioning to Stray
Mar  1 04:43:24 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 33 pg[6.3( empty local-lis/les=31/32 n=0 ec=31/17 lis/c=31/31 les/c/f=32/32/0 sis=33 pruub=11.295701027s) [1] r=-1 lpr=33 pi=[31,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 76.036346436s@ mbc={}] state<Start>: transitioning to Stray
Mar  1 04:43:24 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 33 pg[6.5( empty local-lis/les=31/32 n=0 ec=31/17 lis/c=31/31 les/c/f=32/32/0 sis=33 pruub=11.295632362s) [1] r=-1 lpr=33 pi=[31,33)/1 crt=0'0 mlcod 0'0 active pruub 76.036376953s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Mar  1 04:43:24 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 33 pg[4.d( empty local-lis/les=29/30 n=0 ec=29/14 lis/c=29/29 les/c/f=30/30/0 sis=33 pruub=9.226524353s) [1] r=-1 lpr=33 pi=[29,33)/1 crt=0'0 mlcod 0'0 active pruub 73.967315674s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Mar  1 04:43:24 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 33 pg[6.5( empty local-lis/les=31/32 n=0 ec=31/17 lis/c=31/31 les/c/f=32/32/0 sis=33 pruub=11.295596123s) [1] r=-1 lpr=33 pi=[31,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 76.036376953s@ mbc={}] state<Start>: transitioning to Stray
Mar  1 04:43:24 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 33 pg[4.d( empty local-lis/les=29/30 n=0 ec=29/14 lis/c=29/29 les/c/f=30/30/0 sis=33 pruub=9.226498604s) [1] r=-1 lpr=33 pi=[29,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.967315674s@ mbc={}] state<Start>: transitioning to Stray
Mar  1 04:43:24 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 33 pg[4.3( empty local-lis/les=29/30 n=0 ec=29/14 lis/c=29/29 les/c/f=30/30/0 sis=33 pruub=9.226603508s) [2] r=-1 lpr=33 pi=[29,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.967445374s@ mbc={}] state<Start>: transitioning to Stray
Mar  1 04:43:24 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 33 pg[6.e( empty local-lis/les=31/32 n=0 ec=31/17 lis/c=31/31 les/c/f=32/32/0 sis=33 pruub=11.295457840s) [1] r=-1 lpr=33 pi=[31,33)/1 crt=0'0 mlcod 0'0 active pruub 76.036422729s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Mar  1 04:43:24 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 33 pg[6.e( empty local-lis/les=31/32 n=0 ec=31/17 lis/c=31/31 les/c/f=32/32/0 sis=33 pruub=11.295445442s) [1] r=-1 lpr=33 pi=[31,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 76.036422729s@ mbc={}] state<Start>: transitioning to Stray
Mar  1 04:43:24 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 33 pg[4.c( empty local-lis/les=29/30 n=0 ec=29/14 lis/c=29/29 les/c/f=30/30/0 sis=33 pruub=9.226313591s) [1] r=-1 lpr=33 pi=[29,33)/1 crt=0'0 mlcod 0'0 active pruub 73.967338562s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Mar  1 04:43:24 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 33 pg[4.c( empty local-lis/les=29/30 n=0 ec=29/14 lis/c=29/29 les/c/f=30/30/0 sis=33 pruub=9.226279259s) [1] r=-1 lpr=33 pi=[29,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.967338562s@ mbc={}] state<Start>: transitioning to Stray
Mar  1 04:43:24 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 33 pg[4.a( empty local-lis/les=29/30 n=0 ec=29/14 lis/c=29/29 les/c/f=30/30/0 sis=33 pruub=9.226197243s) [1] r=-1 lpr=33 pi=[29,33)/1 crt=0'0 mlcod 0'0 active pruub 73.967323303s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Mar  1 04:43:24 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 33 pg[4.a( empty local-lis/les=29/30 n=0 ec=29/14 lis/c=29/29 les/c/f=30/30/0 sis=33 pruub=9.226154327s) [1] r=-1 lpr=33 pi=[29,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.967323303s@ mbc={}] state<Start>: transitioning to Stray
Mar  1 04:43:24 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 33 pg[6.a( empty local-lis/les=31/32 n=0 ec=31/17 lis/c=31/31 les/c/f=32/32/0 sis=33 pruub=11.295234680s) [1] r=-1 lpr=33 pi=[31,33)/1 crt=0'0 mlcod 0'0 active pruub 76.036567688s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Mar  1 04:43:24 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 33 pg[4.9( empty local-lis/les=29/30 n=0 ec=29/14 lis/c=29/29 les/c/f=30/30/0 sis=33 pruub=9.225933075s) [2] r=-1 lpr=33 pi=[29,33)/1 crt=0'0 mlcod 0'0 active pruub 73.967262268s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Mar  1 04:43:24 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 33 pg[4.8( empty local-lis/les=29/30 n=0 ec=29/14 lis/c=29/29 les/c/f=30/30/0 sis=33 pruub=9.225910187s) [2] r=-1 lpr=33 pi=[29,33)/1 crt=0'0 mlcod 0'0 active pruub 73.967254639s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Mar  1 04:43:24 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 33 pg[6.a( empty local-lis/les=31/32 n=0 ec=31/17 lis/c=31/31 les/c/f=32/32/0 sis=33 pruub=11.295217514s) [1] r=-1 lpr=33 pi=[31,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 76.036567688s@ mbc={}] state<Start>: transitioning to Stray
Mar  1 04:43:24 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 33 pg[4.8( empty local-lis/les=29/30 n=0 ec=29/14 lis/c=29/29 les/c/f=30/30/0 sis=33 pruub=9.225865364s) [2] r=-1 lpr=33 pi=[29,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.967254639s@ mbc={}] state<Start>: transitioning to Stray
Mar  1 04:43:24 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 33 pg[4.9( empty local-lis/les=29/30 n=0 ec=29/14 lis/c=29/29 les/c/f=30/30/0 sis=33 pruub=9.225887299s) [2] r=-1 lpr=33 pi=[29,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.967262268s@ mbc={}] state<Start>: transitioning to Stray
Mar  1 04:43:24 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 33 pg[4.15( empty local-lis/les=29/30 n=0 ec=29/14 lis/c=29/29 les/c/f=30/30/0 sis=33 pruub=9.225663185s) [2] r=-1 lpr=33 pi=[29,33)/1 crt=0'0 mlcod 0'0 active pruub 73.967193604s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Mar  1 04:43:24 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 33 pg[6.17( empty local-lis/les=31/32 n=0 ec=31/17 lis/c=31/31 les/c/f=32/32/0 sis=33 pruub=11.295126915s) [2] r=-1 lpr=33 pi=[31,33)/1 crt=0'0 mlcod 0'0 active pruub 76.036659241s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Mar  1 04:43:24 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 33 pg[4.15( empty local-lis/les=29/30 n=0 ec=29/14 lis/c=29/29 les/c/f=30/30/0 sis=33 pruub=9.225646973s) [2] r=-1 lpr=33 pi=[29,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.967193604s@ mbc={}] state<Start>: transitioning to Stray
Mar  1 04:43:24 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 33 pg[6.17( empty local-lis/les=31/32 n=0 ec=31/17 lis/c=31/31 les/c/f=32/32/0 sis=33 pruub=11.295109749s) [2] r=-1 lpr=33 pi=[31,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 76.036659241s@ mbc={}] state<Start>: transitioning to Stray
Mar  1 04:43:24 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 33 pg[4.14( empty local-lis/les=29/30 n=0 ec=29/14 lis/c=29/29 les/c/f=30/30/0 sis=33 pruub=9.225310326s) [2] r=-1 lpr=33 pi=[29,33)/1 crt=0'0 mlcod 0'0 active pruub 73.967002869s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Mar  1 04:43:24 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 33 pg[4.14( empty local-lis/les=29/30 n=0 ec=29/14 lis/c=29/29 les/c/f=30/30/0 sis=33 pruub=9.225296974s) [2] r=-1 lpr=33 pi=[29,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.967002869s@ mbc={}] state<Start>: transitioning to Stray
Mar  1 04:43:24 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 33 pg[4.13( empty local-lis/les=29/30 n=0 ec=29/14 lis/c=29/29 les/c/f=30/30/0 sis=33 pruub=9.225230217s) [1] r=-1 lpr=33 pi=[29,33)/1 crt=0'0 mlcod 0'0 active pruub 73.967033386s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Mar  1 04:43:24 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 33 pg[6.8( empty local-lis/les=31/32 n=0 ec=31/17 lis/c=31/31 les/c/f=32/32/0 sis=33 pruub=11.294599533s) [1] r=-1 lpr=33 pi=[31,33)/1 crt=0'0 mlcod 0'0 active pruub 76.036483765s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Mar  1 04:43:24 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 33 pg[4.13( empty local-lis/les=29/30 n=0 ec=29/14 lis/c=29/29 les/c/f=30/30/0 sis=33 pruub=9.225182533s) [1] r=-1 lpr=33 pi=[29,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.967033386s@ mbc={}] state<Start>: transitioning to Stray
Mar  1 04:43:24 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 33 pg[6.8( empty local-lis/les=31/32 n=0 ec=31/17 lis/c=31/31 les/c/f=32/32/0 sis=33 pruub=11.294566154s) [1] r=-1 lpr=33 pi=[31,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 76.036483765s@ mbc={}] state<Start>: transitioning to Stray
Mar  1 04:43:24 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 33 pg[4.1f( empty local-lis/les=29/30 n=0 ec=29/14 lis/c=29/29 les/c/f=30/30/0 sis=33 pruub=9.224924088s) [2] r=-1 lpr=33 pi=[29,33)/1 crt=0'0 mlcod 0'0 active pruub 73.966880798s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Mar  1 04:43:24 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 33 pg[6.12( empty local-lis/les=31/32 n=0 ec=31/17 lis/c=31/31 les/c/f=32/32/0 sis=33 pruub=11.294832230s) [2] r=-1 lpr=33 pi=[31,33)/1 crt=0'0 mlcod 0'0 active pruub 76.036788940s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Mar  1 04:43:24 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 33 pg[4.1f( empty local-lis/les=29/30 n=0 ec=29/14 lis/c=29/29 les/c/f=30/30/0 sis=33 pruub=9.224905014s) [2] r=-1 lpr=33 pi=[29,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 73.966880798s@ mbc={}] state<Start>: transitioning to Stray
Mar  1 04:43:24 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 33 pg[6.12( empty local-lis/les=31/32 n=0 ec=31/17 lis/c=31/31 les/c/f=32/32/0 sis=33 pruub=11.294814110s) [2] r=-1 lpr=33 pi=[31,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 76.036788940s@ mbc={}] state<Start>: transitioning to Stray
Mar  1 04:43:24 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 33 pg[6.15( empty local-lis/les=31/32 n=0 ec=31/17 lis/c=31/31 les/c/f=32/32/0 sis=33 pruub=11.294553757s) [1] r=-1 lpr=33 pi=[31,33)/1 crt=0'0 mlcod 0'0 active pruub 76.036712646s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Mar  1 04:43:24 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 33 pg[6.1c( empty local-lis/les=31/32 n=0 ec=31/17 lis/c=31/31 les/c/f=32/32/0 sis=33 pruub=11.294546127s) [2] r=-1 lpr=33 pi=[31,33)/1 crt=0'0 mlcod 0'0 active pruub 76.036842346s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Mar  1 04:43:24 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 33 pg[6.1c( empty local-lis/les=31/32 n=0 ec=31/17 lis/c=31/31 les/c/f=32/32/0 sis=33 pruub=11.294527054s) [2] r=-1 lpr=33 pi=[31,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 76.036842346s@ mbc={}] state<Start>: transitioning to Stray
Mar  1 04:43:24 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 33 pg[6.15( empty local-lis/les=31/32 n=0 ec=31/17 lis/c=31/31 les/c/f=32/32/0 sis=33 pruub=11.294522285s) [1] r=-1 lpr=33 pi=[31,33)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 76.036712646s@ mbc={}] state<Start>: transitioning to Stray
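
The burst above is osd.0 reacting to osdmap epoch 33: for each listed PG the acting set moves from [0] to [1] or [2], the role drops from 0 to -1, and the PG's state machine leaves Start for Stray (osd.0 still holds the PG locally but is no longer in the acting set). The complementary burst below, where osd.0 remains the sole member ([0], r=0), lands in Primary instead. A minimal sketch for tallying these transitions out of a journal excerpt like this one; the path and the exact line layout are assumptions taken from this capture, not a stable Ceph interface:

    #!/usr/bin/env python3
    """Tally osd.0 peering transitions (Stray vs Primary) per pool."""
    import re
    from collections import Counter

    # "pg[4.3(" -> pool 4, shard 3; shard ids are hex, e.g. "6.1c"
    PG_RE = re.compile(r"pg\[(\d+)\.([0-9a-f]+)\(")
    TRANS_RE = re.compile(r"state<Start>: transitioning to (\w+)")

    def summarize(path="/var/log/messages"):  # hypothetical path
        counts = Counter()
        with open(path) as fh:
            for line in fh:
                trans = TRANS_RE.search(line)
                pg = PG_RE.search(line)
                if trans and pg:
                    counts[(pg.group(1), trans.group(1))] += 1
        return counts

    if __name__ == "__main__":
        for (pool, state), n in sorted(summarize().items()):
            print(f"pool {pool}: {n} PGs -> {state}")
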
Mar  1 04:43:24 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"} v 0)
Mar  1 04:43:24 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.zizzzn' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Mar  1 04:43:24 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 33 pg[2.19( empty local-lis/les=0/0 n=0 ec=27/12 lis/c=27/27 les/c/f=28/28/0 sis=33) [0] r=0 lpr=33 pi=[27,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Mar  1 04:43:24 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 33 pg[7.13( empty local-lis/les=0/0 n=0 ec=31/18 lis/c=31/31 les/c/f=32/32/0 sis=33) [0] r=0 lpr=33 pi=[31,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Mar  1 04:43:24 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 33 pg[7.10( empty local-lis/les=0/0 n=0 ec=31/18 lis/c=31/31 les/c/f=32/32/0 sis=33) [0] r=0 lpr=33 pi=[31,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Mar  1 04:43:25 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 33 pg[5.1d( empty local-lis/les=0/0 n=0 ec=29/15 lis/c=29/29 les/c/f=30/30/0 sis=33) [0] r=0 lpr=33 pi=[29,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Mar  1 04:43:25 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 33 pg[3.4( empty local-lis/les=0/0 n=0 ec=27/13 lis/c=29/29 les/c/f=30/30/0 sis=33) [0] r=0 lpr=33 pi=[29,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Mar  1 04:43:25 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 33 pg[5.5( empty local-lis/les=0/0 n=0 ec=29/15 lis/c=29/29 les/c/f=30/30/0 sis=33) [0] r=0 lpr=33 pi=[29,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Mar  1 04:43:25 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 33 pg[3.1( empty local-lis/les=0/0 n=0 ec=27/13 lis/c=29/29 les/c/f=30/30/0 sis=33) [0] r=0 lpr=33 pi=[29,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Mar  1 04:43:25 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 33 pg[3.2( empty local-lis/les=0/0 n=0 ec=27/13 lis/c=29/29 les/c/f=30/30/0 sis=33) [0] r=0 lpr=33 pi=[29,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Mar  1 04:43:25 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 33 pg[3.7( empty local-lis/les=0/0 n=0 ec=27/13 lis/c=29/29 les/c/f=30/30/0 sis=33) [0] r=0 lpr=33 pi=[29,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Mar  1 04:43:25 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 33 pg[5.3( empty local-lis/les=0/0 n=0 ec=29/15 lis/c=29/29 les/c/f=30/30/0 sis=33) [0] r=0 lpr=33 pi=[29,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Mar  1 04:43:25 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 33 pg[3.6( empty local-lis/les=0/0 n=0 ec=27/13 lis/c=29/29 les/c/f=30/30/0 sis=33) [0] r=0 lpr=33 pi=[29,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Mar  1 04:43:25 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 33 pg[5.6( empty local-lis/les=0/0 n=0 ec=29/15 lis/c=29/29 les/c/f=30/30/0 sis=33) [0] r=0 lpr=33 pi=[29,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Mar  1 04:43:25 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 33 pg[3.12( empty local-lis/les=0/0 n=0 ec=27/13 lis/c=29/29 les/c/f=30/30/0 sis=33) [0] r=0 lpr=33 pi=[29,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Mar  1 04:43:25 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 33 pg[5.14( empty local-lis/les=0/0 n=0 ec=29/15 lis/c=29/29 les/c/f=30/30/0 sis=33) [0] r=0 lpr=33 pi=[29,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Mar  1 04:43:25 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 33 pg[5.c( empty local-lis/les=0/0 n=0 ec=29/15 lis/c=29/29 les/c/f=30/30/0 sis=33) [0] r=0 lpr=33 pi=[29,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Mar  1 04:43:25 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 33 pg[5.a( empty local-lis/les=0/0 n=0 ec=29/15 lis/c=29/29 les/c/f=30/30/0 sis=33) [0] r=0 lpr=33 pi=[29,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Mar  1 04:43:25 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 33 pg[3.19( empty local-lis/les=0/0 n=0 ec=27/13 lis/c=29/29 les/c/f=30/30/0 sis=33) [0] r=0 lpr=33 pi=[29,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Mar  1 04:43:25 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 33 pg[5.1e( empty local-lis/les=0/0 n=0 ec=29/15 lis/c=29/29 les/c/f=30/30/0 sis=33) [0] r=0 lpr=33 pi=[29,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Mar  1 04:43:25 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 33 pg[3.18( empty local-lis/les=0/0 n=0 ec=27/13 lis/c=29/29 les/c/f=30/30/0 sis=33) [0] r=0 lpr=33 pi=[29,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Mar  1 04:43:25 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 33 pg[3.17( empty local-lis/les=0/0 n=0 ec=27/13 lis/c=29/29 les/c/f=30/30/0 sis=33) [0] r=0 lpr=33 pi=[29,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Mar  1 04:43:25 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 33 pg[5.17( empty local-lis/les=0/0 n=0 ec=29/15 lis/c=29/29 les/c/f=30/30/0 sis=33) [0] r=0 lpr=33 pi=[29,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Mar  1 04:43:25 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 33 pg[3.b( empty local-lis/les=0/0 n=0 ec=27/13 lis/c=29/29 les/c/f=30/30/0 sis=33) [0] r=0 lpr=33 pi=[29,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Mar  1 04:43:25 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 33 pg[3.1e( empty local-lis/les=0/0 n=0 ec=27/13 lis/c=29/29 les/c/f=30/30/0 sis=33) [0] r=0 lpr=33 pi=[29,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Mar  1 04:43:25 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 33 pg[5.19( empty local-lis/les=0/0 n=0 ec=29/15 lis/c=29/29 les/c/f=30/30/0 sis=33) [0] r=0 lpr=33 pi=[29,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Mar  1 04:43:25 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 33 pg[3.1f( empty local-lis/les=0/0 n=0 ec=27/13 lis/c=29/29 les/c/f=30/30/0 sis=33) [0] r=0 lpr=33 pi=[29,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Mar  1 04:43:25 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 33 pg[7.b( empty local-lis/les=0/0 n=0 ec=31/18 lis/c=31/31 les/c/f=32/32/0 sis=33) [0] r=0 lpr=33 pi=[31,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Mar  1 04:43:25 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 33 pg[2.e( empty local-lis/les=0/0 n=0 ec=27/12 lis/c=27/27 les/c/f=28/28/0 sis=33) [0] r=0 lpr=33 pi=[27,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Mar  1 04:43:25 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 33 pg[7.8( empty local-lis/les=0/0 n=0 ec=31/18 lis/c=31/31 les/c/f=32/32/0 sis=33) [0] r=0 lpr=33 pi=[31,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Mar  1 04:43:25 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 33 pg[7.9( empty local-lis/les=0/0 n=0 ec=31/18 lis/c=31/31 les/c/f=32/32/0 sis=33) [0] r=0 lpr=33 pi=[31,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Mar  1 04:43:25 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 33 pg[7.e( empty local-lis/les=0/0 n=0 ec=31/18 lis/c=31/31 les/c/f=32/32/0 sis=33) [0] r=0 lpr=33 pi=[31,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Mar  1 04:43:25 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 33 pg[7.6( empty local-lis/les=0/0 n=0 ec=31/18 lis/c=31/31 les/c/f=32/32/0 sis=33) [0] r=0 lpr=33 pi=[31,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Mar  1 04:43:25 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 33 pg[2.1( empty local-lis/les=0/0 n=0 ec=27/12 lis/c=27/27 les/c/f=28/28/0 sis=33) [0] r=0 lpr=33 pi=[27,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Mar  1 04:43:25 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 33 pg[7.4( empty local-lis/les=0/0 n=0 ec=31/18 lis/c=31/31 les/c/f=32/32/0 sis=33) [0] r=0 lpr=33 pi=[31,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Mar  1 04:43:25 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 33 pg[2.4( empty local-lis/les=0/0 n=0 ec=27/12 lis/c=27/27 les/c/f=28/28/0 sis=33) [0] r=0 lpr=33 pi=[27,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Mar  1 04:43:25 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 33 pg[2.6( empty local-lis/les=0/0 n=0 ec=27/12 lis/c=27/27 les/c/f=28/28/0 sis=33) [0] r=0 lpr=33 pi=[27,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Mar  1 04:43:25 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 33 pg[7.3( empty local-lis/les=0/0 n=0 ec=31/18 lis/c=31/31 les/c/f=32/32/0 sis=33) [0] r=0 lpr=33 pi=[31,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Mar  1 04:43:25 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 33 pg[7.2( empty local-lis/les=0/0 n=0 ec=31/18 lis/c=31/31 les/c/f=32/32/0 sis=33) [0] r=0 lpr=33 pi=[31,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Mar  1 04:43:25 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 33 pg[2.9( empty local-lis/les=0/0 n=0 ec=27/12 lis/c=27/27 les/c/f=28/28/0 sis=33) [0] r=0 lpr=33 pi=[27,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Mar  1 04:43:25 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 33 pg[7.1e( empty local-lis/les=0/0 n=0 ec=31/18 lis/c=31/31 les/c/f=32/32/0 sis=33) [0] r=0 lpr=33 pi=[31,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Mar  1 04:43:25 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 33 pg[7.f( empty local-lis/les=0/0 n=0 ec=31/18 lis/c=31/31 les/c/f=32/32/0 sis=33) [0] r=0 lpr=33 pi=[31,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Mar  1 04:43:25 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 33 pg[7.18( empty local-lis/les=0/0 n=0 ec=31/18 lis/c=31/31 les/c/f=32/32/0 sis=33) [0] r=0 lpr=33 pi=[31,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Mar  1 04:43:25 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 33 pg[7.1b( empty local-lis/les=0/0 n=0 ec=31/18 lis/c=31/31 les/c/f=32/32/0 sis=33) [0] r=0 lpr=33 pi=[31,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Mar  1 04:43:25 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 33 pg[2.1e( empty local-lis/les=0/0 n=0 ec=27/12 lis/c=27/27 les/c/f=28/28/0 sis=33) [0] r=0 lpr=33 pi=[27,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Mar  1 04:43:25 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 33 pg[2.1f( empty local-lis/les=0/0 n=0 ec=27/12 lis/c=27/27 les/c/f=28/28/0 sis=33) [0] r=0 lpr=33 pi=[27,33)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Mar  1 04:43:25 np0005634532 ceph-mgr[76134]: [progress WARNING root] Starting Global Recovery Event,1 pgs not in active + clean state
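
The mgr progress module opens a "Global Recovery Event" whenever PGs fall out of active+clean, as above (here a single PG from the just-created pool), and closes it once they recover. A sketch for watching it live, assuming the ceph CLI and an admin keyring are available on the host and that the release ships the progress subcommand (recent releases, including the Squid image used here, do):

    import subprocess

    # Prints open progress events, e.g. the Global Recovery Event above.
    subprocess.run(["ceph", "progress"], check=True)
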
Mar  1 04:43:25 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e33 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Mar  1 04:43:25 np0005634532 python3[90668]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 437b1e74-f995-5d64-af1d-257ce01d77ab -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   mgr module disable dashboard _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
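
The Ansible task above is a one-shot CLI container: podman runs the quay.io/ceph/ceph:v19 image with the host's /etc/ceph bind-mounted and `ceph` as the entrypoint, so the net effect is a single `ceph mgr module disable dashboard` against this cluster. A sketch of the same invocation replayed from Python; the image, fsid, and mount paths are copied verbatim from the log line, so adjust them for any other cluster:

    import subprocess

    cmd = [
        "podman", "run", "--rm", "--net=host", "--ipc=host", "--interactive",
        "--volume", "/etc/ceph:/etc/ceph:z",
        "--volume", "/home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z",
        "--volume", "/home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z",
        "--entrypoint", "ceph", "quay.io/ceph/ceph:v19",
        "--fsid", "437b1e74-f995-5d64-af1d-257ce01d77ab",
        "-c", "/etc/ceph/ceph.conf",
        "-k", "/etc/ceph/ceph.client.admin.keyring",
        "mgr", "module", "disable", "dashboard",
    ]
    # Idempotent: the container later reports "module 'dashboard' is
    # already disabled" (see the nifty_nash line further down).
    result = subprocess.run(cmd, capture_output=True, text=True)
    print(result.returncode, result.stdout.strip(), result.stderr.strip())
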
Mar  1 04:43:25 np0005634532 ceph-osd[84309]: log_channel(cluster) log [DBG] : 6.18 scrub starts
Mar  1 04:43:25 np0005634532 podman[90669]: 2026-03-01 09:43:25.341362712 +0000 UTC m=+0.057608713 container create b4616c31f2c429b992fb5fb109ffd6ca74e1a3378d93459ef5a410f9d8957c8d (image=quay.io/ceph/ceph:v19, name=nifty_nash, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Mar  1 04:43:25 np0005634532 ceph-osd[84309]: log_channel(cluster) log [DBG] : 6.18 scrub ok
Mar  1 04:43:25 np0005634532 systemd[1]: Started libpod-conmon-b4616c31f2c429b992fb5fb109ffd6ca74e1a3378d93459ef5a410f9d8957c8d.scope.
Mar  1 04:43:25 np0005634532 podman[90669]: 2026-03-01 09:43:25.317353225 +0000 UTC m=+0.033599246 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Mar  1 04:43:25 np0005634532 systemd[1]: Started libcrun container.
Mar  1 04:43:25 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0587e6225d7d57415ecb29af60b89067f505fd69181ca574a94cef772c0657be/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Mar  1 04:43:25 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0587e6225d7d57415ecb29af60b89067f505fd69181ca574a94cef772c0657be/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Mar  1 04:43:25 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0587e6225d7d57415ecb29af60b89067f505fd69181ca574a94cef772c0657be/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Mar  1 04:43:25 np0005634532 podman[90669]: 2026-03-01 09:43:25.451113891 +0000 UTC m=+0.167359972 container init b4616c31f2c429b992fb5fb109ffd6ca74e1a3378d93459ef5a410f9d8957c8d (image=quay.io/ceph/ceph:v19, name=nifty_nash, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Mar  1 04:43:25 np0005634532 podman[90669]: 2026-03-01 09:43:25.45632209 +0000 UTC m=+0.172568071 container start b4616c31f2c429b992fb5fb109ffd6ca74e1a3378d93459ef5a410f9d8957c8d (image=quay.io/ceph/ceph:v19, name=nifty_nash, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Mar  1 04:43:25 np0005634532 podman[90669]: 2026-03-01 09:43:25.460584066 +0000 UTC m=+0.176830077 container attach b4616c31f2c429b992fb5fb109ffd6ca74e1a3378d93459ef5a410f9d8957c8d (image=quay.io/ceph/ceph:v19, name=nifty_nash, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Mar  1 04:43:25 np0005634532 ceph-mon[75825]: from='client.? 192.168.122.100:0/1614078276' entity='client.admin' 
Mar  1 04:43:25 np0005634532 ceph-mon[75825]: from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]: dispatch
Mar  1 04:43:25 np0005634532 ceph-mon[75825]: from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]: dispatch
Mar  1 04:43:25 np0005634532 ceph-mon[75825]: from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Mar  1 04:43:25 np0005634532 ceph-mon[75825]: from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]: dispatch
Mar  1 04:43:25 np0005634532 ceph-mon[75825]: from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]: dispatch
Mar  1 04:43:25 np0005634532 ceph-mon[75825]: from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]: dispatch
Mar  1 04:43:25 np0005634532 ceph-mon[75825]: from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Mar  1 04:43:25 np0005634532 ceph-mon[75825]: from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Mar  1 04:43:25 np0005634532 ceph-mon[75825]: from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Mar  1 04:43:25 np0005634532 ceph-mon[75825]: from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Mar  1 04:43:25 np0005634532 ceph-mon[75825]: from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Mar  1 04:43:25 np0005634532 ceph-mon[75825]: from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Mar  1 04:43:25 np0005634532 ceph-mon[75825]: from='client.? 192.168.122.102:0/1100731101' entity='client.rgw.rgw.compute-2.zizzzn' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Mar  1 04:43:25 np0005634532 ceph-mon[75825]: from='client.? ' entity='client.rgw.rgw.compute-2.zizzzn' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Mar  1 04:43:25 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Mar  1 04:43:25 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:43:25 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Mar  1 04:43:25 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:43:25 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Mar  1 04:43:25 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:43:25 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.dvtuyn", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]} v 0)
Mar  1 04:43:25 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.dvtuyn", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Mar  1 04:43:25 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.dvtuyn", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Mar  1 04:43:25 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=rgw_frontends}] v 0)
Mar  1 04:43:25 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:43:25 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Mar  1 04:43:25 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Mar  1 04:43:25 np0005634532 ceph-mgr[76134]: [cephadm INFO cephadm.serve] Deploying daemon rgw.rgw.compute-0.dvtuyn on compute-0
Mar  1 04:43:25 np0005634532 ceph-mgr[76134]: log_channel(cephadm) log [INF] : Deploying daemon rgw.rgw.compute-0.dvtuyn on compute-0
Mar  1 04:43:25 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module disable", "module": "dashboard"} v 0)
Mar  1 04:43:25 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4090547352' entity='client.admin' cmd=[{"prefix": "mgr module disable", "module": "dashboard"}]: dispatch
Mar  1 04:43:25 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e33 do_prune osdmap full prune enabled
Mar  1 04:43:25 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.zizzzn' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished
Mar  1 04:43:25 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e34 e34: 3 total, 3 up, 3 in
Mar  1 04:43:26 np0005634532 ceph-mon[75825]: log_channel(cluster) log [DBG] : osdmap e34: 3 total, 3 up, 3 in
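
Each committed OSD-level change (the pool application tag, the pgp_num steps) produces a new osdmap epoch, which is why the log walks e33 -> e34 -> e35 while membership stays "3 total, 3 up, 3 in". A sketch for reading the current epoch and OSD count to compare against these lines:

    import json
    import subprocess

    dump = json.loads(subprocess.run(
        ["ceph", "osd", "dump", "-f", "json"],
        capture_output=True, text=True, check=True).stdout)
    print("epoch", dump["epoch"], "osds", len(dump["osds"]))
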
Mar  1 04:43:26 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 34 pg[5.19( empty local-lis/les=33/34 n=0 ec=29/15 lis/c=29/29 les/c/f=30/30/0 sis=33) [0] r=0 lpr=33 pi=[29,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Mar  1 04:43:26 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 34 pg[2.1e( empty local-lis/les=33/34 n=0 ec=27/12 lis/c=27/27 les/c/f=28/28/0 sis=33) [0] r=0 lpr=33 pi=[27,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Mar  1 04:43:26 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 34 pg[3.1f( empty local-lis/les=33/34 n=0 ec=27/13 lis/c=29/29 les/c/f=30/30/0 sis=33) [0] r=0 lpr=33 pi=[29,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Mar  1 04:43:26 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 34 pg[7.1b( empty local-lis/les=33/34 n=0 ec=31/18 lis/c=31/31 les/c/f=32/32/0 sis=33) [0] r=0 lpr=33 pi=[31,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Mar  1 04:43:26 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 34 pg[2.1f( empty local-lis/les=33/34 n=0 ec=27/12 lis/c=27/27 les/c/f=28/28/0 sis=33) [0] r=0 lpr=33 pi=[27,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Mar  1 04:43:26 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 34 pg[3.1e( empty local-lis/les=33/34 n=0 ec=27/13 lis/c=29/29 les/c/f=30/30/0 sis=33) [0] r=0 lpr=33 pi=[29,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Mar  1 04:43:26 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 34 pg[7.18( empty local-lis/les=33/34 n=0 ec=31/18 lis/c=31/31 les/c/f=32/32/0 sis=33) [0] r=0 lpr=33 pi=[31,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Mar  1 04:43:26 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 34 pg[5.1d( empty local-lis/les=33/34 n=0 ec=29/15 lis/c=29/29 les/c/f=30/30/0 sis=33) [0] r=0 lpr=33 pi=[29,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Mar  1 04:43:26 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 34 pg[7.1e( empty local-lis/les=33/34 n=0 ec=31/18 lis/c=31/31 les/c/f=32/32/0 sis=33) [0] r=0 lpr=33 pi=[31,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Mar  1 04:43:26 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 34 pg[3.4( empty local-lis/les=33/34 n=0 ec=27/13 lis/c=29/29 les/c/f=30/30/0 sis=33) [0] r=0 lpr=33 pi=[29,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Mar  1 04:43:26 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 34 pg[2.9( empty local-lis/les=33/34 n=0 ec=27/12 lis/c=27/27 les/c/f=28/28/0 sis=33) [0] r=0 lpr=33 pi=[27,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Mar  1 04:43:26 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 34 pg[5.5( empty local-lis/les=33/34 n=0 ec=29/15 lis/c=29/29 les/c/f=30/30/0 sis=33) [0] r=0 lpr=33 pi=[29,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Mar  1 04:43:26 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 34 pg[3.2( empty local-lis/les=33/34 n=0 ec=27/13 lis/c=29/29 les/c/f=30/30/0 sis=33) [0] r=0 lpr=33 pi=[29,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Mar  1 04:43:26 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 34 pg[2.4( empty local-lis/les=33/34 n=0 ec=27/12 lis/c=27/27 les/c/f=28/28/0 sis=33) [0] r=0 lpr=33 pi=[27,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Mar  1 04:43:26 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 34 pg[3.1( empty local-lis/les=33/34 n=0 ec=27/13 lis/c=29/29 les/c/f=30/30/0 sis=33) [0] r=0 lpr=33 pi=[29,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Mar  1 04:43:26 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 34 pg[7.2( empty local-lis/les=33/34 n=0 ec=31/18 lis/c=31/31 les/c/f=32/32/0 sis=33) [0] r=0 lpr=33 pi=[31,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Mar  1 04:43:26 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 34 pg[5.3( empty local-lis/les=33/34 n=0 ec=29/15 lis/c=29/29 les/c/f=30/30/0 sis=33) [0] r=0 lpr=33 pi=[29,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Mar  1 04:43:26 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 34 pg[7.6( empty local-lis/les=33/34 n=0 ec=31/18 lis/c=31/31 les/c/f=32/32/0 sis=33) [0] r=0 lpr=33 pi=[31,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Mar  1 04:43:26 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 34 pg[7.3( empty local-lis/les=33/34 n=0 ec=31/18 lis/c=31/31 les/c/f=32/32/0 sis=33) [0] r=0 lpr=33 pi=[31,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Mar  1 04:43:26 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 34 pg[3.6( empty local-lis/les=33/34 n=0 ec=27/13 lis/c=29/29 les/c/f=30/30/0 sis=33) [0] r=0 lpr=33 pi=[29,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Mar  1 04:43:26 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 34 pg[2.6( empty local-lis/les=33/34 n=0 ec=27/12 lis/c=27/27 les/c/f=28/28/0 sis=33) [0] r=0 lpr=33 pi=[27,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Mar  1 04:43:26 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 34 pg[7.4( empty local-lis/les=33/34 n=0 ec=31/18 lis/c=31/31 les/c/f=32/32/0 sis=33) [0] r=0 lpr=33 pi=[31,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Mar  1 04:43:26 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 34 pg[2.1( empty local-lis/les=33/34 n=0 ec=27/12 lis/c=27/27 les/c/f=28/28/0 sis=33) [0] r=0 lpr=33 pi=[27,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Mar  1 04:43:26 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 34 pg[3.7( empty local-lis/les=33/34 n=0 ec=27/13 lis/c=29/29 les/c/f=30/30/0 sis=33) [0] r=0 lpr=33 pi=[29,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Mar  1 04:43:26 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 34 pg[5.c( empty local-lis/les=33/34 n=0 ec=29/15 lis/c=29/29 les/c/f=30/30/0 sis=33) [0] r=0 lpr=33 pi=[29,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Mar  1 04:43:26 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 34 pg[7.f( empty local-lis/les=33/34 n=0 ec=31/18 lis/c=31/31 les/c/f=32/32/0 sis=33) [0] r=0 lpr=33 pi=[31,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Mar  1 04:43:26 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 34 pg[7.e( empty local-lis/les=33/34 n=0 ec=31/18 lis/c=31/31 les/c/f=32/32/0 sis=33) [0] r=0 lpr=33 pi=[31,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Mar  1 04:43:26 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 34 pg[3.b( empty local-lis/les=33/34 n=0 ec=27/13 lis/c=29/29 les/c/f=30/30/0 sis=33) [0] r=0 lpr=33 pi=[29,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Mar  1 04:43:26 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 34 pg[8.0( empty local-lis/les=33/34 n=0 ec=33/33 lis/c=0/0 les/c/f=0/0/0 sis=33) [0] r=0 lpr=33 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Mar  1 04:43:26 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 34 pg[7.8( empty local-lis/les=33/34 n=0 ec=31/18 lis/c=31/31 les/c/f=32/32/0 sis=33) [0] r=0 lpr=33 pi=[31,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Mar  1 04:43:26 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 34 pg[7.9( empty local-lis/les=33/34 n=0 ec=31/18 lis/c=31/31 les/c/f=32/32/0 sis=33) [0] r=0 lpr=33 pi=[31,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Mar  1 04:43:26 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 34 pg[5.6( empty local-lis/les=33/34 n=0 ec=29/15 lis/c=29/29 les/c/f=30/30/0 sis=33) [0] r=0 lpr=33 pi=[29,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Mar  1 04:43:26 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 34 pg[5.a( empty local-lis/les=33/34 n=0 ec=29/15 lis/c=29/29 les/c/f=30/30/0 sis=33) [0] r=0 lpr=33 pi=[29,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Mar  1 04:43:26 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 34 pg[5.14( empty local-lis/les=33/34 n=0 ec=29/15 lis/c=29/29 les/c/f=30/30/0 sis=33) [0] r=0 lpr=33 pi=[29,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Mar  1 04:43:26 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 34 pg[5.17( empty local-lis/les=33/34 n=0 ec=29/15 lis/c=29/29 les/c/f=30/30/0 sis=33) [0] r=0 lpr=33 pi=[29,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Mar  1 04:43:26 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 34 pg[7.b( empty local-lis/les=33/34 n=0 ec=31/18 lis/c=31/31 les/c/f=32/32/0 sis=33) [0] r=0 lpr=33 pi=[31,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Mar  1 04:43:26 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 34 pg[2.e( empty local-lis/les=33/34 n=0 ec=27/12 lis/c=27/27 les/c/f=28/28/0 sis=33) [0] r=0 lpr=33 pi=[27,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Mar  1 04:43:26 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 34 pg[3.12( empty local-lis/les=33/34 n=0 ec=27/13 lis/c=29/29 les/c/f=30/30/0 sis=33) [0] r=0 lpr=33 pi=[29,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Mar  1 04:43:26 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 34 pg[7.10( empty local-lis/les=33/34 n=0 ec=31/18 lis/c=31/31 les/c/f=32/32/0 sis=33) [0] r=0 lpr=33 pi=[31,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Mar  1 04:43:26 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 34 pg[3.18( empty local-lis/les=33/34 n=0 ec=27/13 lis/c=29/29 les/c/f=30/30/0 sis=33) [0] r=0 lpr=33 pi=[29,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Mar  1 04:43:26 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 34 pg[7.13( empty local-lis/les=33/34 n=0 ec=31/18 lis/c=31/31 les/c/f=32/32/0 sis=33) [0] r=0 lpr=33 pi=[31,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Mar  1 04:43:26 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 34 pg[2.19( empty local-lis/les=33/34 n=0 ec=27/12 lis/c=27/27 les/c/f=28/28/0 sis=33) [0] r=0 lpr=33 pi=[27,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Mar  1 04:43:26 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 34 pg[3.17( empty local-lis/les=33/34 n=0 ec=27/13 lis/c=29/29 les/c/f=30/30/0 sis=33) [0] r=0 lpr=33 pi=[29,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Mar  1 04:43:26 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 34 pg[3.19( empty local-lis/les=33/34 n=0 ec=27/13 lis/c=29/29 les/c/f=30/30/0 sis=33) [0] r=0 lpr=33 pi=[29,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Mar  1 04:43:26 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 34 pg[5.1e( empty local-lis/les=33/34 n=0 ec=29/15 lis/c=29/29 les/c/f=30/30/0 sis=33) [0] r=0 lpr=33 pi=[29,33)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
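
Once the epoch-33 interval peers, every PG osd.0 leads reports AllReplicasActivated at epoch 34 and goes active; the mgr pgmap line below confirms 194 PGs all active+clean. A sketch for checking the same summary from the cluster, assuming the ceph CLI and an admin keyring:

    import json
    import subprocess

    status = json.loads(subprocess.run(
        ["ceph", "status", "-f", "json"],
        capture_output=True, text=True, check=True).stdout)
    pgmap = status["pgmap"]
    print(pgmap["num_pgs"], pgmap["pgs_by_state"])
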
Mar  1 04:43:26 np0005634532 podman[90801]: 2026-03-01 09:43:26.27360646 +0000 UTC m=+0.053870341 container create 373d23436623e4dca496165122d2afe29ddde2671252862707cdf35eed56c5e5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_hamilton, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Mar  1 04:43:26 np0005634532 systemd[1]: Started libpod-conmon-373d23436623e4dca496165122d2afe29ddde2671252862707cdf35eed56c5e5.scope.
Mar  1 04:43:26 np0005634532 systemd[1]: Started libcrun container.
Mar  1 04:43:26 np0005634532 ceph-osd[84309]: log_channel(cluster) log [DBG] : 6.1f scrub starts
Mar  1 04:43:26 np0005634532 podman[90801]: 2026-03-01 09:43:26.246132767 +0000 UTC m=+0.026396708 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Mar  1 04:43:26 np0005634532 podman[90801]: 2026-03-01 09:43:26.342760139 +0000 UTC m=+0.123023990 container init 373d23436623e4dca496165122d2afe29ddde2671252862707cdf35eed56c5e5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_hamilton, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Mar  1 04:43:26 np0005634532 ceph-osd[84309]: log_channel(cluster) log [DBG] : 6.1f scrub ok
Mar  1 04:43:26 np0005634532 podman[90801]: 2026-03-01 09:43:26.347514747 +0000 UTC m=+0.127778598 container start 373d23436623e4dca496165122d2afe29ddde2671252862707cdf35eed56c5e5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_hamilton, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Mar  1 04:43:26 np0005634532 podman[90801]: 2026-03-01 09:43:26.350586944 +0000 UTC m=+0.130850825 container attach 373d23436623e4dca496165122d2afe29ddde2671252862707cdf35eed56c5e5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_hamilton, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Mar  1 04:43:26 np0005634532 xenodochial_hamilton[90817]: 167 167
Mar  1 04:43:26 np0005634532 systemd[1]: libpod-373d23436623e4dca496165122d2afe29ddde2671252862707cdf35eed56c5e5.scope: Deactivated successfully.
Mar  1 04:43:26 np0005634532 podman[90801]: 2026-03-01 09:43:26.353297861 +0000 UTC m=+0.133561742 container died 373d23436623e4dca496165122d2afe29ddde2671252862707cdf35eed56c5e5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_hamilton, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Mar  1 04:43:26 np0005634532 systemd[1]: var-lib-containers-storage-overlay-b0d58a86fb17c482b46a58ad1cdfaa599f0d45a8fef1173b989fa3707c0552b7-merged.mount: Deactivated successfully.
Mar  1 04:43:26 np0005634532 podman[90801]: 2026-03-01 09:43:26.397643004 +0000 UTC m=+0.177906855 container remove 373d23436623e4dca496165122d2afe29ddde2671252862707cdf35eed56c5e5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=xenodochial_hamilton, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Mar  1 04:43:26 np0005634532 systemd[1]: libpod-conmon-373d23436623e4dca496165122d2afe29ddde2671252862707cdf35eed56c5e5.scope: Deactivated successfully.
Mar  1 04:43:26 np0005634532 systemd[1]: Reloading.
Mar  1 04:43:26 np0005634532 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Mar  1 04:43:26 np0005634532 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Mar  1 04:43:26 np0005634532 ceph-mon[75825]: from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:43:26 np0005634532 ceph-mon[75825]: from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:43:26 np0005634532 ceph-mon[75825]: from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:43:26 np0005634532 ceph-mon[75825]: from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.dvtuyn", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Mar  1 04:43:26 np0005634532 ceph-mon[75825]: from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.dvtuyn", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Mar  1 04:43:26 np0005634532 ceph-mon[75825]: from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:43:26 np0005634532 ceph-mon[75825]: Deploying daemon rgw.rgw.compute-0.dvtuyn on compute-0
Mar  1 04:43:26 np0005634532 ceph-mon[75825]: from='client.? 192.168.122.100:0/4090547352' entity='client.admin' cmd=[{"prefix": "mgr module disable", "module": "dashboard"}]: dispatch
Mar  1 04:43:26 np0005634532 ceph-mon[75825]: from='client.? ' entity='client.rgw.rgw.compute-2.zizzzn' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished
Mar  1 04:43:26 np0005634532 systemd[1]: Reloading.
Mar  1 04:43:26 np0005634532 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Mar  1 04:43:26 np0005634532 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Mar  1 04:43:26 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4090547352' entity='client.admin' cmd='[{"prefix": "mgr module disable", "module": "dashboard"}]': finished
Mar  1 04:43:26 np0005634532 nifty_nash[90684]: module 'dashboard' is already disabled
Mar  1 04:43:26 np0005634532 ceph-mon[75825]: log_channel(cluster) log [DBG] : mgrmap e12: compute-0.ebwufc(active, since 2m), standbys: compute-2.dikzlj, compute-1.uyojxx
Mar  1 04:43:26 np0005634532 podman[90669]: 2026-03-01 09:43:26.936410929 +0000 UTC m=+1.652656940 container died b4616c31f2c429b992fb5fb109ffd6ca74e1a3378d93459ef5a410f9d8957c8d (image=quay.io/ceph/ceph:v19, name=nifty_nash, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1)
Mar  1 04:43:26 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v93: 194 pgs: 194 active+clean; 450 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 1.0 KiB/s wr, 4 op/s
Mar  1 04:43:26 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e34 do_prune osdmap full prune enabled
Mar  1 04:43:26 np0005634532 systemd[1]: libpod-b4616c31f2c429b992fb5fb109ffd6ca74e1a3378d93459ef5a410f9d8957c8d.scope: Deactivated successfully.
Mar  1 04:43:27 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e35 e35: 3 total, 3 up, 3 in
Mar  1 04:43:27 np0005634532 ceph-mon[75825]: log_channel(cluster) log [DBG] : osdmap e35: 3 total, 3 up, 3 in
Mar  1 04:43:27 np0005634532 systemd[1]: var-lib-containers-storage-overlay-0587e6225d7d57415ecb29af60b89067f505fd69181ca574a94cef772c0657be-merged.mount: Deactivated successfully.
Mar  1 04:43:27 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"} v 0)
Mar  1 04:43:27 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.zizzzn' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Mar  1 04:43:27 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"} v 0)
Mar  1 04:43:27 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.wbcorv' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Mar  1 04:43:27 np0005634532 podman[90669]: 2026-03-01 09:43:27.021362001 +0000 UTC m=+1.737607972 container remove b4616c31f2c429b992fb5fb109ffd6ca74e1a3378d93459ef5a410f9d8957c8d (image=quay.io/ceph/ceph:v19, name=nifty_nash, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Mar  1 04:43:27 np0005634532 systemd[1]: Starting Ceph rgw.rgw.compute-0.dvtuyn for 437b1e74-f995-5d64-af1d-257ce01d77ab...
Mar  1 04:43:27 np0005634532 systemd[1]: libpod-conmon-b4616c31f2c429b992fb5fb109ffd6ca74e1a3378d93459ef5a410f9d8957c8d.scope: Deactivated successfully.
Mar  1 04:43:27 np0005634532 podman[91015]: 2026-03-01 09:43:27.245797471 +0000 UTC m=+0.038222501 container create b35d2a5f6a0c3fd451160bfac8608c4f93b07015827f15646807616266dbf922 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-rgw-rgw-compute-0-dvtuyn, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Mar  1 04:43:27 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6bddbf9b60ee73517567d291500782132c4e30d8b9d0b6ca51b95d64895d7448/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Mar  1 04:43:27 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6bddbf9b60ee73517567d291500782132c4e30d8b9d0b6ca51b95d64895d7448/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Mar  1 04:43:27 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6bddbf9b60ee73517567d291500782132c4e30d8b9d0b6ca51b95d64895d7448/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Mar  1 04:43:27 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6bddbf9b60ee73517567d291500782132c4e30d8b9d0b6ca51b95d64895d7448/merged/var/lib/ceph/radosgw/ceph-rgw.rgw.compute-0.dvtuyn supports timestamps until 2038 (0x7fffffff)
Mar  1 04:43:27 np0005634532 podman[91015]: 2026-03-01 09:43:27.305231959 +0000 UTC m=+0.097656999 container init b35d2a5f6a0c3fd451160bfac8608c4f93b07015827f15646807616266dbf922 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-rgw-rgw-compute-0-dvtuyn, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Mar  1 04:43:27 np0005634532 podman[91015]: 2026-03-01 09:43:27.310782187 +0000 UTC m=+0.103207217 container start b35d2a5f6a0c3fd451160bfac8608c4f93b07015827f15646807616266dbf922 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-rgw-rgw-compute-0-dvtuyn, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0)
Mar  1 04:43:27 np0005634532 bash[91015]: b35d2a5f6a0c3fd451160bfac8608c4f93b07015827f15646807616266dbf922
Mar  1 04:43:27 np0005634532 podman[91015]: 2026-03-01 09:43:27.227747833 +0000 UTC m=+0.020172893 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Mar  1 04:43:27 np0005634532 systemd[1]: Started Ceph rgw.rgw.compute-0.dvtuyn for 437b1e74-f995-5d64-af1d-257ce01d77ab.
Mar  1 04:43:27 np0005634532 python3[91016]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 437b1e74-f995-5d64-af1d-257ce01d77ab -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   mgr module enable dashboard _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Mar  1 04:43:27 np0005634532 ceph-osd[84309]: log_channel(cluster) log [DBG] : 6.c scrub starts
Mar  1 04:43:27 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 35 pg[9.0( empty local-lis/les=0/0 n=0 ec=35/35 lis/c=0/0 les/c/f=0/0/0 sis=35) [0] r=0 lpr=35 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Mar  1 04:43:27 np0005634532 ceph-osd[84309]: log_channel(cluster) log [DBG] : 6.c scrub ok
Mar  1 04:43:27 np0005634532 radosgw[91037]: deferred set uid:gid to 167:167 (ceph:ceph)
Mar  1 04:43:27 np0005634532 radosgw[91037]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process radosgw, pid 2
Mar  1 04:43:27 np0005634532 radosgw[91037]: framework: beast
Mar  1 04:43:27 np0005634532 radosgw[91037]: framework conf key: endpoint, val: 192.168.122.100:8082
Mar  1 04:43:27 np0005634532 radosgw[91037]: init_numa not setting numa affinity
Mar  1 04:43:27 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Mar  1 04:43:27 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:43:27 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Mar  1 04:43:27 np0005634532 podman[91038]: 2026-03-01 09:43:27.392031577 +0000 UTC m=+0.043072872 container create 35db8eee11cfd61e0707d8d03ba13fa4a63c103ab0b4b52613565efc3863d06d (image=quay.io/ceph/ceph:v19, name=exciting_carson, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Mar  1 04:43:27 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:43:27 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Mar  1 04:43:27 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:43:27 np0005634532 ceph-mgr[76134]: [progress INFO root] complete: finished ev 8ff27a68-3bb4-4da6-8b2b-2fb7d83d6f37 (Updating rgw.rgw deployment (+3 -> 3))
Mar  1 04:43:27 np0005634532 ceph-mgr[76134]: [progress INFO root] Completed event 8ff27a68-3bb4-4da6-8b2b-2fb7d83d6f37 (Updating rgw.rgw deployment (+3 -> 3)) in 5 seconds
Mar  1 04:43:27 np0005634532 ceph-mgr[76134]: [cephadm INFO cephadm.services.cephadmservice] Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Mar  1 04:43:27 np0005634532 ceph-mgr[76134]: log_channel(cephadm) log [INF] : Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Mar  1 04:43:27 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Mar  1 04:43:27 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:43:27 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0)
Mar  1 04:43:27 np0005634532 systemd[1]: Started libpod-conmon-35db8eee11cfd61e0707d8d03ba13fa4a63c103ab0b4b52613565efc3863d06d.scope.
Mar  1 04:43:27 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:43:27 np0005634532 ceph-mgr[76134]: [progress INFO root] update: starting ev b2f23fdb-9207-4c0d-8a2a-9de2d1ddc4fa (Updating node-exporter deployment (+3 -> 3))
Mar  1 04:43:27 np0005634532 ceph-mgr[76134]: [cephadm INFO cephadm.serve] Deploying daemon node-exporter.compute-0 on compute-0
Mar  1 04:43:27 np0005634532 ceph-mgr[76134]: log_channel(cephadm) log [INF] : Deploying daemon node-exporter.compute-0 on compute-0
Mar  1 04:43:27 np0005634532 podman[91038]: 2026-03-01 09:43:27.370789849 +0000 UTC m=+0.021831184 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Mar  1 04:43:27 np0005634532 systemd[1]: Started libcrun container.
Mar  1 04:43:27 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7556a392861ed7e7b762df655f6c9f88897f5fd4dea333f451ef206905cf0657/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Mar  1 04:43:27 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7556a392861ed7e7b762df655f6c9f88897f5fd4dea333f451ef206905cf0657/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Mar  1 04:43:27 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7556a392861ed7e7b762df655f6c9f88897f5fd4dea333f451ef206905cf0657/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Mar  1 04:43:27 np0005634532 podman[91038]: 2026-03-01 09:43:27.493836998 +0000 UTC m=+0.144878363 container init 35db8eee11cfd61e0707d8d03ba13fa4a63c103ab0b4b52613565efc3863d06d (image=quay.io/ceph/ceph:v19, name=exciting_carson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Mar  1 04:43:27 np0005634532 podman[91038]: 2026-03-01 09:43:27.501243183 +0000 UTC m=+0.152284508 container start 35db8eee11cfd61e0707d8d03ba13fa4a63c103ab0b4b52613565efc3863d06d (image=quay.io/ceph/ceph:v19, name=exciting_carson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2)
Mar  1 04:43:27 np0005634532 podman[91038]: 2026-03-01 09:43:27.505791746 +0000 UTC m=+0.156833071 container attach 35db8eee11cfd61e0707d8d03ba13fa4a63c103ab0b4b52613565efc3863d06d (image=quay.io/ceph/ceph:v19, name=exciting_carson, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325)
Mar  1 04:43:27 np0005634532 ceph-mon[75825]: from='client.? 192.168.122.100:0/4090547352' entity='client.admin' cmd='[{"prefix": "mgr module disable", "module": "dashboard"}]': finished
Mar  1 04:43:27 np0005634532 ceph-mon[75825]: from='client.? 192.168.122.102:0/2863278829' entity='client.rgw.rgw.compute-2.zizzzn' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Mar  1 04:43:27 np0005634532 ceph-mon[75825]: from='client.? 192.168.122.101:0/950956334' entity='client.rgw.rgw.compute-1.wbcorv' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Mar  1 04:43:27 np0005634532 ceph-mon[75825]: from='client.? ' entity='client.rgw.rgw.compute-2.zizzzn' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Mar  1 04:43:27 np0005634532 ceph-mon[75825]: from='client.? ' entity='client.rgw.rgw.compute-1.wbcorv' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Mar  1 04:43:27 np0005634532 ceph-mon[75825]: from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:43:27 np0005634532 ceph-mon[75825]: from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:43:27 np0005634532 ceph-mon[75825]: from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:43:27 np0005634532 ceph-mon[75825]: from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:43:27 np0005634532 ceph-mon[75825]: from='mgr.14122 192.168.122.100:0/3760237270' entity='mgr.compute-0.ebwufc' 
Mar  1 04:43:27 np0005634532 systemd[1]: Reloading.
Mar  1 04:43:27 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module enable", "module": "dashboard"} v 0)
Mar  1 04:43:27 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4044458514' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "dashboard"}]: dispatch
Mar  1 04:43:27 np0005634532 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Mar  1 04:43:27 np0005634532 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Mar  1 04:43:28 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e35 do_prune osdmap full prune enabled
Mar  1 04:43:28 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.zizzzn' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Mar  1 04:43:28 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.wbcorv' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Mar  1 04:43:28 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e36 e36: 3 total, 3 up, 3 in
Mar  1 04:43:28 np0005634532 ceph-mon[75825]: log_channel(cluster) log [DBG] : osdmap e36: 3 total, 3 up, 3 in
Mar  1 04:43:28 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 36 pg[9.0( empty local-lis/les=35/36 n=0 ec=35/35 lis/c=0/0 les/c/f=0/0/0 sis=35) [0] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Mar  1 04:43:28 np0005634532 systemd[1]: Reloading.
Mar  1 04:43:28 np0005634532 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Mar  1 04:43:28 np0005634532 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Mar  1 04:43:28 np0005634532 systemd[1]: Starting Ceph node-exporter.compute-0 for 437b1e74-f995-5d64-af1d-257ce01d77ab...
Mar  1 04:43:28 np0005634532 ceph-osd[84309]: log_channel(cluster) log [DBG] : 4.f scrub starts
Mar  1 04:43:28 np0005634532 ceph-osd[84309]: log_channel(cluster) log [DBG] : 4.f scrub ok
Mar  1 04:43:28 np0005634532 bash[91894]: Trying to pull quay.io/prometheus/node-exporter:v1.7.0...
Mar  1 04:43:28 np0005634532 ceph-mon[75825]: Saving service rgw.rgw spec with placement compute-0;compute-1;compute-2
Mar  1 04:43:28 np0005634532 ceph-mon[75825]: Deploying daemon node-exporter.compute-0 on compute-0
Mar  1 04:43:28 np0005634532 ceph-mon[75825]: from='client.? 192.168.122.100:0/4044458514' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "dashboard"}]: dispatch
Mar  1 04:43:28 np0005634532 ceph-mon[75825]: from='client.? ' entity='client.rgw.rgw.compute-2.zizzzn' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Mar  1 04:43:28 np0005634532 ceph-mon[75825]: from='client.? ' entity='client.rgw.rgw.compute-1.wbcorv' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Mar  1 04:43:28 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4044458514' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "dashboard"}]': finished
Mar  1 04:43:28 np0005634532 ceph-mgr[76134]: mgr handle_mgr_map respawning because set of enabled modules changed!
Mar  1 04:43:28 np0005634532 ceph-mon[75825]: log_channel(cluster) log [DBG] : mgrmap e13: compute-0.ebwufc(active, since 2m), standbys: compute-2.dikzlj, compute-1.uyojxx
Mar  1 04:43:28 np0005634532 systemd[1]: libpod-35db8eee11cfd61e0707d8d03ba13fa4a63c103ab0b4b52613565efc3863d06d.scope: Deactivated successfully.
Mar  1 04:43:28 np0005634532 systemd[1]: session-27.scope: Deactivated successfully.
Mar  1 04:43:28 np0005634532 systemd-logind[832]: Session 27 logged out. Waiting for processes to exit.
Mar  1 04:43:28 np0005634532 systemd[1]: session-30.scope: Deactivated successfully.
Mar  1 04:43:28 np0005634532 systemd[1]: session-32.scope: Deactivated successfully.
Mar  1 04:43:28 np0005634532 systemd-logind[832]: Session 32 logged out. Waiting for processes to exit.
Mar  1 04:43:28 np0005634532 systemd[1]: session-29.scope: Deactivated successfully.
Mar  1 04:43:28 np0005634532 systemd[1]: session-24.scope: Deactivated successfully.
Mar  1 04:43:28 np0005634532 systemd[1]: session-26.scope: Deactivated successfully.
Mar  1 04:43:28 np0005634532 systemd[1]: session-25.scope: Deactivated successfully.
Mar  1 04:43:28 np0005634532 systemd[1]: session-31.scope: Deactivated successfully.
Mar  1 04:43:28 np0005634532 systemd[1]: session-28.scope: Deactivated successfully.
Mar  1 04:43:28 np0005634532 systemd-logind[832]: Session 30 logged out. Waiting for processes to exit.
Mar  1 04:43:28 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: ignoring --setuser ceph since I am not root
Mar  1 04:43:28 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: ignoring --setgroup ceph since I am not root
Mar  1 04:43:28 np0005634532 systemd-logind[832]: Session 33 logged out. Waiting for processes to exit.
Mar  1 04:43:28 np0005634532 systemd-logind[832]: Session 29 logged out. Waiting for processes to exit.
Mar  1 04:43:28 np0005634532 systemd-logind[832]: Session 31 logged out. Waiting for processes to exit.
Mar  1 04:43:28 np0005634532 systemd-logind[832]: Session 26 logged out. Waiting for processes to exit.
Mar  1 04:43:28 np0005634532 systemd-logind[832]: Session 25 logged out. Waiting for processes to exit.
Mar  1 04:43:28 np0005634532 ceph-mgr[76134]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mgr, pid 2
Mar  1 04:43:28 np0005634532 ceph-mgr[76134]: pidfile_write: ignore empty --pid-file
Mar  1 04:43:28 np0005634532 systemd-logind[832]: Session 24 logged out. Waiting for processes to exit.
Mar  1 04:43:28 np0005634532 podman[91908]: 2026-03-01 09:43:28.706662463 +0000 UTC m=+0.050522697 container died 35db8eee11cfd61e0707d8d03ba13fa4a63c103ab0b4b52613565efc3863d06d (image=quay.io/ceph/ceph:v19, name=exciting_carson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Mar  1 04:43:28 np0005634532 systemd-logind[832]: Session 28 logged out. Waiting for processes to exit.
Mar  1 04:43:28 np0005634532 systemd[1]: session-23.scope: Deactivated successfully.
Mar  1 04:43:28 np0005634532 systemd[1]: session-21.scope: Deactivated successfully.
Mar  1 04:43:28 np0005634532 systemd-logind[832]: Removed session 27.
Mar  1 04:43:28 np0005634532 systemd-logind[832]: Session 21 logged out. Waiting for processes to exit.
Mar  1 04:43:28 np0005634532 systemd-logind[832]: Session 23 logged out. Waiting for processes to exit.
Mar  1 04:43:28 np0005634532 systemd-logind[832]: Removed session 30.
Mar  1 04:43:28 np0005634532 systemd-logind[832]: Removed session 32.
Mar  1 04:43:28 np0005634532 systemd-logind[832]: Removed session 29.
Mar  1 04:43:28 np0005634532 systemd-logind[832]: Removed session 24.
Mar  1 04:43:28 np0005634532 systemd-logind[832]: Removed session 26.
Mar  1 04:43:28 np0005634532 systemd-logind[832]: Removed session 25.
Mar  1 04:43:28 np0005634532 ceph-mgr[76134]: mgr[py] Loading python module 'alerts'
Mar  1 04:43:28 np0005634532 systemd-logind[832]: Removed session 31.
Mar  1 04:43:28 np0005634532 systemd-logind[832]: Removed session 28.
Mar  1 04:43:28 np0005634532 systemd[1]: var-lib-containers-storage-overlay-7556a392861ed7e7b762df655f6c9f88897f5fd4dea333f451ef206905cf0657-merged.mount: Deactivated successfully.
Mar  1 04:43:28 np0005634532 systemd-logind[832]: Removed session 23.
Mar  1 04:43:28 np0005634532 systemd-logind[832]: Removed session 21.
Mar  1 04:43:28 np0005634532 podman[91908]: 2026-03-01 09:43:28.753493828 +0000 UTC m=+0.097354052 container remove 35db8eee11cfd61e0707d8d03ba13fa4a63c103ab0b4b52613565efc3863d06d (image=quay.io/ceph/ceph:v19, name=exciting_carson, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Mar  1 04:43:28 np0005634532 systemd[1]: libpod-conmon-35db8eee11cfd61e0707d8d03ba13fa4a63c103ab0b4b52613565efc3863d06d.scope: Deactivated successfully.
Mar  1 04:43:28 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: 2026-03-01T09:43:28.810+0000 7f99f6031140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Mar  1 04:43:28 np0005634532 ceph-mgr[76134]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Mar  1 04:43:28 np0005634532 ceph-mgr[76134]: mgr[py] Loading python module 'balancer'
Mar  1 04:43:28 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: 2026-03-01T09:43:28.877+0000 7f99f6031140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Mar  1 04:43:28 np0005634532 ceph-mgr[76134]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Mar  1 04:43:28 np0005634532 ceph-mgr[76134]: mgr[py] Loading python module 'cephadm'
Mar  1 04:43:28 np0005634532 bash[91894]: Getting image source signatures
Mar  1 04:43:28 np0005634532 bash[91894]: Copying blob sha256:324153f2810a9927fcce320af9e4e291e0b6e805cbdd1f338386c756b9defa24
Mar  1 04:43:28 np0005634532 bash[91894]: Copying blob sha256:2abcce694348cd2c949c0e98a7400ebdfd8341021bcf6b541bc72033ce982510
Mar  1 04:43:28 np0005634532 bash[91894]: Copying blob sha256:455fd88e5221bc1e278ef2d059cd70e4df99a24e5af050ede621534276f6cf9a
Mar  1 04:43:29 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e36 do_prune osdmap full prune enabled
Mar  1 04:43:29 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e37 e37: 3 total, 3 up, 3 in
Mar  1 04:43:29 np0005634532 ceph-mon[75825]: log_channel(cluster) log [DBG] : osdmap e37: 3 total, 3 up, 3 in
Mar  1 04:43:29 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"} v 0)
Mar  1 04:43:29 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.zizzzn' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Mar  1 04:43:29 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"} v 0)
Mar  1 04:43:29 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1811999405' entity='client.rgw.rgw.compute-0.dvtuyn' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Mar  1 04:43:29 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"} v 0)
Mar  1 04:43:29 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.wbcorv' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Mar  1 04:43:29 np0005634532 python3[91970]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 437b1e74-f995-5d64-af1d-257ce01d77ab -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   dashboard set-grafana-api-username admin _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Mar  1 04:43:29 np0005634532 podman[92002]: 2026-03-01 09:43:29.270959453 +0000 UTC m=+0.037559394 container create 329af5788ba7088ee022bb01520f2d2fa8df5eafcce556c79511fa09b8552461 (image=quay.io/ceph/ceph:v19, name=heuristic_cori, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1)
Mar  1 04:43:29 np0005634532 systemd[1]: Started libpod-conmon-329af5788ba7088ee022bb01520f2d2fa8df5eafcce556c79511fa09b8552461.scope.
Mar  1 04:43:29 np0005634532 systemd[1]: Started libcrun container.
Mar  1 04:43:29 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ab68162c7a84de77d55b71ad9dee74413d22943512770f349be70dc75441831/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Mar  1 04:43:29 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ab68162c7a84de77d55b71ad9dee74413d22943512770f349be70dc75441831/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Mar  1 04:43:29 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ab68162c7a84de77d55b71ad9dee74413d22943512770f349be70dc75441831/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Mar  1 04:43:29 np0005634532 podman[92002]: 2026-03-01 09:43:29.254662278 +0000 UTC m=+0.021262249 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Mar  1 04:43:29 np0005634532 ceph-osd[84309]: log_channel(cluster) log [DBG] : 4.4 deep-scrub starts
Mar  1 04:43:29 np0005634532 ceph-osd[84309]: log_channel(cluster) log [DBG] : 4.4 deep-scrub ok
Mar  1 04:43:29 np0005634532 podman[92002]: 2026-03-01 09:43:29.358440847 +0000 UTC m=+0.125040838 container init 329af5788ba7088ee022bb01520f2d2fa8df5eafcce556c79511fa09b8552461 (image=quay.io/ceph/ceph:v19, name=heuristic_cori, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid)
Mar  1 04:43:29 np0005634532 podman[92002]: 2026-03-01 09:43:29.366109438 +0000 UTC m=+0.132709389 container start 329af5788ba7088ee022bb01520f2d2fa8df5eafcce556c79511fa09b8552461 (image=quay.io/ceph/ceph:v19, name=heuristic_cori, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Mar  1 04:43:29 np0005634532 podman[92002]: 2026-03-01 09:43:29.374517897 +0000 UTC m=+0.141117848 container attach 329af5788ba7088ee022bb01520f2d2fa8df5eafcce556c79511fa09b8552461 (image=quay.io/ceph/ceph:v19, name=heuristic_cori, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Mar  1 04:43:29 np0005634532 ceph-mgr[76134]: mgr[py] Loading python module 'crash'
Mar  1 04:43:29 np0005634532 ceph-mon[75825]: from='client.? 192.168.122.100:0/4044458514' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "dashboard"}]': finished
Mar  1 04:43:29 np0005634532 ceph-mon[75825]: from='client.? 192.168.122.102:0/2863278829' entity='client.rgw.rgw.compute-2.zizzzn' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Mar  1 04:43:29 np0005634532 ceph-mon[75825]: from='client.? ' entity='client.rgw.rgw.compute-2.zizzzn' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Mar  1 04:43:29 np0005634532 ceph-mon[75825]: from='client.? 192.168.122.100:0/1811999405' entity='client.rgw.rgw.compute-0.dvtuyn' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Mar  1 04:43:29 np0005634532 ceph-mon[75825]: from='client.? 192.168.122.101:0/950956334' entity='client.rgw.rgw.compute-1.wbcorv' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Mar  1 04:43:29 np0005634532 ceph-mon[75825]: from='client.? ' entity='client.rgw.rgw.compute-1.wbcorv' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Mar  1 04:43:29 np0005634532 bash[91894]: Copying config sha256:72c9c208898624938c9e4183d6686ea4a5fd3f912bc29bc3f00147924c521a3e
Mar  1 04:43:29 np0005634532 bash[91894]: Writing manifest to image destination
Mar  1 04:43:29 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: 2026-03-01T09:43:29.610+0000 7f99f6031140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Mar  1 04:43:29 np0005634532 ceph-mgr[76134]: mgr[py] Module crash has missing NOTIFY_TYPES member
Mar  1 04:43:29 np0005634532 ceph-mgr[76134]: mgr[py] Loading python module 'dashboard'
Mar  1 04:43:29 np0005634532 podman[91894]: 2026-03-01 09:43:29.636259445 +0000 UTC m=+1.114383547 container create e104fed6cb8ecd593791384f3650da41ba603514e3f5be77683b4a91426bfe16 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Mar  1 04:43:29 np0005634532 podman[91894]: 2026-03-01 09:43:29.622252427 +0000 UTC m=+1.100376569 image pull 72c9c208898624938c9e4183d6686ea4a5fd3f912bc29bc3f00147924c521a3e quay.io/prometheus/node-exporter:v1.7.0
Mar  1 04:43:29 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8dbadd39b909e929e5c214bb459e113b3e61a8db0d76ae2659f3b8514c81fa26/merged/etc/node-exporter supports timestamps until 2038 (0x7fffffff)
Mar  1 04:43:29 np0005634532 podman[91894]: 2026-03-01 09:43:29.690026102 +0000 UTC m=+1.168150214 container init e104fed6cb8ecd593791384f3650da41ba603514e3f5be77683b4a91426bfe16 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Mar  1 04:43:29 np0005634532 podman[91894]: 2026-03-01 09:43:29.697849286 +0000 UTC m=+1.175973418 container start e104fed6cb8ecd593791384f3650da41ba603514e3f5be77683b4a91426bfe16 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Mar  1 04:43:29 np0005634532 bash[91894]: e104fed6cb8ecd593791384f3650da41ba603514e3f5be77683b4a91426bfe16
Mar  1 04:43:29 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-node-exporter-compute-0[92073]: ts=2026-03-01T09:43:29.704Z caller=node_exporter.go:192 level=info msg="Starting node_exporter" version="(version=1.7.0, branch=HEAD, revision=7333465abf9efba81876303bb57e6fadb946041b)"
Mar  1 04:43:29 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-node-exporter-compute-0[92073]: ts=2026-03-01T09:43:29.705Z caller=node_exporter.go:193 level=info msg="Build context" build_context="(go=go1.21.4, platform=linux/amd64, user=root@35918982f6d8, date=20231112-23:53:35, tags=netgo osusergo static_build)"
Mar  1 04:43:29 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-node-exporter-compute-0[92073]: ts=2026-03-01T09:43:29.705Z caller=diskstats_common.go:111 level=info collector=diskstats msg="Parsed flag --collector.diskstats.device-exclude" flag=^(ram|loop|fd|(h|s|v|xv)d[a-z]|nvme\d+n\d+p)\d+$
Mar  1 04:43:29 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-node-exporter-compute-0[92073]: ts=2026-03-01T09:43:29.705Z caller=diskstats_linux.go:265 level=error collector=diskstats msg="Failed to open directory, disabling udev device properties" path=/run/udev/data
Mar  1 04:43:29 np0005634532 systemd[1]: Started Ceph node-exporter.compute-0 for 437b1e74-f995-5d64-af1d-257ce01d77ab.
Mar  1 04:43:29 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-node-exporter-compute-0[92073]: ts=2026-03-01T09:43:29.706Z caller=filesystem_common.go:111 level=info collector=filesystem msg="Parsed flag --collector.filesystem.mount-points-exclude" flag=^/(dev|proc|run/credentials/.+|sys|var/lib/docker/.+|var/lib/containers/storage/.+)($|/)
Mar  1 04:43:29 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-node-exporter-compute-0[92073]: ts=2026-03-01T09:43:29.706Z caller=filesystem_common.go:113 level=info collector=filesystem msg="Parsed flag --collector.filesystem.fs-types-exclude" flag=^(autofs|binfmt_misc|bpf|cgroup2?|configfs|debugfs|devpts|devtmpfs|fusectl|hugetlbfs|iso9660|mqueue|nsfs|overlay|proc|procfs|pstore|rpc_pipefs|securityfs|selinuxfs|squashfs|sysfs|tracefs)$
Mar  1 04:43:29 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-node-exporter-compute-0[92073]: ts=2026-03-01T09:43:29.707Z caller=node_exporter.go:110 level=info msg="Enabled collectors"
Mar  1 04:43:29 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-node-exporter-compute-0[92073]: ts=2026-03-01T09:43:29.707Z caller=node_exporter.go:117 level=info collector=arp
Mar  1 04:43:29 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-node-exporter-compute-0[92073]: ts=2026-03-01T09:43:29.707Z caller=node_exporter.go:117 level=info collector=bcache
Mar  1 04:43:29 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-node-exporter-compute-0[92073]: ts=2026-03-01T09:43:29.707Z caller=node_exporter.go:117 level=info collector=bonding
Mar  1 04:43:29 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-node-exporter-compute-0[92073]: ts=2026-03-01T09:43:29.707Z caller=node_exporter.go:117 level=info collector=btrfs
Mar  1 04:43:29 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-node-exporter-compute-0[92073]: ts=2026-03-01T09:43:29.707Z caller=node_exporter.go:117 level=info collector=conntrack
Mar  1 04:43:29 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-node-exporter-compute-0[92073]: ts=2026-03-01T09:43:29.707Z caller=node_exporter.go:117 level=info collector=cpu
Mar  1 04:43:29 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-node-exporter-compute-0[92073]: ts=2026-03-01T09:43:29.707Z caller=node_exporter.go:117 level=info collector=cpufreq
Mar  1 04:43:29 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-node-exporter-compute-0[92073]: ts=2026-03-01T09:43:29.707Z caller=node_exporter.go:117 level=info collector=diskstats
Mar  1 04:43:29 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-node-exporter-compute-0[92073]: ts=2026-03-01T09:43:29.707Z caller=node_exporter.go:117 level=info collector=dmi
Mar  1 04:43:29 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-node-exporter-compute-0[92073]: ts=2026-03-01T09:43:29.707Z caller=node_exporter.go:117 level=info collector=edac
Mar  1 04:43:29 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-node-exporter-compute-0[92073]: ts=2026-03-01T09:43:29.707Z caller=node_exporter.go:117 level=info collector=entropy
Mar  1 04:43:29 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-node-exporter-compute-0[92073]: ts=2026-03-01T09:43:29.707Z caller=node_exporter.go:117 level=info collector=fibrechannel
Mar  1 04:43:29 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-node-exporter-compute-0[92073]: ts=2026-03-01T09:43:29.707Z caller=node_exporter.go:117 level=info collector=filefd
Mar  1 04:43:29 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-node-exporter-compute-0[92073]: ts=2026-03-01T09:43:29.707Z caller=node_exporter.go:117 level=info collector=filesystem
Mar  1 04:43:29 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-node-exporter-compute-0[92073]: ts=2026-03-01T09:43:29.707Z caller=node_exporter.go:117 level=info collector=hwmon
Mar  1 04:43:29 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-node-exporter-compute-0[92073]: ts=2026-03-01T09:43:29.707Z caller=node_exporter.go:117 level=info collector=infiniband
Mar  1 04:43:29 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-node-exporter-compute-0[92073]: ts=2026-03-01T09:43:29.707Z caller=node_exporter.go:117 level=info collector=ipvs
Mar  1 04:43:29 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-node-exporter-compute-0[92073]: ts=2026-03-01T09:43:29.707Z caller=node_exporter.go:117 level=info collector=loadavg
Mar  1 04:43:29 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-node-exporter-compute-0[92073]: ts=2026-03-01T09:43:29.707Z caller=node_exporter.go:117 level=info collector=mdadm
Mar  1 04:43:29 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-node-exporter-compute-0[92073]: ts=2026-03-01T09:43:29.707Z caller=node_exporter.go:117 level=info collector=meminfo
Mar  1 04:43:29 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-node-exporter-compute-0[92073]: ts=2026-03-01T09:43:29.707Z caller=node_exporter.go:117 level=info collector=netclass
Mar  1 04:43:29 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-node-exporter-compute-0[92073]: ts=2026-03-01T09:43:29.707Z caller=node_exporter.go:117 level=info collector=netdev
Mar  1 04:43:29 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-node-exporter-compute-0[92073]: ts=2026-03-01T09:43:29.707Z caller=node_exporter.go:117 level=info collector=netstat
Mar  1 04:43:29 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-node-exporter-compute-0[92073]: ts=2026-03-01T09:43:29.707Z caller=node_exporter.go:117 level=info collector=nfs
Mar  1 04:43:29 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-node-exporter-compute-0[92073]: ts=2026-03-01T09:43:29.707Z caller=node_exporter.go:117 level=info collector=nfsd
Mar  1 04:43:29 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-node-exporter-compute-0[92073]: ts=2026-03-01T09:43:29.707Z caller=node_exporter.go:117 level=info collector=nvme
Mar  1 04:43:29 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-node-exporter-compute-0[92073]: ts=2026-03-01T09:43:29.707Z caller=node_exporter.go:117 level=info collector=os
Mar  1 04:43:29 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-node-exporter-compute-0[92073]: ts=2026-03-01T09:43:29.707Z caller=node_exporter.go:117 level=info collector=powersupplyclass
Mar  1 04:43:29 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-node-exporter-compute-0[92073]: ts=2026-03-01T09:43:29.707Z caller=node_exporter.go:117 level=info collector=pressure
Mar  1 04:43:29 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-node-exporter-compute-0[92073]: ts=2026-03-01T09:43:29.707Z caller=node_exporter.go:117 level=info collector=rapl
Mar  1 04:43:29 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-node-exporter-compute-0[92073]: ts=2026-03-01T09:43:29.707Z caller=node_exporter.go:117 level=info collector=schedstat
Mar  1 04:43:29 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-node-exporter-compute-0[92073]: ts=2026-03-01T09:43:29.707Z caller=node_exporter.go:117 level=info collector=selinux
Mar  1 04:43:29 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-node-exporter-compute-0[92073]: ts=2026-03-01T09:43:29.707Z caller=node_exporter.go:117 level=info collector=sockstat
Mar  1 04:43:29 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-node-exporter-compute-0[92073]: ts=2026-03-01T09:43:29.707Z caller=node_exporter.go:117 level=info collector=softnet
Mar  1 04:43:29 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-node-exporter-compute-0[92073]: ts=2026-03-01T09:43:29.707Z caller=node_exporter.go:117 level=info collector=stat
Mar  1 04:43:29 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-node-exporter-compute-0[92073]: ts=2026-03-01T09:43:29.707Z caller=node_exporter.go:117 level=info collector=tapestats
Mar  1 04:43:29 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-node-exporter-compute-0[92073]: ts=2026-03-01T09:43:29.707Z caller=node_exporter.go:117 level=info collector=textfile
Mar  1 04:43:29 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-node-exporter-compute-0[92073]: ts=2026-03-01T09:43:29.707Z caller=node_exporter.go:117 level=info collector=thermal_zone
Mar  1 04:43:29 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-node-exporter-compute-0[92073]: ts=2026-03-01T09:43:29.707Z caller=node_exporter.go:117 level=info collector=time
Mar  1 04:43:29 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-node-exporter-compute-0[92073]: ts=2026-03-01T09:43:29.707Z caller=node_exporter.go:117 level=info collector=udp_queues
Mar  1 04:43:29 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-node-exporter-compute-0[92073]: ts=2026-03-01T09:43:29.707Z caller=node_exporter.go:117 level=info collector=uname
Mar  1 04:43:29 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-node-exporter-compute-0[92073]: ts=2026-03-01T09:43:29.707Z caller=node_exporter.go:117 level=info collector=vmstat
Mar  1 04:43:29 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-node-exporter-compute-0[92073]: ts=2026-03-01T09:43:29.707Z caller=node_exporter.go:117 level=info collector=xfs
Mar  1 04:43:29 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-node-exporter-compute-0[92073]: ts=2026-03-01T09:43:29.707Z caller=node_exporter.go:117 level=info collector=zfs
Mar  1 04:43:29 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-node-exporter-compute-0[92073]: ts=2026-03-01T09:43:29.709Z caller=tls_config.go:274 level=info msg="Listening on" address=[::]:9100
Mar  1 04:43:29 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-node-exporter-compute-0[92073]: ts=2026-03-01T09:43:29.709Z caller=tls_config.go:277 level=info msg="TLS is disabled." http2=false address=[::]:9100
Mar  1 04:43:29 np0005634532 systemd[1]: session-33.scope: Deactivated successfully.
Mar  1 04:43:29 np0005634532 systemd[1]: session-33.scope: Consumed 24.780s CPU time.
Mar  1 04:43:29 np0005634532 systemd-logind[832]: Removed session 33.
Mar  1 04:43:30 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e37 do_prune osdmap full prune enabled
Mar  1 04:43:30 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.zizzzn' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Mar  1 04:43:30 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1811999405' entity='client.rgw.rgw.compute-0.dvtuyn' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Mar  1 04:43:30 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.wbcorv' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Mar  1 04:43:30 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e38 e38: 3 total, 3 up, 3 in
Mar  1 04:43:30 np0005634532 ceph-mon[75825]: log_channel(cluster) log [DBG] : osdmap e38: 3 total, 3 up, 3 in
Mar  1 04:43:30 np0005634532 ceph-mgr[76134]: mgr[py] Loading python module 'devicehealth'
Mar  1 04:43:30 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e38 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Mar  1 04:43:30 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: 2026-03-01T09:43:30.187+0000 7f99f6031140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Mar  1 04:43:30 np0005634532 ceph-mgr[76134]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Mar  1 04:43:30 np0005634532 ceph-mgr[76134]: mgr[py] Loading python module 'diskprediction_local'
Mar  1 04:43:30 np0005634532 ceph-osd[84309]: log_channel(cluster) log [DBG] : 6.6 scrub starts
Mar  1 04:43:30 np0005634532 ceph-osd[84309]: log_channel(cluster) log [DBG] : 6.6 scrub ok
Mar  1 04:43:30 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Mar  1 04:43:30 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Mar  1 04:43:30 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]:  from numpy import show_config as show_numpy_config
Mar  1 04:43:30 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: 2026-03-01T09:43:30.332+0000 7f99f6031140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Mar  1 04:43:30 np0005634532 ceph-mgr[76134]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Mar  1 04:43:30 np0005634532 ceph-mgr[76134]: mgr[py] Loading python module 'influx'
Mar  1 04:43:30 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: 2026-03-01T09:43:30.395+0000 7f99f6031140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Mar  1 04:43:30 np0005634532 ceph-mgr[76134]: mgr[py] Module influx has missing NOTIFY_TYPES member
Mar  1 04:43:30 np0005634532 ceph-mgr[76134]: mgr[py] Loading python module 'insights'
Mar  1 04:43:30 np0005634532 ceph-mgr[76134]: mgr[py] Loading python module 'iostat'
Mar  1 04:43:30 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: 2026-03-01T09:43:30.514+0000 7f99f6031140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Mar  1 04:43:30 np0005634532 ceph-mgr[76134]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Mar  1 04:43:30 np0005634532 ceph-mgr[76134]: mgr[py] Loading python module 'k8sevents'
Mar  1 04:43:30 np0005634532 ceph-mon[75825]: from='client.? ' entity='client.rgw.rgw.compute-2.zizzzn' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Mar  1 04:43:30 np0005634532 ceph-mon[75825]: from='client.? 192.168.122.100:0/1811999405' entity='client.rgw.rgw.compute-0.dvtuyn' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Mar  1 04:43:30 np0005634532 ceph-mon[75825]: from='client.? ' entity='client.rgw.rgw.compute-1.wbcorv' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Mar  1 04:43:30 np0005634532 ceph-mgr[76134]: mgr[py] Loading python module 'localpool'
Mar  1 04:43:30 np0005634532 ceph-mgr[76134]: mgr[py] Loading python module 'mds_autoscaler'
Mar  1 04:43:31 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e38 do_prune osdmap full prune enabled
Mar  1 04:43:31 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e39 e39: 3 total, 3 up, 3 in
Mar  1 04:43:31 np0005634532 ceph-mon[75825]: log_channel(cluster) log [DBG] : osdmap e39: 3 total, 3 up, 3 in
Mar  1 04:43:31 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"} v 0)
Mar  1 04:43:31 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1811999405' entity='client.rgw.rgw.compute-0.dvtuyn' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Mar  1 04:43:31 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 39 pg[11.0( empty local-lis/les=0/0 n=0 ec=39/39 lis/c=0/0 les/c/f=0/0/0 sis=39) [0] r=0 lpr=39 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Mar  1 04:43:31 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"} v 0)
Mar  1 04:43:31 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.zizzzn' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Mar  1 04:43:31 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"} v 0)
Mar  1 04:43:31 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.wbcorv' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Mar  1 04:43:31 np0005634532 ceph-mgr[76134]: mgr[py] Loading python module 'mirroring'
Mar  1 04:43:31 np0005634532 ceph-mgr[76134]: mgr[py] Loading python module 'nfs'
Mar  1 04:43:31 np0005634532 ceph-osd[84309]: log_channel(cluster) log [DBG] : 6.4 scrub starts
Mar  1 04:43:31 np0005634532 ceph-osd[84309]: log_channel(cluster) log [DBG] : 6.4 scrub ok
Mar  1 04:43:31 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: 2026-03-01T09:43:31.432+0000 7f99f6031140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Mar  1 04:43:31 np0005634532 ceph-mgr[76134]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Mar  1 04:43:31 np0005634532 ceph-mgr[76134]: mgr[py] Loading python module 'orchestrator'
Mar  1 04:43:31 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: 2026-03-01T09:43:31.660+0000 7f99f6031140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Mar  1 04:43:31 np0005634532 ceph-mgr[76134]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Mar  1 04:43:31 np0005634532 ceph-mgr[76134]: mgr[py] Loading python module 'osd_perf_query'
Mar  1 04:43:31 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: 2026-03-01T09:43:31.729+0000 7f99f6031140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Mar  1 04:43:31 np0005634532 ceph-mgr[76134]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Mar  1 04:43:31 np0005634532 ceph-mgr[76134]: mgr[py] Loading python module 'osd_support'
Mar  1 04:43:31 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: 2026-03-01T09:43:31.789+0000 7f99f6031140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Mar  1 04:43:31 np0005634532 ceph-mgr[76134]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Mar  1 04:43:31 np0005634532 ceph-mgr[76134]: mgr[py] Loading python module 'pg_autoscaler'
Mar  1 04:43:31 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: 2026-03-01T09:43:31.858+0000 7f99f6031140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Mar  1 04:43:31 np0005634532 ceph-mgr[76134]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Mar  1 04:43:31 np0005634532 ceph-mgr[76134]: mgr[py] Loading python module 'progress'
Mar  1 04:43:31 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: 2026-03-01T09:43:31.920+0000 7f99f6031140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Mar  1 04:43:31 np0005634532 ceph-mgr[76134]: mgr[py] Module progress has missing NOTIFY_TYPES member
Mar  1 04:43:31 np0005634532 ceph-mgr[76134]: mgr[py] Loading python module 'prometheus'
Mar  1 04:43:32 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e39 do_prune osdmap full prune enabled
Mar  1 04:43:32 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1811999405' entity='client.rgw.rgw.compute-0.dvtuyn' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Mar  1 04:43:32 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.zizzzn' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Mar  1 04:43:32 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.wbcorv' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Mar  1 04:43:32 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e40 e40: 3 total, 3 up, 3 in
Mar  1 04:43:32 np0005634532 ceph-mon[75825]: log_channel(cluster) log [DBG] : osdmap e40: 3 total, 3 up, 3 in
Mar  1 04:43:32 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"} v 0)
Mar  1 04:43:32 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.wbcorv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Mar  1 04:43:32 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"} v 0)
Mar  1 04:43:32 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1811999405' entity='client.rgw.rgw.compute-0.dvtuyn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Mar  1 04:43:32 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"} v 0)
Mar  1 04:43:32 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.zizzzn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Mar  1 04:43:32 np0005634532 ceph-mon[75825]: from='client.? 192.168.122.100:0/1811999405' entity='client.rgw.rgw.compute-0.dvtuyn' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Mar  1 04:43:32 np0005634532 ceph-mon[75825]: from='client.? 192.168.122.102:0/2863278829' entity='client.rgw.rgw.compute-2.zizzzn' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Mar  1 04:43:32 np0005634532 ceph-mon[75825]: from='client.? ' entity='client.rgw.rgw.compute-2.zizzzn' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Mar  1 04:43:32 np0005634532 ceph-mon[75825]: from='client.? 192.168.122.101:0/950956334' entity='client.rgw.rgw.compute-1.wbcorv' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Mar  1 04:43:32 np0005634532 ceph-mon[75825]: from='client.? ' entity='client.rgw.rgw.compute-1.wbcorv' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Mar  1 04:43:32 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 40 pg[11.0( empty local-lis/les=39/40 n=0 ec=39/39 lis/c=0/0 les/c/f=0/0/0 sis=39) [0] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Mar  1 04:43:32 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: 2026-03-01T09:43:32.210+0000 7f99f6031140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Mar  1 04:43:32 np0005634532 ceph-mgr[76134]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Mar  1 04:43:32 np0005634532 ceph-mgr[76134]: mgr[py] Loading python module 'rbd_support'
Mar  1 04:43:32 np0005634532 ceph-osd[84309]: log_channel(cluster) log [DBG] : 6.0 scrub starts
Mar  1 04:43:32 np0005634532 ceph-osd[84309]: log_channel(cluster) log [DBG] : 6.0 scrub ok
Mar  1 04:43:32 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: 2026-03-01T09:43:32.300+0000 7f99f6031140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Mar  1 04:43:32 np0005634532 ceph-mgr[76134]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Mar  1 04:43:32 np0005634532 ceph-mgr[76134]: mgr[py] Loading python module 'restful'
Mar  1 04:43:32 np0005634532 ceph-mgr[76134]: mgr[py] Loading python module 'rgw'
Mar  1 04:43:32 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: 2026-03-01T09:43:32.705+0000 7f99f6031140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Mar  1 04:43:32 np0005634532 ceph-mgr[76134]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Mar  1 04:43:32 np0005634532 ceph-mgr[76134]: mgr[py] Loading python module 'rook'
Mar  1 04:43:33 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e40 do_prune osdmap full prune enabled
Mar  1 04:43:33 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-1.wbcorv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Mar  1 04:43:33 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1811999405' entity='client.rgw.rgw.compute-0.dvtuyn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Mar  1 04:43:33 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='client.? ' entity='client.rgw.rgw.compute-2.zizzzn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Mar  1 04:43:33 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e41 e41: 3 total, 3 up, 3 in
Mar  1 04:43:33 np0005634532 ceph-mon[75825]: log_channel(cluster) log [DBG] : osdmap e41: 3 total, 3 up, 3 in
Mar  1 04:43:33 np0005634532 ceph-mon[75825]: from='client.? 192.168.122.100:0/1811999405' entity='client.rgw.rgw.compute-0.dvtuyn' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Mar  1 04:43:33 np0005634532 ceph-mon[75825]: from='client.? ' entity='client.rgw.rgw.compute-2.zizzzn' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Mar  1 04:43:33 np0005634532 ceph-mon[75825]: from='client.? ' entity='client.rgw.rgw.compute-1.wbcorv' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Mar  1 04:43:33 np0005634532 ceph-mon[75825]: from='client.? 192.168.122.102:0/2863278829' entity='client.rgw.rgw.compute-2.zizzzn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Mar  1 04:43:33 np0005634532 ceph-mon[75825]: from='client.? 192.168.122.101:0/950956334' entity='client.rgw.rgw.compute-1.wbcorv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Mar  1 04:43:33 np0005634532 ceph-mon[75825]: from='client.? ' entity='client.rgw.rgw.compute-1.wbcorv' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Mar  1 04:43:33 np0005634532 ceph-mon[75825]: from='client.? 192.168.122.100:0/1811999405' entity='client.rgw.rgw.compute-0.dvtuyn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Mar  1 04:43:33 np0005634532 ceph-mon[75825]: from='client.? ' entity='client.rgw.rgw.compute-2.zizzzn' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Mar  1 04:43:33 np0005634532 ceph-mon[75825]: from='client.? ' entity='client.rgw.rgw.compute-1.wbcorv' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Mar  1 04:43:33 np0005634532 ceph-mon[75825]: from='client.? 192.168.122.100:0/1811999405' entity='client.rgw.rgw.compute-0.dvtuyn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Mar  1 04:43:33 np0005634532 ceph-mon[75825]: from='client.? ' entity='client.rgw.rgw.compute-2.zizzzn' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Mar  1 04:43:33 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: 2026-03-01T09:43:33.250+0000 7f99f6031140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Mar  1 04:43:33 np0005634532 ceph-mgr[76134]: mgr[py] Module rook has missing NOTIFY_TYPES member
Mar  1 04:43:33 np0005634532 ceph-mgr[76134]: mgr[py] Loading python module 'selftest'
Mar  1 04:43:33 np0005634532 ceph-osd[84309]: log_channel(cluster) log [DBG] : 4.0 deep-scrub starts
Mar  1 04:43:33 np0005634532 ceph-osd[84309]: log_channel(cluster) log [DBG] : 4.0 deep-scrub ok
Mar  1 04:43:33 np0005634532 radosgw[91037]: v1 topic migration: starting v1 topic migration..
Mar  1 04:43:33 np0005634532 radosgw[91037]: LDAP not started since no server URIs were provided in the configuration.
Mar  1 04:43:33 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-rgw-rgw-compute-0-dvtuyn[91033]: 2026-03-01T09:43:33.316+0000 7f891f531980 -1 LDAP not started since no server URIs were provided in the configuration.
Mar  1 04:43:33 np0005634532 radosgw[91037]: v1 topic migration: finished v1 topic migration
Mar  1 04:43:33 np0005634532 radosgw[91037]: INFO: RGWReshardLock::lock found lock on reshard.0000000000 to be held by another RGW process; skipping for now
Mar  1 04:43:33 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: 2026-03-01T09:43:33.322+0000 7f99f6031140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Mar  1 04:43:33 np0005634532 ceph-mgr[76134]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Mar  1 04:43:33 np0005634532 ceph-mgr[76134]: mgr[py] Loading python module 'snap_schedule'
Mar  1 04:43:33 np0005634532 radosgw[91037]: INFO: RGWReshardLock::lock found lock on reshard.0000000001 to be held by another RGW process; skipping for now
Mar  1 04:43:33 np0005634532 radosgw[91037]: INFO: RGWReshardLock::lock found lock on reshard.0000000003 to be held by another RGW process; skipping for now
Mar  1 04:43:33 np0005634532 radosgw[91037]: framework: beast
Mar  1 04:43:33 np0005634532 radosgw[91037]: framework conf key: ssl_certificate, val: config://rgw/cert/$realm/$zone.crt
Mar  1 04:43:33 np0005634532 radosgw[91037]: framework conf key: ssl_private_key, val: config://rgw/cert/$realm/$zone.key
Mar  1 04:43:33 np0005634532 radosgw[91037]: INFO: RGWReshardLock::lock found lock on reshard.0000000005 to be held by another RGW process; skipping for now
Mar  1 04:43:33 np0005634532 radosgw[91037]: starting handler: beast
Mar  1 04:43:33 np0005634532 radosgw[91037]: INFO: RGWReshardLock::lock found lock on reshard.0000000006 to be held by another RGW process; skipping for now
Mar  1 04:43:33 np0005634532 radosgw[91037]: set uid:gid to 167:167 (ceph:ceph)
Mar  1 04:43:33 np0005634532 radosgw[91037]: mgrc service_daemon_register rgw.14370 metadata {arch=x86_64,ceph_release=squid,ceph_version=ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable),ceph_version_short=19.2.3,container_hostname=compute-0,container_image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec,cpu=AMD EPYC-Rome Processor,distro=centos,distro_description=CentOS Stream 9,distro_version=9,frontend_config#0=beast endpoint=192.168.122.100:8082,frontend_type#0=beast,hostname=compute-0,id=rgw.compute-0.dvtuyn,kernel_description=#1 SMP PREEMPT_DYNAMIC Thu Feb 19 10:49:27 UTC 2026,kernel_version=5.14.0-686.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7864280,num_handles=1,os=Linux,pid=2,realm_id=,realm_name=,zone_id=cd5c293a-4523-4a5e-898c-09aafdf3802f,zone_name=default,zonegroup_id=488aad47-6726-4ab2-b81e-4590056a15ff,zonegroup_name=default}
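The service_daemon_register line above is radosgw publishing its metadata (zone, frontend endpoint 192.168.122.100:8082, container image, kernel) into the cluster service map via the mgr client. Assuming admin credentials on the same cluster, that registration should be visible with (a sketch):

    ceph service dump     # service map, including the rgw.14370 entry registered above
    ceph -s               # cluster status; rgw appears under 'services'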
Mar  1 04:43:33 np0005634532 radosgw[91037]: INFO: RGWReshardLock::lock found lock on reshard.0000000008 to be held by another RGW process; skipping for now
Mar  1 04:43:33 np0005634532 radosgw[91037]: INFO: RGWReshardLock::lock found lock on reshard.0000000009 to be held by another RGW process; skipping for now
Mar  1 04:43:33 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: 2026-03-01T09:43:33.402+0000 7f99f6031140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Mar  1 04:43:33 np0005634532 ceph-mgr[76134]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Mar  1 04:43:33 np0005634532 ceph-mgr[76134]: mgr[py] Loading python module 'stats'
Mar  1 04:43:33 np0005634532 radosgw[91037]: INFO: RGWReshardLock::lock found lock on reshard.0000000011 to be held by another RGW process; skipping for now
Mar  1 04:43:33 np0005634532 radosgw[91037]: INFO: RGWReshardLock::lock found lock on reshard.0000000012 to be held by another RGW process; skipping for now
Mar  1 04:43:33 np0005634532 radosgw[91037]: INFO: RGWReshardLock::lock found lock on reshard.0000000014 to be held by another RGW process; skipping for now
Mar  1 04:43:33 np0005634532 radosgw[91037]: INFO: RGWReshardLock::lock found lock on reshard.0000000015 to be held by another RGW process; skipping for now
Mar  1 04:43:33 np0005634532 ceph-mgr[76134]: mgr[py] Loading python module 'status'
Mar  1 04:43:33 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: 2026-03-01T09:43:33.549+0000 7f99f6031140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Mar  1 04:43:33 np0005634532 ceph-mgr[76134]: mgr[py] Module status has missing NOTIFY_TYPES member
Mar  1 04:43:33 np0005634532 ceph-mgr[76134]: mgr[py] Loading python module 'telegraf'
Mar  1 04:43:33 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: 2026-03-01T09:43:33.621+0000 7f99f6031140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Mar  1 04:43:33 np0005634532 ceph-mgr[76134]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Mar  1 04:43:33 np0005634532 ceph-mgr[76134]: mgr[py] Loading python module 'telemetry'
Mar  1 04:43:33 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: 2026-03-01T09:43:33.780+0000 7f99f6031140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Mar  1 04:43:33 np0005634532 ceph-mgr[76134]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Mar  1 04:43:33 np0005634532 ceph-mgr[76134]: mgr[py] Loading python module 'test_orchestrator'
Mar  1 04:43:33 np0005634532 ceph-mon[75825]: log_channel(cluster) log [DBG] : Standby manager daemon compute-1.uyojxx restarted
Mar  1 04:43:33 np0005634532 ceph-mon[75825]: log_channel(cluster) log [DBG] : Standby manager daemon compute-1.uyojxx started
Mar  1 04:43:33 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: 2026-03-01T09:43:33.994+0000 7f99f6031140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Mar  1 04:43:33 np0005634532 ceph-mgr[76134]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Mar  1 04:43:33 np0005634532 ceph-mgr[76134]: mgr[py] Loading python module 'volumes'
Mar  1 04:43:34 np0005634532 ceph-mon[75825]: log_channel(cluster) log [DBG] : mgrmap e14: compute-0.ebwufc(active, since 2m), standbys: compute-2.dikzlj, compute-1.uyojxx
Mar  1 04:43:34 np0005634532 ceph-mon[75825]: log_channel(cluster) log [DBG] : Standby manager daemon compute-2.dikzlj restarted
Mar  1 04:43:34 np0005634532 ceph-mon[75825]: log_channel(cluster) log [DBG] : Standby manager daemon compute-2.dikzlj started
Mar  1 04:43:34 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: 2026-03-01T09:43:34.248+0000 7f99f6031140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Mar  1 04:43:34 np0005634532 ceph-mgr[76134]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Mar  1 04:43:34 np0005634532 ceph-mgr[76134]: mgr[py] Loading python module 'zabbix'
Mar  1 04:43:34 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: 2026-03-01T09:43:34.317+0000 7f99f6031140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Mar  1 04:43:34 np0005634532 ceph-mgr[76134]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
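Each "Module X has missing NOTIFY_TYPES member" line above is emitted once per python module as ceph-mgr loads it. In this trace it appears to be benign load-time noise rather than a failure: the same modules are constructed successfully further down ("mgr load Constructed class from module: ..."). One way to confirm which modules actually came up, assuming admin access (sketch):

    ceph mgr module ls    # enabled vs. available mgr modules on the active mgr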
Mar  1 04:43:34 np0005634532 ceph-mon[75825]: log_channel(cluster) log [INF] : Active manager daemon compute-0.ebwufc restarted
Mar  1 04:43:34 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e41 do_prune osdmap full prune enabled
Mar  1 04:43:34 np0005634532 ceph-mon[75825]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.ebwufc
Mar  1 04:43:34 np0005634532 ceph-mgr[76134]: ms_deliver_dispatch: unhandled message 0x55704bd43860 mon_map magic: 0 from mon.0 v2:192.168.122.100:3300/0
Mar  1 04:43:34 np0005634532 ceph-osd[84309]: log_channel(cluster) log [DBG] : 4.7 scrub starts
Mar  1 04:43:34 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e42 e42: 3 total, 3 up, 3 in
Mar  1 04:43:34 np0005634532 ceph-osd[84309]: log_channel(cluster) log [DBG] : 4.7 scrub ok
Mar  1 04:43:34 np0005634532 ceph-mgr[76134]: mgr handle_mgr_map Activating!
Mar  1 04:43:34 np0005634532 ceph-mgr[76134]: mgr handle_mgr_map I am now activating
Mar  1 04:43:34 np0005634532 ceph-mon[75825]: log_channel(cluster) log [DBG] : osdmap e42: 3 total, 3 up, 3 in
Mar  1 04:43:34 np0005634532 ceph-mon[75825]: log_channel(cluster) log [DBG] : mgrmap e15: compute-0.ebwufc(active, starting, since 0.0316197s), standbys: compute-1.uyojxx, compute-2.dikzlj
Mar  1 04:43:34 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Mar  1 04:43:34 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14382 192.168.122.100:0/3410977955' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Mar  1 04:43:34 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Mar  1 04:43:34 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14382 192.168.122.100:0/3410977955' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Mar  1 04:43:34 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Mar  1 04:43:34 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14382 192.168.122.100:0/3410977955' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Mar  1 04:43:34 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.ebwufc", "id": "compute-0.ebwufc"} v 0)
Mar  1 04:43:34 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14382 192.168.122.100:0/3410977955' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "mgr metadata", "who": "compute-0.ebwufc", "id": "compute-0.ebwufc"}]: dispatch
Mar  1 04:43:34 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-1.uyojxx", "id": "compute-1.uyojxx"} v 0)
Mar  1 04:43:34 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14382 192.168.122.100:0/3410977955' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "mgr metadata", "who": "compute-1.uyojxx", "id": "compute-1.uyojxx"}]: dispatch
Mar  1 04:43:34 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-2.dikzlj", "id": "compute-2.dikzlj"} v 0)
Mar  1 04:43:34 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14382 192.168.122.100:0/3410977955' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "mgr metadata", "who": "compute-2.dikzlj", "id": "compute-2.dikzlj"}]: dispatch
Mar  1 04:43:34 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Mar  1 04:43:34 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14382 192.168.122.100:0/3410977955' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Mar  1 04:43:34 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Mar  1 04:43:34 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14382 192.168.122.100:0/3410977955' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Mar  1 04:43:34 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Mar  1 04:43:34 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14382 192.168.122.100:0/3410977955' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Mar  1 04:43:34 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata"} v 0)
Mar  1 04:43:34 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14382 192.168.122.100:0/3410977955' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "mds metadata"}]: dispatch
Mar  1 04:43:34 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).mds e1 all = 1
Mar  1 04:43:34 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata"} v 0)
Mar  1 04:43:34 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14382 192.168.122.100:0/3410977955' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd metadata"}]: dispatch
Mar  1 04:43:34 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata"} v 0)
Mar  1 04:43:34 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14382 192.168.122.100:0/3410977955' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "mon metadata"}]: dispatch
Mar  1 04:43:34 np0005634532 ceph-mgr[76134]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Mar  1 04:43:34 np0005634532 ceph-mgr[76134]: mgr load Constructed class from module: balancer
Mar  1 04:43:34 np0005634532 ceph-mgr[76134]: [balancer INFO root] Starting
Mar  1 04:43:34 np0005634532 ceph-mgr[76134]: [cephadm DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Mar  1 04:43:34 np0005634532 ceph-mgr[76134]: [balancer INFO root] Optimize plan auto_2026-03-01_09:43:34
Mar  1 04:43:34 np0005634532 ceph-mgr[76134]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Mar  1 04:43:34 np0005634532 ceph-mgr[76134]: [balancer INFO root] Some PGs (1.000000) are unknown; try again later
Mar  1 04:43:34 np0005634532 ceph-mon[75825]: log_channel(cluster) log [INF] : Manager daemon compute-0.ebwufc is now available
Mar  1 04:43:34 np0005634532 ceph-mgr[76134]: mgr load Constructed class from module: cephadm
Mar  1 04:43:34 np0005634532 ceph-mgr[76134]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Mar  1 04:43:34 np0005634532 ceph-mgr[76134]: mgr load Constructed class from module: crash
Mar  1 04:43:34 np0005634532 ceph-mgr[76134]: [dashboard DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Mar  1 04:43:34 np0005634532 ceph-mgr[76134]: mgr load Constructed class from module: dashboard
Mar  1 04:43:34 np0005634532 ceph-mgr[76134]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Mar  1 04:43:34 np0005634532 ceph-mgr[76134]: mgr load Constructed class from module: devicehealth
Mar  1 04:43:34 np0005634532 ceph-mgr[76134]: [dashboard INFO access_control] Loading user roles DB version=2
Mar  1 04:43:34 np0005634532 ceph-mgr[76134]: [dashboard INFO sso] Loading SSO DB version=1
Mar  1 04:43:34 np0005634532 ceph-mgr[76134]: [dashboard INFO root] server: ssl=no host=192.168.122.100 port=8443
Mar  1 04:43:34 np0005634532 ceph-mgr[76134]: [dashboard INFO root] Configured CherryPy, starting engine...
Mar  1 04:43:34 np0005634532 ceph-mgr[76134]: [devicehealth INFO root] Starting
Mar  1 04:43:34 np0005634532 ceph-mgr[76134]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Mar  1 04:43:34 np0005634532 ceph-mgr[76134]: mgr load Constructed class from module: iostat
Mar  1 04:43:34 np0005634532 ceph-mgr[76134]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Mar  1 04:43:34 np0005634532 ceph-mgr[76134]: mgr load Constructed class from module: nfs
Mar  1 04:43:34 np0005634532 ceph-mgr[76134]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Mar  1 04:43:34 np0005634532 ceph-mgr[76134]: mgr load Constructed class from module: orchestrator
Mar  1 04:43:34 np0005634532 ceph-mgr[76134]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Mar  1 04:43:34 np0005634532 ceph-mgr[76134]: mgr load Constructed class from module: pg_autoscaler
Mar  1 04:43:34 np0005634532 ceph-mgr[76134]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Mar  1 04:43:34 np0005634532 ceph-mgr[76134]: mgr load Constructed class from module: progress
Mar  1 04:43:34 np0005634532 ceph-mgr[76134]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Mar  1 04:43:34 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] _maybe_adjust
Mar  1 04:43:34 np0005634532 ceph-mgr[76134]: [progress INFO root] Loading...
Mar  1 04:43:34 np0005634532 ceph-mgr[76134]: [progress INFO root] Loaded [<progress.module.GhostEvent object at 0x7f9973371fd0>, <progress.module.GhostEvent object at 0x7f9973371f40>, <progress.module.GhostEvent object at 0x7f9973371040>, <progress.module.GhostEvent object at 0x7f997334b040>, <progress.module.GhostEvent object at 0x7f997334b070>, <progress.module.GhostEvent object at 0x7f997334b0a0>, <progress.module.GhostEvent object at 0x7f997334b0d0>, <progress.module.GhostEvent object at 0x7f997334b100>, <progress.module.GhostEvent object at 0x7f997334b130>, <progress.module.GhostEvent object at 0x7f997334b160>, <progress.module.GhostEvent object at 0x7f997334b190>] historic events
Mar  1 04:43:34 np0005634532 ceph-mgr[76134]: [progress INFO root] Loaded OSDMap, ready.
Mar  1 04:43:34 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] recovery thread starting
Mar  1 04:43:34 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] starting setup
Mar  1 04:43:34 np0005634532 ceph-mgr[76134]: mgr load Constructed class from module: rbd_support
Mar  1 04:43:34 np0005634532 ceph-mgr[76134]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Mar  1 04:43:34 np0005634532 ceph-mgr[76134]: mgr load Constructed class from module: restful
Mar  1 04:43:34 np0005634532 ceph-mgr[76134]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Mar  1 04:43:34 np0005634532 ceph-mgr[76134]: mgr load Constructed class from module: status
Mar  1 04:43:34 np0005634532 ceph-mgr[76134]: [restful INFO root] server_addr: :: server_port: 8003
Mar  1 04:43:34 np0005634532 ceph-mgr[76134]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Mar  1 04:43:34 np0005634532 ceph-mgr[76134]: mgr load Constructed class from module: telemetry
Mar  1 04:43:34 np0005634532 ceph-mgr[76134]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Mar  1 04:43:34 np0005634532 ceph-mgr[76134]: [restful WARNING root] server not running: no certificate configured
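The restful module above picks its bind address (:: port 8003) but then declines to serve because no TLS certificate is configured. Assuming the stock module behavior documented upstream, generating a self-signed certificate is one way to clear this warning (sketch; restart of the module not shown):

    ceph restful create-self-signed-cert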
Mar  1 04:43:34 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.ebwufc/mirror_snapshot_schedule"} v 0)
Mar  1 04:43:34 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14382 192.168.122.100:0/3410977955' entity='mgr.compute-0.ebwufc' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.ebwufc/mirror_snapshot_schedule"}]: dispatch
Mar  1 04:43:34 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Mar  1 04:43:34 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: vms, start_after=
Mar  1 04:43:34 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: volumes, start_after=
Mar  1 04:43:34 np0005634532 ceph-mgr[76134]: mgr load Constructed class from module: volumes
Mar  1 04:43:34 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: backups, start_after=
Mar  1 04:43:34 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: images, start_after=
Mar  1 04:43:34 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Mar  1 04:43:34 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] PerfHandler: starting
Mar  1 04:43:34 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_task_task: vms, start_after=
Mar  1 04:43:34 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_task_task: volumes, start_after=
Mar  1 04:43:34 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_task_task: backups, start_after=
Mar  1 04:43:34 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_task_task: images, start_after=
Mar  1 04:43:34 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] TaskHandler: starting
Mar  1 04:43:34 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.ebwufc/trash_purge_schedule"} v 0)
Mar  1 04:43:34 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14382 192.168.122.100:0/3410977955' entity='mgr.compute-0.ebwufc' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.ebwufc/trash_purge_schedule"}]: dispatch
Mar  1 04:43:34 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Mar  1 04:43:34 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: vms, start_after=
Mar  1 04:43:34 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: volumes, start_after=
Mar  1 04:43:34 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: backups, start_after=
Mar  1 04:43:34 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: images, start_after=
Mar  1 04:43:34 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Mar  1 04:43:34 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] setup complete
Mar  1 04:43:34 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFS -> /api/cephfs
Mar  1 04:43:34 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFsUi -> /ui-api/cephfs
Mar  1 04:43:34 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSubvolume -> /api/cephfs/subvolume
Mar  1 04:43:34 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSubvolumeGroups -> /api/cephfs/subvolume/group
Mar  1 04:43:34 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSubvolumeSnapshots -> /api/cephfs/subvolume/snapshot
Mar  1 04:43:34 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFsSnapshotClone -> /api/cephfs/subvolume/snapshot/clone
Mar  1 04:43:34 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSnapshotSchedule -> /api/cephfs/snapshot/schedule
Mar  1 04:43:34 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: IscsiUi -> /ui-api/iscsi
Mar  1 04:43:34 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Iscsi -> /api/iscsi
Mar  1 04:43:34 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: IscsiTarget -> /api/iscsi/target
Mar  1 04:43:34 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NFSGaneshaCluster -> /api/nfs-ganesha/cluster
Mar  1 04:43:34 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NFSGaneshaExports -> /api/nfs-ganesha/export
Mar  1 04:43:34 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NFSGaneshaUi -> /ui-api/nfs-ganesha
Mar  1 04:43:34 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Orchestrator -> /ui-api/orchestrator
Mar  1 04:43:34 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Service -> /api/service
Mar  1 04:43:34 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroring -> /api/block/mirroring
Mar  1 04:43:34 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringSummary -> /api/block/mirroring/summary
Mar  1 04:43:34 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringPoolMode -> /api/block/mirroring/pool
Mar  1 04:43:34 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringPoolBootstrap -> /api/block/mirroring/pool/{pool_name}/bootstrap
Mar  1 04:43:34 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringPoolPeer -> /api/block/mirroring/pool/{pool_name}/peer
Mar  1 04:43:34 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringStatus -> /ui-api/block/mirroring
Mar  1 04:43:34 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Pool -> /api/pool
Mar  1 04:43:34 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PoolUi -> /ui-api/pool
Mar  1 04:43:34 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RBDPool -> /api/pool
Mar  1 04:43:34 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Rbd -> /api/block/image
Mar  1 04:43:34 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdStatus -> /ui-api/block/rbd
Mar  1 04:43:34 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdSnapshot -> /api/block/image/{image_spec}/snap
Mar  1 04:43:34 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdTrash -> /api/block/image/trash
Mar  1 04:43:34 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdNamespace -> /api/block/pool/{pool_name}/namespace
Mar  1 04:43:34 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Rgw -> /ui-api/rgw
Mar  1 04:43:34 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwMultisiteStatus -> /ui-api/rgw/multisite
Mar  1 04:43:34 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwMultisiteController -> /api/rgw/multisite
Mar  1 04:43:34 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwDaemon -> /api/rgw/daemon
Mar  1 04:43:34 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwSite -> /api/rgw/site
Mar  1 04:43:34 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwBucket -> /api/rgw/bucket
Mar  1 04:43:34 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwBucketUi -> /ui-api/rgw/bucket
Mar  1 04:43:34 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwUser -> /api/rgw/user
Mar  1 04:43:34 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: rgwroles_CRUDClass -> /api/rgw/roles
Mar  1 04:43:34 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: rgwroles_CRUDClassMetadata -> /ui-api/rgw/roles
Mar  1 04:43:34 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwRealm -> /api/rgw/realm
Mar  1 04:43:34 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwZonegroup -> /api/rgw/zonegroup
Mar  1 04:43:34 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwZone -> /api/rgw/zone
Mar  1 04:43:34 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Auth -> /api/auth
Mar  1 04:43:34 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: clusteruser_CRUDClass -> /api/cluster/user
Mar  1 04:43:34 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: clusteruser_CRUDClassMetadata -> /ui-api/cluster/user
Mar  1 04:43:34 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Cluster -> /api/cluster
Mar  1 04:43:34 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ClusterUpgrade -> /api/cluster/upgrade
Mar  1 04:43:34 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ClusterConfiguration -> /api/cluster_conf
Mar  1 04:43:34 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CrushRule -> /api/crush_rule
Mar  1 04:43:34 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CrushRuleUi -> /ui-api/crush_rule
Mar  1 04:43:34 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Daemon -> /api/daemon
Mar  1 04:43:34 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Docs -> /docs
Mar  1 04:43:34 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ErasureCodeProfile -> /api/erasure_code_profile
Mar  1 04:43:34 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ErasureCodeProfileUi -> /ui-api/erasure_code_profile
Mar  1 04:43:34 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeedbackController -> /api/feedback
Mar  1 04:43:34 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeedbackApiController -> /api/feedback/api_key
Mar  1 04:43:34 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeedbackUiController -> /ui-api/feedback/api_key
Mar  1 04:43:34 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FrontendLogging -> /ui-api/logging
Mar  1 04:43:34 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Grafana -> /api/grafana
Mar  1 04:43:34 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Host -> /api/host
Mar  1 04:43:34 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: HostUi -> /ui-api/host
Mar  1 04:43:34 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Health -> /api/health
Mar  1 04:43:34 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: HomeController -> /
Mar  1 04:43:34 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: LangsController -> /ui-api/langs
Mar  1 04:43:34 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: LoginController -> /ui-api/login
Mar  1 04:43:34 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Logs -> /api/logs
Mar  1 04:43:34 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MgrModules -> /api/mgr/module
Mar  1 04:43:34 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Monitor -> /api/monitor
Mar  1 04:43:34 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFGateway -> /api/nvmeof/gateway
Mar  1 04:43:34 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFSpdk -> /api/nvmeof/spdk
Mar  1 04:43:34 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFSubsystem -> /api/nvmeof/subsystem
Mar  1 04:43:34 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFListener -> /api/nvmeof/subsystem/{nqn}/listener
Mar  1 04:43:34 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFNamespace -> /api/nvmeof/subsystem/{nqn}/namespace
Mar  1 04:43:34 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFHost -> /api/nvmeof/subsystem/{nqn}/host
Mar  1 04:43:34 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFConnection -> /api/nvmeof/subsystem/{nqn}/connection
Mar  1 04:43:34 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFTcpUI -> /ui-api/nvmeof
Mar  1 04:43:34 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Osd -> /api/osd
Mar  1 04:43:34 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: OsdUi -> /ui-api/osd
Mar  1 04:43:34 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: OsdFlagsController -> /api/osd/flags
Mar  1 04:43:34 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MdsPerfCounter -> /api/perf_counters/mds
Mar  1 04:43:34 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MonPerfCounter -> /api/perf_counters/mon
Mar  1 04:43:34 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: OsdPerfCounter -> /api/perf_counters/osd
Mar  1 04:43:34 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwPerfCounter -> /api/perf_counters/rgw
Mar  1 04:43:34 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirrorPerfCounter -> /api/perf_counters/rbd-mirror
Mar  1 04:43:34 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MgrPerfCounter -> /api/perf_counters/mgr
Mar  1 04:43:34 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: TcmuRunnerPerfCounter -> /api/perf_counters/tcmu-runner
Mar  1 04:43:34 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PerfCounters -> /api/perf_counters
Mar  1 04:43:34 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PrometheusReceiver -> /api/prometheus_receiver
Mar  1 04:43:34 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Prometheus -> /api/prometheus
Mar  1 04:43:34 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PrometheusNotifications -> /api/prometheus/notifications
Mar  1 04:43:34 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PrometheusSettings -> /ui-api/prometheus
Mar  1 04:43:34 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Role -> /api/role
Mar  1 04:43:34 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Scope -> /ui-api/scope
Mar  1 04:43:34 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Saml2 -> /auth/saml2
Mar  1 04:43:34 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Settings -> /api/settings
Mar  1 04:43:34 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: StandardSettings -> /ui-api/standard_settings
Mar  1 04:43:34 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Summary -> /api/summary
Mar  1 04:43:34 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Task -> /api/task
Mar  1 04:43:34 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Telemetry -> /api/telemetry
Mar  1 04:43:34 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: User -> /api/user
Mar  1 04:43:34 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: UserPasswordPolicy -> /api/user
Mar  1 04:43:34 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: UserChangePassword -> /api/user/{username}
Mar  1 04:43:34 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeatureTogglesEndpoint -> /api/feature_toggles
Mar  1 04:43:34 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MessageOfTheDay -> /ui-api/motd
Mar  1 04:43:34 np0005634532 systemd-logind[832]: New session 34 of user ceph-admin.
Mar  1 04:43:34 np0005634532 systemd[1]: Started Session 34 of User ceph-admin.
Mar  1 04:43:34 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.module] Engine started.
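With the CherryPy engine started, the dashboard from the configuration lines above is listening without SSL on 192.168.122.100:8443. A hedged way to read the serving URLs back from the active mgr rather than guessing from the log:

    ceph mgr services     # JSON map of module name to URL (dashboard, restful, prometheus, ...)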
Mar  1 04:43:35 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e42 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Mar  1 04:43:35 np0005634532 ceph-mon[75825]: Active manager daemon compute-0.ebwufc restarted
Mar  1 04:43:35 np0005634532 ceph-mon[75825]: Activating manager daemon compute-0.ebwufc
Mar  1 04:43:35 np0005634532 ceph-mon[75825]: Manager daemon compute-0.ebwufc is now available
Mar  1 04:43:35 np0005634532 ceph-mon[75825]: from='mgr.14382 192.168.122.100:0/3410977955' entity='mgr.compute-0.ebwufc' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.ebwufc/mirror_snapshot_schedule"}]: dispatch
Mar  1 04:43:35 np0005634532 ceph-mon[75825]: from='mgr.14382 192.168.122.100:0/3410977955' entity='mgr.compute-0.ebwufc' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.ebwufc/trash_purge_schedule"}]: dispatch
Mar  1 04:43:35 np0005634532 ceph-osd[84309]: log_channel(cluster) log [DBG] : 6.f scrub starts
Mar  1 04:43:35 np0005634532 ceph-osd[84309]: log_channel(cluster) log [DBG] : 6.f scrub ok
Mar  1 04:43:35 np0005634532 ceph-mon[75825]: log_channel(cluster) log [DBG] : mgrmap e16: compute-0.ebwufc(active, since 1.0595s), standbys: compute-1.uyojxx, compute-2.dikzlj
Mar  1 04:43:35 np0005634532 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14388 -' entity='client.admin' cmd=[{"prefix": "dashboard set-grafana-api-username", "value": "admin", "target": ["mon-mgr", ""]}]: dispatch
Mar  1 04:43:35 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/GRAFANA_API_USERNAME}] v 0)
Mar  1 04:43:35 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v3: 197 pgs: 1 active+clean+scrubbing, 196 active+clean; 454 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Mar  1 04:43:35 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14382 192.168.122.100:0/3410977955' entity='mgr.compute-0.ebwufc' 
Mar  1 04:43:35 np0005634532 heuristic_cori[92034]: Option GRAFANA_API_USERNAME updated
Mar  1 04:43:35 np0005634532 systemd[1]: libpod-329af5788ba7088ee022bb01520f2d2fa8df5eafcce556c79511fa09b8552461.scope: Deactivated successfully.
Mar  1 04:43:35 np0005634532 podman[92002]: 2026-03-01 09:43:35.428794075 +0000 UTC m=+6.195394036 container died 329af5788ba7088ee022bb01520f2d2fa8df5eafcce556c79511fa09b8552461 (image=quay.io/ceph/ceph:v19, name=heuristic_cori, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Mar  1 04:43:35 np0005634532 systemd[1]: var-lib-containers-storage-overlay-8ab68162c7a84de77d55b71ad9dee74413d22943512770f349be70dc75441831-merged.mount: Deactivated successfully.
Mar  1 04:43:35 np0005634532 podman[92002]: 2026-03-01 09:43:35.465574309 +0000 UTC m=+6.232174250 container remove 329af5788ba7088ee022bb01520f2d2fa8df5eafcce556c79511fa09b8552461 (image=quay.io/ceph/ceph:v19, name=heuristic_cori, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Mar  1 04:43:35 np0005634532 systemd[1]: libpod-conmon-329af5788ba7088ee022bb01520f2d2fa8df5eafcce556c79511fa09b8552461.scope: Deactivated successfully.
Mar  1 04:43:35 np0005634532 podman[92385]: 2026-03-01 09:43:35.5195007 +0000 UTC m=+0.065695924 container exec 6664049ace048b4adddae1365cde7c16773d892ae197c1350f58a5e5b1183392 (image=quay.io/ceph/ceph:v19, name=ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mon-compute-0, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Mar  1 04:43:35 np0005634532 ceph-mgr[76134]: [cephadm INFO cherrypy.error] [01/Mar/2026:09:43:35] ENGINE Bus STARTING
Mar  1 04:43:35 np0005634532 ceph-mgr[76134]: log_channel(cephadm) log [INF] : [01/Mar/2026:09:43:35] ENGINE Bus STARTING
Mar  1 04:43:35 np0005634532 podman[92385]: 2026-03-01 09:43:35.610729848 +0000 UTC m=+0.156925082 container exec_died 6664049ace048b4adddae1365cde7c16773d892ae197c1350f58a5e5b1183392 (image=quay.io/ceph/ceph:v19, name=ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Mar  1 04:43:35 np0005634532 ceph-mgr[76134]: [cephadm INFO cherrypy.error] [01/Mar/2026:09:43:35] ENGINE Serving on https://192.168.122.100:7150
Mar  1 04:43:35 np0005634532 ceph-mgr[76134]: log_channel(cephadm) log [INF] : [01/Mar/2026:09:43:35] ENGINE Serving on https://192.168.122.100:7150
Mar  1 04:43:35 np0005634532 ceph-mgr[76134]: [cephadm INFO cherrypy.error] [01/Mar/2026:09:43:35] ENGINE Client ('192.168.122.100', 43408) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Mar  1 04:43:35 np0005634532 ceph-mgr[76134]: log_channel(cephadm) log [INF] : [01/Mar/2026:09:43:35] ENGINE Client ('192.168.122.100', 43408) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Mar  1 04:43:35 np0005634532 python3[92459]: ansible-ansible.legacy.command Invoked with stdin=/home/grafana_password.yml stdin_add_newline=False _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 437b1e74-f995-5d64-af1d-257ce01d77ab -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   dashboard set-grafana-api-password -i - _uses_shell=False strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None
Mar  1 04:43:35 np0005634532 ceph-mgr[76134]: [cephadm INFO cherrypy.error] [01/Mar/2026:09:43:35] ENGINE Serving on http://192.168.122.100:8765
Mar  1 04:43:35 np0005634532 ceph-mgr[76134]: log_channel(cephadm) log [INF] : [01/Mar/2026:09:43:35] ENGINE Serving on http://192.168.122.100:8765
Mar  1 04:43:35 np0005634532 ceph-mgr[76134]: [cephadm INFO cherrypy.error] [01/Mar/2026:09:43:35] ENGINE Bus STARTED
Mar  1 04:43:35 np0005634532 ceph-mgr[76134]: log_channel(cephadm) log [INF] : [01/Mar/2026:09:43:35] ENGINE Bus STARTED
Mar  1 04:43:35 np0005634532 podman[92498]: 2026-03-01 09:43:35.817581841 +0000 UTC m=+0.047090802 container create 83895b76f382a1fa95c56fbb80b8ea2abaa859e9e649fc36c62d3de45e1ffd7a (image=quay.io/ceph/ceph:v19, name=elated_kalam, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Mar  1 04:43:35 np0005634532 systemd[1]: Started libpod-conmon-83895b76f382a1fa95c56fbb80b8ea2abaa859e9e649fc36c62d3de45e1ffd7a.scope.
Mar  1 04:43:35 np0005634532 systemd[1]: Started libcrun container.
Mar  1 04:43:35 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f9bb6975eec7dda7e562e9217bcc3d7151e23e354c14020f793816a5474bd9b3/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Mar  1 04:43:35 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f9bb6975eec7dda7e562e9217bcc3d7151e23e354c14020f793816a5474bd9b3/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Mar  1 04:43:35 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f9bb6975eec7dda7e562e9217bcc3d7151e23e354c14020f793816a5474bd9b3/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Mar  1 04:43:35 np0005634532 podman[92498]: 2026-03-01 09:43:35.793984224 +0000 UTC m=+0.023493196 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Mar  1 04:43:35 np0005634532 podman[92498]: 2026-03-01 09:43:35.898706188 +0000 UTC m=+0.128215149 container init 83895b76f382a1fa95c56fbb80b8ea2abaa859e9e649fc36c62d3de45e1ffd7a (image=quay.io/ceph/ceph:v19, name=elated_kalam, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Mar  1 04:43:35 np0005634532 podman[92498]: 2026-03-01 09:43:35.90321947 +0000 UTC m=+0.132728411 container start 83895b76f382a1fa95c56fbb80b8ea2abaa859e9e649fc36c62d3de45e1ffd7a (image=quay.io/ceph/ceph:v19, name=elated_kalam, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Mar  1 04:43:35 np0005634532 podman[92498]: 2026-03-01 09:43:35.90640068 +0000 UTC m=+0.135909621 container attach 83895b76f382a1fa95c56fbb80b8ea2abaa859e9e649fc36c62d3de45e1ffd7a (image=quay.io/ceph/ceph:v19, name=elated_kalam, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Mar  1 04:43:36 np0005634532 podman[92608]: 2026-03-01 09:43:36.102269629 +0000 UTC m=+0.059347946 container exec e104fed6cb8ecd593791384f3650da41ba603514e3f5be77683b4a91426bfe16 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Mar  1 04:43:36 np0005634532 podman[92608]: 2026-03-01 09:43:36.137414883 +0000 UTC m=+0.094493100 container exec_died e104fed6cb8ecd593791384f3650da41ba603514e3f5be77683b4a91426bfe16 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Mar  1 04:43:36 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Mar  1 04:43:36 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Mar  1 04:43:36 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14382 192.168.122.100:0/3410977955' entity='mgr.compute-0.ebwufc' 
Mar  1 04:43:36 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Mar  1 04:43:36 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14382 192.168.122.100:0/3410977955' entity='mgr.compute-0.ebwufc' 
Mar  1 04:43:36 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Mar  1 04:43:36 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Mar  1 04:43:36 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14382 192.168.122.100:0/3410977955' entity='mgr.compute-0.ebwufc' 
Mar  1 04:43:36 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14382 192.168.122.100:0/3410977955' entity='mgr.compute-0.ebwufc' 
Mar  1 04:43:36 np0005634532 ceph-mon[75825]: from='mgr.14382 192.168.122.100:0/3410977955' entity='mgr.compute-0.ebwufc' 
Mar  1 04:43:36 np0005634532 ceph-mon[75825]: [01/Mar/2026:09:43:35] ENGINE Bus STARTING
Mar  1 04:43:36 np0005634532 ceph-mon[75825]: [01/Mar/2026:09:43:35] ENGINE Serving on https://192.168.122.100:7150
Mar  1 04:43:36 np0005634532 ceph-mon[75825]: [01/Mar/2026:09:43:35] ENGINE Client ('192.168.122.100', 43408) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Mar  1 04:43:36 np0005634532 ceph-mon[75825]: from='mgr.14382 192.168.122.100:0/3410977955' entity='mgr.compute-0.ebwufc' 
Mar  1 04:43:36 np0005634532 ceph-mon[75825]: from='mgr.14382 192.168.122.100:0/3410977955' entity='mgr.compute-0.ebwufc' 
Mar  1 04:43:36 np0005634532 ceph-mon[75825]: from='mgr.14382 192.168.122.100:0/3410977955' entity='mgr.compute-0.ebwufc' 
Mar  1 04:43:36 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14382 192.168.122.100:0/3410977955' entity='mgr.compute-0.ebwufc' 
Mar  1 04:43:36 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Mar  1 04:43:36 np0005634532 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14409 -' entity='client.admin' cmd=[{"prefix": "dashboard set-grafana-api-password", "target": ["mon-mgr", ""]}]: dispatch
Mar  1 04:43:36 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/GRAFANA_API_PASSWORD}] v 0)
Mar  1 04:43:36 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14382 192.168.122.100:0/3410977955' entity='mgr.compute-0.ebwufc' 
Mar  1 04:43:36 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14382 192.168.122.100:0/3410977955' entity='mgr.compute-0.ebwufc' 
Mar  1 04:43:36 np0005634532 elated_kalam[92537]: Option GRAFANA_API_PASSWORD updated
Mar  1 04:43:36 np0005634532 systemd[1]: libpod-83895b76f382a1fa95c56fbb80b8ea2abaa859e9e649fc36c62d3de45e1ffd7a.scope: Deactivated successfully.
Mar  1 04:43:36 np0005634532 podman[92498]: 2026-03-01 09:43:36.282727006 +0000 UTC m=+0.512235957 container died 83895b76f382a1fa95c56fbb80b8ea2abaa859e9e649fc36c62d3de45e1ffd7a (image=quay.io/ceph/ceph:v19, name=elated_kalam, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Mar  1 04:43:36 np0005634532 systemd[1]: var-lib-containers-storage-overlay-f9bb6975eec7dda7e562e9217bcc3d7151e23e354c14020f793816a5474bd9b3-merged.mount: Deactivated successfully.
Mar  1 04:43:36 np0005634532 podman[92498]: 2026-03-01 09:43:36.3219135 +0000 UTC m=+0.551422441 container remove 83895b76f382a1fa95c56fbb80b8ea2abaa859e9e649fc36c62d3de45e1ffd7a (image=quay.io/ceph/ceph:v19, name=elated_kalam, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Mar  1 04:43:36 np0005634532 systemd[1]: libpod-conmon-83895b76f382a1fa95c56fbb80b8ea2abaa859e9e649fc36c62d3de45e1ffd7a.scope: Deactivated successfully.
Mar  1 04:43:36 np0005634532 ceph-osd[84309]: log_channel(cluster) log [DBG] : 4.b deep-scrub starts
Mar  1 04:43:36 np0005634532 ceph-osd[84309]: log_channel(cluster) log [DBG] : 4.b deep-scrub ok
Mar  1 04:43:36 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v4: 197 pgs: 1 active+clean+scrubbing, 196 active+clean; 454 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Mar  1 04:43:36 np0005634532 ceph-mgr[76134]: [devicehealth INFO root] Check health
Mar  1 04:43:36 np0005634532 python3[92744]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 437b1e74-f995-5d64-af1d-257ce01d77ab -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   dashboard set-alertmanager-api-host http://192.168.122.100:9093#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Mar  1 04:43:36 np0005634532 podman[92758]: 2026-03-01 09:43:36.70524242 +0000 UTC m=+0.043101132 container create 728abdccb69ffd073888c472358761c533aeb1d3e72af55902e40b9687a5a24b (image=quay.io/ceph/ceph:v19, name=brave_galois, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Mar  1 04:43:36 np0005634532 systemd[1]: Started libpod-conmon-728abdccb69ffd073888c472358761c533aeb1d3e72af55902e40b9687a5a24b.scope.
Mar  1 04:43:36 np0005634532 systemd[1]: Started libcrun container.
Mar  1 04:43:36 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0372a03e54b161ca8e22b304df9d5f1ca737c63be0732b25600e0ad7280e02a/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Mar  1 04:43:36 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0372a03e54b161ca8e22b304df9d5f1ca737c63be0732b25600e0ad7280e02a/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Mar  1 04:43:36 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0372a03e54b161ca8e22b304df9d5f1ca737c63be0732b25600e0ad7280e02a/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Mar  1 04:43:36 np0005634532 podman[92758]: 2026-03-01 09:43:36.767705383 +0000 UTC m=+0.105564135 container init 728abdccb69ffd073888c472358761c533aeb1d3e72af55902e40b9687a5a24b (image=quay.io/ceph/ceph:v19, name=brave_galois, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default)
Mar  1 04:43:36 np0005634532 podman[92758]: 2026-03-01 09:43:36.773449536 +0000 UTC m=+0.111308258 container start 728abdccb69ffd073888c472358761c533aeb1d3e72af55902e40b9687a5a24b (image=quay.io/ceph/ceph:v19, name=brave_galois, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Mar  1 04:43:36 np0005634532 podman[92758]: 2026-03-01 09:43:36.777566188 +0000 UTC m=+0.115424920 container attach 728abdccb69ffd073888c472358761c533aeb1d3e72af55902e40b9687a5a24b (image=quay.io/ceph/ceph:v19, name=brave_galois, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Mar  1 04:43:36 np0005634532 podman[92758]: 2026-03-01 09:43:36.687063848 +0000 UTC m=+0.024922590 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Mar  1 04:43:37 np0005634532 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14421 -' entity='client.admin' cmd=[{"prefix": "dashboard set-alertmanager-api-host", "value": "http://192.168.122.100:9093", "target": ["mon-mgr", ""]}]: dispatch
Mar  1 04:43:37 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/ALERTMANAGER_API_HOST}] v 0)
Mar  1 04:43:37 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14382 192.168.122.100:0/3410977955' entity='mgr.compute-0.ebwufc' 
Mar  1 04:43:37 np0005634532 brave_galois[92777]: Option ALERTMANAGER_API_HOST updated
Mar  1 04:43:37 np0005634532 systemd[1]: libpod-728abdccb69ffd073888c472358761c533aeb1d3e72af55902e40b9687a5a24b.scope: Deactivated successfully.
Mar  1 04:43:37 np0005634532 podman[92758]: 2026-03-01 09:43:37.127899752 +0000 UTC m=+0.465758464 container died 728abdccb69ffd073888c472358761c533aeb1d3e72af55902e40b9687a5a24b (image=quay.io/ceph/ceph:v19, name=brave_galois, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Mar  1 04:43:37 np0005634532 systemd[1]: var-lib-containers-storage-overlay-f0372a03e54b161ca8e22b304df9d5f1ca737c63be0732b25600e0ad7280e02a-merged.mount: Deactivated successfully.
Mar  1 04:43:37 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Mar  1 04:43:37 np0005634532 podman[92758]: 2026-03-01 09:43:37.161939197 +0000 UTC m=+0.499797909 container remove 728abdccb69ffd073888c472358761c533aeb1d3e72af55902e40b9687a5a24b (image=quay.io/ceph/ceph:v19, name=brave_galois, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Mar  1 04:43:37 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14382 192.168.122.100:0/3410977955' entity='mgr.compute-0.ebwufc' 
Mar  1 04:43:37 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Mar  1 04:43:37 np0005634532 systemd[1]: libpod-conmon-728abdccb69ffd073888c472358761c533aeb1d3e72af55902e40b9687a5a24b.scope: Deactivated successfully.
Mar  1 04:43:37 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Mar  1 04:43:37 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14382 192.168.122.100:0/3410977955' entity='mgr.compute-0.ebwufc' 
Mar  1 04:43:37 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0)
Mar  1 04:43:37 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14382 192.168.122.100:0/3410977955' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Mar  1 04:43:37 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14382 192.168.122.100:0/3410977955' entity='mgr.compute-0.ebwufc' 
Mar  1 04:43:37 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Mar  1 04:43:37 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14382 192.168.122.100:0/3410977955' entity='mgr.compute-0.ebwufc' 
Mar  1 04:43:37 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Mar  1 04:43:37 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14382 192.168.122.100:0/3410977955' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Mar  1 04:43:37 np0005634532 ceph-mon[75825]: [01/Mar/2026:09:43:35] ENGINE Serving on http://192.168.122.100:8765
Mar  1 04:43:37 np0005634532 ceph-mon[75825]: [01/Mar/2026:09:43:35] ENGINE Bus STARTED
Mar  1 04:43:37 np0005634532 ceph-mon[75825]: from='mgr.14382 192.168.122.100:0/3410977955' entity='mgr.compute-0.ebwufc' 
Mar  1 04:43:37 np0005634532 ceph-mon[75825]: from='mgr.14382 192.168.122.100:0/3410977955' entity='mgr.compute-0.ebwufc' 
Mar  1 04:43:37 np0005634532 ceph-mon[75825]: from='mgr.14382 192.168.122.100:0/3410977955' entity='mgr.compute-0.ebwufc' 
Mar  1 04:43:37 np0005634532 ceph-mon[75825]: from='mgr.14382 192.168.122.100:0/3410977955' entity='mgr.compute-0.ebwufc' 
Mar  1 04:43:37 np0005634532 ceph-mon[75825]: from='mgr.14382 192.168.122.100:0/3410977955' entity='mgr.compute-0.ebwufc' 
Mar  1 04:43:37 np0005634532 ceph-mon[75825]: from='mgr.14382 192.168.122.100:0/3410977955' entity='mgr.compute-0.ebwufc' 
Mar  1 04:43:37 np0005634532 ceph-mon[75825]: from='mgr.14382 192.168.122.100:0/3410977955' entity='mgr.compute-0.ebwufc' 
Mar  1 04:43:37 np0005634532 ceph-mon[75825]: from='mgr.14382 192.168.122.100:0/3410977955' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Mar  1 04:43:37 np0005634532 ceph-mon[75825]: from='mgr.14382 192.168.122.100:0/3410977955' entity='mgr.compute-0.ebwufc' 
Mar  1 04:43:37 np0005634532 ceph-mon[75825]: from='mgr.14382 192.168.122.100:0/3410977955' entity='mgr.compute-0.ebwufc' 
Mar  1 04:43:37 np0005634532 ceph-mon[75825]: from='mgr.14382 192.168.122.100:0/3410977955' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Mar  1 04:43:37 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Mar  1 04:43:37 np0005634532 ceph-mon[75825]: log_channel(cluster) log [DBG] : mgrmap e17: compute-0.ebwufc(active, since 2s), standbys: compute-1.uyojxx, compute-2.dikzlj
Mar  1 04:43:37 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14382 192.168.122.100:0/3410977955' entity='mgr.compute-0.ebwufc' 
Mar  1 04:43:37 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Mar  1 04:43:37 np0005634532 ceph-osd[84309]: log_channel(cluster) log [DBG] : 6.9 scrub starts
Mar  1 04:43:37 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14382 192.168.122.100:0/3410977955' entity='mgr.compute-0.ebwufc' 
Mar  1 04:43:37 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0)
Mar  1 04:43:37 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14382 192.168.122.100:0/3410977955' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Mar  1 04:43:37 np0005634532 ceph-osd[84309]: log_channel(cluster) log [DBG] : 6.9 scrub ok
Mar  1 04:43:37 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14382 192.168.122.100:0/3410977955' entity='mgr.compute-0.ebwufc' cmd='[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]': finished
Mar  1 04:43:37 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Mar  1 04:43:37 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14382 192.168.122.100:0/3410977955' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Mar  1 04:43:37 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Mar  1 04:43:37 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14382 192.168.122.100:0/3410977955' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Mar  1 04:43:37 np0005634532 ceph-mgr[76134]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.conf
Mar  1 04:43:37 np0005634532 ceph-mgr[76134]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.conf
Mar  1 04:43:37 np0005634532 ceph-mgr[76134]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.conf
Mar  1 04:43:37 np0005634532 ceph-mgr[76134]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.conf
Mar  1 04:43:37 np0005634532 ceph-mgr[76134]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.conf
Mar  1 04:43:37 np0005634532 ceph-mgr[76134]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.conf
Mar  1 04:43:37 np0005634532 python3[92917]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 437b1e74-f995-5d64-af1d-257ce01d77ab -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   dashboard set-prometheus-api-host http://192.168.122.100:9092#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Mar  1 04:43:37 np0005634532 podman[92976]: 2026-03-01 09:43:37.559218996 +0000 UTC m=+0.048712701 container create e9e02dfcbef46c0ffaa770f06753dd6d8f649733447987e82ab528168f455a07 (image=quay.io/ceph/ceph:v19, name=quizzical_hopper, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325)
Mar  1 04:43:37 np0005634532 systemd[1]: Started libpod-conmon-e9e02dfcbef46c0ffaa770f06753dd6d8f649733447987e82ab528168f455a07.scope.
Mar  1 04:43:37 np0005634532 systemd[1]: Started libcrun container.
Mar  1 04:43:37 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c09158695460da8991b87f4d14b9c379319b7f57be6ea69a826cf498287d4c7e/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Mar  1 04:43:37 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c09158695460da8991b87f4d14b9c379319b7f57be6ea69a826cf498287d4c7e/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Mar  1 04:43:37 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c09158695460da8991b87f4d14b9c379319b7f57be6ea69a826cf498287d4c7e/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Mar  1 04:43:37 np0005634532 podman[92976]: 2026-03-01 09:43:37.531957229 +0000 UTC m=+0.021450954 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Mar  1 04:43:37 np0005634532 podman[92976]: 2026-03-01 09:43:37.656708098 +0000 UTC m=+0.146201793 container init e9e02dfcbef46c0ffaa770f06753dd6d8f649733447987e82ab528168f455a07 (image=quay.io/ceph/ceph:v19, name=quizzical_hopper, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Mar  1 04:43:37 np0005634532 podman[92976]: 2026-03-01 09:43:37.664387908 +0000 UTC m=+0.153881573 container start e9e02dfcbef46c0ffaa770f06753dd6d8f649733447987e82ab528168f455a07 (image=quay.io/ceph/ceph:v19, name=quizzical_hopper, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0)
Mar  1 04:43:37 np0005634532 podman[92976]: 2026-03-01 09:43:37.666909741 +0000 UTC m=+0.156403426 container attach e9e02dfcbef46c0ffaa770f06753dd6d8f649733447987e82ab528168f455a07 (image=quay.io/ceph/ceph:v19, name=quizzical_hopper, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Mar  1 04:43:37 np0005634532 ceph-mgr[76134]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/437b1e74-f995-5d64-af1d-257ce01d77ab/config/ceph.conf
Mar  1 04:43:37 np0005634532 ceph-mgr[76134]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/437b1e74-f995-5d64-af1d-257ce01d77ab/config/ceph.conf
Mar  1 04:43:37 np0005634532 ceph-mgr[76134]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/437b1e74-f995-5d64-af1d-257ce01d77ab/config/ceph.conf
Mar  1 04:43:37 np0005634532 ceph-mgr[76134]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/437b1e74-f995-5d64-af1d-257ce01d77ab/config/ceph.conf
Mar  1 04:43:37 np0005634532 ceph-mgr[76134]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/437b1e74-f995-5d64-af1d-257ce01d77ab/config/ceph.conf
Mar  1 04:43:37 np0005634532 ceph-mgr[76134]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/437b1e74-f995-5d64-af1d-257ce01d77ab/config/ceph.conf
Mar  1 04:43:37 np0005634532 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14427 -' entity='client.admin' cmd=[{"prefix": "dashboard set-prometheus-api-host", "value": "http://192.168.122.100:9092", "target": ["mon-mgr", ""]}]: dispatch
Mar  1 04:43:37 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/PROMETHEUS_API_HOST}] v 0)
Mar  1 04:43:38 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14382 192.168.122.100:0/3410977955' entity='mgr.compute-0.ebwufc' 
Mar  1 04:43:38 np0005634532 quizzical_hopper[93032]: Option PROMETHEUS_API_HOST updated
Mar  1 04:43:38 np0005634532 systemd[1]: libpod-e9e02dfcbef46c0ffaa770f06753dd6d8f649733447987e82ab528168f455a07.scope: Deactivated successfully.
Mar  1 04:43:38 np0005634532 podman[92976]: 2026-03-01 09:43:38.02313665 +0000 UTC m=+0.512630315 container died e9e02dfcbef46c0ffaa770f06753dd6d8f649733447987e82ab528168f455a07 (image=quay.io/ceph/ceph:v19, name=quizzical_hopper, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Mar  1 04:43:38 np0005634532 systemd[1]: var-lib-containers-storage-overlay-c09158695460da8991b87f4d14b9c379319b7f57be6ea69a826cf498287d4c7e-merged.mount: Deactivated successfully.
Mar  1 04:43:38 np0005634532 podman[92976]: 2026-03-01 09:43:38.054373176 +0000 UTC m=+0.543866841 container remove e9e02dfcbef46c0ffaa770f06753dd6d8f649733447987e82ab528168f455a07 (image=quay.io/ceph/ceph:v19, name=quizzical_hopper, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Mar  1 04:43:38 np0005634532 systemd[1]: libpod-conmon-e9e02dfcbef46c0ffaa770f06753dd6d8f649733447987e82ab528168f455a07.scope: Deactivated successfully.
Mar  1 04:43:38 np0005634532 ceph-mon[75825]: from='mgr.14382 192.168.122.100:0/3410977955' entity='mgr.compute-0.ebwufc' 
Mar  1 04:43:38 np0005634532 ceph-mon[75825]: from='mgr.14382 192.168.122.100:0/3410977955' entity='mgr.compute-0.ebwufc' 
Mar  1 04:43:38 np0005634532 ceph-mon[75825]: from='mgr.14382 192.168.122.100:0/3410977955' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Mar  1 04:43:38 np0005634532 ceph-mon[75825]: from='mgr.14382 192.168.122.100:0/3410977955' entity='mgr.compute-0.ebwufc' cmd='[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]': finished
Mar  1 04:43:38 np0005634532 ceph-mon[75825]: from='mgr.14382 192.168.122.100:0/3410977955' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Mar  1 04:43:38 np0005634532 ceph-mon[75825]: Updating compute-0:/etc/ceph/ceph.conf
Mar  1 04:43:38 np0005634532 ceph-mon[75825]: Updating compute-1:/etc/ceph/ceph.conf
Mar  1 04:43:38 np0005634532 ceph-mon[75825]: Updating compute-2:/etc/ceph/ceph.conf
Mar  1 04:43:38 np0005634532 ceph-mon[75825]: from='mgr.14382 192.168.122.100:0/3410977955' entity='mgr.compute-0.ebwufc' 
Mar  1 04:43:38 np0005634532 ceph-mgr[76134]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Mar  1 04:43:38 np0005634532 ceph-mgr[76134]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Mar  1 04:43:38 np0005634532 ceph-osd[84309]: log_channel(cluster) log [DBG] : 6.b scrub starts
Mar  1 04:43:38 np0005634532 ceph-osd[84309]: log_channel(cluster) log [DBG] : 6.b scrub ok
Mar  1 04:43:38 np0005634532 ceph-mgr[76134]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Mar  1 04:43:38 np0005634532 ceph-mgr[76134]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Mar  1 04:43:38 np0005634532 python3[93413]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 437b1e74-f995-5d64-af1d-257ce01d77ab -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   dashboard set-grafana-api-url http://192.168.122.100:3100#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Mar  1 04:43:38 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v5: 197 pgs: 1 active+clean+scrubbing, 196 active+clean; 454 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Mar  1 04:43:38 np0005634532 podman[93489]: 2026-03-01 09:43:38.401315884 +0000 UTC m=+0.037991835 container create 6bd92236c3d2143c444479063d4126c13f1fc62b6c1691400158d56c00cb6f90 (image=quay.io/ceph/ceph:v19, name=dazzling_golick, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Mar  1 04:43:38 np0005634532 systemd[1]: Started libpod-conmon-6bd92236c3d2143c444479063d4126c13f1fc62b6c1691400158d56c00cb6f90.scope.
Mar  1 04:43:38 np0005634532 systemd[1]: Started libcrun container.
Mar  1 04:43:38 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6609a494e6aadaa497de237acae94a53e36d97366637499d7feefbec626e4c59/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Mar  1 04:43:38 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6609a494e6aadaa497de237acae94a53e36d97366637499d7feefbec626e4c59/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Mar  1 04:43:38 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6609a494e6aadaa497de237acae94a53e36d97366637499d7feefbec626e4c59/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Mar  1 04:43:38 np0005634532 podman[93489]: 2026-03-01 09:43:38.457292795 +0000 UTC m=+0.093968776 container init 6bd92236c3d2143c444479063d4126c13f1fc62b6c1691400158d56c00cb6f90 (image=quay.io/ceph/ceph:v19, name=dazzling_golick, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325)
Mar  1 04:43:38 np0005634532 podman[93489]: 2026-03-01 09:43:38.460772141 +0000 UTC m=+0.097448092 container start 6bd92236c3d2143c444479063d4126c13f1fc62b6c1691400158d56c00cb6f90 (image=quay.io/ceph/ceph:v19, name=dazzling_golick, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=squid)
Mar  1 04:43:38 np0005634532 podman[93489]: 2026-03-01 09:43:38.463715244 +0000 UTC m=+0.100391195 container attach 6bd92236c3d2143c444479063d4126c13f1fc62b6c1691400158d56c00cb6f90 (image=quay.io/ceph/ceph:v19, name=dazzling_golick, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Mar  1 04:43:38 np0005634532 podman[93489]: 2026-03-01 09:43:38.387546932 +0000 UTC m=+0.024222913 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Mar  1 04:43:38 np0005634532 ceph-mgr[76134]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Mar  1 04:43:38 np0005634532 ceph-mgr[76134]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Mar  1 04:43:38 np0005634532 ceph-mgr[76134]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/437b1e74-f995-5d64-af1d-257ce01d77ab/config/ceph.client.admin.keyring
Mar  1 04:43:38 np0005634532 ceph-mgr[76134]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/437b1e74-f995-5d64-af1d-257ce01d77ab/config/ceph.client.admin.keyring
Mar  1 04:43:38 np0005634532 ceph-mgr[76134]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/437b1e74-f995-5d64-af1d-257ce01d77ab/config/ceph.client.admin.keyring
Mar  1 04:43:38 np0005634532 ceph-mgr[76134]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/437b1e74-f995-5d64-af1d-257ce01d77ab/config/ceph.client.admin.keyring
Mar  1 04:43:38 np0005634532 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14433 -' entity='client.admin' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "http://192.168.122.100:3100", "target": ["mon-mgr", ""]}]: dispatch
Mar  1 04:43:38 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/GRAFANA_API_URL}] v 0)
Mar  1 04:43:38 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14382 192.168.122.100:0/3410977955' entity='mgr.compute-0.ebwufc' 
Mar  1 04:43:38 np0005634532 dazzling_golick[93553]: Option GRAFANA_API_URL updated
Mar  1 04:43:38 np0005634532 systemd[1]: libpod-6bd92236c3d2143c444479063d4126c13f1fc62b6c1691400158d56c00cb6f90.scope: Deactivated successfully.
Mar  1 04:43:38 np0005634532 conmon[93553]: conmon 6bd92236c3d2143c4444 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-6bd92236c3d2143c444479063d4126c13f1fc62b6c1691400158d56c00cb6f90.scope/container/memory.events
Mar  1 04:43:38 np0005634532 podman[93489]: 2026-03-01 09:43:38.831978052 +0000 UTC m=+0.468654013 container died 6bd92236c3d2143c444479063d4126c13f1fc62b6c1691400158d56c00cb6f90 (image=quay.io/ceph/ceph:v19, name=dazzling_golick, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Mar  1 04:43:38 np0005634532 systemd[1]: var-lib-containers-storage-overlay-6609a494e6aadaa497de237acae94a53e36d97366637499d7feefbec626e4c59-merged.mount: Deactivated successfully.
Mar  1 04:43:38 np0005634532 podman[93489]: 2026-03-01 09:43:38.870202802 +0000 UTC m=+0.506878743 container remove 6bd92236c3d2143c444479063d4126c13f1fc62b6c1691400158d56c00cb6f90 (image=quay.io/ceph/ceph:v19, name=dazzling_golick, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Mar  1 04:43:38 np0005634532 systemd[1]: libpod-conmon-6bd92236c3d2143c444479063d4126c13f1fc62b6c1691400158d56c00cb6f90.scope: Deactivated successfully.
Mar  1 04:43:39 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Mar  1 04:43:39 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14382 192.168.122.100:0/3410977955' entity='mgr.compute-0.ebwufc' 
Mar  1 04:43:39 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Mar  1 04:43:39 np0005634532 ceph-mgr[76134]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/437b1e74-f995-5d64-af1d-257ce01d77ab/config/ceph.client.admin.keyring
Mar  1 04:43:39 np0005634532 ceph-mgr[76134]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/437b1e74-f995-5d64-af1d-257ce01d77ab/config/ceph.client.admin.keyring
Mar  1 04:43:39 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Mar  1 04:43:39 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14382 192.168.122.100:0/3410977955' entity='mgr.compute-0.ebwufc' 
Mar  1 04:43:39 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14382 192.168.122.100:0/3410977955' entity='mgr.compute-0.ebwufc' 
Mar  1 04:43:39 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Mar  1 04:43:39 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14382 192.168.122.100:0/3410977955' entity='mgr.compute-0.ebwufc' 
Mar  1 04:43:39 np0005634532 python3[93940]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 437b1e74-f995-5d64-af1d-257ce01d77ab -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   mgr module disable dashboard _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Mar  1 04:43:39 np0005634532 podman[93968]: 2026-03-01 09:43:39.237098286 +0000 UTC m=+0.043698377 container create a8ef7b3409a8593fab345345ed6ba97b52897ab0a6717eb994a987d17ff4d57f (image=quay.io/ceph/ceph:v19, name=youthful_cray, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Mar  1 04:43:39 np0005634532 ceph-osd[84309]: log_channel(cluster) log [DBG] : 4.17 deep-scrub starts
Mar  1 04:43:39 np0005634532 ceph-osd[84309]: log_channel(cluster) log [DBG] : 4.17 deep-scrub ok
Mar  1 04:43:39 np0005634532 systemd[1]: Started libpod-conmon-a8ef7b3409a8593fab345345ed6ba97b52897ab0a6717eb994a987d17ff4d57f.scope.
Mar  1 04:43:39 np0005634532 ceph-mon[75825]: Updating compute-0:/var/lib/ceph/437b1e74-f995-5d64-af1d-257ce01d77ab/config/ceph.conf
Mar  1 04:43:39 np0005634532 ceph-mon[75825]: Updating compute-2:/var/lib/ceph/437b1e74-f995-5d64-af1d-257ce01d77ab/config/ceph.conf
Mar  1 04:43:39 np0005634532 ceph-mon[75825]: Updating compute-1:/var/lib/ceph/437b1e74-f995-5d64-af1d-257ce01d77ab/config/ceph.conf
Mar  1 04:43:39 np0005634532 ceph-mon[75825]: Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Mar  1 04:43:39 np0005634532 ceph-mon[75825]: Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Mar  1 04:43:39 np0005634532 ceph-mon[75825]: Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Mar  1 04:43:39 np0005634532 ceph-mon[75825]: Updating compute-0:/var/lib/ceph/437b1e74-f995-5d64-af1d-257ce01d77ab/config/ceph.client.admin.keyring
Mar  1 04:43:39 np0005634532 ceph-mon[75825]: from='mgr.14382 192.168.122.100:0/3410977955' entity='mgr.compute-0.ebwufc' 
Mar  1 04:43:39 np0005634532 ceph-mon[75825]: from='mgr.14382 192.168.122.100:0/3410977955' entity='mgr.compute-0.ebwufc' 
Mar  1 04:43:39 np0005634532 ceph-mon[75825]: from='mgr.14382 192.168.122.100:0/3410977955' entity='mgr.compute-0.ebwufc' 
Mar  1 04:43:39 np0005634532 ceph-mon[75825]: from='mgr.14382 192.168.122.100:0/3410977955' entity='mgr.compute-0.ebwufc' 
Mar  1 04:43:39 np0005634532 ceph-mon[75825]: from='mgr.14382 192.168.122.100:0/3410977955' entity='mgr.compute-0.ebwufc' 
Mar  1 04:43:39 np0005634532 ceph-mon[75825]: log_channel(cluster) log [DBG] : mgrmap e18: compute-0.ebwufc(active, since 4s), standbys: compute-1.uyojxx, compute-2.dikzlj
Mar  1 04:43:39 np0005634532 podman[93968]: 2026-03-01 09:43:39.214179516 +0000 UTC m=+0.020779657 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Mar  1 04:43:39 np0005634532 systemd[1]: Started libcrun container.
Mar  1 04:43:39 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98d51588b2e53a4a0a373ba1b7a414d726262367b176ab2386af879d79de555a/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Mar  1 04:43:39 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98d51588b2e53a4a0a373ba1b7a414d726262367b176ab2386af879d79de555a/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Mar  1 04:43:39 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98d51588b2e53a4a0a373ba1b7a414d726262367b176ab2386af879d79de555a/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Mar  1 04:43:39 np0005634532 podman[93968]: 2026-03-01 09:43:39.344539335 +0000 UTC m=+0.151139446 container init a8ef7b3409a8593fab345345ed6ba97b52897ab0a6717eb994a987d17ff4d57f (image=quay.io/ceph/ceph:v19, name=youthful_cray, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Mar  1 04:43:39 np0005634532 podman[93968]: 2026-03-01 09:43:39.353975509 +0000 UTC m=+0.160575600 container start a8ef7b3409a8593fab345345ed6ba97b52897ab0a6717eb994a987d17ff4d57f (image=quay.io/ceph/ceph:v19, name=youthful_cray, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Mar  1 04:43:39 np0005634532 podman[93968]: 2026-03-01 09:43:39.358292176 +0000 UTC m=+0.164892257 container attach a8ef7b3409a8593fab345345ed6ba97b52897ab0a6717eb994a987d17ff4d57f (image=quay.io/ceph/ceph:v19, name=youthful_cray, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Mar  1 04:43:39 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Mar  1 04:43:39 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14382 192.168.122.100:0/3410977955' entity='mgr.compute-0.ebwufc' 
Mar  1 04:43:39 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Mar  1 04:43:39 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14382 192.168.122.100:0/3410977955' entity='mgr.compute-0.ebwufc' 
Mar  1 04:43:39 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Mar  1 04:43:39 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14382 192.168.122.100:0/3410977955' entity='mgr.compute-0.ebwufc' 
Mar  1 04:43:39 np0005634532 ceph-mgr[76134]: [progress INFO root] update: starting ev 0ceacb58-f15f-4078-bb73-748e32753268 (Updating node-exporter deployment (+2 -> 3))
Mar  1 04:43:39 np0005634532 ceph-mgr[76134]: [cephadm INFO cephadm.serve] Deploying daemon node-exporter.compute-1 on compute-1
Mar  1 04:43:39 np0005634532 ceph-mgr[76134]: log_channel(cephadm) log [INF] : Deploying daemon node-exporter.compute-1 on compute-1
Mar  1 04:43:39 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module disable", "module": "dashboard"} v 0)
Mar  1 04:43:39 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1966564080' entity='client.admin' cmd=[{"prefix": "mgr module disable", "module": "dashboard"}]: dispatch
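The audit entry above shows client.admin dispatching "mgr module disable" for the dashboard; the matching "finished" record lands about a second later. Issued by hand, the same toggle would look roughly like this (hypothetical operator invocation, assuming access to the admin keyring; the deployment re-enables the module a few seconds further down):

    # turn the dashboard mgr module off, then back on; each step shows up
    # in the mon's audit channel as dispatch ... finished
    ceph mgr module disable dashboard
    ceph mgr module enable dashboard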
Mar  1 04:43:40 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e42 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Mar  1 04:43:40 np0005634532 ceph-osd[84309]: log_channel(cluster) log [DBG] : 4.16 scrub starts
Mar  1 04:43:40 np0005634532 ceph-osd[84309]: log_channel(cluster) log [DBG] : 4.16 scrub ok
Mar  1 04:43:40 np0005634532 ceph-mon[75825]: Updating compute-2:/var/lib/ceph/437b1e74-f995-5d64-af1d-257ce01d77ab/config/ceph.client.admin.keyring
Mar  1 04:43:40 np0005634532 ceph-mon[75825]: Updating compute-1:/var/lib/ceph/437b1e74-f995-5d64-af1d-257ce01d77ab/config/ceph.client.admin.keyring
Mar  1 04:43:40 np0005634532 ceph-mon[75825]: from='mgr.14382 192.168.122.100:0/3410977955' entity='mgr.compute-0.ebwufc' 
Mar  1 04:43:40 np0005634532 ceph-mon[75825]: from='mgr.14382 192.168.122.100:0/3410977955' entity='mgr.compute-0.ebwufc' 
Mar  1 04:43:40 np0005634532 ceph-mon[75825]: from='mgr.14382 192.168.122.100:0/3410977955' entity='mgr.compute-0.ebwufc' 
Mar  1 04:43:40 np0005634532 ceph-mon[75825]: Deploying daemon node-exporter.compute-1 on compute-1
Mar  1 04:43:40 np0005634532 ceph-mon[75825]: from='client.? 192.168.122.100:0/1966564080' entity='client.admin' cmd=[{"prefix": "mgr module disable", "module": "dashboard"}]: dispatch
Mar  1 04:43:40 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v6: 197 pgs: 1 active+clean+scrubbing, 196 active+clean; 454 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
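The pgmap line summarizes placement-group state while ceph-osd[84309] works through its background scrubs: 197 PGs, one scrubbing and the rest active+clean, with the 60 GiB of raw capacity still essentially empty. A comparable on-demand snapshot (hypothetical spot-check, not part of this run):

    # one-line PG summary and cluster-wide space usage
    ceph pg stat
    ceph df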
Mar  1 04:43:40 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1966564080' entity='client.admin' cmd='[{"prefix": "mgr module disable", "module": "dashboard"}]': finished
Mar  1 04:43:40 np0005634532 ceph-mon[75825]: log_channel(cluster) log [DBG] : mgrmap e19: compute-0.ebwufc(active, since 6s), standbys: compute-1.uyojxx, compute-2.dikzlj
Mar  1 04:43:40 np0005634532 ceph-mgr[76134]: mgr handle_mgr_map respawning because set of enabled modules changed!
Mar  1 04:43:40 np0005634532 ceph-mgr[76134]: mgr respawn  e: '/usr/bin/ceph-mgr'
Mar  1 04:43:40 np0005634532 ceph-mgr[76134]: mgr respawn  0: '/usr/bin/ceph-mgr'
Mar  1 04:43:40 np0005634532 ceph-mgr[76134]: mgr respawn  1: '-n'
Mar  1 04:43:40 np0005634532 ceph-mgr[76134]: mgr respawn  2: 'mgr.compute-0.ebwufc'
Mar  1 04:43:40 np0005634532 ceph-mgr[76134]: mgr respawn  3: '-f'
Mar  1 04:43:40 np0005634532 ceph-mgr[76134]: mgr respawn  4: '--setuser'
Mar  1 04:43:40 np0005634532 ceph-mgr[76134]: mgr respawn  5: 'ceph'
Mar  1 04:43:40 np0005634532 ceph-mgr[76134]: mgr respawn  6: '--setgroup'
Mar  1 04:43:40 np0005634532 ceph-mgr[76134]: mgr respawn  7: 'ceph'
Mar  1 04:43:40 np0005634532 ceph-mgr[76134]: mgr respawn  8: '--default-log-to-file=false'
Mar  1 04:43:40 np0005634532 ceph-mgr[76134]: mgr respawn  9: '--default-log-to-journald=true'
Mar  1 04:43:40 np0005634532 ceph-mgr[76134]: mgr respawn  10: '--default-log-to-stderr=false'
Mar  1 04:43:40 np0005634532 ceph-mgr[76134]: mgr respawn respawning with exe /usr/bin/ceph-mgr
Mar  1 04:43:40 np0005634532 ceph-mgr[76134]: mgr respawn  exe_path /proc/self/exe
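The respawn block above is ceph-mgr re-executing itself through /proc/self/exe: when a new mgrmap reports that the set of enabled modules changed (here, the dashboard being disabled), the daemon echoes its saved argv (entries 0 through 10) and restarts with those same arguments so the new module set is loaded from scratch. A quick way to inspect the module set that drives this (hypothetical check):

    # list always-on, enabled, and disabled mgr modules
    ceph mgr module ls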
Mar  1 04:43:40 np0005634532 systemd[1]: libpod-a8ef7b3409a8593fab345345ed6ba97b52897ab0a6717eb994a987d17ff4d57f.scope: Deactivated successfully.
Mar  1 04:43:40 np0005634532 podman[93968]: 2026-03-01 09:43:40.647673954 +0000 UTC m=+1.454274035 container died a8ef7b3409a8593fab345345ed6ba97b52897ab0a6717eb994a987d17ff4d57f (image=quay.io/ceph/ceph:v19, name=youthful_cray, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True)
Mar  1 04:43:40 np0005634532 systemd[1]: session-34.scope: Deactivated successfully.
Mar  1 04:43:40 np0005634532 systemd[1]: session-34.scope: Consumed 3.885s CPU time.
Mar  1 04:43:40 np0005634532 systemd-logind[832]: Session 34 logged out. Waiting for processes to exit.
Mar  1 04:43:40 np0005634532 systemd-logind[832]: Removed session 34.
Mar  1 04:43:40 np0005634532 systemd[1]: var-lib-containers-storage-overlay-98d51588b2e53a4a0a373ba1b7a414d726262367b176ab2386af879d79de555a-merged.mount: Deactivated successfully.
Mar  1 04:43:40 np0005634532 podman[93968]: 2026-03-01 09:43:40.696342423 +0000 UTC m=+1.502942484 container remove a8ef7b3409a8593fab345345ed6ba97b52897ab0a6717eb994a987d17ff4d57f (image=quay.io/ceph/ceph:v19, name=youthful_cray, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
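The podman entries for container a8ef7b3409a8 trace the normal lifecycle of a short-lived podman run --rm call: image pull, init, start, attach, then died and remove once the entrypoint exits, with systemd deactivating the matching libpod/conmon scopes and the overlay mount in between. A similar trace can be replayed afterwards from the events log (hypothetical, assuming podman's journald events backend is in use):

    # replay lifecycle events for the short-lived container by name
    podman events --filter container=youthful_cray \
        --since "2026-03-01 09:43:39" --until "2026-03-01 09:43:41"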
Mar  1 04:43:40 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: ignoring --setuser ceph since I am not root
Mar  1 04:43:40 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: ignoring --setgroup ceph since I am not root
Mar  1 04:43:40 np0005634532 systemd[1]: libpod-conmon-a8ef7b3409a8593fab345345ed6ba97b52897ab0a6717eb994a987d17ff4d57f.scope: Deactivated successfully.
Mar  1 04:43:40 np0005634532 ceph-mgr[76134]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mgr, pid 2
Mar  1 04:43:40 np0005634532 ceph-mgr[76134]: pidfile_write: ignore empty --pid-file
Mar  1 04:43:40 np0005634532 ceph-mgr[76134]: mgr[py] Loading python module 'alerts'
Mar  1 04:43:40 np0005634532 ceph-mgr[76134]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Mar  1 04:43:40 np0005634532 ceph-mgr[76134]: mgr[py] Loading python module 'balancer'
Mar  1 04:43:40 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: 2026-03-01T09:43:40.835+0000 7f0353ce4140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Mar  1 04:43:40 np0005634532 ceph-mgr[76134]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Mar  1 04:43:40 np0005634532 ceph-mgr[76134]: mgr[py] Loading python module 'cephadm'
Mar  1 04:43:40 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: 2026-03-01T09:43:40.908+0000 7f0353ce4140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
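The recurring "-1 mgr[py] Module <name> has missing NOTIFY_TYPES member" entries, here and throughout both module-load passes below, come from the mgr's Python module loader: a module that does not declare a NOTIFY_TYPES attribute is logged at error level, but, as the subsequent "Loading python module" lines show, loading continues and the modules run normally. To gauge the noise, occurrences can be counted from the journal (hypothetical filter over this time window):

    # count NOTIFY_TYPES warnings emitted during the mgr restarts
    journalctl --since "04:43:40" --until "04:43:53" | grep -c "missing NOTIFY_TYPES member"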
Mar  1 04:43:41 np0005634532 python3[94064]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --interactive  --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 437b1e74-f995-5d64-af1d-257ce01d77ab -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   mgr module enable dashboard _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
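Unwrapped, the _raw_params field above is the following podman invocation (line breaks added for readability; content otherwise exactly as logged):

    podman run --rm --net=host --ipc=host --interactive \
        --volume /etc/ceph:/etc/ceph:z \
        --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z \
        --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z \
        --entrypoint ceph quay.io/ceph/ceph:v19 \
        --fsid 437b1e74-f995-5d64-af1d-257ce01d77ab \
        -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring \
        mgr module enable dashboard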
Mar  1 04:43:41 np0005634532 podman[94065]: 2026-03-01 09:43:41.071082931 +0000 UTC m=+0.044162298 container create a5ba1e3be574d9ddbfd90c43065b3a792bf28a8fae8088651a0fee0f446b024d (image=quay.io/ceph/ceph:v19, name=beautiful_jennings, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Mar  1 04:43:41 np0005634532 systemd[1]: Started libpod-conmon-a5ba1e3be574d9ddbfd90c43065b3a792bf28a8fae8088651a0fee0f446b024d.scope.
Mar  1 04:43:41 np0005634532 systemd[1]: Started libcrun container.
Mar  1 04:43:41 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86d1c2f9a50c1f40b9c87e7d141bd959b24aa56adccc80ded0dc29ddb36c64cb/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Mar  1 04:43:41 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86d1c2f9a50c1f40b9c87e7d141bd959b24aa56adccc80ded0dc29ddb36c64cb/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Mar  1 04:43:41 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86d1c2f9a50c1f40b9c87e7d141bd959b24aa56adccc80ded0dc29ddb36c64cb/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Mar  1 04:43:41 np0005634532 podman[94065]: 2026-03-01 09:43:41.048111741 +0000 UTC m=+0.021191118 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Mar  1 04:43:41 np0005634532 podman[94065]: 2026-03-01 09:43:41.182409117 +0000 UTC m=+0.155488504 container init a5ba1e3be574d9ddbfd90c43065b3a792bf28a8fae8088651a0fee0f446b024d (image=quay.io/ceph/ceph:v19, name=beautiful_jennings, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Mar  1 04:43:41 np0005634532 podman[94065]: 2026-03-01 09:43:41.187326979 +0000 UTC m=+0.160406326 container start a5ba1e3be574d9ddbfd90c43065b3a792bf28a8fae8088651a0fee0f446b024d (image=quay.io/ceph/ceph:v19, name=beautiful_jennings, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Mar  1 04:43:41 np0005634532 podman[94065]: 2026-03-01 09:43:41.190727743 +0000 UTC m=+0.163807120 container attach a5ba1e3be574d9ddbfd90c43065b3a792bf28a8fae8088651a0fee0f446b024d (image=quay.io/ceph/ceph:v19, name=beautiful_jennings, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Mar  1 04:43:41 np0005634532 ceph-osd[84309]: log_channel(cluster) log [DBG] : 6.14 scrub starts
Mar  1 04:43:41 np0005634532 ceph-osd[84309]: log_channel(cluster) log [DBG] : 6.14 scrub ok
Mar  1 04:43:41 np0005634532 ceph-mgr[76134]: mgr[py] Loading python module 'crash'
Mar  1 04:43:41 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module enable", "module": "dashboard"} v 0)
Mar  1 04:43:41 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/54068639' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "dashboard"}]: dispatch
Mar  1 04:43:41 np0005634532 ceph-mgr[76134]: mgr[py] Module crash has missing NOTIFY_TYPES member
Mar  1 04:43:41 np0005634532 ceph-mgr[76134]: mgr[py] Loading python module 'dashboard'
Mar  1 04:43:41 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: 2026-03-01T09:43:41.666+0000 7f0353ce4140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Mar  1 04:43:41 np0005634532 ceph-mon[75825]: from='client.? 192.168.122.100:0/1966564080' entity='client.admin' cmd='[{"prefix": "mgr module disable", "module": "dashboard"}]': finished
Mar  1 04:43:41 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/54068639' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "dashboard"}]': finished
Mar  1 04:43:41 np0005634532 ceph-mon[75825]: log_channel(cluster) log [DBG] : mgrmap e20: compute-0.ebwufc(active, since 7s), standbys: compute-1.uyojxx, compute-2.dikzlj
Mar  1 04:43:41 np0005634532 systemd[1]: libpod-a5ba1e3be574d9ddbfd90c43065b3a792bf28a8fae8088651a0fee0f446b024d.scope: Deactivated successfully.
Mar  1 04:43:41 np0005634532 podman[94065]: 2026-03-01 09:43:41.751826181 +0000 UTC m=+0.724905538 container died a5ba1e3be574d9ddbfd90c43065b3a792bf28a8fae8088651a0fee0f446b024d (image=quay.io/ceph/ceph:v19, name=beautiful_jennings, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Mar  1 04:43:41 np0005634532 systemd[1]: var-lib-containers-storage-overlay-86d1c2f9a50c1f40b9c87e7d141bd959b24aa56adccc80ded0dc29ddb36c64cb-merged.mount: Deactivated successfully.
Mar  1 04:43:41 np0005634532 podman[94065]: 2026-03-01 09:43:41.786295178 +0000 UTC m=+0.759374535 container remove a5ba1e3be574d9ddbfd90c43065b3a792bf28a8fae8088651a0fee0f446b024d (image=quay.io/ceph/ceph:v19, name=beautiful_jennings, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Mar  1 04:43:41 np0005634532 systemd[1]: libpod-conmon-a5ba1e3be574d9ddbfd90c43065b3a792bf28a8fae8088651a0fee0f446b024d.scope: Deactivated successfully.
Mar  1 04:43:42 np0005634532 ceph-mgr[76134]: mgr[py] Loading python module 'devicehealth'
Mar  1 04:43:42 np0005634532 ceph-osd[84309]: log_channel(cluster) log [DBG] : 6.16 scrub starts
Mar  1 04:43:42 np0005634532 ceph-osd[84309]: log_channel(cluster) log [DBG] : 6.16 scrub ok
Mar  1 04:43:42 np0005634532 ceph-mgr[76134]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Mar  1 04:43:42 np0005634532 ceph-mgr[76134]: mgr[py] Loading python module 'diskprediction_local'
Mar  1 04:43:42 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: 2026-03-01T09:43:42.288+0000 7f0353ce4140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Mar  1 04:43:42 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Mar  1 04:43:42 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Mar  1 04:43:42 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]:  from numpy import show_config as show_numpy_config
Mar  1 04:43:42 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: 2026-03-01T09:43:42.423+0000 7f0353ce4140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Mar  1 04:43:42 np0005634532 ceph-mgr[76134]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Mar  1 04:43:42 np0005634532 ceph-mgr[76134]: mgr[py] Loading python module 'influx'
Mar  1 04:43:42 np0005634532 python3[94205]: ansible-ansible.legacy.stat Invoked with path=/tmp/ceph_mds.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Mar  1 04:43:42 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: 2026-03-01T09:43:42.489+0000 7f0353ce4140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Mar  1 04:43:42 np0005634532 ceph-mgr[76134]: mgr[py] Module influx has missing NOTIFY_TYPES member
Mar  1 04:43:42 np0005634532 ceph-mgr[76134]: mgr[py] Loading python module 'insights'
Mar  1 04:43:42 np0005634532 ceph-mgr[76134]: mgr[py] Loading python module 'iostat'
Mar  1 04:43:42 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: 2026-03-01T09:43:42.617+0000 7f0353ce4140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Mar  1 04:43:42 np0005634532 ceph-mgr[76134]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Mar  1 04:43:42 np0005634532 ceph-mgr[76134]: mgr[py] Loading python module 'k8sevents'
Mar  1 04:43:42 np0005634532 ceph-mon[75825]: from='client.? 192.168.122.100:0/54068639' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "dashboard"}]: dispatch
Mar  1 04:43:42 np0005634532 ceph-mon[75825]: from='client.? 192.168.122.100:0/54068639' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "dashboard"}]': finished
Mar  1 04:43:42 np0005634532 python3[94276]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1772358222.2369664-38114-5284393359326/source dest=/tmp/ceph_mds.yml mode=0644 force=True follow=False _original_basename=ceph_mds.yml.j2 checksum=b1f36629bdb347469f4890c95dfdef5abc68c3ae backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:43:42 np0005634532 ceph-mgr[76134]: mgr[py] Loading python module 'localpool'
Mar  1 04:43:43 np0005634532 ceph-mgr[76134]: mgr[py] Loading python module 'mds_autoscaler'
Mar  1 04:43:43 np0005634532 ceph-mgr[76134]: mgr[py] Loading python module 'mirroring'
Mar  1 04:43:43 np0005634532 ceph-osd[84309]: log_channel(cluster) log [DBG] : 6.11 scrub starts
Mar  1 04:43:43 np0005634532 ceph-osd[84309]: log_channel(cluster) log [DBG] : 6.11 scrub ok
Mar  1 04:43:43 np0005634532 python3[94326]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_mds.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 437b1e74-f995-5d64-af1d-257ce01d77ab -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   fs volume create cephfs '--placement=compute-0 compute-1 compute-2 '#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
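This _raw_params unwraps the same way; the trailing #012 is the syslog escape for an embedded newline in the logged command. Reformatted (line breaks added; content as logged):

    podman run --rm --net=host --ipc=host \
        --volume /etc/ceph:/etc/ceph:z \
        --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z \
        --volume /tmp/ceph_mds.yml:/home/ceph_spec.yaml:z \
        --entrypoint ceph quay.io/ceph/ceph:v19 \
        --fsid 437b1e74-f995-5d64-af1d-257ce01d77ab \
        -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring \
        fs volume create cephfs '--placement=compute-0 compute-1 compute-2 '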
Mar  1 04:43:43 np0005634532 ceph-mgr[76134]: mgr[py] Loading python module 'nfs'
Mar  1 04:43:43 np0005634532 podman[94327]: 2026-03-01 09:43:43.35919702 +0000 UTC m=+0.055966592 container create 8988a37cb039ad28deb39d83680a4655148555e200644da5101a9e1b63b43cc5 (image=quay.io/ceph/ceph:v19, name=gracious_kalam, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Mar  1 04:43:43 np0005634532 systemd[1]: Started libpod-conmon-8988a37cb039ad28deb39d83680a4655148555e200644da5101a9e1b63b43cc5.scope.
Mar  1 04:43:43 np0005634532 systemd[1]: Started libcrun container.
Mar  1 04:43:43 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b30180aba8aab3a2cbb63adb09ad16d0e5cee66cede87e1f624d5434072fa266/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Mar  1 04:43:43 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b30180aba8aab3a2cbb63adb09ad16d0e5cee66cede87e1f624d5434072fa266/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Mar  1 04:43:43 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b30180aba8aab3a2cbb63adb09ad16d0e5cee66cede87e1f624d5434072fa266/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Mar  1 04:43:43 np0005634532 podman[94327]: 2026-03-01 09:43:43.334784253 +0000 UTC m=+0.031553835 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Mar  1 04:43:43 np0005634532 podman[94327]: 2026-03-01 09:43:43.463575363 +0000 UTC m=+0.160344895 container init 8988a37cb039ad28deb39d83680a4655148555e200644da5101a9e1b63b43cc5 (image=quay.io/ceph/ceph:v19, name=gracious_kalam, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Mar  1 04:43:43 np0005634532 podman[94327]: 2026-03-01 09:43:43.472572696 +0000 UTC m=+0.169342228 container start 8988a37cb039ad28deb39d83680a4655148555e200644da5101a9e1b63b43cc5 (image=quay.io/ceph/ceph:v19, name=gracious_kalam, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Mar  1 04:43:43 np0005634532 podman[94327]: 2026-03-01 09:43:43.476977235 +0000 UTC m=+0.173746767 container attach 8988a37cb039ad28deb39d83680a4655148555e200644da5101a9e1b63b43cc5 (image=quay.io/ceph/ceph:v19, name=gracious_kalam, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Mar  1 04:43:43 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: 2026-03-01T09:43:43.540+0000 7f0353ce4140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Mar  1 04:43:43 np0005634532 ceph-mgr[76134]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Mar  1 04:43:43 np0005634532 ceph-mgr[76134]: mgr[py] Loading python module 'orchestrator'
Mar  1 04:43:43 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: 2026-03-01T09:43:43.780+0000 7f0353ce4140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Mar  1 04:43:43 np0005634532 ceph-mgr[76134]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Mar  1 04:43:43 np0005634532 ceph-mgr[76134]: mgr[py] Loading python module 'osd_perf_query'
Mar  1 04:43:43 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: 2026-03-01T09:43:43.873+0000 7f0353ce4140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Mar  1 04:43:43 np0005634532 ceph-mgr[76134]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Mar  1 04:43:43 np0005634532 ceph-mgr[76134]: mgr[py] Loading python module 'osd_support'
Mar  1 04:43:43 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: 2026-03-01T09:43:43.932+0000 7f0353ce4140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Mar  1 04:43:43 np0005634532 ceph-mgr[76134]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Mar  1 04:43:43 np0005634532 ceph-mgr[76134]: mgr[py] Loading python module 'pg_autoscaler'
Mar  1 04:43:44 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: 2026-03-01T09:43:43.999+0000 7f0353ce4140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Mar  1 04:43:44 np0005634532 ceph-mgr[76134]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Mar  1 04:43:44 np0005634532 ceph-mgr[76134]: mgr[py] Loading python module 'progress'
Mar  1 04:43:44 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: 2026-03-01T09:43:44.063+0000 7f0353ce4140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Mar  1 04:43:44 np0005634532 ceph-mgr[76134]: mgr[py] Module progress has missing NOTIFY_TYPES member
Mar  1 04:43:44 np0005634532 ceph-mgr[76134]: mgr[py] Loading python module 'prometheus'
Mar  1 04:43:44 np0005634532 ceph-osd[84309]: log_channel(cluster) log [DBG] : 6.10 deep-scrub starts
Mar  1 04:43:44 np0005634532 ceph-osd[84309]: log_channel(cluster) log [DBG] : 6.10 deep-scrub ok
Mar  1 04:43:44 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: 2026-03-01T09:43:44.392+0000 7f0353ce4140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Mar  1 04:43:44 np0005634532 ceph-mgr[76134]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Mar  1 04:43:44 np0005634532 ceph-mgr[76134]: mgr[py] Loading python module 'rbd_support'
Mar  1 04:43:44 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: 2026-03-01T09:43:44.482+0000 7f0353ce4140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Mar  1 04:43:44 np0005634532 ceph-mgr[76134]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Mar  1 04:43:44 np0005634532 ceph-mgr[76134]: mgr[py] Loading python module 'restful'
Mar  1 04:43:44 np0005634532 ceph-mgr[76134]: mgr[py] Loading python module 'rgw'
Mar  1 04:43:44 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: 2026-03-01T09:43:44.876+0000 7f0353ce4140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Mar  1 04:43:44 np0005634532 ceph-mgr[76134]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Mar  1 04:43:44 np0005634532 ceph-mgr[76134]: mgr[py] Loading python module 'rook'
Mar  1 04:43:45 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e42 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Mar  1 04:43:45 np0005634532 ceph-osd[84309]: log_channel(cluster) log [DBG] : 6.13 scrub starts
Mar  1 04:43:45 np0005634532 ceph-osd[84309]: log_channel(cluster) log [DBG] : 6.13 scrub ok
Mar  1 04:43:45 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: 2026-03-01T09:43:45.463+0000 7f0353ce4140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Mar  1 04:43:45 np0005634532 ceph-mgr[76134]: mgr[py] Module rook has missing NOTIFY_TYPES member
Mar  1 04:43:45 np0005634532 ceph-mgr[76134]: mgr[py] Loading python module 'selftest'
Mar  1 04:43:45 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: 2026-03-01T09:43:45.529+0000 7f0353ce4140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Mar  1 04:43:45 np0005634532 ceph-mgr[76134]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Mar  1 04:43:45 np0005634532 ceph-mgr[76134]: mgr[py] Loading python module 'snap_schedule'
Mar  1 04:43:45 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: 2026-03-01T09:43:45.603+0000 7f0353ce4140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Mar  1 04:43:45 np0005634532 ceph-mgr[76134]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Mar  1 04:43:45 np0005634532 ceph-mgr[76134]: mgr[py] Loading python module 'stats'
Mar  1 04:43:45 np0005634532 ceph-mgr[76134]: mgr[py] Loading python module 'status'
Mar  1 04:43:45 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: 2026-03-01T09:43:45.742+0000 7f0353ce4140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Mar  1 04:43:45 np0005634532 ceph-mgr[76134]: mgr[py] Module status has missing NOTIFY_TYPES member
Mar  1 04:43:45 np0005634532 ceph-mgr[76134]: mgr[py] Loading python module 'telegraf'
Mar  1 04:43:45 np0005634532 ceph-mon[75825]: log_channel(cluster) log [DBG] : Standby manager daemon compute-1.uyojxx restarted
Mar  1 04:43:45 np0005634532 ceph-mon[75825]: log_channel(cluster) log [DBG] : Standby manager daemon compute-1.uyojxx started
Mar  1 04:43:45 np0005634532 ceph-mon[75825]: log_channel(cluster) log [DBG] : mgrmap e21: compute-0.ebwufc(active, since 11s), standbys: compute-1.uyojxx, compute-2.dikzlj
Mar  1 04:43:45 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: 2026-03-01T09:43:45.814+0000 7f0353ce4140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Mar  1 04:43:45 np0005634532 ceph-mgr[76134]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Mar  1 04:43:45 np0005634532 ceph-mgr[76134]: mgr[py] Loading python module 'telemetry'
Mar  1 04:43:45 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: 2026-03-01T09:43:45.951+0000 7f0353ce4140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Mar  1 04:43:45 np0005634532 ceph-mgr[76134]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Mar  1 04:43:45 np0005634532 ceph-mgr[76134]: mgr[py] Loading python module 'test_orchestrator'
Mar  1 04:43:46 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: 2026-03-01T09:43:46.145+0000 7f0353ce4140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Mar  1 04:43:46 np0005634532 ceph-mgr[76134]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Mar  1 04:43:46 np0005634532 ceph-mgr[76134]: mgr[py] Loading python module 'volumes'
Mar  1 04:43:46 np0005634532 ceph-mon[75825]: log_channel(cluster) log [DBG] : Standby manager daemon compute-2.dikzlj restarted
Mar  1 04:43:46 np0005634532 ceph-mon[75825]: log_channel(cluster) log [DBG] : Standby manager daemon compute-2.dikzlj started
Mar  1 04:43:46 np0005634532 ceph-osd[84309]: log_channel(cluster) log [DBG] : 6.1d scrub starts
Mar  1 04:43:46 np0005634532 ceph-osd[84309]: log_channel(cluster) log [DBG] : 6.1d scrub ok
Mar  1 04:43:46 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: 2026-03-01T09:43:46.396+0000 7f0353ce4140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Mar  1 04:43:46 np0005634532 ceph-mgr[76134]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Mar  1 04:43:46 np0005634532 ceph-mgr[76134]: mgr[py] Loading python module 'zabbix'
Mar  1 04:43:46 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: 2026-03-01T09:43:46.460+0000 7f0353ce4140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Mar  1 04:43:46 np0005634532 ceph-mgr[76134]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Mar  1 04:43:46 np0005634532 ceph-mon[75825]: log_channel(cluster) log [INF] : Active manager daemon compute-0.ebwufc restarted
Mar  1 04:43:46 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e42 do_prune osdmap full prune enabled
Mar  1 04:43:46 np0005634532 ceph-mon[75825]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.ebwufc
Mar  1 04:43:46 np0005634532 ceph-mgr[76134]: ms_deliver_dispatch: unhandled message 0x56102b813860 mon_map magic: 0 from mon.0 v2:192.168.122.100:3300/0
Mar  1 04:43:46 np0005634532 ceph-mgr[76134]: mgr handle_mgr_map respawning because set of enabled modules changed!
Mar  1 04:43:46 np0005634532 ceph-mgr[76134]: mgr respawn  e: '/usr/bin/ceph-mgr'
Mar  1 04:43:46 np0005634532 ceph-mgr[76134]: mgr respawn  0: '/usr/bin/ceph-mgr'
Mar  1 04:43:46 np0005634532 ceph-mgr[76134]: mgr respawn  1: '-n'
Mar  1 04:43:46 np0005634532 ceph-mgr[76134]: mgr respawn  2: 'mgr.compute-0.ebwufc'
Mar  1 04:43:46 np0005634532 ceph-mgr[76134]: mgr respawn  3: '-f'
Mar  1 04:43:46 np0005634532 ceph-mgr[76134]: mgr respawn  4: '--setuser'
Mar  1 04:43:46 np0005634532 ceph-mgr[76134]: mgr respawn  5: 'ceph'
Mar  1 04:43:46 np0005634532 ceph-mgr[76134]: mgr respawn  6: '--setgroup'
Mar  1 04:43:46 np0005634532 ceph-mgr[76134]: mgr respawn  7: 'ceph'
Mar  1 04:43:46 np0005634532 ceph-mgr[76134]: mgr respawn  8: '--default-log-to-file=false'
Mar  1 04:43:46 np0005634532 ceph-mgr[76134]: mgr respawn  9: '--default-log-to-journald=true'
Mar  1 04:43:46 np0005634532 ceph-mgr[76134]: mgr respawn  10: '--default-log-to-stderr=false'
Mar  1 04:43:46 np0005634532 ceph-mgr[76134]: mgr respawn respawning with exe /usr/bin/ceph-mgr
Mar  1 04:43:46 np0005634532 ceph-mgr[76134]: mgr respawn  exe_path /proc/self/exe
Mar  1 04:43:46 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e43 e43: 3 total, 3 up, 3 in
Mar  1 04:43:46 np0005634532 ceph-mon[75825]: log_channel(cluster) log [DBG] : osdmap e43: 3 total, 3 up, 3 in
Mar  1 04:43:46 np0005634532 ceph-mon[75825]: log_channel(cluster) log [DBG] : mgrmap e22: compute-0.ebwufc(active, starting, since 0.0359569s), standbys: compute-1.uyojxx, compute-2.dikzlj
Mar  1 04:43:46 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: ignoring --setuser ceph since I am not root
Mar  1 04:43:46 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: ignoring --setgroup ceph since I am not root
Mar  1 04:43:46 np0005634532 ceph-mgr[76134]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mgr, pid 2
Mar  1 04:43:46 np0005634532 ceph-mgr[76134]: pidfile_write: ignore empty --pid-file
Mar  1 04:43:46 np0005634532 ceph-mgr[76134]: mgr[py] Loading python module 'alerts'
Mar  1 04:43:46 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: 2026-03-01T09:43:46.696+0000 7f4cf929c140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Mar  1 04:43:46 np0005634532 ceph-mgr[76134]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Mar  1 04:43:46 np0005634532 ceph-mgr[76134]: mgr[py] Loading python module 'balancer'
Mar  1 04:43:46 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: 2026-03-01T09:43:46.771+0000 7f4cf929c140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Mar  1 04:43:46 np0005634532 ceph-mgr[76134]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Mar  1 04:43:46 np0005634532 ceph-mgr[76134]: mgr[py] Loading python module 'cephadm'
Mar  1 04:43:46 np0005634532 ceph-mon[75825]: Active manager daemon compute-0.ebwufc restarted
Mar  1 04:43:46 np0005634532 ceph-mon[75825]: Activating manager daemon compute-0.ebwufc
Mar  1 04:43:47 np0005634532 ceph-osd[84309]: log_channel(cluster) log [DBG] : 5.19 scrub starts
Mar  1 04:43:47 np0005634532 ceph-osd[84309]: log_channel(cluster) log [DBG] : 5.19 scrub ok
Mar  1 04:43:47 np0005634532 ceph-mgr[76134]: mgr[py] Loading python module 'crash'
Mar  1 04:43:47 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: 2026-03-01T09:43:47.592+0000 7f4cf929c140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Mar  1 04:43:47 np0005634532 ceph-mgr[76134]: mgr[py] Module crash has missing NOTIFY_TYPES member
Mar  1 04:43:47 np0005634532 ceph-mgr[76134]: mgr[py] Loading python module 'dashboard'
Mar  1 04:43:48 np0005634532 ceph-mgr[76134]: mgr[py] Loading python module 'devicehealth'
Mar  1 04:43:48 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: 2026-03-01T09:43:48.245+0000 7f4cf929c140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Mar  1 04:43:48 np0005634532 ceph-mgr[76134]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Mar  1 04:43:48 np0005634532 ceph-mgr[76134]: mgr[py] Loading python module 'diskprediction_local'
Mar  1 04:43:48 np0005634532 ceph-osd[84309]: log_channel(cluster) log [DBG] : 7.1b scrub starts
Mar  1 04:43:48 np0005634532 ceph-osd[84309]: log_channel(cluster) log [DBG] : 7.1b scrub ok
Mar  1 04:43:48 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Mar  1 04:43:48 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Mar  1 04:43:48 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]:  from numpy import show_config as show_numpy_config
Mar  1 04:43:48 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: 2026-03-01T09:43:48.408+0000 7f4cf929c140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Mar  1 04:43:48 np0005634532 ceph-mgr[76134]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Mar  1 04:43:48 np0005634532 ceph-mgr[76134]: mgr[py] Loading python module 'influx'
Mar  1 04:43:48 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: 2026-03-01T09:43:48.493+0000 7f4cf929c140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Mar  1 04:43:48 np0005634532 ceph-mgr[76134]: mgr[py] Module influx has missing NOTIFY_TYPES member
Mar  1 04:43:48 np0005634532 ceph-mgr[76134]: mgr[py] Loading python module 'insights'
Mar  1 04:43:48 np0005634532 ceph-mgr[76134]: mgr[py] Loading python module 'iostat'
Mar  1 04:43:48 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: 2026-03-01T09:43:48.629+0000 7f4cf929c140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Mar  1 04:43:48 np0005634532 ceph-mgr[76134]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Mar  1 04:43:48 np0005634532 ceph-mgr[76134]: mgr[py] Loading python module 'k8sevents'
Mar  1 04:43:48 np0005634532 ceph-mgr[76134]: mgr[py] Loading python module 'localpool'
Mar  1 04:43:49 np0005634532 ceph-mgr[76134]: mgr[py] Loading python module 'mds_autoscaler'
Mar  1 04:43:49 np0005634532 ceph-mgr[76134]: mgr[py] Loading python module 'mirroring'
Mar  1 04:43:49 np0005634532 ceph-osd[84309]: log_channel(cluster) log [DBG] : 3.1f scrub starts
Mar  1 04:43:49 np0005634532 ceph-mgr[76134]: mgr[py] Loading python module 'nfs'
Mar  1 04:43:49 np0005634532 ceph-osd[84309]: log_channel(cluster) log [DBG] : 3.1f scrub ok
Mar  1 04:43:49 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: 2026-03-01T09:43:49.521+0000 7f4cf929c140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Mar  1 04:43:49 np0005634532 ceph-mgr[76134]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Mar  1 04:43:49 np0005634532 ceph-mgr[76134]: mgr[py] Loading python module 'orchestrator'
Mar  1 04:43:49 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: 2026-03-01T09:43:49.719+0000 7f4cf929c140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Mar  1 04:43:49 np0005634532 ceph-mgr[76134]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Mar  1 04:43:49 np0005634532 ceph-mgr[76134]: mgr[py] Loading python module 'osd_perf_query'
Mar  1 04:43:49 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: 2026-03-01T09:43:49.786+0000 7f4cf929c140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Mar  1 04:43:49 np0005634532 ceph-mgr[76134]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Mar  1 04:43:49 np0005634532 ceph-mgr[76134]: mgr[py] Loading python module 'osd_support'
Mar  1 04:43:49 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: 2026-03-01T09:43:49.844+0000 7f4cf929c140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Mar  1 04:43:49 np0005634532 ceph-mgr[76134]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Mar  1 04:43:49 np0005634532 ceph-mgr[76134]: mgr[py] Loading python module 'pg_autoscaler'
Mar  1 04:43:49 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: 2026-03-01T09:43:49.914+0000 7f4cf929c140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Mar  1 04:43:49 np0005634532 ceph-mgr[76134]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Mar  1 04:43:49 np0005634532 ceph-mgr[76134]: mgr[py] Loading python module 'progress'
Mar  1 04:43:49 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: 2026-03-01T09:43:49.975+0000 7f4cf929c140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Mar  1 04:43:49 np0005634532 ceph-mgr[76134]: mgr[py] Module progress has missing NOTIFY_TYPES member
Mar  1 04:43:49 np0005634532 ceph-mgr[76134]: mgr[py] Loading python module 'prometheus'
Mar  1 04:43:50 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e43 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Mar  1 04:43:50 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: 2026-03-01T09:43:50.272+0000 7f4cf929c140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Mar  1 04:43:50 np0005634532 ceph-mgr[76134]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Mar  1 04:43:50 np0005634532 ceph-mgr[76134]: mgr[py] Loading python module 'rbd_support'
Mar  1 04:43:50 np0005634532 ceph-osd[84309]: log_channel(cluster) log [DBG] : 7.18 scrub starts
Mar  1 04:43:50 np0005634532 ceph-osd[84309]: log_channel(cluster) log [DBG] : 7.18 scrub ok
Mar  1 04:43:50 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: 2026-03-01T09:43:50.358+0000 7f4cf929c140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Mar  1 04:43:50 np0005634532 ceph-mgr[76134]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Mar  1 04:43:50 np0005634532 ceph-mgr[76134]: mgr[py] Loading python module 'restful'
Mar  1 04:43:50 np0005634532 ceph-mgr[76134]: mgr[py] Loading python module 'rgw'
Mar  1 04:43:50 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: 2026-03-01T09:43:50.758+0000 7f4cf929c140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Mar  1 04:43:50 np0005634532 ceph-mgr[76134]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Mar  1 04:43:50 np0005634532 ceph-mgr[76134]: mgr[py] Loading python module 'rook'
Mar  1 04:43:50 np0005634532 systemd[1]: Stopping User Manager for UID 42477...
Mar  1 04:43:50 np0005634532 systemd[77178]: Activating special unit Exit the Session...
Mar  1 04:43:50 np0005634532 systemd[77178]: Stopped target Main User Target.
Mar  1 04:43:50 np0005634532 systemd[77178]: Stopped target Basic System.
Mar  1 04:43:50 np0005634532 systemd[77178]: Stopped target Paths.
Mar  1 04:43:50 np0005634532 systemd[77178]: Stopped target Sockets.
Mar  1 04:43:50 np0005634532 systemd[77178]: Stopped target Timers.
Mar  1 04:43:50 np0005634532 systemd[77178]: Stopped Mark boot as successful after the user session has run 2 minutes.
Mar  1 04:43:50 np0005634532 systemd[77178]: Stopped Daily Cleanup of User's Temporary Directories.
Mar  1 04:43:50 np0005634532 systemd[77178]: Closed D-Bus User Message Bus Socket.
Mar  1 04:43:50 np0005634532 systemd[77178]: Stopped Create User's Volatile Files and Directories.
Mar  1 04:43:50 np0005634532 systemd[77178]: Removed slice User Application Slice.
Mar  1 04:43:50 np0005634532 systemd[77178]: Reached target Shutdown.
Mar  1 04:43:50 np0005634532 systemd[77178]: Finished Exit the Session.
Mar  1 04:43:50 np0005634532 systemd[77178]: Reached target Exit the Session.
Mar  1 04:43:50 np0005634532 systemd[1]: user@42477.service: Deactivated successfully.
Mar  1 04:43:50 np0005634532 systemd[1]: Stopped User Manager for UID 42477.
Mar  1 04:43:50 np0005634532 systemd[1]: Stopping User Runtime Directory /run/user/42477...
Mar  1 04:43:50 np0005634532 systemd[1]: run-user-42477.mount: Deactivated successfully.
Mar  1 04:43:50 np0005634532 systemd[1]: user-runtime-dir@42477.service: Deactivated successfully.
Mar  1 04:43:50 np0005634532 systemd[1]: Stopped User Runtime Directory /run/user/42477.
Mar  1 04:43:50 np0005634532 systemd[1]: Removed slice User Slice of UID 42477.
Mar  1 04:43:50 np0005634532 systemd[1]: user-42477.slice: Consumed 30.149s CPU time.
Mar  1 04:43:50 np0005634532 ceph-mon[75825]: log_channel(cluster) log [DBG] : Standby manager daemon compute-1.uyojxx restarted
Mar  1 04:43:50 np0005634532 ceph-mon[75825]: log_channel(cluster) log [DBG] : Standby manager daemon compute-1.uyojxx started
Mar  1 04:43:51 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: 2026-03-01T09:43:51.307+0000 7f4cf929c140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Mar  1 04:43:51 np0005634532 ceph-mgr[76134]: mgr[py] Module rook has missing NOTIFY_TYPES member
Mar  1 04:43:51 np0005634532 ceph-mgr[76134]: mgr[py] Loading python module 'selftest'
Mar  1 04:43:51 np0005634532 ceph-osd[84309]: log_channel(cluster) log [DBG] : 7.1e scrub starts
Mar  1 04:43:51 np0005634532 ceph-osd[84309]: log_channel(cluster) log [DBG] : 7.1e scrub ok
Mar  1 04:43:51 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: 2026-03-01T09:43:51.412+0000 7f4cf929c140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Mar  1 04:43:51 np0005634532 ceph-mgr[76134]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Mar  1 04:43:51 np0005634532 ceph-mgr[76134]: mgr[py] Loading python module 'snap_schedule'
Mar  1 04:43:51 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: 2026-03-01T09:43:51.482+0000 7f4cf929c140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Mar  1 04:43:51 np0005634532 ceph-mgr[76134]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Mar  1 04:43:51 np0005634532 ceph-mgr[76134]: mgr[py] Loading python module 'stats'
Mar  1 04:43:51 np0005634532 ceph-mgr[76134]: mgr[py] Loading python module 'status'
Mar  1 04:43:51 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: 2026-03-01T09:43:51.610+0000 7f4cf929c140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Mar  1 04:43:51 np0005634532 ceph-mgr[76134]: mgr[py] Module status has missing NOTIFY_TYPES member
Mar  1 04:43:51 np0005634532 ceph-mgr[76134]: mgr[py] Loading python module 'telegraf'
Mar  1 04:43:51 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: 2026-03-01T09:43:51.671+0000 7f4cf929c140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Mar  1 04:43:51 np0005634532 ceph-mgr[76134]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Mar  1 04:43:51 np0005634532 ceph-mgr[76134]: mgr[py] Loading python module 'telemetry'
Mar  1 04:43:51 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: 2026-03-01T09:43:51.807+0000 7f4cf929c140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Mar  1 04:43:51 np0005634532 ceph-mgr[76134]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Mar  1 04:43:51 np0005634532 ceph-mgr[76134]: mgr[py] Loading python module 'test_orchestrator'
Mar  1 04:43:51 np0005634532 ceph-mon[75825]: log_channel(cluster) log [DBG] : mgrmap e23: compute-0.ebwufc(active, starting, since 5s), standbys: compute-1.uyojxx, compute-2.dikzlj
Mar  1 04:43:51 np0005634532 ceph-mon[75825]: log_channel(cluster) log [DBG] : Standby manager daemon compute-2.dikzlj restarted
Mar  1 04:43:51 np0005634532 ceph-mon[75825]: log_channel(cluster) log [DBG] : Standby manager daemon compute-2.dikzlj started
Mar  1 04:43:52 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: 2026-03-01T09:43:52.003+0000 7f4cf929c140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Mar  1 04:43:52 np0005634532 ceph-mgr[76134]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Mar  1 04:43:52 np0005634532 ceph-mgr[76134]: mgr[py] Loading python module 'volumes'
Mar  1 04:43:52 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: 2026-03-01T09:43:52.241+0000 7f4cf929c140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Mar  1 04:43:52 np0005634532 ceph-mgr[76134]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Mar  1 04:43:52 np0005634532 ceph-mgr[76134]: mgr[py] Loading python module 'zabbix'
Mar  1 04:43:52 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: 2026-03-01T09:43:52.302+0000 7f4cf929c140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Mar  1 04:43:52 np0005634532 ceph-mgr[76134]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Mar  1 04:43:52 np0005634532 ceph-mon[75825]: log_channel(cluster) log [INF] : Active manager daemon compute-0.ebwufc restarted
Mar  1 04:43:52 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e43 do_prune osdmap full prune enabled
Mar  1 04:43:52 np0005634532 ceph-mon[75825]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.ebwufc
Mar  1 04:43:52 np0005634532 ceph-mgr[76134]: ms_deliver_dispatch: unhandled message 0x565198b75860 mon_map magic: 0 from mon.0 v2:192.168.122.100:3300/0
Mar  1 04:43:52 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e44 e44: 3 total, 3 up, 3 in
Mar  1 04:43:52 np0005634532 ceph-mgr[76134]: mgr handle_mgr_map Activating!
Mar  1 04:43:52 np0005634532 ceph-mgr[76134]: mgr handle_mgr_map I am now activating
Mar  1 04:43:52 np0005634532 ceph-mon[75825]: log_channel(cluster) log [DBG] : osdmap e44: 3 total, 3 up, 3 in
Mar  1 04:43:52 np0005634532 ceph-mon[75825]: log_channel(cluster) log [DBG] : mgrmap e24: compute-0.ebwufc(active, starting, since 0.0402465s), standbys: compute-1.uyojxx, compute-2.dikzlj
Mar  1 04:43:52 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Mar  1 04:43:52 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Mar  1 04:43:52 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Mar  1 04:43:52 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Mar  1 04:43:52 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Mar  1 04:43:52 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Mar  1 04:43:52 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.ebwufc", "id": "compute-0.ebwufc"} v 0)
Mar  1 04:43:52 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "mgr metadata", "who": "compute-0.ebwufc", "id": "compute-0.ebwufc"}]: dispatch
Mar  1 04:43:52 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-1.uyojxx", "id": "compute-1.uyojxx"} v 0)
Mar  1 04:43:52 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "mgr metadata", "who": "compute-1.uyojxx", "id": "compute-1.uyojxx"}]: dispatch
Mar  1 04:43:52 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-2.dikzlj", "id": "compute-2.dikzlj"} v 0)
Mar  1 04:43:52 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "mgr metadata", "who": "compute-2.dikzlj", "id": "compute-2.dikzlj"}]: dispatch
Mar  1 04:43:52 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Mar  1 04:43:52 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Mar  1 04:43:52 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Mar  1 04:43:52 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Mar  1 04:43:52 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Mar  1 04:43:52 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Mar  1 04:43:52 np0005634532 ceph-osd[84309]: log_channel(cluster) log [DBG] : 3.4 scrub starts
Mar  1 04:43:52 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata"} v 0)
Mar  1 04:43:52 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "mds metadata"}]: dispatch
Mar  1 04:43:52 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).mds e1 all = 1
Mar  1 04:43:52 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata"} v 0)
Mar  1 04:43:52 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd metadata"}]: dispatch
Mar  1 04:43:52 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata"} v 0)
Mar  1 04:43:52 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "mon metadata"}]: dispatch
Mar  1 04:43:52 np0005634532 ceph-osd[84309]: log_channel(cluster) log [DBG] : 3.4 scrub ok
Mar  1 04:43:52 np0005634532 ceph-mgr[76134]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Mar  1 04:43:52 np0005634532 ceph-mgr[76134]: mgr load Constructed class from module: balancer
Mar  1 04:43:52 np0005634532 ceph-mgr[76134]: [balancer INFO root] Starting
Mar  1 04:43:52 np0005634532 ceph-mgr[76134]: [cephadm DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Mar  1 04:43:52 np0005634532 ceph-mon[75825]: log_channel(cluster) log [INF] : Manager daemon compute-0.ebwufc is now available
Mar  1 04:43:52 np0005634532 ceph-mgr[76134]: [balancer INFO root] Optimize plan auto_2026-03-01_09:43:52
Mar  1 04:43:52 np0005634532 ceph-mgr[76134]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Mar  1 04:43:52 np0005634532 ceph-mgr[76134]: [balancer INFO root] Some PGs (1.000000) are unknown; try again later
Mar  1 04:43:52 np0005634532 ceph-mgr[76134]: mgr load Constructed class from module: cephadm
Mar  1 04:43:52 np0005634532 ceph-mgr[76134]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Mar  1 04:43:52 np0005634532 ceph-mgr[76134]: mgr load Constructed class from module: crash
Mar  1 04:43:52 np0005634532 ceph-mgr[76134]: [dashboard DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Mar  1 04:43:52 np0005634532 ceph-mgr[76134]: mgr load Constructed class from module: dashboard
Mar  1 04:43:52 np0005634532 ceph-mgr[76134]: [dashboard INFO access_control] Loading user roles DB version=2
Mar  1 04:43:52 np0005634532 ceph-mgr[76134]: [dashboard INFO sso] Loading SSO DB version=1
Mar  1 04:43:52 np0005634532 ceph-mgr[76134]: [dashboard INFO root] server: ssl=no host=192.168.122.100 port=8443
Mar  1 04:43:52 np0005634532 ceph-mgr[76134]: [dashboard INFO root] Configured CherryPy, starting engine...
Mar  1 04:43:52 np0005634532 ceph-mgr[76134]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Mar  1 04:43:52 np0005634532 ceph-mgr[76134]: mgr load Constructed class from module: devicehealth
Mar  1 04:43:52 np0005634532 ceph-mgr[76134]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Mar  1 04:43:52 np0005634532 ceph-mgr[76134]: mgr load Constructed class from module: iostat
Mar  1 04:43:52 np0005634532 ceph-mgr[76134]: [devicehealth INFO root] Starting
Mar  1 04:43:52 np0005634532 ceph-mgr[76134]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Mar  1 04:43:52 np0005634532 ceph-mgr[76134]: mgr load Constructed class from module: nfs
Mar  1 04:43:52 np0005634532 ceph-mgr[76134]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Mar  1 04:43:52 np0005634532 ceph-mgr[76134]: mgr load Constructed class from module: orchestrator
Mar  1 04:43:52 np0005634532 ceph-mgr[76134]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Mar  1 04:43:52 np0005634532 ceph-mgr[76134]: mgr load Constructed class from module: pg_autoscaler
Mar  1 04:43:52 np0005634532 ceph-mgr[76134]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Mar  1 04:43:52 np0005634532 ceph-mgr[76134]: mgr load Constructed class from module: progress
Mar  1 04:43:52 np0005634532 ceph-mgr[76134]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Mar  1 04:43:52 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] _maybe_adjust
Mar  1 04:43:52 np0005634532 ceph-mgr[76134]: [progress INFO root] Loading...
Mar  1 04:43:52 np0005634532 ceph-mgr[76134]: [progress INFO root] Loaded [<progress.module.GhostEvent object at 0x7f4c7d66bca0>, <progress.module.GhostEvent object at 0x7f4c7d66bcd0>, <progress.module.GhostEvent object at 0x7f4c7d66bd00>, <progress.module.GhostEvent object at 0x7f4c7d66bd30>, <progress.module.GhostEvent object at 0x7f4c7d66bd60>, <progress.module.GhostEvent object at 0x7f4c7d66bd90>, <progress.module.GhostEvent object at 0x7f4c7d66bdc0>, <progress.module.GhostEvent object at 0x7f4c7d66bdf0>, <progress.module.GhostEvent object at 0x7f4c7d66be20>, <progress.module.GhostEvent object at 0x7f4c7d66be50>, <progress.module.GhostEvent object at 0x7f4c7d66be80>] historic events
Mar  1 04:43:52 np0005634532 ceph-mgr[76134]: [progress INFO root] Loaded OSDMap, ready.
Mar  1 04:43:52 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] recovery thread starting
Mar  1 04:43:52 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] starting setup
Mar  1 04:43:52 np0005634532 ceph-mgr[76134]: mgr load Constructed class from module: rbd_support
Mar  1 04:43:52 np0005634532 ceph-mgr[76134]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Mar  1 04:43:52 np0005634532 ceph-mgr[76134]: mgr load Constructed class from module: restful
Mar  1 04:43:52 np0005634532 ceph-mgr[76134]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Mar  1 04:43:52 np0005634532 ceph-mgr[76134]: mgr load Constructed class from module: status
Mar  1 04:43:52 np0005634532 ceph-mgr[76134]: [restful INFO root] server_addr: :: server_port: 8003
Mar  1 04:43:52 np0005634532 ceph-mgr[76134]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Mar  1 04:43:52 np0005634532 ceph-mgr[76134]: mgr load Constructed class from module: telemetry
Mar  1 04:43:52 np0005634532 ceph-mgr[76134]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Mar  1 04:43:52 np0005634532 ceph-mgr[76134]: [restful WARNING root] server not running: no certificate configured
Mar  1 04:43:52 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.ebwufc/mirror_snapshot_schedule"} v 0)
Mar  1 04:43:52 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.ebwufc/mirror_snapshot_schedule"}]: dispatch
Mar  1 04:43:52 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Mar  1 04:43:52 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: vms, start_after=
Mar  1 04:43:52 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: volumes, start_after=
Mar  1 04:43:52 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: backups, start_after=
Mar  1 04:43:52 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: images, start_after=
Mar  1 04:43:52 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Mar  1 04:43:52 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] PerfHandler: starting
Mar  1 04:43:52 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_task_task: vms, start_after=
Mar  1 04:43:52 np0005634532 ceph-mgr[76134]: mgr load Constructed class from module: volumes
Mar  1 04:43:52 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_task_task: volumes, start_after=
Mar  1 04:43:52 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_task_task: backups, start_after=
Mar  1 04:43:52 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_task_task: images, start_after=
Mar  1 04:43:52 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] TaskHandler: starting
Mar  1 04:43:52 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.ebwufc/trash_purge_schedule"} v 0)
Mar  1 04:43:52 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.ebwufc/trash_purge_schedule"}]: dispatch
Mar  1 04:43:52 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Mar  1 04:43:52 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: vms, start_after=
Mar  1 04:43:52 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: volumes, start_after=
Mar  1 04:43:52 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: backups, start_after=
Mar  1 04:43:52 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: images, start_after=
Mar  1 04:43:52 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Mar  1 04:43:52 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] setup complete
Mar  1 04:43:52 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFS -> /api/cephfs
Mar  1 04:43:52 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFsUi -> /ui-api/cephfs
Mar  1 04:43:52 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSubvolume -> /api/cephfs/subvolume
Mar  1 04:43:52 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSubvolumeGroups -> /api/cephfs/subvolume/group
Mar  1 04:43:52 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSubvolumeSnapshots -> /api/cephfs/subvolume/snapshot
Mar  1 04:43:52 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFsSnapshotClone -> /api/cephfs/subvolume/snapshot/clone
Mar  1 04:43:52 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSnapshotSchedule -> /api/cephfs/snapshot/schedule
Mar  1 04:43:52 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: IscsiUi -> /ui-api/iscsi
Mar  1 04:43:52 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Iscsi -> /api/iscsi
Mar  1 04:43:52 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: IscsiTarget -> /api/iscsi/target
Mar  1 04:43:52 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NFSGaneshaCluster -> /api/nfs-ganesha/cluster
Mar  1 04:43:52 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NFSGaneshaExports -> /api/nfs-ganesha/export
Mar  1 04:43:52 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NFSGaneshaUi -> /ui-api/nfs-ganesha
Mar  1 04:43:52 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Orchestrator -> /ui-api/orchestrator
Mar  1 04:43:52 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Service -> /api/service
Mar  1 04:43:52 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroring -> /api/block/mirroring
Mar  1 04:43:52 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringSummary -> /api/block/mirroring/summary
Mar  1 04:43:52 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringPoolMode -> /api/block/mirroring/pool
Mar  1 04:43:52 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringPoolBootstrap -> /api/block/mirroring/pool/{pool_name}/bootstrap
Mar  1 04:43:52 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringPoolPeer -> /api/block/mirroring/pool/{pool_name}/peer
Mar  1 04:43:52 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringStatus -> /ui-api/block/mirroring
Mar  1 04:43:52 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Pool -> /api/pool
Mar  1 04:43:52 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PoolUi -> /ui-api/pool
Mar  1 04:43:52 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RBDPool -> /api/pool
Mar  1 04:43:52 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Rbd -> /api/block/image
Mar  1 04:43:52 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdStatus -> /ui-api/block/rbd
Mar  1 04:43:52 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdSnapshot -> /api/block/image/{image_spec}/snap
Mar  1 04:43:52 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdTrash -> /api/block/image/trash
Mar  1 04:43:52 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdNamespace -> /api/block/pool/{pool_name}/namespace
Mar  1 04:43:52 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Rgw -> /ui-api/rgw
Mar  1 04:43:52 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwMultisiteStatus -> /ui-api/rgw/multisite
Mar  1 04:43:52 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwMultisiteController -> /api/rgw/multisite
Mar  1 04:43:52 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwDaemon -> /api/rgw/daemon
Mar  1 04:43:52 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwSite -> /api/rgw/site
Mar  1 04:43:52 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwBucket -> /api/rgw/bucket
Mar  1 04:43:52 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwBucketUi -> /ui-api/rgw/bucket
Mar  1 04:43:52 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwUser -> /api/rgw/user
Mar  1 04:43:52 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: rgwroles_CRUDClass -> /api/rgw/roles
Mar  1 04:43:52 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: rgwroles_CRUDClassMetadata -> /ui-api/rgw/roles
Mar  1 04:43:52 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwRealm -> /api/rgw/realm
Mar  1 04:43:52 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwZonegroup -> /api/rgw/zonegroup
Mar  1 04:43:52 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwZone -> /api/rgw/zone
Mar  1 04:43:52 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Auth -> /api/auth
Mar  1 04:43:52 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: clusteruser_CRUDClass -> /api/cluster/user
Mar  1 04:43:52 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: clusteruser_CRUDClassMetadata -> /ui-api/cluster/user
Mar  1 04:43:52 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Cluster -> /api/cluster
Mar  1 04:43:52 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ClusterUpgrade -> /api/cluster/upgrade
Mar  1 04:43:52 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ClusterConfiguration -> /api/cluster_conf
Mar  1 04:43:52 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CrushRule -> /api/crush_rule
Mar  1 04:43:52 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CrushRuleUi -> /ui-api/crush_rule
Mar  1 04:43:52 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Daemon -> /api/daemon
Mar  1 04:43:52 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Docs -> /docs
Mar  1 04:43:52 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ErasureCodeProfile -> /api/erasure_code_profile
Mar  1 04:43:52 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ErasureCodeProfileUi -> /ui-api/erasure_code_profile
Mar  1 04:43:52 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeedbackController -> /api/feedback
Mar  1 04:43:52 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeedbackApiController -> /api/feedback/api_key
Mar  1 04:43:52 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeedbackUiController -> /ui-api/feedback/api_key
Mar  1 04:43:52 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FrontendLogging -> /ui-api/logging
Mar  1 04:43:52 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Grafana -> /api/grafana
Mar  1 04:43:52 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Host -> /api/host
Mar  1 04:43:52 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: HostUi -> /ui-api/host
Mar  1 04:43:52 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Health -> /api/health
Mar  1 04:43:52 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: HomeController -> /
Mar  1 04:43:52 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: LangsController -> /ui-api/langs
Mar  1 04:43:52 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: LoginController -> /ui-api/login
Mar  1 04:43:52 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Logs -> /api/logs
Mar  1 04:43:52 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MgrModules -> /api/mgr/module
Mar  1 04:43:52 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Monitor -> /api/monitor
Mar  1 04:43:52 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFGateway -> /api/nvmeof/gateway
Mar  1 04:43:52 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFSpdk -> /api/nvmeof/spdk
Mar  1 04:43:52 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFSubsystem -> /api/nvmeof/subsystem
Mar  1 04:43:52 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFListener -> /api/nvmeof/subsystem/{nqn}/listener
Mar  1 04:43:52 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFNamespace -> /api/nvmeof/subsystem/{nqn}/namespace
Mar  1 04:43:52 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFHost -> /api/nvmeof/subsystem/{nqn}/host
Mar  1 04:43:52 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFConnection -> /api/nvmeof/subsystem/{nqn}/connection
Mar  1 04:43:52 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFTcpUI -> /ui-api/nvmeof
Mar  1 04:43:52 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Osd -> /api/osd
Mar  1 04:43:52 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: OsdUi -> /ui-api/osd
Mar  1 04:43:52 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: OsdFlagsController -> /api/osd/flags
Mar  1 04:43:52 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MdsPerfCounter -> /api/perf_counters/mds
Mar  1 04:43:52 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MonPerfCounter -> /api/perf_counters/mon
Mar  1 04:43:52 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: OsdPerfCounter -> /api/perf_counters/osd
Mar  1 04:43:52 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwPerfCounter -> /api/perf_counters/rgw
Mar  1 04:43:52 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirrorPerfCounter -> /api/perf_counters/rbd-mirror
Mar  1 04:43:52 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MgrPerfCounter -> /api/perf_counters/mgr
Mar  1 04:43:52 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: TcmuRunnerPerfCounter -> /api/perf_counters/tcmu-runner
Mar  1 04:43:52 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PerfCounters -> /api/perf_counters
Mar  1 04:43:52 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PrometheusReceiver -> /api/prometheus_receiver
Mar  1 04:43:52 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Prometheus -> /api/prometheus
Mar  1 04:43:52 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PrometheusNotifications -> /api/prometheus/notifications
Mar  1 04:43:52 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PrometheusSettings -> /ui-api/prometheus
Mar  1 04:43:52 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Role -> /api/role
Mar  1 04:43:52 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Scope -> /ui-api/scope
Mar  1 04:43:52 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Saml2 -> /auth/saml2
Mar  1 04:43:52 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Settings -> /api/settings
Mar  1 04:43:52 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: StandardSettings -> /ui-api/standard_settings
Mar  1 04:43:52 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Summary -> /api/summary
Mar  1 04:43:52 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Task -> /api/task
Mar  1 04:43:52 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Telemetry -> /api/telemetry
Mar  1 04:43:52 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: User -> /api/user
Mar  1 04:43:52 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: UserPasswordPolicy -> /api/user
Mar  1 04:43:52 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: UserChangePassword -> /api/user/{username}
Mar  1 04:43:52 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeatureTogglesEndpoint -> /api/feature_toggles
Mar  1 04:43:52 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MessageOfTheDay -> /ui-api/motd
Mar  1 04:43:52 np0005634532 systemd-logind[832]: New session 35 of user ceph-admin.
Mar  1 04:43:52 np0005634532 systemd[1]: Created slice User Slice of UID 42477.
Mar  1 04:43:52 np0005634532 systemd[1]: Starting User Runtime Directory /run/user/42477...
Mar  1 04:43:52 np0005634532 systemd[1]: Finished User Runtime Directory /run/user/42477.
Mar  1 04:43:52 np0005634532 systemd[1]: Starting User Manager for UID 42477...
Mar  1 04:43:52 np0005634532 ceph-mon[75825]: Active manager daemon compute-0.ebwufc restarted
Mar  1 04:43:52 np0005634532 ceph-mon[75825]: Activating manager daemon compute-0.ebwufc
Mar  1 04:43:52 np0005634532 ceph-mon[75825]: Manager daemon compute-0.ebwufc is now available
Mar  1 04:43:52 np0005634532 ceph-mon[75825]: from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.ebwufc/mirror_snapshot_schedule"}]: dispatch
Mar  1 04:43:52 np0005634532 ceph-mon[75825]: from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.ebwufc/trash_purge_schedule"}]: dispatch
Mar  1 04:43:52 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.module] Engine started.
Mar  1 04:43:53 np0005634532 systemd[94532]: Queued start job for default target Main User Target.
Mar  1 04:43:53 np0005634532 systemd[94532]: Created slice User Application Slice.
Mar  1 04:43:53 np0005634532 systemd[94532]: Started Mark boot as successful after the user session has run 2 minutes.
Mar  1 04:43:53 np0005634532 systemd[94532]: Started Daily Cleanup of User's Temporary Directories.
Mar  1 04:43:53 np0005634532 systemd[94532]: Reached target Paths.
Mar  1 04:43:53 np0005634532 systemd[94532]: Reached target Timers.
Mar  1 04:43:53 np0005634532 systemd[94532]: Starting D-Bus User Message Bus Socket...
Mar  1 04:43:53 np0005634532 systemd[94532]: Starting Create User's Volatile Files and Directories...
Mar  1 04:43:53 np0005634532 systemd[94532]: Finished Create User's Volatile Files and Directories.
Mar  1 04:43:53 np0005634532 systemd[94532]: Listening on D-Bus User Message Bus Socket.
Mar  1 04:43:53 np0005634532 systemd[94532]: Reached target Sockets.
Mar  1 04:43:53 np0005634532 systemd[94532]: Reached target Basic System.
Mar  1 04:43:53 np0005634532 systemd[94532]: Reached target Main User Target.
Mar  1 04:43:53 np0005634532 systemd[94532]: Startup finished in 113ms.
Mar  1 04:43:53 np0005634532 systemd[1]: Started User Manager for UID 42477.
Mar  1 04:43:53 np0005634532 systemd[1]: Started Session 35 of User ceph-admin.
Mar  1 04:43:53 np0005634532 ceph-osd[84309]: log_channel(cluster) log [DBG] : 5.1d scrub starts
Mar  1 04:43:53 np0005634532 ceph-osd[84309]: log_channel(cluster) log [DBG] : 5.1d scrub ok
Mar  1 04:43:53 np0005634532 ceph-mon[75825]: log_channel(cluster) log [DBG] : mgrmap e25: compute-0.ebwufc(active, since 1.05695s), standbys: compute-1.uyojxx, compute-2.dikzlj
Mar  1 04:43:53 np0005634532 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14457 -' entity='client.admin' cmd=[{"prefix": "fs volume create", "name": "cephfs", "placement": "compute-0 compute-1 compute-2 ", "target": ["mon-mgr", ""]}]: dispatch
Mar  1 04:43:53 np0005634532 ceph-mgr[76134]: [volumes INFO volumes.module] Starting _cmd_fs_volume_create(name:cephfs, placement:compute-0 compute-1 compute-2 , prefix:fs volume create, target:['mon-mgr', '']) < ""
Mar  1 04:43:53 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"} v 0)
Mar  1 04:43:53 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"}]: dispatch
Mar  1 04:43:53 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"} v 0)
Mar  1 04:43:53 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' cmd=[{"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"}]: dispatch
Mar  1 04:43:53 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"} v 0)
Mar  1 04:43:53 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]: dispatch
Mar  1 04:43:53 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e44 do_prune osdmap full prune enabled
Mar  1 04:43:53 np0005634532 ceph-mon[75825]: log_channel(cluster) log [ERR] : Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Mar  1 04:43:53 np0005634532 ceph-mon[75825]: log_channel(cluster) log [WRN] : Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX)
Mar  1 04:43:53 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mon-compute-0[75821]: 2026-03-01T09:43:53.380+0000 7f8391a07640 -1 log_channel(cluster) log [ERR] : Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Mar  1 04:43:53 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v3: 197 pgs: 197 active+clean; 454 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Mar  1 04:43:53 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' cmd='[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]': finished
Mar  1 04:43:53 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).mds e2 new map
Mar  1 04:43:53 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).mds e2 print_map
    e2
    btime 2026-03-01T09:43:53:381686+0000
    enable_multiple, ever_enabled_multiple: 1,1
    default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
    legacy client fscid: 1

    Filesystem 'cephfs' (1)
    fs_name  cephfs
    epoch  2
    flags  12 joinable allow_snaps allow_multimds_snaps
    created  2026-03-01T09:43:53.381630+0000
    modified  2026-03-01T09:43:53.381630+0000
    tableserver  0
    root  0
    session_timeout  60
    session_autoclose  300
    max_file_size  1099511627776
    max_xattr_size  65536
    required_client_features  {}
    last_failure  0
    last_failure_osd_epoch  0
    compat  compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}
    max_mds  1
    in
    up  {}
    failed
    damaged
    stopped
    data_pools  [7]
    metadata_pool  6
    inline_data  disabled
    balancer
    bal_rank_mask  -1
    standby_count_wanted  0
    qdb_cluster  leader: 0 members:
Mar  1 04:43:53 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e45 e45: 3 total, 3 up, 3 in
Mar  1 04:43:53 np0005634532 ceph-mon[75825]: log_channel(cluster) log [DBG] : osdmap e45: 3 total, 3 up, 3 in
Mar  1 04:43:53 np0005634532 ceph-mon[75825]: log_channel(cluster) log [DBG] : fsmap cephfs:0
Mar  1 04:43:53 np0005634532 ceph-mgr[76134]: [cephadm INFO root] Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Mar  1 04:43:53 np0005634532 ceph-mgr[76134]: log_channel(cephadm) log [INF] : Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Mar  1 04:43:53 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Mar  1 04:43:53 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' 
Mar  1 04:43:53 np0005634532 ceph-mgr[76134]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_create(name:cephfs, placement:compute-0 compute-1 compute-2 , prefix:fs volume create, target:['mon-mgr', '']) < ""
Mar  1 04:43:53 np0005634532 systemd[1]: libpod-8988a37cb039ad28deb39d83680a4655148555e200644da5101a9e1b63b43cc5.scope: Deactivated successfully.
Mar  1 04:43:53 np0005634532 podman[94327]: 2026-03-01 09:43:53.43632087 +0000 UTC m=+10.133090402 container died 8988a37cb039ad28deb39d83680a4655148555e200644da5101a9e1b63b43cc5 (image=quay.io/ceph/ceph:v19, name=gracious_kalam, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, OSD_FLAVOR=default)
Mar  1 04:43:53 np0005634532 systemd[1]: var-lib-containers-storage-overlay-b30180aba8aab3a2cbb63adb09ad16d0e5cee66cede87e1f624d5434072fa266-merged.mount: Deactivated successfully.
Mar  1 04:43:53 np0005634532 podman[94327]: 2026-03-01 09:43:53.474611972 +0000 UTC m=+10.171381514 container remove 8988a37cb039ad28deb39d83680a4655148555e200644da5101a9e1b63b43cc5 (image=quay.io/ceph/ceph:v19, name=gracious_kalam, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Mar  1 04:43:53 np0005634532 systemd[1]: libpod-conmon-8988a37cb039ad28deb39d83680a4655148555e200644da5101a9e1b63b43cc5.scope: Deactivated successfully.
Mar  1 04:43:53 np0005634532 ceph-mgr[76134]: [cephadm INFO cherrypy.error] [01/Mar/2026:09:43:53] ENGINE Bus STARTING
Mar  1 04:43:53 np0005634532 ceph-mgr[76134]: log_channel(cephadm) log [INF] : [01/Mar/2026:09:43:53] ENGINE Bus STARTING
Mar  1 04:43:53 np0005634532 podman[94705]: 2026-03-01 09:43:53.710096591 +0000 UTC m=+0.074814189 container exec 6664049ace048b4adddae1365cde7c16773d892ae197c1350f58a5e5b1183392 (image=quay.io/ceph/ceph:v19, name=ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mon-compute-0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Mar  1 04:43:53 np0005634532 python3[94713]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_mds.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 437b1e74-f995-5d64-af1d-257ce01d77ab -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Mar  1 04:43:53 np0005634532 podman[94705]: 2026-03-01 09:43:53.788779696 +0000 UTC m=+0.153497314 container exec_died 6664049ace048b4adddae1365cde7c16773d892ae197c1350f58a5e5b1183392 (image=quay.io/ceph/ceph:v19, name=ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mon-compute-0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default)
Mar  1 04:43:53 np0005634532 ceph-mgr[76134]: [cephadm INFO cherrypy.error] [01/Mar/2026:09:43:53] ENGINE Serving on http://192.168.122.100:8765
Mar  1 04:43:53 np0005634532 ceph-mgr[76134]: log_channel(cephadm) log [INF] : [01/Mar/2026:09:43:53] ENGINE Serving on http://192.168.122.100:8765
Mar  1 04:43:53 np0005634532 podman[94738]: 2026-03-01 09:43:53.830935713 +0000 UTC m=+0.060705659 container create f549368442be1779fa37e262cff8a3c38b6fe36b2fd91257a583f52937251e1c (image=quay.io/ceph/ceph:v19, name=exciting_williams, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Mar  1 04:43:53 np0005634532 systemd[1]: Started libpod-conmon-f549368442be1779fa37e262cff8a3c38b6fe36b2fd91257a583f52937251e1c.scope.
Mar  1 04:43:53 np0005634532 systemd[1]: Started libcrun container.
Mar  1 04:43:53 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8908370ad51b7634ab3804e0c85bba1a8e3fe49706c3c287765e0591842e1643/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Mar  1 04:43:53 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8908370ad51b7634ab3804e0c85bba1a8e3fe49706c3c287765e0591842e1643/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Mar  1 04:43:53 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8908370ad51b7634ab3804e0c85bba1a8e3fe49706c3c287765e0591842e1643/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Mar  1 04:43:53 np0005634532 podman[94738]: 2026-03-01 09:43:53.791128914 +0000 UTC m=+0.020898840 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Mar  1 04:43:53 np0005634532 podman[94738]: 2026-03-01 09:43:53.895083256 +0000 UTC m=+0.124853192 container init f549368442be1779fa37e262cff8a3c38b6fe36b2fd91257a583f52937251e1c (image=quay.io/ceph/ceph:v19, name=exciting_williams, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Mar  1 04:43:53 np0005634532 podman[94738]: 2026-03-01 09:43:53.901521276 +0000 UTC m=+0.131291192 container start f549368442be1779fa37e262cff8a3c38b6fe36b2fd91257a583f52937251e1c (image=quay.io/ceph/ceph:v19, name=exciting_williams, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Mar  1 04:43:53 np0005634532 podman[94738]: 2026-03-01 09:43:53.904675155 +0000 UTC m=+0.134445081 container attach f549368442be1779fa37e262cff8a3c38b6fe36b2fd91257a583f52937251e1c (image=quay.io/ceph/ceph:v19, name=exciting_williams, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325)
Mar  1 04:43:53 np0005634532 ceph-mon[75825]: from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"}]: dispatch
Mar  1 04:43:53 np0005634532 ceph-mon[75825]: from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' cmd=[{"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"}]: dispatch
Mar  1 04:43:53 np0005634532 ceph-mon[75825]: from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]: dispatch
Mar  1 04:43:53 np0005634532 ceph-mon[75825]: Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Mar  1 04:43:53 np0005634532 ceph-mon[75825]: Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX)
Mar  1 04:43:53 np0005634532 ceph-mon[75825]: from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' cmd='[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]': finished
Mar  1 04:43:53 np0005634532 ceph-mon[75825]: Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Mar  1 04:43:53 np0005634532 ceph-mon[75825]: from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' 
Mar  1 04:43:53 np0005634532 ceph-mgr[76134]: [cephadm INFO cherrypy.error] [01/Mar/2026:09:43:53] ENGINE Serving on https://192.168.122.100:7150
Mar  1 04:43:53 np0005634532 ceph-mgr[76134]: log_channel(cephadm) log [INF] : [01/Mar/2026:09:43:53] ENGINE Serving on https://192.168.122.100:7150
Mar  1 04:43:53 np0005634532 ceph-mgr[76134]: [cephadm INFO cherrypy.error] [01/Mar/2026:09:43:53] ENGINE Bus STARTED
Mar  1 04:43:53 np0005634532 ceph-mgr[76134]: log_channel(cephadm) log [INF] : [01/Mar/2026:09:43:53] ENGINE Bus STARTED
Mar  1 04:43:53 np0005634532 ceph-mgr[76134]: [cephadm INFO cherrypy.error] [01/Mar/2026:09:43:53] ENGINE Client ('192.168.122.100', 38832) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Mar  1 04:43:53 np0005634532 ceph-mgr[76134]: log_channel(cephadm) log [INF] : [01/Mar/2026:09:43:53] ENGINE Client ('192.168.122.100', 38832) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Mar  1 04:43:54 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Mar  1 04:43:54 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' 
Mar  1 04:43:54 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Mar  1 04:43:54 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' 
Mar  1 04:43:54 np0005634532 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14487 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Mar  1 04:43:54 np0005634532 ceph-mgr[76134]: [cephadm INFO root] Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Mar  1 04:43:54 np0005634532 ceph-mgr[76134]: log_channel(cephadm) log [INF] : Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Mar  1 04:43:54 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Mar  1 04:43:54 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' 
Mar  1 04:43:54 np0005634532 exciting_williams[94795]: Scheduled mds.cephfs update...
Mar  1 04:43:54 np0005634532 systemd[1]: libpod-f549368442be1779fa37e262cff8a3c38b6fe36b2fd91257a583f52937251e1c.scope: Deactivated successfully.
Mar  1 04:43:54 np0005634532 podman[94738]: 2026-03-01 09:43:54.309268835 +0000 UTC m=+0.539038781 container died f549368442be1779fa37e262cff8a3c38b6fe36b2fd91257a583f52937251e1c (image=quay.io/ceph/ceph:v19, name=exciting_williams, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Mar  1 04:43:54 np0005634532 podman[94903]: 2026-03-01 09:43:54.319607342 +0000 UTC m=+0.056667059 container exec e104fed6cb8ecd593791384f3650da41ba603514e3f5be77683b4a91426bfe16 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Mar  1 04:43:54 np0005634532 ceph-osd[84309]: log_channel(cluster) log [DBG] : 2.9 scrub starts
Mar  1 04:43:54 np0005634532 ceph-osd[84309]: log_channel(cluster) log [DBG] : 2.9 scrub ok
Mar  1 04:43:54 np0005634532 systemd[1]: var-lib-containers-storage-overlay-8908370ad51b7634ab3804e0c85bba1a8e3fe49706c3c287765e0591842e1643-merged.mount: Deactivated successfully.
Mar  1 04:43:54 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v5: 197 pgs: 197 active+clean; 454 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Mar  1 04:43:54 np0005634532 podman[94738]: 2026-03-01 09:43:54.366355073 +0000 UTC m=+0.596125029 container remove f549368442be1779fa37e262cff8a3c38b6fe36b2fd91257a583f52937251e1c (image=quay.io/ceph/ceph:v19, name=exciting_williams, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Mar  1 04:43:54 np0005634532 systemd[1]: libpod-conmon-f549368442be1779fa37e262cff8a3c38b6fe36b2fd91257a583f52937251e1c.scope: Deactivated successfully.
Mar  1 04:43:54 np0005634532 podman[94903]: 2026-03-01 09:43:54.37389625 +0000 UTC m=+0.110955937 container exec_died e104fed6cb8ecd593791384f3650da41ba603514e3f5be77683b4a91426bfe16 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Mar  1 04:43:54 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Mar  1 04:43:54 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' 
Mar  1 04:43:54 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Mar  1 04:43:54 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' 
Mar  1 04:43:54 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Mar  1 04:43:54 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' 
Mar  1 04:43:54 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Mar  1 04:43:54 np0005634532 ceph-mgr[76134]: [devicehealth INFO root] Check health
Mar  1 04:43:54 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' 
Mar  1 04:43:54 np0005634532 python3[95036]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_mds.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 437b1e74-f995-5d64-af1d-257ce01d77ab -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   nfs cluster create cephfs --ingress --virtual-ip=192.168.122.2/24 --ingress-mode=haproxy-protocol  '--placement=compute-0 compute-1 compute-2 '#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Mar  1 04:43:54 np0005634532 podman[95039]: 2026-03-01 09:43:54.758566655 +0000 UTC m=+0.054134186 container create deabee366b39ff5f4dada8fc04e03c53084bc11bfa91f1f3abe92f986fe86b1c (image=quay.io/ceph/ceph:v19, name=relaxed_engelbart, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Mar  1 04:43:54 np0005634532 systemd[1]: Started libpod-conmon-deabee366b39ff5f4dada8fc04e03c53084bc11bfa91f1f3abe92f986fe86b1c.scope.
Mar  1 04:43:54 np0005634532 systemd[1]: Started libcrun container.
Mar  1 04:43:54 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ceefe5c342aa910179bff18f0e1910417a61a9fab507c18f4f676e7167456e4/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Mar  1 04:43:54 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ceefe5c342aa910179bff18f0e1910417a61a9fab507c18f4f676e7167456e4/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Mar  1 04:43:54 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ceefe5c342aa910179bff18f0e1910417a61a9fab507c18f4f676e7167456e4/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Mar  1 04:43:54 np0005634532 podman[95039]: 2026-03-01 09:43:54.730954699 +0000 UTC m=+0.026522280 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Mar  1 04:43:54 np0005634532 podman[95039]: 2026-03-01 09:43:54.843767101 +0000 UTC m=+0.139334662 container init deabee366b39ff5f4dada8fc04e03c53084bc11bfa91f1f3abe92f986fe86b1c (image=quay.io/ceph/ceph:v19, name=relaxed_engelbart, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Mar  1 04:43:54 np0005634532 podman[95039]: 2026-03-01 09:43:54.853268447 +0000 UTC m=+0.148835988 container start deabee366b39ff5f4dada8fc04e03c53084bc11bfa91f1f3abe92f986fe86b1c (image=quay.io/ceph/ceph:v19, name=relaxed_engelbart, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Mar  1 04:43:54 np0005634532 podman[95039]: 2026-03-01 09:43:54.856784105 +0000 UTC m=+0.152351646 container attach deabee366b39ff5f4dada8fc04e03c53084bc11bfa91f1f3abe92f986fe86b1c (image=quay.io/ceph/ceph:v19, name=relaxed_engelbart, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Mar  1 04:43:55 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Mar  1 04:43:55 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' 
Mar  1 04:43:55 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Mar  1 04:43:55 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' 
Mar  1 04:43:55 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0)
Mar  1 04:43:55 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Mar  1 04:43:55 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e45 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Mar  1 04:43:55 np0005634532 ceph-mon[75825]: [01/Mar/2026:09:43:53] ENGINE Bus STARTING
Mar  1 04:43:55 np0005634532 ceph-mon[75825]: [01/Mar/2026:09:43:53] ENGINE Serving on http://192.168.122.100:8765
Mar  1 04:43:55 np0005634532 ceph-mon[75825]: [01/Mar/2026:09:43:53] ENGINE Serving on https://192.168.122.100:7150
Mar  1 04:43:55 np0005634532 ceph-mon[75825]: [01/Mar/2026:09:43:53] ENGINE Bus STARTED
Mar  1 04:43:55 np0005634532 ceph-mon[75825]: [01/Mar/2026:09:43:53] ENGINE Client ('192.168.122.100', 38832) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Mar  1 04:43:55 np0005634532 ceph-mon[75825]: from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' 
Mar  1 04:43:55 np0005634532 ceph-mon[75825]: from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' 
Mar  1 04:43:55 np0005634532 ceph-mon[75825]: Saving service mds.cephfs spec with placement compute-0;compute-1;compute-2
Mar  1 04:43:55 np0005634532 ceph-mon[75825]: from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' 
Mar  1 04:43:55 np0005634532 ceph-mon[75825]: from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' 
Mar  1 04:43:55 np0005634532 ceph-mon[75825]: from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' 
Mar  1 04:43:55 np0005634532 ceph-mon[75825]: from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' 
Mar  1 04:43:55 np0005634532 ceph-mon[75825]: from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' 
Mar  1 04:43:55 np0005634532 ceph-mon[75825]: from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' 
Mar  1 04:43:55 np0005634532 ceph-mon[75825]: from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' 
Mar  1 04:43:55 np0005634532 ceph-mon[75825]: from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Mar  1 04:43:55 np0005634532 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14499 -' entity='client.admin' cmd=[{"prefix": "nfs cluster create", "cluster_id": "cephfs", "ingress": true, "virtual_ip": "192.168.122.2/24", "ingress_mode": "haproxy-protocol", "placement": "compute-0 compute-1 compute-2 ", "target": ["mon-mgr", ""]}]: dispatch
Mar  1 04:43:55 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool create", "pool": ".nfs", "yes_i_really_mean_it": true} v 0)
Mar  1 04:43:55 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd pool create", "pool": ".nfs", "yes_i_really_mean_it": true}]: dispatch
Mar  1 04:43:55 np0005634532 ceph-osd[84309]: log_channel(cluster) log [DBG] : 5.5 scrub starts
Mar  1 04:43:55 np0005634532 ceph-mon[75825]: log_channel(cluster) log [DBG] : mgrmap e26: compute-0.ebwufc(active, since 3s), standbys: compute-1.uyojxx, compute-2.dikzlj
Mar  1 04:43:55 np0005634532 ceph-osd[84309]: log_channel(cluster) log [DBG] : 5.5 scrub ok
Mar  1 04:43:55 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Mar  1 04:43:55 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' 
Mar  1 04:43:55 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Mar  1 04:43:55 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' 
Mar  1 04:43:55 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Mar  1 04:43:55 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Mar  1 04:43:55 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Mar  1 04:43:55 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' 
Mar  1 04:43:55 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Mar  1 04:43:55 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' 
Mar  1 04:43:55 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0)
Mar  1 04:43:55 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Mar  1 04:43:55 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Mar  1 04:43:55 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Mar  1 04:43:55 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Mar  1 04:43:55 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Mar  1 04:43:55 np0005634532 ceph-mgr[76134]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.conf
Mar  1 04:43:55 np0005634532 ceph-mgr[76134]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.conf
Mar  1 04:43:55 np0005634532 ceph-mgr[76134]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.conf
Mar  1 04:43:55 np0005634532 ceph-mgr[76134]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.conf
Mar  1 04:43:55 np0005634532 ceph-mgr[76134]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.conf
Mar  1 04:43:55 np0005634532 ceph-mgr[76134]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.conf
Mar  1 04:43:56 np0005634532 ceph-mgr[76134]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/437b1e74-f995-5d64-af1d-257ce01d77ab/config/ceph.conf
Mar  1 04:43:56 np0005634532 ceph-mgr[76134]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/437b1e74-f995-5d64-af1d-257ce01d77ab/config/ceph.conf
Mar  1 04:43:56 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e45 do_prune osdmap full prune enabled
Mar  1 04:43:56 np0005634532 ceph-mon[75825]: from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd pool create", "pool": ".nfs", "yes_i_really_mean_it": true}]: dispatch
Mar  1 04:43:56 np0005634532 ceph-mon[75825]: from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' 
Mar  1 04:43:56 np0005634532 ceph-mon[75825]: from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' 
Mar  1 04:43:56 np0005634532 ceph-mon[75825]: from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Mar  1 04:43:56 np0005634532 ceph-mon[75825]: from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' 
Mar  1 04:43:56 np0005634532 ceph-mon[75825]: from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' 
Mar  1 04:43:56 np0005634532 ceph-mon[75825]: from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Mar  1 04:43:56 np0005634532 ceph-mon[75825]: from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Mar  1 04:43:56 np0005634532 ceph-mgr[76134]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/437b1e74-f995-5d64-af1d-257ce01d77ab/config/ceph.conf
Mar  1 04:43:56 np0005634532 ceph-mgr[76134]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/437b1e74-f995-5d64-af1d-257ce01d77ab/config/ceph.conf
Mar  1 04:43:56 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' cmd='[{"prefix": "osd pool create", "pool": ".nfs", "yes_i_really_mean_it": true}]': finished
Mar  1 04:43:56 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e46 e46: 3 total, 3 up, 3 in
Mar  1 04:43:56 np0005634532 ceph-mon[75825]: log_channel(cluster) log [DBG] : osdmap e46: 3 total, 3 up, 3 in
Mar  1 04:43:56 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool application enable", "pool": ".nfs", "app": "nfs"} v 0)
Mar  1 04:43:56 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd pool application enable", "pool": ".nfs", "app": "nfs"}]: dispatch
Mar  1 04:43:56 np0005634532 ceph-mgr[76134]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/437b1e74-f995-5d64-af1d-257ce01d77ab/config/ceph.conf
Mar  1 04:43:56 np0005634532 ceph-mgr[76134]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/437b1e74-f995-5d64-af1d-257ce01d77ab/config/ceph.conf
Mar  1 04:43:56 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v7: 198 pgs: 1 unknown, 197 active+clean; 454 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Mar  1 04:43:56 np0005634532 ceph-osd[84309]: log_channel(cluster) log [DBG] : 3.2 scrub starts
Mar  1 04:43:56 np0005634532 ceph-osd[84309]: log_channel(cluster) log [DBG] : 3.2 scrub ok
Mar  1 04:43:56 np0005634532 ceph-mgr[76134]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Mar  1 04:43:56 np0005634532 ceph-mgr[76134]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Mar  1 04:43:56 np0005634532 ceph-mgr[76134]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Mar  1 04:43:56 np0005634532 ceph-mgr[76134]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Mar  1 04:43:56 np0005634532 ceph-mgr[76134]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Mar  1 04:43:56 np0005634532 ceph-mgr[76134]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Mar  1 04:43:56 np0005634532 ceph-mgr[76134]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/437b1e74-f995-5d64-af1d-257ce01d77ab/config/ceph.client.admin.keyring
Mar  1 04:43:56 np0005634532 ceph-mgr[76134]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/437b1e74-f995-5d64-af1d-257ce01d77ab/config/ceph.client.admin.keyring
Mar  1 04:43:57 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e46 do_prune osdmap full prune enabled
Mar  1 04:43:57 np0005634532 ceph-mon[75825]: log_channel(cluster) log [WRN] : Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Mar  1 04:43:57 np0005634532 ceph-mon[75825]: Updating compute-0:/etc/ceph/ceph.conf
Mar  1 04:43:57 np0005634532 ceph-mon[75825]: Updating compute-1:/etc/ceph/ceph.conf
Mar  1 04:43:57 np0005634532 ceph-mon[75825]: Updating compute-2:/etc/ceph/ceph.conf
Mar  1 04:43:57 np0005634532 ceph-mon[75825]: Updating compute-2:/var/lib/ceph/437b1e74-f995-5d64-af1d-257ce01d77ab/config/ceph.conf
Mar  1 04:43:57 np0005634532 ceph-mon[75825]: Updating compute-0:/var/lib/ceph/437b1e74-f995-5d64-af1d-257ce01d77ab/config/ceph.conf
Mar  1 04:43:57 np0005634532 ceph-mon[75825]: from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' cmd='[{"prefix": "osd pool create", "pool": ".nfs", "yes_i_really_mean_it": true}]': finished
Mar  1 04:43:57 np0005634532 ceph-mon[75825]: from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd pool application enable", "pool": ".nfs", "app": "nfs"}]: dispatch
Mar  1 04:43:57 np0005634532 ceph-mon[75825]: Updating compute-1:/var/lib/ceph/437b1e74-f995-5d64-af1d-257ce01d77ab/config/ceph.conf
Mar  1 04:43:57 np0005634532 ceph-mon[75825]: Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Mar  1 04:43:57 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' cmd='[{"prefix": "osd pool application enable", "pool": ".nfs", "app": "nfs"}]': finished
Mar  1 04:43:57 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e47 e47: 3 total, 3 up, 3 in
Mar  1 04:43:57 np0005634532 ceph-mon[75825]: log_channel(cluster) log [DBG] : osdmap e47: 3 total, 3 up, 3 in
Mar  1 04:43:57 np0005634532 ceph-mon[75825]: log_channel(cluster) log [DBG] : mgrmap e27: compute-0.ebwufc(active, since 4s), standbys: compute-1.uyojxx, compute-2.dikzlj
Mar  1 04:43:57 np0005634532 ceph-mgr[76134]: [nfs INFO nfs.cluster] Created empty object:conf-nfs.cephfs
Mar  1 04:43:57 np0005634532 ceph-mgr[76134]: [cephadm INFO root] Saving service nfs.cephfs spec with placement compute-0;compute-1;compute-2
Mar  1 04:43:57 np0005634532 ceph-mgr[76134]: log_channel(cephadm) log [INF] : Saving service nfs.cephfs spec with placement compute-0;compute-1;compute-2
Mar  1 04:43:57 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Mar  1 04:43:57 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' 
Mar  1 04:43:57 np0005634532 ceph-mgr[76134]: [cephadm INFO root] Saving service ingress.nfs.cephfs spec with placement compute-0;compute-1;compute-2
Mar  1 04:43:57 np0005634532 ceph-mgr[76134]: log_channel(cephadm) log [INF] : Saving service ingress.nfs.cephfs spec with placement compute-0;compute-1;compute-2
Mar  1 04:43:57 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.nfs.cephfs}] v 0)
Mar  1 04:43:57 np0005634532 ceph-mgr[76134]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/437b1e74-f995-5d64-af1d-257ce01d77ab/config/ceph.client.admin.keyring
Mar  1 04:43:57 np0005634532 ceph-mgr[76134]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/437b1e74-f995-5d64-af1d-257ce01d77ab/config/ceph.client.admin.keyring
Mar  1 04:43:57 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' 
Mar  1 04:43:57 np0005634532 systemd[1]: libpod-deabee366b39ff5f4dada8fc04e03c53084bc11bfa91f1f3abe92f986fe86b1c.scope: Deactivated successfully.
Mar  1 04:43:57 np0005634532 podman[95039]: 2026-03-01 09:43:57.311736907 +0000 UTC m=+2.607304398 container died deabee366b39ff5f4dada8fc04e03c53084bc11bfa91f1f3abe92f986fe86b1c (image=quay.io/ceph/ceph:v19, name=relaxed_engelbart, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Mar  1 04:43:57 np0005634532 systemd[1]: var-lib-containers-storage-overlay-7ceefe5c342aa910179bff18f0e1910417a61a9fab507c18f4f676e7167456e4-merged.mount: Deactivated successfully.
Mar  1 04:43:57 np0005634532 podman[95039]: 2026-03-01 09:43:57.345630379 +0000 UTC m=+2.641197920 container remove deabee366b39ff5f4dada8fc04e03c53084bc11bfa91f1f3abe92f986fe86b1c (image=quay.io/ceph/ceph:v19, name=relaxed_engelbart, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Mar  1 04:43:57 np0005634532 systemd[1]: libpod-conmon-deabee366b39ff5f4dada8fc04e03c53084bc11bfa91f1f3abe92f986fe86b1c.scope: Deactivated successfully.
Mar  1 04:43:57 np0005634532 ceph-osd[84309]: log_channel(cluster) log [DBG] : 3.1 scrub starts
Mar  1 04:43:57 np0005634532 ceph-osd[84309]: log_channel(cluster) log [DBG] : 3.1 scrub ok
Mar  1 04:43:57 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Mar  1 04:43:57 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' 
Mar  1 04:43:57 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Mar  1 04:43:57 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' 
Mar  1 04:43:57 np0005634532 ceph-mgr[76134]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/437b1e74-f995-5d64-af1d-257ce01d77ab/config/ceph.client.admin.keyring
Mar  1 04:43:57 np0005634532 ceph-mgr[76134]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/437b1e74-f995-5d64-af1d-257ce01d77ab/config/ceph.client.admin.keyring
Mar  1 04:43:57 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Mar  1 04:43:57 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' 
Mar  1 04:43:57 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Mar  1 04:43:57 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' 
Mar  1 04:43:58 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Mar  1 04:43:58 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' 
Mar  1 04:43:58 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Mar  1 04:43:58 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' 
Mar  1 04:43:58 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Mar  1 04:43:58 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' 
Mar  1 04:43:58 np0005634532 ceph-mgr[76134]: [progress INFO root] update: starting ev e6eb2074-d435-4130-865b-de483e8b14cf (Updating node-exporter deployment (+1 -> 3))
Mar  1 04:43:58 np0005634532 ceph-mgr[76134]: [cephadm INFO cephadm.serve] Deploying daemon node-exporter.compute-2 on compute-2
Mar  1 04:43:58 np0005634532 ceph-mgr[76134]: log_channel(cephadm) log [INF] : Deploying daemon node-exporter.compute-2 on compute-2
Mar  1 04:43:58 np0005634532 ceph-mon[75825]: Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Mar  1 04:43:58 np0005634532 ceph-mon[75825]: Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Mar  1 04:43:58 np0005634532 ceph-mon[75825]: Updating compute-2:/var/lib/ceph/437b1e74-f995-5d64-af1d-257ce01d77ab/config/ceph.client.admin.keyring
Mar  1 04:43:58 np0005634532 ceph-mon[75825]: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Mar  1 04:43:58 np0005634532 ceph-mon[75825]: from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' cmd='[{"prefix": "osd pool application enable", "pool": ".nfs", "app": "nfs"}]': finished
Mar  1 04:43:58 np0005634532 ceph-mon[75825]: Saving service nfs.cephfs spec with placement compute-0;compute-1;compute-2
Mar  1 04:43:58 np0005634532 ceph-mon[75825]: from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' 
Mar  1 04:43:58 np0005634532 ceph-mon[75825]: Saving service ingress.nfs.cephfs spec with placement compute-0;compute-1;compute-2
Mar  1 04:43:58 np0005634532 ceph-mon[75825]: Updating compute-0:/var/lib/ceph/437b1e74-f995-5d64-af1d-257ce01d77ab/config/ceph.client.admin.keyring
Mar  1 04:43:58 np0005634532 ceph-mon[75825]: from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' 
Mar  1 04:43:58 np0005634532 ceph-mon[75825]: from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' 
Mar  1 04:43:58 np0005634532 ceph-mon[75825]: from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' 
Mar  1 04:43:58 np0005634532 ceph-mon[75825]: Updating compute-1:/var/lib/ceph/437b1e74-f995-5d64-af1d-257ce01d77ab/config/ceph.client.admin.keyring
Mar  1 04:43:58 np0005634532 ceph-mon[75825]: from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' 
Mar  1 04:43:58 np0005634532 ceph-mon[75825]: from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' 
Mar  1 04:43:58 np0005634532 ceph-mon[75825]: from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' 
Mar  1 04:43:58 np0005634532 ceph-mon[75825]: from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' 
Mar  1 04:43:58 np0005634532 ceph-mon[75825]: from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' 
Mar  1 04:43:58 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e47 do_prune osdmap full prune enabled
Mar  1 04:43:58 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e48 e48: 3 total, 3 up, 3 in
Mar  1 04:43:58 np0005634532 ceph-mon[75825]: log_channel(cluster) log [DBG] : osdmap e48: 3 total, 3 up, 3 in
Mar  1 04:43:58 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v10: 198 pgs: 1 unknown, 197 active+clean; 454 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Mar  1 04:43:58 np0005634532 python3[96172]: ansible-ansible.legacy.stat Invoked with path=/etc/ceph/ceph.client.openstack.keyring follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Mar  1 04:43:58 np0005634532 ceph-osd[84309]: log_channel(cluster) log [DBG] : 5.3 scrub starts
Mar  1 04:43:58 np0005634532 ceph-osd[84309]: log_channel(cluster) log [DBG] : 5.3 scrub ok
Mar  1 04:43:58 np0005634532 python3[96245]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1772358238.073893-38165-119420516098746/source dest=/etc/ceph/ceph.client.openstack.keyring mode=0644 force=True owner=167 group=167 follow=False _original_basename=ceph_key.j2 checksum=735ad2809d0818ba20e2faa55e343c8cd4b2faa0 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:43:59 np0005634532 python3[96295]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 437b1e74-f995-5d64-af1d-257ce01d77ab -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   auth import -i /etc/ceph/ceph.client.openstack.keyring _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Mar  1 04:43:59 np0005634532 ceph-mon[75825]: log_channel(cluster) log [INF] : Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Mar  1 04:43:59 np0005634532 ceph-mon[75825]: Deploying daemon node-exporter.compute-2 on compute-2
Mar  1 04:43:59 np0005634532 podman[96296]: 2026-03-01 09:43:59.257991803 +0000 UTC m=+0.042576898 container create 44bce764b225c9dd0db874b02eb604e5d3943c6686b84e5ad3b862000422c49b (image=quay.io/ceph/ceph:v19, name=strange_morse, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325)
Mar  1 04:43:59 np0005634532 ceph-mon[75825]: log_channel(cluster) log [DBG] : mgrmap e28: compute-0.ebwufc(active, since 6s), standbys: compute-1.uyojxx, compute-2.dikzlj
Mar  1 04:43:59 np0005634532 systemd[1]: Started libpod-conmon-44bce764b225c9dd0db874b02eb604e5d3943c6686b84e5ad3b862000422c49b.scope.
Mar  1 04:43:59 np0005634532 systemd[1]: Started libcrun container.
Mar  1 04:43:59 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9bf4755917b6dc87650ccce7c8fa0d9f4c51f61742e8e95990836e6ccf5a7220/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Mar  1 04:43:59 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9bf4755917b6dc87650ccce7c8fa0d9f4c51f61742e8e95990836e6ccf5a7220/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Mar  1 04:43:59 np0005634532 podman[96296]: 2026-03-01 09:43:59.239947795 +0000 UTC m=+0.024532910 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Mar  1 04:43:59 np0005634532 podman[96296]: 2026-03-01 09:43:59.341899538 +0000 UTC m=+0.126484643 container init 44bce764b225c9dd0db874b02eb604e5d3943c6686b84e5ad3b862000422c49b (image=quay.io/ceph/ceph:v19, name=strange_morse, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Mar  1 04:43:59 np0005634532 podman[96296]: 2026-03-01 09:43:59.349802804 +0000 UTC m=+0.134387939 container start 44bce764b225c9dd0db874b02eb604e5d3943c6686b84e5ad3b862000422c49b (image=quay.io/ceph/ceph:v19, name=strange_morse, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Mar  1 04:43:59 np0005634532 podman[96296]: 2026-03-01 09:43:59.353733482 +0000 UTC m=+0.138318597 container attach 44bce764b225c9dd0db874b02eb604e5d3943c6686b84e5ad3b862000422c49b (image=quay.io/ceph/ceph:v19, name=strange_morse, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Mar  1 04:43:59 np0005634532 ceph-osd[84309]: log_channel(cluster) log [DBG] : 3.6 scrub starts
Mar  1 04:43:59 np0005634532 ceph-osd[84309]: log_channel(cluster) log [DBG] : 3.6 scrub ok
Mar  1 04:43:59 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth import"} v 0)
Mar  1 04:43:59 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2963515261' entity='client.admin' cmd=[{"prefix": "auth import"}]: dispatch
Mar  1 04:43:59 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2963515261' entity='client.admin' cmd='[{"prefix": "auth import"}]': finished
Mar  1 04:43:59 np0005634532 systemd[1]: libpod-44bce764b225c9dd0db874b02eb604e5d3943c6686b84e5ad3b862000422c49b.scope: Deactivated successfully.
Mar  1 04:43:59 np0005634532 podman[96296]: 2026-03-01 09:43:59.80375682 +0000 UTC m=+0.588341905 container died 44bce764b225c9dd0db874b02eb604e5d3943c6686b84e5ad3b862000422c49b (image=quay.io/ceph/ceph:v19, name=strange_morse, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1)
Mar  1 04:43:59 np0005634532 systemd[1]: var-lib-containers-storage-overlay-9bf4755917b6dc87650ccce7c8fa0d9f4c51f61742e8e95990836e6ccf5a7220-merged.mount: Deactivated successfully.
Mar  1 04:43:59 np0005634532 podman[96296]: 2026-03-01 09:43:59.85045388 +0000 UTC m=+0.635038975 container remove 44bce764b225c9dd0db874b02eb604e5d3943c6686b84e5ad3b862000422c49b (image=quay.io/ceph/ceph:v19, name=strange_morse, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Mar  1 04:43:59 np0005634532 systemd[1]: libpod-conmon-44bce764b225c9dd0db874b02eb604e5d3943c6686b84e5ad3b862000422c49b.scope: Deactivated successfully.
Mar  1 04:44:00 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e48 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Mar  1 04:44:00 np0005634532 ceph-mon[75825]: Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Mar  1 04:44:00 np0005634532 ceph-mon[75825]: from='client.? 192.168.122.100:0/2963515261' entity='client.admin' cmd=[{"prefix": "auth import"}]: dispatch
Mar  1 04:44:00 np0005634532 ceph-mon[75825]: from='client.? 192.168.122.100:0/2963515261' entity='client.admin' cmd='[{"prefix": "auth import"}]': finished
Mar  1 04:44:00 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Mar  1 04:44:00 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' 
Mar  1 04:44:00 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Mar  1 04:44:00 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v11: 198 pgs: 198 active+clean; 454 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 0 B/s wr, 12 op/s
Mar  1 04:44:00 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' 
Mar  1 04:44:00 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.node-exporter}] v 0)
Mar  1 04:44:00 np0005634532 ceph-osd[84309]: log_channel(cluster) log [DBG] : 2.1 scrub starts
Mar  1 04:44:00 np0005634532 ceph-osd[84309]: log_channel(cluster) log [DBG] : 2.1 scrub ok
Mar  1 04:44:00 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' 
Mar  1 04:44:00 np0005634532 ceph-mgr[76134]: [progress INFO root] complete: finished ev e6eb2074-d435-4130-865b-de483e8b14cf (Updating node-exporter deployment (+1 -> 3))
Mar  1 04:44:00 np0005634532 ceph-mgr[76134]: [progress INFO root] Completed event e6eb2074-d435-4130-865b-de483e8b14cf (Updating node-exporter deployment (+1 -> 3)) in 2 seconds
Mar  1 04:44:00 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.node-exporter}] v 0)
Mar  1 04:44:00 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' 
Mar  1 04:44:00 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Mar  1 04:44:00 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Mar  1 04:44:00 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Mar  1 04:44:00 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Mar  1 04:44:00 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Mar  1 04:44:00 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Mar  1 04:44:00 np0005634532 python3[96399]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 437b1e74-f995-5d64-af1d-257ce01d77ab -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .monmap.num_mons _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Mar  1 04:44:00 np0005634532 podman[96425]: 2026-03-01 09:44:00.693463062 +0000 UTC m=+0.043277936 container create a557aee8fe32d99fc2010e9491344c510cb3982b1713f23ca966e63616683d6b (image=quay.io/ceph/ceph:v19, name=intelligent_jennings, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Mar  1 04:44:00 np0005634532 systemd[1]: Started libpod-conmon-a557aee8fe32d99fc2010e9491344c510cb3982b1713f23ca966e63616683d6b.scope.
Mar  1 04:44:00 np0005634532 systemd[1]: Started libcrun container.
Mar  1 04:44:00 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/338d6acc424b821df61890302f5e4fdb367669f3d6dbcced1ee06c848a385a8f/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Mar  1 04:44:00 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/338d6acc424b821df61890302f5e4fdb367669f3d6dbcced1ee06c848a385a8f/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Mar  1 04:44:00 np0005634532 podman[96425]: 2026-03-01 09:44:00.673812933 +0000 UTC m=+0.023627867 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Mar  1 04:44:00 np0005634532 podman[96425]: 2026-03-01 09:44:00.77150674 +0000 UTC m=+0.121321644 container init a557aee8fe32d99fc2010e9491344c510cb3982b1713f23ca966e63616683d6b (image=quay.io/ceph/ceph:v19, name=intelligent_jennings, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Mar  1 04:44:00 np0005634532 podman[96425]: 2026-03-01 09:44:00.778798931 +0000 UTC m=+0.128613845 container start a557aee8fe32d99fc2010e9491344c510cb3982b1713f23ca966e63616683d6b (image=quay.io/ceph/ceph:v19, name=intelligent_jennings, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Mar  1 04:44:00 np0005634532 podman[96425]: 2026-03-01 09:44:00.786366489 +0000 UTC m=+0.136181373 container attach a557aee8fe32d99fc2010e9491344c510cb3982b1713f23ca966e63616683d6b (image=quay.io/ceph/ceph:v19, name=intelligent_jennings, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Mar  1 04:44:00 np0005634532 podman[96487]: 2026-03-01 09:44:00.910966574 +0000 UTC m=+0.043684856 container create daa3f2530b27e5e1961f00123b9b8a97239ffe8cddf908cdd9c2e4e478af9db0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_cerf, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Mar  1 04:44:00 np0005634532 systemd[1]: Started libpod-conmon-daa3f2530b27e5e1961f00123b9b8a97239ffe8cddf908cdd9c2e4e478af9db0.scope.
Mar  1 04:44:00 np0005634532 systemd[1]: Started libcrun container.
Mar  1 04:44:00 np0005634532 podman[96487]: 2026-03-01 09:44:00.893895801 +0000 UTC m=+0.026614083 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Mar  1 04:44:00 np0005634532 podman[96487]: 2026-03-01 09:44:00.989434334 +0000 UTC m=+0.122152616 container init daa3f2530b27e5e1961f00123b9b8a97239ffe8cddf908cdd9c2e4e478af9db0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_cerf, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Mar  1 04:44:00 np0005634532 podman[96487]: 2026-03-01 09:44:00.996499019 +0000 UTC m=+0.129217301 container start daa3f2530b27e5e1961f00123b9b8a97239ffe8cddf908cdd9c2e4e478af9db0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_cerf, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Mar  1 04:44:00 np0005634532 quirky_cerf[96518]: 167 167
Mar  1 04:44:01 np0005634532 systemd[1]: libpod-daa3f2530b27e5e1961f00123b9b8a97239ffe8cddf908cdd9c2e4e478af9db0.scope: Deactivated successfully.
Mar  1 04:44:01 np0005634532 conmon[96518]: conmon daa3f2530b27e5e1961f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-daa3f2530b27e5e1961f00123b9b8a97239ffe8cddf908cdd9c2e4e478af9db0.scope/container/memory.events
Mar  1 04:44:01 np0005634532 podman[96487]: 2026-03-01 09:44:01.001802931 +0000 UTC m=+0.134521213 container attach daa3f2530b27e5e1961f00123b9b8a97239ffe8cddf908cdd9c2e4e478af9db0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_cerf, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Mar  1 04:44:01 np0005634532 podman[96487]: 2026-03-01 09:44:01.002075858 +0000 UTC m=+0.134794140 container died daa3f2530b27e5e1961f00123b9b8a97239ffe8cddf908cdd9c2e4e478af9db0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_cerf, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Mar  1 04:44:01 np0005634532 systemd[1]: var-lib-containers-storage-overlay-31061d90eec921e63165cd9222c20b8f24cd8c145709a08379763feccb4426cc-merged.mount: Deactivated successfully.
Mar  1 04:44:01 np0005634532 podman[96487]: 2026-03-01 09:44:01.048582933 +0000 UTC m=+0.181301215 container remove daa3f2530b27e5e1961f00123b9b8a97239ffe8cddf908cdd9c2e4e478af9db0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quirky_cerf, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Mar  1 04:44:01 np0005634532 systemd[1]: libpod-conmon-daa3f2530b27e5e1961f00123b9b8a97239ffe8cddf908cdd9c2e4e478af9db0.scope: Deactivated successfully.
Mar  1 04:44:01 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Mar  1 04:44:01 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/807578516' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Mar  1 04:44:01 np0005634532 intelligent_jennings[96455]: 
Mar  1 04:44:01 np0005634532 intelligent_jennings[96455]: {"fsid":"437b1e74-f995-5d64-af1d-257ce01d77ab","health":{"status":"HEALTH_ERR","checks":{"MDS_ALL_DOWN":{"severity":"HEALTH_ERR","summary":{"message":"1 filesystem is offline","count":1},"muted":false},"MDS_UP_LESS_THAN_MAX":{"severity":"HEALTH_WARN","summary":{"message":"1 filesystem is online with fewer MDS than max_mds","count":1},"muted":false}},"mutes":[]},"election_epoch":14,"quorum":[0,1,2],"quorum_names":["compute-0","compute-2","compute-1"],"quorum_age":66,"monmap":{"epoch":3,"min_mon_release_name":"squid","num_mons":3},"osdmap":{"epoch":48,"num_osds":3,"num_up_osds":3,"osd_up_since":1772358197,"num_in_osds":3,"osd_in_since":1772358180,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":197},{"state_name":"unknown","count":1}],"num_pgs":198,"num_pools":12,"num_objects":194,"data_bytes":464595,"bytes_used":88776704,"bytes_avail":64323149824,"bytes_total":64411926528,"unknown_pgs_ratio":0.0050505050458014011},"fsmap":{"epoch":2,"btime":"2026-03-01T09:43:53:381686+0000","id":1,"up":0,"in":0,"max":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":2,"modules":["cephadm","dashboard","iostat","nfs","restful"],"services":{"dashboard":"http://192.168.122.100:8443/"}},"servicemap":{"epoch":5,"modified":"2026-03-01T09:43:36.363455+0000","services":{"mgr":{"daemons":{"summary":"","compute-0.ebwufc":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"compute-1.uyojxx":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"compute-2.dikzlj":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}},"mon":{"daemons":{"summary":"","compute-0":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"compute-1":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"compute-2":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}},"osd":{"daemons":{"summary":"","0":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"1":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"2":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}},"rgw":{"daemons":{"summary":"","14370":{"start_epoch":4,"start_stamp":"2026-03-01T09:43:35.381950+0000","gid":14370,"addr":"192.168.122.100:0/1811999405","metadata":{"arch":"x86_64","ceph_release":"squid","ceph_version":"ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)","ceph_version_short":"19.2.3","container_hostname":"compute-0","container_image":"quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec","cpu":"AMD EPYC-Rome Processor","distro":"centos","distro_description":"CentOS Stream 9","distro_version":"9","frontend_config#0":"beast endpoint=192.168.122.100:8082","frontend_type#0":"beast","hostname":"compute-0","id":"rgw.compute-0.dvtuyn","kernel_description":"#1 SMP PREEMPT_DYNAMIC Thu Feb 19 10:49:27 UTC 
2026","kernel_version":"5.14.0-686.el9.x86_64","mem_swap_kb":"1048572","mem_total_kb":"7864280","num_handles":"1","os":"Linux","pid":"2","realm_id":"","realm_name":"","zone_id":"cd5c293a-4523-4a5e-898c-09aafdf3802f","zone_name":"default","zonegroup_id":"488aad47-6726-4ab2-b81e-4590056a15ff","zonegroup_name":"default"},"task_status":{}},"24131":{"start_epoch":5,"start_stamp":"2026-03-01T09:43:35.390289+0000","gid":24131,"addr":"192.168.122.101:0/950956334","metadata":{"arch":"x86_64","ceph_release":"squid","ceph_version":"ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)","ceph_version_short":"19.2.3","container_hostname":"compute-1","container_image":"quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec","cpu":"AMD EPYC-Rome Processor","distro":"centos","distro_description":"CentOS Stream 9","distro_version":"9","frontend_config#0":"beast endpoint=192.168.122.101:8082","frontend_type#0":"beast","hostname":"compute-1","id":"rgw.compute-1.wbcorv","kernel_description":"#1 SMP PREEMPT_DYNAMIC Thu Feb 19 10:49:27 UTC 2026","kernel_version":"5.14.0-686.el9.x86_64","mem_swap_kb":"1048572","mem_total_kb":"7864280","num_handles":"1","os":"Linux","pid":"2","realm_id":"","realm_name":"","zone_id":"cd5c293a-4523-4a5e-898c-09aafdf3802f","zone_name":"default","zonegroup_id":"488aad47-6726-4ab2-b81e-4590056a15ff","zonegroup_name":"default"},"task_status":{}},"24145":{"start_epoch":5,"start_stamp":"2026-03-01T09:43:35.390269+0000","gid":24145,"addr":"192.168.122.102:0/2863278829","metadata":{"arch":"x86_64","ceph_release":"squid","ceph_version":"ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)","ceph_version_short":"19.2.3","container_hostname":"compute-2","container_image":"quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec","cpu":"AMD EPYC-Rome Processor","distro":"centos","distro_description":"CentOS Stream 9","distro_version":"9","frontend_config#0":"beast endpoint=192.168.122.102:8082","frontend_type#0":"beast","hostname":"compute-2","id":"rgw.compute-2.zizzzn","kernel_description":"#1 SMP PREEMPT_DYNAMIC Thu Feb 19 10:49:27 UTC 2026","kernel_version":"5.14.0-686.el9.x86_64","mem_swap_kb":"1048572","mem_total_kb":"7864280","num_handles":"1","os":"Linux","pid":"2","realm_id":"","realm_name":"","zone_id":"cd5c293a-4523-4a5e-898c-09aafdf3802f","zone_name":"default","zonegroup_id":"488aad47-6726-4ab2-b81e-4590056a15ff","zonegroup_name":"default"},"task_status":{}}}}}},"progress_events":{"e6eb2074-d435-4130-865b-de483e8b14cf":{"message":"Updating node-exporter deployment (+1 -> 3) (0s)\n      [............................] ","progress":0,"add_to_ceph_s":true}}}
Mar  1 04:44:01 np0005634532 systemd[1]: libpod-a557aee8fe32d99fc2010e9491344c510cb3982b1713f23ca966e63616683d6b.scope: Deactivated successfully.
Mar  1 04:44:01 np0005634532 podman[96425]: 2026-03-01 09:44:01.185843043 +0000 UTC m=+0.535657957 container died a557aee8fe32d99fc2010e9491344c510cb3982b1713f23ca966e63616683d6b (image=quay.io/ceph/ceph:v19, name=intelligent_jennings, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Mar  1 04:44:01 np0005634532 systemd[1]: var-lib-containers-storage-overlay-338d6acc424b821df61890302f5e4fdb367669f3d6dbcced1ee06c848a385a8f-merged.mount: Deactivated successfully.
Mar  1 04:44:01 np0005634532 podman[96425]: 2026-03-01 09:44:01.218686219 +0000 UTC m=+0.568501103 container remove a557aee8fe32d99fc2010e9491344c510cb3982b1713f23ca966e63616683d6b (image=quay.io/ceph/ceph:v19, name=intelligent_jennings, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Mar  1 04:44:01 np0005634532 podman[96543]: 2026-03-01 09:44:01.222413791 +0000 UTC m=+0.056754901 container create c3c4916a17482fee110c3b51c77de6b59d25fb5baf55b6119f93589765766b36 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_kapitsa, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Mar  1 04:44:01 np0005634532 systemd[1]: libpod-conmon-a557aee8fe32d99fc2010e9491344c510cb3982b1713f23ca966e63616683d6b.scope: Deactivated successfully.
Mar  1 04:44:01 np0005634532 ceph-mon[75825]: from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' 
Mar  1 04:44:01 np0005634532 ceph-mon[75825]: from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' 
Mar  1 04:44:01 np0005634532 ceph-mon[75825]: from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' 
Mar  1 04:44:01 np0005634532 ceph-mon[75825]: from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' 
Mar  1 04:44:01 np0005634532 ceph-mon[75825]: from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Mar  1 04:44:01 np0005634532 systemd[1]: Started libpod-conmon-c3c4916a17482fee110c3b51c77de6b59d25fb5baf55b6119f93589765766b36.scope.
Mar  1 04:44:01 np0005634532 podman[96543]: 2026-03-01 09:44:01.192382175 +0000 UTC m=+0.026723395 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Mar  1 04:44:01 np0005634532 systemd[1]: Started libcrun container.
Mar  1 04:44:01 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b09676d73125cade311ccf43ca715f644ece9a4373e5f893316bc58e500fad7d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Mar  1 04:44:01 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b09676d73125cade311ccf43ca715f644ece9a4373e5f893316bc58e500fad7d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Mar  1 04:44:01 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b09676d73125cade311ccf43ca715f644ece9a4373e5f893316bc58e500fad7d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Mar  1 04:44:01 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b09676d73125cade311ccf43ca715f644ece9a4373e5f893316bc58e500fad7d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Mar  1 04:44:01 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b09676d73125cade311ccf43ca715f644ece9a4373e5f893316bc58e500fad7d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Mar  1 04:44:01 np0005634532 ceph-osd[84309]: log_channel(cluster) log [DBG] : 3.7 scrub starts
Mar  1 04:44:01 np0005634532 ceph-osd[84309]: log_channel(cluster) log [DBG] : 3.7 scrub ok
Mar  1 04:44:01 np0005634532 podman[96543]: 2026-03-01 09:44:01.344699679 +0000 UTC m=+0.179040819 container init c3c4916a17482fee110c3b51c77de6b59d25fb5baf55b6119f93589765766b36 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_kapitsa, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True)
Mar  1 04:44:01 np0005634532 podman[96543]: 2026-03-01 09:44:01.353413115 +0000 UTC m=+0.187754225 container start c3c4916a17482fee110c3b51c77de6b59d25fb5baf55b6119f93589765766b36 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_kapitsa, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True)
Mar  1 04:44:01 np0005634532 podman[96543]: 2026-03-01 09:44:01.357153628 +0000 UTC m=+0.191494828 container attach c3c4916a17482fee110c3b51c77de6b59d25fb5baf55b6119f93589765766b36 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_kapitsa, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Mar  1 04:44:01 np0005634532 python3[96602]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 437b1e74-f995-5d64-af1d-257ce01d77ab -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   mon dump --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Mar  1 04:44:01 np0005634532 podman[96605]: 2026-03-01 09:44:01.62054707 +0000 UTC m=+0.046907496 container create 42afae818c4f7a3187b73a387db1ab25a75eaeca2e64e795e55f11d76bb0b75d (image=quay.io/ceph/ceph:v19, name=amazing_ishizaka, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Mar  1 04:44:01 np0005634532 systemd[1]: Started libpod-conmon-42afae818c4f7a3187b73a387db1ab25a75eaeca2e64e795e55f11d76bb0b75d.scope.
Mar  1 04:44:01 np0005634532 brave_kapitsa[96571]: --> passed data devices: 0 physical, 1 LVM
Mar  1 04:44:01 np0005634532 brave_kapitsa[96571]: --> All data devices are unavailable
Mar  1 04:44:01 np0005634532 systemd[1]: Started libcrun container.
Mar  1 04:44:01 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6fa97a5050f34edc24c78faeea3f0ac17f7dc501abf982df58a94e84659cad0b/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Mar  1 04:44:01 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6fa97a5050f34edc24c78faeea3f0ac17f7dc501abf982df58a94e84659cad0b/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Mar  1 04:44:01 np0005634532 podman[96605]: 2026-03-01 09:44:01.597564249 +0000 UTC m=+0.023924645 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Mar  1 04:44:01 np0005634532 podman[96605]: 2026-03-01 09:44:01.696516367 +0000 UTC m=+0.122876723 container init 42afae818c4f7a3187b73a387db1ab25a75eaeca2e64e795e55f11d76bb0b75d (image=quay.io/ceph/ceph:v19, name=amazing_ishizaka, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Mar  1 04:44:01 np0005634532 podman[96605]: 2026-03-01 09:44:01.706328041 +0000 UTC m=+0.132688377 container start 42afae818c4f7a3187b73a387db1ab25a75eaeca2e64e795e55f11d76bb0b75d (image=quay.io/ceph/ceph:v19, name=amazing_ishizaka, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Mar  1 04:44:01 np0005634532 podman[96605]: 2026-03-01 09:44:01.710591587 +0000 UTC m=+0.136951933 container attach 42afae818c4f7a3187b73a387db1ab25a75eaeca2e64e795e55f11d76bb0b75d (image=quay.io/ceph/ceph:v19, name=amazing_ishizaka, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Mar  1 04:44:01 np0005634532 systemd[1]: libpod-c3c4916a17482fee110c3b51c77de6b59d25fb5baf55b6119f93589765766b36.scope: Deactivated successfully.
Mar  1 04:44:01 np0005634532 podman[96543]: 2026-03-01 09:44:01.712438763 +0000 UTC m=+0.546779873 container died c3c4916a17482fee110c3b51c77de6b59d25fb5baf55b6119f93589765766b36 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_kapitsa, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Mar  1 04:44:01 np0005634532 systemd[1]: var-lib-containers-storage-overlay-b09676d73125cade311ccf43ca715f644ece9a4373e5f893316bc58e500fad7d-merged.mount: Deactivated successfully.
Mar  1 04:44:01 np0005634532 podman[96543]: 2026-03-01 09:44:01.752032306 +0000 UTC m=+0.586373406 container remove c3c4916a17482fee110c3b51c77de6b59d25fb5baf55b6119f93589765766b36 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_kapitsa, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Mar  1 04:44:01 np0005634532 systemd[1]: libpod-conmon-c3c4916a17482fee110c3b51c77de6b59d25fb5baf55b6119f93589765766b36.scope: Deactivated successfully.
Mar  1 04:44:02 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Mar  1 04:44:02 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1795395566' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Mar  1 04:44:02 np0005634532 amazing_ishizaka[96627]: 
Mar  1 04:44:02 np0005634532 amazing_ishizaka[96627]: {"epoch":3,"fsid":"437b1e74-f995-5d64-af1d-257ce01d77ab","modified":"2026-03-01T09:42:49.953069Z","created":"2026-03-01T09:40:50.920361Z","min_mon_release":19,"min_mon_release_name":"squid","election_strategy":1,"disallowed_leaders":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef","squid"],"optional":[]},"mons":[{"rank":0,"name":"compute-0","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.122.100:3300","nonce":0},{"type":"v1","addr":"192.168.122.100:6789","nonce":0}]},"addr":"192.168.122.100:6789/0","public_addr":"192.168.122.100:6789/0","priority":0,"weight":0,"crush_location":"{}"},{"rank":1,"name":"compute-2","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.122.102:3300","nonce":0},{"type":"v1","addr":"192.168.122.102:6789","nonce":0}]},"addr":"192.168.122.102:6789/0","public_addr":"192.168.122.102:6789/0","priority":0,"weight":0,"crush_location":"{}"},{"rank":2,"name":"compute-1","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.122.101:3300","nonce":0},{"type":"v1","addr":"192.168.122.101:6789","nonce":0}]},"addr":"192.168.122.101:6789/0","public_addr":"192.168.122.101:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0,1,2]}
Mar  1 04:44:02 np0005634532 amazing_ishizaka[96627]: dumped monmap epoch 3
Mar  1 04:44:02 np0005634532 systemd[1]: libpod-42afae818c4f7a3187b73a387db1ab25a75eaeca2e64e795e55f11d76bb0b75d.scope: Deactivated successfully.
Mar  1 04:44:02 np0005634532 podman[96605]: 2026-03-01 09:44:02.150252708 +0000 UTC m=+0.576613064 container died 42afae818c4f7a3187b73a387db1ab25a75eaeca2e64e795e55f11d76bb0b75d (image=quay.io/ceph/ceph:v19, name=amazing_ishizaka, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Mar  1 04:44:02 np0005634532 systemd[1]: var-lib-containers-storage-overlay-6fa97a5050f34edc24c78faeea3f0ac17f7dc501abf982df58a94e84659cad0b-merged.mount: Deactivated successfully.
Mar  1 04:44:02 np0005634532 podman[96605]: 2026-03-01 09:44:02.210908385 +0000 UTC m=+0.637268741 container remove 42afae818c4f7a3187b73a387db1ab25a75eaeca2e64e795e55f11d76bb0b75d (image=quay.io/ceph/ceph:v19, name=amazing_ishizaka, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Mar  1 04:44:02 np0005634532 systemd[1]: libpod-conmon-42afae818c4f7a3187b73a387db1ab25a75eaeca2e64e795e55f11d76bb0b75d.scope: Deactivated successfully.
Mar  1 04:44:02 np0005634532 ceph-osd[84309]: log_channel(cluster) log [DBG] : 5.c scrub starts
Mar  1 04:44:02 np0005634532 ceph-osd[84309]: log_channel(cluster) log [DBG] : 5.c scrub ok
Mar  1 04:44:02 np0005634532 podman[96766]: 2026-03-01 09:44:02.317612516 +0000 UTC m=+0.036083868 container create c06aa2d55e1ac6a5307d78122152cded0bbce640fb30069e3513c3d589602c87 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_mirzakhani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Mar  1 04:44:02 np0005634532 systemd[1]: Started libpod-conmon-c06aa2d55e1ac6a5307d78122152cded0bbce640fb30069e3513c3d589602c87.scope.
Mar  1 04:44:02 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v12: 198 pgs: 198 active+clean; 454 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 0 B/s wr, 11 op/s
Mar  1 04:44:02 np0005634532 systemd[1]: Started libcrun container.
Mar  1 04:44:02 np0005634532 podman[96766]: 2026-03-01 09:44:02.380059587 +0000 UTC m=+0.098530929 container init c06aa2d55e1ac6a5307d78122152cded0bbce640fb30069e3513c3d589602c87 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_mirzakhani, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Mar  1 04:44:02 np0005634532 podman[96766]: 2026-03-01 09:44:02.387269826 +0000 UTC m=+0.105741188 container start c06aa2d55e1ac6a5307d78122152cded0bbce640fb30069e3513c3d589602c87 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_mirzakhani, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Mar  1 04:44:02 np0005634532 amazing_mirzakhani[96782]: 167 167
Mar  1 04:44:02 np0005634532 systemd[1]: libpod-c06aa2d55e1ac6a5307d78122152cded0bbce640fb30069e3513c3d589602c87.scope: Deactivated successfully.
Mar  1 04:44:02 np0005634532 podman[96766]: 2026-03-01 09:44:02.39105854 +0000 UTC m=+0.109529882 container attach c06aa2d55e1ac6a5307d78122152cded0bbce640fb30069e3513c3d589602c87 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_mirzakhani, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Mar  1 04:44:02 np0005634532 podman[96766]: 2026-03-01 09:44:02.391390618 +0000 UTC m=+0.109861950 container died c06aa2d55e1ac6a5307d78122152cded0bbce640fb30069e3513c3d589602c87 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_mirzakhani, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Mar  1 04:44:02 np0005634532 podman[96766]: 2026-03-01 09:44:02.302308885 +0000 UTC m=+0.020780227 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Mar  1 04:44:02 np0005634532 systemd[1]: var-lib-containers-storage-overlay-d4bce235518a5092e6abaa2e8e88717db6f92cf6665c0b82e2568c5aa1941d10-merged.mount: Deactivated successfully.
Mar  1 04:44:02 np0005634532 podman[96766]: 2026-03-01 09:44:02.434435168 +0000 UTC m=+0.152906520 container remove c06aa2d55e1ac6a5307d78122152cded0bbce640fb30069e3513c3d589602c87 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_mirzakhani, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Mar  1 04:44:02 np0005634532 systemd[1]: libpod-conmon-c06aa2d55e1ac6a5307d78122152cded0bbce640fb30069e3513c3d589602c87.scope: Deactivated successfully.
Mar  1 04:44:02 np0005634532 ceph-mgr[76134]: [progress INFO root] Writing back 12 completed events
Mar  1 04:44:02 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Mar  1 04:44:02 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' 
Mar  1 04:44:02 np0005634532 podman[96808]: 2026-03-01 09:44:02.592871983 +0000 UTC m=+0.053864599 container create 01e232001b1681cfe7a08b9cdc894b44d7568d3f5287eb77a7563ea6c5ed08b9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_mayer, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Mar  1 04:44:02 np0005634532 systemd[1]: Started libpod-conmon-01e232001b1681cfe7a08b9cdc894b44d7568d3f5287eb77a7563ea6c5ed08b9.scope.
Mar  1 04:44:02 np0005634532 systemd[1]: Started libcrun container.
Mar  1 04:44:02 np0005634532 podman[96808]: 2026-03-01 09:44:02.565603826 +0000 UTC m=+0.026596492 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Mar  1 04:44:02 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c26a9c59ca829f9ad2d9e6408484dfdef13d8ad8c654b28a1f860dc235c3c339/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Mar  1 04:44:02 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c26a9c59ca829f9ad2d9e6408484dfdef13d8ad8c654b28a1f860dc235c3c339/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Mar  1 04:44:02 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c26a9c59ca829f9ad2d9e6408484dfdef13d8ad8c654b28a1f860dc235c3c339/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Mar  1 04:44:02 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c26a9c59ca829f9ad2d9e6408484dfdef13d8ad8c654b28a1f860dc235c3c339/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Mar  1 04:44:02 np0005634532 podman[96808]: 2026-03-01 09:44:02.700564238 +0000 UTC m=+0.161556854 container init 01e232001b1681cfe7a08b9cdc894b44d7568d3f5287eb77a7563ea6c5ed08b9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_mayer, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Mar  1 04:44:02 np0005634532 podman[96808]: 2026-03-01 09:44:02.706438064 +0000 UTC m=+0.167430680 container start 01e232001b1681cfe7a08b9cdc894b44d7568d3f5287eb77a7563ea6c5ed08b9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_mayer, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Mar  1 04:44:02 np0005634532 podman[96808]: 2026-03-01 09:44:02.714024453 +0000 UTC m=+0.175017119 container attach 01e232001b1681cfe7a08b9cdc894b44d7568d3f5287eb77a7563ea6c5ed08b9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_mayer, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Mar  1 04:44:02 np0005634532 python3[96852]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 437b1e74-f995-5d64-af1d-257ce01d77ab -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   auth get client.openstack _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Mar  1 04:44:02 np0005634532 podman[96855]: 2026-03-01 09:44:02.903828738 +0000 UTC m=+0.039915063 container create 259048ee9ecf2af3ecc12949076853c446fddf514c12efc5930e129bf1398fc6 (image=quay.io/ceph/ceph:v19, name=tender_aryabhata, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Mar  1 04:44:02 np0005634532 systemd[1]: Started libpod-conmon-259048ee9ecf2af3ecc12949076853c446fddf514c12efc5930e129bf1398fc6.scope.
Mar  1 04:44:02 np0005634532 condescending_mayer[96848]: {
Mar  1 04:44:02 np0005634532 condescending_mayer[96848]:    "0": [
Mar  1 04:44:02 np0005634532 condescending_mayer[96848]:        {
Mar  1 04:44:02 np0005634532 condescending_mayer[96848]:            "devices": [
Mar  1 04:44:02 np0005634532 condescending_mayer[96848]:                "/dev/loop3"
Mar  1 04:44:02 np0005634532 condescending_mayer[96848]:            ],
Mar  1 04:44:02 np0005634532 condescending_mayer[96848]:            "lv_name": "ceph_lv0",
Mar  1 04:44:02 np0005634532 condescending_mayer[96848]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Mar  1 04:44:02 np0005634532 condescending_mayer[96848]:            "lv_size": "21470642176",
Mar  1 04:44:02 np0005634532 condescending_mayer[96848]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=0RN1y3-WDwf-vmRn-3Uec-9cdU-WGcX-8Z6LEG,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=437b1e74-f995-5d64-af1d-257ce01d77ab,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=e5da778e-73b7-4ea1-8a91-750fe3f6aa68,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Mar  1 04:44:02 np0005634532 condescending_mayer[96848]:            "lv_uuid": "0RN1y3-WDwf-vmRn-3Uec-9cdU-WGcX-8Z6LEG",
Mar  1 04:44:02 np0005634532 condescending_mayer[96848]:            "name": "ceph_lv0",
Mar  1 04:44:02 np0005634532 condescending_mayer[96848]:            "path": "/dev/ceph_vg0/ceph_lv0",
Mar  1 04:44:02 np0005634532 condescending_mayer[96848]:            "tags": {
Mar  1 04:44:02 np0005634532 condescending_mayer[96848]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Mar  1 04:44:02 np0005634532 condescending_mayer[96848]:                "ceph.block_uuid": "0RN1y3-WDwf-vmRn-3Uec-9cdU-WGcX-8Z6LEG",
Mar  1 04:44:02 np0005634532 condescending_mayer[96848]:                "ceph.cephx_lockbox_secret": "",
Mar  1 04:44:02 np0005634532 condescending_mayer[96848]:                "ceph.cluster_fsid": "437b1e74-f995-5d64-af1d-257ce01d77ab",
Mar  1 04:44:02 np0005634532 condescending_mayer[96848]:                "ceph.cluster_name": "ceph",
Mar  1 04:44:02 np0005634532 condescending_mayer[96848]:                "ceph.crush_device_class": "",
Mar  1 04:44:02 np0005634532 condescending_mayer[96848]:                "ceph.encrypted": "0",
Mar  1 04:44:02 np0005634532 condescending_mayer[96848]:                "ceph.osd_fsid": "e5da778e-73b7-4ea1-8a91-750fe3f6aa68",
Mar  1 04:44:02 np0005634532 condescending_mayer[96848]:                "ceph.osd_id": "0",
Mar  1 04:44:02 np0005634532 condescending_mayer[96848]:                "ceph.osdspec_affinity": "default_drive_group",
Mar  1 04:44:02 np0005634532 condescending_mayer[96848]:                "ceph.type": "block",
Mar  1 04:44:02 np0005634532 condescending_mayer[96848]:                "ceph.vdo": "0",
Mar  1 04:44:02 np0005634532 condescending_mayer[96848]:                "ceph.with_tpm": "0"
Mar  1 04:44:02 np0005634532 condescending_mayer[96848]:            },
Mar  1 04:44:02 np0005634532 condescending_mayer[96848]:            "type": "block",
Mar  1 04:44:02 np0005634532 condescending_mayer[96848]:            "vg_name": "ceph_vg0"
Mar  1 04:44:02 np0005634532 condescending_mayer[96848]:        }
Mar  1 04:44:02 np0005634532 condescending_mayer[96848]:    ]
Mar  1 04:44:02 np0005634532 condescending_mayer[96848]: }
Mar  1 04:44:02 np0005634532 systemd[1]: Started libcrun container.
Mar  1 04:44:02 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a688348077d17c5cc62db95d9b9846e7c27d20f44e33290b6232e9162151158/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Mar  1 04:44:02 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a688348077d17c5cc62db95d9b9846e7c27d20f44e33290b6232e9162151158/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Mar  1 04:44:02 np0005634532 podman[96855]: 2026-03-01 09:44:02.884233691 +0000 UTC m=+0.020320016 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Mar  1 04:44:02 np0005634532 systemd[1]: libpod-01e232001b1681cfe7a08b9cdc894b44d7568d3f5287eb77a7563ea6c5ed08b9.scope: Deactivated successfully.
Mar  1 04:44:02 np0005634532 podman[96808]: 2026-03-01 09:44:02.996791777 +0000 UTC m=+0.457784353 container died 01e232001b1681cfe7a08b9cdc894b44d7568d3f5287eb77a7563ea6c5ed08b9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_mayer, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Mar  1 04:44:03 np0005634532 podman[96855]: 2026-03-01 09:44:03.010931758 +0000 UTC m=+0.147018123 container init 259048ee9ecf2af3ecc12949076853c446fddf514c12efc5930e129bf1398fc6 (image=quay.io/ceph/ceph:v19, name=tender_aryabhata, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Mar  1 04:44:03 np0005634532 podman[96855]: 2026-03-01 09:44:03.020063375 +0000 UTC m=+0.156149710 container start 259048ee9ecf2af3ecc12949076853c446fddf514c12efc5930e129bf1398fc6 (image=quay.io/ceph/ceph:v19, name=tender_aryabhata, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Mar  1 04:44:03 np0005634532 podman[96855]: 2026-03-01 09:44:03.025143341 +0000 UTC m=+0.161229676 container attach 259048ee9ecf2af3ecc12949076853c446fddf514c12efc5930e129bf1398fc6 (image=quay.io/ceph/ceph:v19, name=tender_aryabhata, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Mar  1 04:44:03 np0005634532 systemd[1]: var-lib-containers-storage-overlay-c26a9c59ca829f9ad2d9e6408484dfdef13d8ad8c654b28a1f860dc235c3c339-merged.mount: Deactivated successfully.
Mar  1 04:44:03 np0005634532 podman[96808]: 2026-03-01 09:44:03.052846379 +0000 UTC m=+0.513838995 container remove 01e232001b1681cfe7a08b9cdc894b44d7568d3f5287eb77a7563ea6c5ed08b9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_mayer, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Mar  1 04:44:03 np0005634532 systemd[1]: libpod-conmon-01e232001b1681cfe7a08b9cdc894b44d7568d3f5287eb77a7563ea6c5ed08b9.scope: Deactivated successfully.
Mar  1 04:44:03 np0005634532 ceph-mon[75825]: from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' 
Mar  1 04:44:03 np0005634532 ceph-osd[84309]: log_channel(cluster) log [DBG] : 5.6 scrub starts
Mar  1 04:44:03 np0005634532 ceph-osd[84309]: log_channel(cluster) log [DBG] : 5.6 scrub ok
Mar  1 04:44:03 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.openstack"} v 0)
Mar  1 04:44:03 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2051627969' entity='client.admin' cmd=[{"prefix": "auth get", "entity": "client.openstack"}]: dispatch
Mar  1 04:44:03 np0005634532 tender_aryabhata[96874]: [client.openstack]
Mar  1 04:44:03 np0005634532 tender_aryabhata[96874]: #011key = AQCICaRpAAAAABAAf4p69LTAKFcimiZLIKWXlA==
Mar  1 04:44:03 np0005634532 tender_aryabhata[96874]: #011caps mgr = "allow *"
Mar  1 04:44:03 np0005634532 tender_aryabhata[96874]: #011caps mon = "profile rbd"
Mar  1 04:44:03 np0005634532 tender_aryabhata[96874]: #011caps osd = "profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups, profile rbd pool=images, profile rbd pool=cephfs.cephfs.meta, profile rbd pool=cephfs.cephfs.data"
Mar  1 04:44:03 np0005634532 systemd[1]: libpod-259048ee9ecf2af3ecc12949076853c446fddf514c12efc5930e129bf1398fc6.scope: Deactivated successfully.
Mar  1 04:44:03 np0005634532 conmon[96874]: conmon 259048ee9ecf2af3ecc1 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-259048ee9ecf2af3ecc12949076853c446fddf514c12efc5930e129bf1398fc6.scope/container/memory.events
Mar  1 04:44:03 np0005634532 podman[96855]: 2026-03-01 09:44:03.466394322 +0000 UTC m=+0.602480617 container died 259048ee9ecf2af3ecc12949076853c446fddf514c12efc5930e129bf1398fc6 (image=quay.io/ceph/ceph:v19, name=tender_aryabhata, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Mar  1 04:44:03 np0005634532 systemd[1]: var-lib-containers-storage-overlay-2a688348077d17c5cc62db95d9b9846e7c27d20f44e33290b6232e9162151158-merged.mount: Deactivated successfully.
Mar  1 04:44:03 np0005634532 podman[96855]: 2026-03-01 09:44:03.501268238 +0000 UTC m=+0.637354533 container remove 259048ee9ecf2af3ecc12949076853c446fddf514c12efc5930e129bf1398fc6 (image=quay.io/ceph/ceph:v19, name=tender_aryabhata, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Mar  1 04:44:03 np0005634532 systemd[1]: libpod-conmon-259048ee9ecf2af3ecc12949076853c446fddf514c12efc5930e129bf1398fc6.scope: Deactivated successfully.
Mar  1 04:44:03 np0005634532 podman[97013]: 2026-03-01 09:44:03.659819527 +0000 UTC m=+0.041132723 container create 0d2abf5fb097220e838b7f3e73ac02d2271a57250f13195a179a0ce737fef3da (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_wilson, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default)
Mar  1 04:44:03 np0005634532 systemd[1]: Started libpod-conmon-0d2abf5fb097220e838b7f3e73ac02d2271a57250f13195a179a0ce737fef3da.scope.
Mar  1 04:44:03 np0005634532 systemd[1]: Started libcrun container.
Mar  1 04:44:03 np0005634532 podman[97013]: 2026-03-01 09:44:03.731733573 +0000 UTC m=+0.113046759 container init 0d2abf5fb097220e838b7f3e73ac02d2271a57250f13195a179a0ce737fef3da (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_wilson, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325)
Mar  1 04:44:03 np0005634532 podman[97013]: 2026-03-01 09:44:03.737276391 +0000 UTC m=+0.118589597 container start 0d2abf5fb097220e838b7f3e73ac02d2271a57250f13195a179a0ce737fef3da (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_wilson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Mar  1 04:44:03 np0005634532 podman[97013]: 2026-03-01 09:44:03.644031345 +0000 UTC m=+0.025344531 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Mar  1 04:44:03 np0005634532 sharp_wilson[97029]: 167 167
Mar  1 04:44:03 np0005634532 podman[97013]: 2026-03-01 09:44:03.743115256 +0000 UTC m=+0.124428432 container attach 0d2abf5fb097220e838b7f3e73ac02d2271a57250f13195a179a0ce737fef3da (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_wilson, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Mar  1 04:44:03 np0005634532 systemd[1]: libpod-0d2abf5fb097220e838b7f3e73ac02d2271a57250f13195a179a0ce737fef3da.scope: Deactivated successfully.
Mar  1 04:44:03 np0005634532 podman[97013]: 2026-03-01 09:44:03.744314876 +0000 UTC m=+0.125628042 container died 0d2abf5fb097220e838b7f3e73ac02d2271a57250f13195a179a0ce737fef3da (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_wilson, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Mar  1 04:44:03 np0005634532 systemd[1]: var-lib-containers-storage-overlay-3b230bfcf105046af1addc732a38ce5157f91d75cea238a556c18d53c8d67907-merged.mount: Deactivated successfully.
Mar  1 04:44:03 np0005634532 podman[97013]: 2026-03-01 09:44:03.781031318 +0000 UTC m=+0.162344484 container remove 0d2abf5fb097220e838b7f3e73ac02d2271a57250f13195a179a0ce737fef3da (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_wilson, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Mar  1 04:44:03 np0005634532 systemd[1]: libpod-conmon-0d2abf5fb097220e838b7f3e73ac02d2271a57250f13195a179a0ce737fef3da.scope: Deactivated successfully.
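
The lines above trace one complete short-lived utility container (sharp_wilson): image pull, start, attach, died, remove, bracketed by systemd activating and deactivating the libpod and conmon scopes. The "167 167" it printed appears to be a probe of the image's ceph UID and GID, matching the uid:gid 167:167 (ceph:ceph) that ceph-mds reports later in this log. The same lifecycle can be followed live with podman's event stream; the container name below is taken from this log and the --since window is arbitrary:

    # Follow start/attach/died/remove events for one container
    podman events --since 5m --filter container=sharp_wilson
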
Mar  1 04:44:03 np0005634532 podman[97054]: 2026-03-01 09:44:03.933600078 +0000 UTC m=+0.036436076 container create d505df90547f46d05f14d03faa2e7dea87cc79b9e0853a3ada46b950d534f382 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_mcnulty, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True)
Mar  1 04:44:03 np0005634532 systemd[1]: Started libpod-conmon-d505df90547f46d05f14d03faa2e7dea87cc79b9e0853a3ada46b950d534f382.scope.
Mar  1 04:44:03 np0005634532 systemd[1]: Started libcrun container.
Mar  1 04:44:03 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e66164b895f24c63d3cc7f55d63e81b627180c1e21abccee65d09f8839697fe0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Mar  1 04:44:04 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e66164b895f24c63d3cc7f55d63e81b627180c1e21abccee65d09f8839697fe0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Mar  1 04:44:04 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e66164b895f24c63d3cc7f55d63e81b627180c1e21abccee65d09f8839697fe0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Mar  1 04:44:04 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e66164b895f24c63d3cc7f55d63e81b627180c1e21abccee65d09f8839697fe0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
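
These four kernel messages are informational, printed once per bind-mount as the container's rootfs and volumes are set up: the underlying XFS filesystem was made without the bigtime feature, so its inode timestamps top out in 2038 (0x7fffffff seconds). One way to confirm, assuming xfs_info is available, the paths above live on the root filesystem, and xfsprogs is new enough to report the feature:

    # bigtime=0 means 2038-limited timestamps; bigtime=1 extends them far past 2038
    xfs_info / | grep -o 'bigtime=[01]'
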
Mar  1 04:44:04 np0005634532 rsyslogd[1019]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Mar  1 04:44:04 np0005634532 rsyslogd[1019]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Mar  1 04:44:04 np0005634532 rsyslogd[1019]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Mar  1 04:44:04 np0005634532 podman[97054]: 2026-03-01 09:44:04.01058016 +0000 UTC m=+0.113416198 container init d505df90547f46d05f14d03faa2e7dea87cc79b9e0853a3ada46b950d534f382 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_mcnulty, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Mar  1 04:44:04 np0005634532 podman[97054]: 2026-03-01 09:44:03.918464682 +0000 UTC m=+0.021300710 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Mar  1 04:44:04 np0005634532 podman[97054]: 2026-03-01 09:44:04.016551068 +0000 UTC m=+0.119387076 container start d505df90547f46d05f14d03faa2e7dea87cc79b9e0853a3ada46b950d534f382 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_mcnulty, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, OSD_FLAVOR=default)
Mar  1 04:44:04 np0005634532 podman[97054]: 2026-03-01 09:44:04.020165978 +0000 UTC m=+0.123002116 container attach d505df90547f46d05f14d03faa2e7dea87cc79b9e0853a3ada46b950d534f382 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_mcnulty, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid)
Mar  1 04:44:04 np0005634532 ceph-mon[75825]: from='client.? 192.168.122.100:0/2051627969' entity='client.admin' cmd=[{"prefix": "auth get", "entity": "client.openstack"}]: dispatch
Mar  1 04:44:04 np0005634532 ceph-osd[84309]: log_channel(cluster) log [DBG] : 5.14 scrub starts
Mar  1 04:44:04 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v13: 198 pgs: 198 active+clean; 454 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 0 B/s wr, 9 op/s
Mar  1 04:44:04 np0005634532 ceph-osd[84309]: log_channel(cluster) log [DBG] : 5.14 scrub ok
Mar  1 04:44:04 np0005634532 lvm[97243]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Mar  1 04:44:04 np0005634532 lvm[97243]: VG ceph_vg0 finished
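
The two lvm lines look like udev-triggered event activation: a pvscan noticed /dev/loop3 come online, found that every PV backing VG ceph_vg0 is now present ("complete"), and finished autoactivation. This can be cross-checked by hand:

    # Show which VG the loop device backs, then the VG itself
    pvs -o pv_name,vg_name /dev/loop3
    vgs ceph_vg0
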
Mar  1 04:44:04 np0005634532 zen_mcnulty[97070]: {}
Mar  1 04:44:04 np0005634532 systemd[1]: libpod-d505df90547f46d05f14d03faa2e7dea87cc79b9e0853a3ada46b950d534f382.scope: Deactivated successfully.
Mar  1 04:44:04 np0005634532 systemd[1]: libpod-d505df90547f46d05f14d03faa2e7dea87cc79b9e0853a3ada46b950d534f382.scope: Consumed 1.026s CPU time.
Mar  1 04:44:04 np0005634532 podman[97054]: 2026-03-01 09:44:04.751436244 +0000 UTC m=+0.854272242 container died d505df90547f46d05f14d03faa2e7dea87cc79b9e0853a3ada46b950d534f382 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_mcnulty, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Mar  1 04:44:04 np0005634532 systemd[1]: var-lib-containers-storage-overlay-e66164b895f24c63d3cc7f55d63e81b627180c1e21abccee65d09f8839697fe0-merged.mount: Deactivated successfully.
Mar  1 04:44:04 np0005634532 podman[97054]: 2026-03-01 09:44:04.798878092 +0000 UTC m=+0.901714090 container remove d505df90547f46d05f14d03faa2e7dea87cc79b9e0853a3ada46b950d534f382 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_mcnulty, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Mar  1 04:44:04 np0005634532 systemd[1]: libpod-conmon-d505df90547f46d05f14d03faa2e7dea87cc79b9e0853a3ada46b950d534f382.scope: Deactivated successfully.
Mar  1 04:44:04 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Mar  1 04:44:04 np0005634532 ansible-async_wrapper.py[97298]: Invoked with j751545662282 30 /home/zuul/.ansible/tmp/ansible-tmp-1772358244.4710803-38239-60955626894822/AnsiballZ_command.py _
Mar  1 04:44:04 np0005634532 ansible-async_wrapper.py[97313]: Starting module and watcher
Mar  1 04:44:04 np0005634532 ansible-async_wrapper.py[97313]: Start watching 97314 (30)
Mar  1 04:44:04 np0005634532 ansible-async_wrapper.py[97314]: Start module (97314)
Mar  1 04:44:04 np0005634532 ansible-async_wrapper.py[97298]: Return async_wrapper task started.
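
These five ansible-async_wrapper lines show an async task starting: the wrapper forks a watcher (PID 97313) that will reap the job if it outlives the 30-second timeout passed on the command line, plus the module process itself (PID 97314), then returns immediately so the controller can poll. The job's state is written to a file named after the job id; the path and id below are the ones the async_status call further down is invoked with:

    # Inspect the async job's result file on the target host
    cat /root/.ansible_async/j751545662282.97298
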
Mar  1 04:44:04 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' 
Mar  1 04:44:04 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Mar  1 04:44:04 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' 
Mar  1 04:44:04 np0005634532 ceph-mgr[76134]: [progress INFO root] update: starting ev 6441964c-0735-4ce6-a51d-228120a9a656 (Updating mds.cephfs deployment (+3 -> 3))
Mar  1 04:44:04 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.gumopp", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0)
Mar  1 04:44:04 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.gumopp", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Mar  1 04:44:04 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.gumopp", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Mar  1 04:44:04 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Mar  1 04:44:04 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Mar  1 04:44:04 np0005634532 ceph-mgr[76134]: [cephadm INFO cephadm.serve] Deploying daemon mds.cephfs.compute-2.gumopp on compute-2
Mar  1 04:44:04 np0005634532 ceph-mgr[76134]: log_channel(cephadm) log [INF] : Deploying daemon mds.cephfs.compute-2.gumopp on compute-2
Mar  1 04:44:05 np0005634532 python3[97315]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 437b1e74-f995-5d64-af1d-257ce01d77ab -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Mar  1 04:44:05 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e48 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Mar  1 04:44:05 np0005634532 podman[97316]: 2026-03-01 09:44:05.157205752 +0000 UTC m=+0.052772291 container create f702c2999245ff38689eb974cbebe98f41a938241e0deaf3f7b4f64b978ead21 (image=quay.io/ceph/ceph:v19, name=zealous_kirch, io.buildah.version=1.40.1, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid)
Mar  1 04:44:05 np0005634532 systemd[1]: Started libpod-conmon-f702c2999245ff38689eb974cbebe98f41a938241e0deaf3f7b4f64b978ead21.scope.
Mar  1 04:44:05 np0005634532 systemd[1]: Started libcrun container.
Mar  1 04:44:05 np0005634532 podman[97316]: 2026-03-01 09:44:05.131086384 +0000 UTC m=+0.026652973 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Mar  1 04:44:05 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/60169c77bf391d88a0b02be86ab95bb13e04747a23c0dc670c263ede04780688/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Mar  1 04:44:05 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/60169c77bf391d88a0b02be86ab95bb13e04747a23c0dc670c263ede04780688/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Mar  1 04:44:05 np0005634532 podman[97316]: 2026-03-01 09:44:05.247238849 +0000 UTC m=+0.142805398 container init f702c2999245ff38689eb974cbebe98f41a938241e0deaf3f7b4f64b978ead21 (image=quay.io/ceph/ceph:v19, name=zealous_kirch, io.buildah.version=1.40.1, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Mar  1 04:44:05 np0005634532 podman[97316]: 2026-03-01 09:44:05.258394146 +0000 UTC m=+0.153960655 container start f702c2999245ff38689eb974cbebe98f41a938241e0deaf3f7b4f64b978ead21 (image=quay.io/ceph/ceph:v19, name=zealous_kirch, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Mar  1 04:44:05 np0005634532 podman[97316]: 2026-03-01 09:44:05.262421136 +0000 UTC m=+0.157987655 container attach f702c2999245ff38689eb974cbebe98f41a938241e0deaf3f7b4f64b978ead21 (image=quay.io/ceph/ceph:v19, name=zealous_kirch, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Mar  1 04:44:05 np0005634532 ceph-mon[75825]: from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' 
Mar  1 04:44:05 np0005634532 ceph-mon[75825]: from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' 
Mar  1 04:44:05 np0005634532 ceph-mon[75825]: from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.gumopp", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Mar  1 04:44:05 np0005634532 ceph-mon[75825]: from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-2.gumopp", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Mar  1 04:44:05 np0005634532 ceph-osd[84309]: log_channel(cluster) log [DBG] : 5.a scrub starts
Mar  1 04:44:05 np0005634532 ceph-osd[84309]: log_channel(cluster) log [DBG] : 5.a scrub ok
Mar  1 04:44:05 np0005634532 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14535 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Mar  1 04:44:05 np0005634532 zealous_kirch[97331]: 
Mar  1 04:44:05 np0005634532 zealous_kirch[97331]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Mar  1 04:44:05 np0005634532 systemd[1]: libpod-f702c2999245ff38689eb974cbebe98f41a938241e0deaf3f7b4f64b978ead21.scope: Deactivated successfully.
Mar  1 04:44:05 np0005634532 podman[97316]: 2026-03-01 09:44:05.629393612 +0000 UTC m=+0.524960141 container died f702c2999245ff38689eb974cbebe98f41a938241e0deaf3f7b4f64b978ead21 (image=quay.io/ceph/ceph:v19, name=zealous_kirch, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Mar  1 04:44:05 np0005634532 systemd[1]: var-lib-containers-storage-overlay-60169c77bf391d88a0b02be86ab95bb13e04747a23c0dc670c263ede04780688-merged.mount: Deactivated successfully.
Mar  1 04:44:05 np0005634532 podman[97316]: 2026-03-01 09:44:05.663308944 +0000 UTC m=+0.558875463 container remove f702c2999245ff38689eb974cbebe98f41a938241e0deaf3f7b4f64b978ead21 (image=quay.io/ceph/ceph:v19, name=zealous_kirch, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Mar  1 04:44:05 np0005634532 systemd[1]: libpod-conmon-f702c2999245ff38689eb974cbebe98f41a938241e0deaf3f7b4f64b978ead21.scope: Deactivated successfully.
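
This zealous_kirch run is the ansible command above made concrete: a throwaway ceph client container asks the orchestrator whether it is up, and the JSON on stdout says the cephadm backend is available, not paused, with 10 workers. Reproduced by hand, with jq (not part of this log) added to pull out one field:

    podman run --rm --net=host --ipc=host \
      --volume /etc/ceph:/etc/ceph:z \
      --entrypoint ceph quay.io/ceph/ceph:v19 \
      --fsid 437b1e74-f995-5d64-af1d-257ce01d77ab \
      -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring \
      orch status --format json | jq .available
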
Mar  1 04:44:05 np0005634532 ansible-async_wrapper.py[97314]: Module complete (97314)
Mar  1 04:44:06 np0005634532 python3[97417]: ansible-ansible.legacy.async_status Invoked with jid=j751545662282.97298 mode=status _async_dir=/root/.ansible_async
Mar  1 04:44:06 np0005634532 ceph-mon[75825]: Deploying daemon mds.cephfs.compute-2.gumopp on compute-2
Mar  1 04:44:06 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v14: 198 pgs: 198 active+clean; 454 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 0 B/s wr, 7 op/s
Mar  1 04:44:06 np0005634532 ceph-osd[84309]: log_channel(cluster) log [DBG] : 2.e scrub starts
Mar  1 04:44:06 np0005634532 ceph-osd[84309]: log_channel(cluster) log [DBG] : 2.e scrub ok
Mar  1 04:44:06 np0005634532 python3[97466]: ansible-ansible.legacy.async_status Invoked with jid=j751545662282.97298 mode=cleanup _async_dir=/root/.ansible_async
Mar  1 04:44:06 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Mar  1 04:44:06 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' 
Mar  1 04:44:06 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Mar  1 04:44:06 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' 
Mar  1 04:44:06 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Mar  1 04:44:06 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' 
Mar  1 04:44:06 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.qvzeqa", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0)
Mar  1 04:44:06 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.qvzeqa", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Mar  1 04:44:06 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.qvzeqa", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Mar  1 04:44:06 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Mar  1 04:44:06 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Mar  1 04:44:06 np0005634532 ceph-mgr[76134]: [cephadm INFO cephadm.serve] Deploying daemon mds.cephfs.compute-0.qvzeqa on compute-0
Mar  1 04:44:06 np0005634532 ceph-mgr[76134]: log_channel(cephadm) log [INF] : Deploying daemon mds.cephfs.compute-0.qvzeqa on compute-0
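
The recipe cephadm follows for each MDS is visible in this block: create or fetch a keyring for the daemon with the standard MDS capability profile, render a minimal ceph.conf via "config generate-minimal-conf", then ship both to the target host. The auth step, as a standalone CLI call using the exact entity and caps from the audit lines above:

    ceph auth get-or-create mds.cephfs.compute-0.qvzeqa \
      mon 'profile mds' \
      osd 'allow rw tag cephfs *=*' \
      mds 'allow'
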
Mar  1 04:44:07 np0005634532 python3[97542]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 437b1e74-f995-5d64-af1d-257ce01d77ab -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Mar  1 04:44:07 np0005634532 podman[97558]: 2026-03-01 09:44:07.124974563 +0000 UTC m=+0.044515426 container create a86130bc34051d8cc489856a5a6456d31ba79075e752d8302ae683a7864631ab (image=quay.io/ceph/ceph:v19, name=unruffled_satoshi, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Mar  1 04:44:07 np0005634532 systemd[1]: Started libpod-conmon-a86130bc34051d8cc489856a5a6456d31ba79075e752d8302ae683a7864631ab.scope.
Mar  1 04:44:07 np0005634532 podman[97594]: 2026-03-01 09:44:07.185058146 +0000 UTC m=+0.037561894 container create 73ebc94f3f08c82bcd2ae57df83440128df2d83e2203bd07c7ea2f5d56cc7021 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_jones, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Mar  1 04:44:07 np0005634532 systemd[1]: Started libcrun container.
Mar  1 04:44:07 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a6f217b68c4b657822c062f2f0acad12458ed9a024d56c65d2a157fd3547118/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Mar  1 04:44:07 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a6f217b68c4b657822c062f2f0acad12458ed9a024d56c65d2a157fd3547118/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Mar  1 04:44:07 np0005634532 podman[97558]: 2026-03-01 09:44:07.1095295 +0000 UTC m=+0.029070383 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Mar  1 04:44:07 np0005634532 systemd[1]: Started libpod-conmon-73ebc94f3f08c82bcd2ae57df83440128df2d83e2203bd07c7ea2f5d56cc7021.scope.
Mar  1 04:44:07 np0005634532 podman[97558]: 2026-03-01 09:44:07.211644046 +0000 UTC m=+0.131184949 container init a86130bc34051d8cc489856a5a6456d31ba79075e752d8302ae683a7864631ab (image=quay.io/ceph/ceph:v19, name=unruffled_satoshi, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Mar  1 04:44:07 np0005634532 podman[97558]: 2026-03-01 09:44:07.21620947 +0000 UTC m=+0.135750363 container start a86130bc34051d8cc489856a5a6456d31ba79075e752d8302ae683a7864631ab (image=quay.io/ceph/ceph:v19, name=unruffled_satoshi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True)
Mar  1 04:44:07 np0005634532 podman[97558]: 2026-03-01 09:44:07.219814739 +0000 UTC m=+0.139355602 container attach a86130bc34051d8cc489856a5a6456d31ba79075e752d8302ae683a7864631ab (image=quay.io/ceph/ceph:v19, name=unruffled_satoshi, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Mar  1 04:44:07 np0005634532 systemd[1]: Started libcrun container.
Mar  1 04:44:07 np0005634532 podman[97594]: 2026-03-01 09:44:07.243078417 +0000 UTC m=+0.095582195 container init 73ebc94f3f08c82bcd2ae57df83440128df2d83e2203bd07c7ea2f5d56cc7021 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_jones, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Mar  1 04:44:07 np0005634532 podman[97594]: 2026-03-01 09:44:07.249674421 +0000 UTC m=+0.102178169 container start 73ebc94f3f08c82bcd2ae57df83440128df2d83e2203bd07c7ea2f5d56cc7021 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_jones, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Mar  1 04:44:07 np0005634532 eager_jones[97615]: 167 167
Mar  1 04:44:07 np0005634532 systemd[1]: libpod-73ebc94f3f08c82bcd2ae57df83440128df2d83e2203bd07c7ea2f5d56cc7021.scope: Deactivated successfully.
Mar  1 04:44:07 np0005634532 podman[97594]: 2026-03-01 09:44:07.255594578 +0000 UTC m=+0.108098426 container attach 73ebc94f3f08c82bcd2ae57df83440128df2d83e2203bd07c7ea2f5d56cc7021 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_jones, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid)
Mar  1 04:44:07 np0005634532 podman[97594]: 2026-03-01 09:44:07.255984748 +0000 UTC m=+0.108488526 container died 73ebc94f3f08c82bcd2ae57df83440128df2d83e2203bd07c7ea2f5d56cc7021 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_jones, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Mar  1 04:44:07 np0005634532 podman[97594]: 2026-03-01 09:44:07.167818248 +0000 UTC m=+0.020322016 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Mar  1 04:44:07 np0005634532 systemd[1]: var-lib-containers-storage-overlay-cfcdbfa7045566eed521cf3f95f6ad446d6c2469f59903706d6f5b99a650c883-merged.mount: Deactivated successfully.
Mar  1 04:44:07 np0005634532 podman[97594]: 2026-03-01 09:44:07.293040518 +0000 UTC m=+0.145544306 container remove 73ebc94f3f08c82bcd2ae57df83440128df2d83e2203bd07c7ea2f5d56cc7021 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=eager_jones, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Mar  1 04:44:07 np0005634532 systemd[1]: libpod-conmon-73ebc94f3f08c82bcd2ae57df83440128df2d83e2203bd07c7ea2f5d56cc7021.scope: Deactivated successfully.
Mar  1 04:44:07 np0005634532 ceph-mon[75825]: from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' 
Mar  1 04:44:07 np0005634532 ceph-mon[75825]: from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' 
Mar  1 04:44:07 np0005634532 ceph-mon[75825]: from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' 
Mar  1 04:44:07 np0005634532 ceph-mon[75825]: from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.qvzeqa", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Mar  1 04:44:07 np0005634532 ceph-mon[75825]: from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.qvzeqa", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Mar  1 04:44:07 np0005634532 systemd[1]: Reloading.
Mar  1 04:44:07 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).mds e3 new map
Mar  1 04:44:07 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).mds e3 print_map#012e3#012btime 2026-03-01T09:44:07:323535+0000#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0112#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112026-03-01T09:43:53.381630+0000#012modified#0112026-03-01T09:43:53.381630+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012max_mds#0111#012in#011#012up#011{}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0110#012qdb_cluster#011leader: 0 members: #012 #012 #012Standby daemons:#012 #012[mds.cephfs.compute-2.gumopp{-1:24184} state up:standby seq 1 addr [v2:192.168.122.102:6804/1831383419,v1:192.168.122.102:6805/1831383419] compat {c=[1],r=[1],i=[1fff]}]
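
A note on readability: rsyslog replaces control characters with #<octal> escapes, so #012 is a newline and #011 a tab; the print_map blobs here are multi-line FSMap dumps flattened onto one line. They can be unflattened with GNU sed, assuming the default /var/log/messages destination:

    grep 'print_map' /var/log/messages | sed -e 's/#012/\n/g' -e 's/#011/\t/g'
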
Mar  1 04:44:07 np0005634532 ceph-mon[75825]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.102:6804/1831383419,v1:192.168.122.102:6805/1831383419] up:boot
Mar  1 04:44:07 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).mds e3 assigned standby [v2:192.168.122.102:6804/1831383419,v1:192.168.122.102:6805/1831383419] as mds.0
Mar  1 04:44:07 np0005634532 ceph-mon[75825]: log_channel(cluster) log [INF] : daemon mds.cephfs.compute-2.gumopp assigned to filesystem cephfs as rank 0 (now has 1 ranks)
Mar  1 04:44:07 np0005634532 ceph-mon[75825]: log_channel(cluster) log [INF] : Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline)
Mar  1 04:44:07 np0005634532 ceph-mon[75825]: log_channel(cluster) log [INF] : Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds)
Mar  1 04:44:07 np0005634532 ceph-mon[75825]: log_channel(cluster) log [INF] : Cluster is now healthy
Mar  1 04:44:07 np0005634532 ceph-mon[75825]: log_channel(cluster) log [DBG] : fsmap cephfs:0 1 up:standby
Mar  1 04:44:07 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-2.gumopp"} v 0)
Mar  1 04:44:07 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-2.gumopp"}]: dispatch
Mar  1 04:44:07 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).mds e3 all = 0
Mar  1 04:44:07 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).mds e4 new map
Mar  1 04:44:07 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).mds e4 print_map#012e4#012btime 2026-03-01T09:44:07:367028+0000#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0114#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112026-03-01T09:43:53.381630+0000#012modified#0112026-03-01T09:44:07.366990+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012max_mds#0111#012in#0110#012up#011{0=24184}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0110#012qdb_cluster#011leader: 0 members: #012[mds.cephfs.compute-2.gumopp{0:24184} state up:creating seq 1 addr [v2:192.168.122.102:6804/1831383419,v1:192.168.122.102:6805/1831383419] compat {c=[1],r=[1],i=[1fff]}]#012 #012 
Mar  1 04:44:07 np0005634532 ceph-mon[75825]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.gumopp=up:creating}
Mar  1 04:44:07 np0005634532 ceph-mon[75825]: log_channel(cluster) log [INF] : daemon mds.cephfs.compute-2.gumopp is now active in filesystem cephfs as rank 0
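
Compressed into these few lines is the whole MDS bring-up: the daemon boots (up:boot), registers as a standby, is immediately handed rank 0 of cephfs since the filesystem had no active MDS, the MDS_ALL_DOWN and MDS_UP_LESS_THAN_MAX health checks clear, and it moves through up:creating to active. The resulting state is easiest to read from the filesystem status view:

    # Ranks, states and standbys for the filesystem seen in this log
    ceph fs status cephfs
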
Mar  1 04:44:07 np0005634532 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Mar  1 04:44:07 np0005634532 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Mar  1 04:44:07 np0005634532 ceph-osd[84309]: log_channel(cluster) log [DBG] : 3.12 scrub starts
Mar  1 04:44:07 np0005634532 ceph-osd[84309]: log_channel(cluster) log [DBG] : 3.12 scrub ok
Mar  1 04:44:07 np0005634532 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14541 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Mar  1 04:44:07 np0005634532 unruffled_satoshi[97610]: 
Mar  1 04:44:07 np0005634532 unruffled_satoshi[97610]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Mar  1 04:44:07 np0005634532 systemd[1]: Reloading.
Mar  1 04:44:07 np0005634532 podman[97558]: 2026-03-01 09:44:07.660843065 +0000 UTC m=+0.580383918 container died a86130bc34051d8cc489856a5a6456d31ba79075e752d8302ae683a7864631ab (image=quay.io/ceph/ceph:v19, name=unruffled_satoshi, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Mar  1 04:44:07 np0005634532 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Mar  1 04:44:07 np0005634532 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Mar  1 04:44:07 np0005634532 systemd[1]: libpod-a86130bc34051d8cc489856a5a6456d31ba79075e752d8302ae683a7864631ab.scope: Deactivated successfully.
Mar  1 04:44:07 np0005634532 systemd[1]: var-lib-containers-storage-overlay-3a6f217b68c4b657822c062f2f0acad12458ed9a024d56c65d2a157fd3547118-merged.mount: Deactivated successfully.
Mar  1 04:44:07 np0005634532 podman[97558]: 2026-03-01 09:44:07.89535929 +0000 UTC m=+0.814900153 container remove a86130bc34051d8cc489856a5a6456d31ba79075e752d8302ae683a7864631ab (image=quay.io/ceph/ceph:v19, name=unruffled_satoshi, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Mar  1 04:44:07 np0005634532 systemd[1]: Starting Ceph mds.cephfs.compute-0.qvzeqa for 437b1e74-f995-5d64-af1d-257ce01d77ab...
Mar  1 04:44:07 np0005634532 systemd[1]: libpod-conmon-a86130bc34051d8cc489856a5a6456d31ba79075e752d8302ae683a7864631ab.scope: Deactivated successfully.
Mar  1 04:44:08 np0005634532 podman[97806]: 2026-03-01 09:44:08.09664752 +0000 UTC m=+0.041051080 container create 407fe19a43b85dd79359e9b57d7fbd975322fe549e675f44d48a233fb97a31de (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mds-cephfs-compute-0-qvzeqa, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Mar  1 04:44:08 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/17607480ee16dc7f64f625489469748203d01b40dd7efc7b9d8a3460d4f6623b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Mar  1 04:44:08 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/17607480ee16dc7f64f625489469748203d01b40dd7efc7b9d8a3460d4f6623b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Mar  1 04:44:08 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/17607480ee16dc7f64f625489469748203d01b40dd7efc7b9d8a3460d4f6623b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Mar  1 04:44:08 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/17607480ee16dc7f64f625489469748203d01b40dd7efc7b9d8a3460d4f6623b/merged/var/lib/ceph/mds/ceph-cephfs.compute-0.qvzeqa supports timestamps until 2038 (0x7fffffff)
Mar  1 04:44:08 np0005634532 podman[97806]: 2026-03-01 09:44:08.161604204 +0000 UTC m=+0.106007784 container init 407fe19a43b85dd79359e9b57d7fbd975322fe549e675f44d48a233fb97a31de (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mds-cephfs-compute-0-qvzeqa, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Mar  1 04:44:08 np0005634532 podman[97806]: 2026-03-01 09:44:08.167911761 +0000 UTC m=+0.112315321 container start 407fe19a43b85dd79359e9b57d7fbd975322fe549e675f44d48a233fb97a31de (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mds-cephfs-compute-0-qvzeqa, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Mar  1 04:44:08 np0005634532 bash[97806]: 407fe19a43b85dd79359e9b57d7fbd975322fe549e675f44d48a233fb97a31de
Mar  1 04:44:08 np0005634532 podman[97806]: 2026-03-01 09:44:08.07770475 +0000 UTC m=+0.022108390 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Mar  1 04:44:08 np0005634532 systemd[1]: Started Ceph mds.cephfs.compute-0.qvzeqa for 437b1e74-f995-5d64-af1d-257ce01d77ab.
Mar  1 04:44:08 np0005634532 ceph-mds[97825]: set uid:gid to 167:167 (ceph:ceph)
Mar  1 04:44:08 np0005634532 ceph-mds[97825]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mds, pid 2
Mar  1 04:44:08 np0005634532 ceph-mds[97825]: main not setting numa affinity
Mar  1 04:44:08 np0005634532 ceph-mds[97825]: pidfile_write: ignore empty --pid-file
Mar  1 04:44:08 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mds-cephfs-compute-0-qvzeqa[97821]: starting mds.cephfs.compute-0.qvzeqa at 
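
Unlike the throwaway client containers earlier, this MDS is a long-lived cephadm daemon: systemd starts a unit that wraps the podman container, and the process drops to uid:gid 167:167 (ceph:ceph) before joining the cluster. The unit name follows cephadm's ceph-<fsid>@<daemon> pattern, so on this host it should be inspectable with:

    systemctl status 'ceph-437b1e74-f995-5d64-af1d-257ce01d77ab@mds.cephfs.compute-0.qvzeqa.service'
    # or list every daemon cephadm manages on this node
    cephadm ls
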
Mar  1 04:44:08 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Mar  1 04:44:08 np0005634532 ceph-mds[97825]: mds.cephfs.compute-0.qvzeqa Updating MDS map to version 4 from mon.0
Mar  1 04:44:08 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' 
Mar  1 04:44:08 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Mar  1 04:44:08 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' 
Mar  1 04:44:08 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Mar  1 04:44:08 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' 
Mar  1 04:44:08 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.okjbfn", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0)
Mar  1 04:44:08 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.okjbfn", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Mar  1 04:44:08 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.okjbfn", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Mar  1 04:44:08 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Mar  1 04:44:08 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Mar  1 04:44:08 np0005634532 ceph-mgr[76134]: [cephadm INFO cephadm.serve] Deploying daemon mds.cephfs.compute-1.okjbfn on compute-1
Mar  1 04:44:08 np0005634532 ceph-mgr[76134]: log_channel(cephadm) log [INF] : Deploying daemon mds.cephfs.compute-1.okjbfn on compute-1
Mar  1 04:44:08 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v15: 198 pgs: 198 active+clean; 454 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 0 B/s wr, 7 op/s
Mar  1 04:44:08 np0005634532 ceph-mon[75825]: Deploying daemon mds.cephfs.compute-0.qvzeqa on compute-0
Mar  1 04:44:08 np0005634532 ceph-mon[75825]: daemon mds.cephfs.compute-2.gumopp assigned to filesystem cephfs as rank 0 (now has 1 ranks)
Mar  1 04:44:08 np0005634532 ceph-mon[75825]: Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline)
Mar  1 04:44:08 np0005634532 ceph-mon[75825]: Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds)
Mar  1 04:44:08 np0005634532 ceph-mon[75825]: Cluster is now healthy
Mar  1 04:44:08 np0005634532 ceph-mon[75825]: daemon mds.cephfs.compute-2.gumopp is now active in filesystem cephfs as rank 0
Mar  1 04:44:08 np0005634532 ceph-mon[75825]: from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' 
Mar  1 04:44:08 np0005634532 ceph-mon[75825]: from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' 
Mar  1 04:44:08 np0005634532 ceph-mon[75825]: from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' 
Mar  1 04:44:08 np0005634532 ceph-mon[75825]: from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.okjbfn", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Mar  1 04:44:08 np0005634532 ceph-mon[75825]: from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-1.okjbfn", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Mar  1 04:44:08 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).mds e5 new map
Mar  1 04:44:08 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).mds e5 print_map#012e5#012btime 2026-03-01T09:44:08:376102+0000#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0115#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112026-03-01T09:43:53.381630+0000#012modified#0112026-03-01T09:44:08.376099+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012max_mds#0111#012in#0110#012up#011{0=24184}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0110#012qdb_cluster#011leader: 24184 members: 24184#012[mds.cephfs.compute-2.gumopp{0:24184} state up:active seq 2 addr [v2:192.168.122.102:6804/1831383419,v1:192.168.122.102:6805/1831383419] compat {c=[1],r=[1],i=[1fff]}]#012 #012 #012Standby daemons:#012 #012[mds.cephfs.compute-0.qvzeqa{-1:14547} state up:standby seq 1 addr [v2:192.168.122.100:6806/2292967636,v1:192.168.122.100:6807/2292967636] compat {c=[1],r=[1],i=[1fff]}]
Mar  1 04:44:08 np0005634532 ceph-mds[97825]: mds.cephfs.compute-0.qvzeqa Updating MDS map to version 5 from mon.0
Mar  1 04:44:08 np0005634532 ceph-mds[97825]: mds.cephfs.compute-0.qvzeqa Monitors have assigned me to become a standby
Mar  1 04:44:08 np0005634532 ceph-mon[75825]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.102:6804/1831383419,v1:192.168.122.102:6805/1831383419] up:active
Mar  1 04:44:08 np0005634532 ceph-mon[75825]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.100:6806/2292967636,v1:192.168.122.100:6807/2292967636] up:boot
Mar  1 04:44:08 np0005634532 ceph-mon[75825]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.gumopp=up:active} 1 up:standby
Mar  1 04:44:08 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-0.qvzeqa"} v 0)
Mar  1 04:44:08 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-0.qvzeqa"}]: dispatch
Mar  1 04:44:08 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).mds e5 all = 0
Mar  1 04:44:08 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).mds e6 new map
Mar  1 04:44:08 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).mds e6 print_map#012e6#012btime 2026-03-01T09:44:08:387142+0000#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0115#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112026-03-01T09:43:53.381630+0000#012modified#0112026-03-01T09:44:08.376099+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012max_mds#0111#012in#0110#012up#011{0=24184}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0111#012qdb_cluster#011leader: 24184 members: 24184#012[mds.cephfs.compute-2.gumopp{0:24184} state up:active seq 2 addr [v2:192.168.122.102:6804/1831383419,v1:192.168.122.102:6805/1831383419] compat {c=[1],r=[1],i=[1fff]}]#012 #012 #012Standby daemons:#012 #012[mds.cephfs.compute-0.qvzeqa{-1:14547} state up:standby seq 1 addr [v2:192.168.122.100:6806/2292967636,v1:192.168.122.100:6807/2292967636] compat {c=[1],r=[1],i=[1fff]}]
Mar  1 04:44:08 np0005634532 ceph-mon[75825]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.gumopp=up:active} 1 up:standby
Mar  1 04:44:08 np0005634532 ceph-osd[84309]: log_channel(cluster) log [DBG] : 3.18 scrub starts
Mar  1 04:44:08 np0005634532 ceph-osd[84309]: log_channel(cluster) log [DBG] : 3.18 scrub ok
Mar  1 04:44:08 np0005634532 python3[97870]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 437b1e74-f995-5d64-af1d-257ce01d77ab -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ls --export -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Mar  1 04:44:08 np0005634532 podman[97871]: 2026-03-01 09:44:08.799386636 +0000 UTC m=+0.050816584 container create 883ffdf0faad9addaac2d97394297d268d0fe1fa56858a4675ea24c9065fb624 (image=quay.io/ceph/ceph:v19, name=ecstatic_aryabhata, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0)
Mar  1 04:44:08 np0005634532 systemd[1]: Started libpod-conmon-883ffdf0faad9addaac2d97394297d268d0fe1fa56858a4675ea24c9065fb624.scope.
Mar  1 04:44:08 np0005634532 podman[97871]: 2026-03-01 09:44:08.772882267 +0000 UTC m=+0.024312215 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Mar  1 04:44:08 np0005634532 systemd[1]: Started libcrun container.
Mar  1 04:44:08 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ce2887b37eaabff0feba2e1e2faa6c68f55c1a4c3e0826f54fc33ec94615c75/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Mar  1 04:44:08 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ce2887b37eaabff0feba2e1e2faa6c68f55c1a4c3e0826f54fc33ec94615c75/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Mar  1 04:44:08 np0005634532 podman[97871]: 2026-03-01 09:44:08.911969482 +0000 UTC m=+0.163399470 container init 883ffdf0faad9addaac2d97394297d268d0fe1fa56858a4675ea24c9065fb624 (image=quay.io/ceph/ceph:v19, name=ecstatic_aryabhata, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default)
Mar  1 04:44:08 np0005634532 podman[97871]: 2026-03-01 09:44:08.918659539 +0000 UTC m=+0.170089447 container start 883ffdf0faad9addaac2d97394297d268d0fe1fa56858a4675ea24c9065fb624 (image=quay.io/ceph/ceph:v19, name=ecstatic_aryabhata, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Mar  1 04:44:08 np0005634532 podman[97871]: 2026-03-01 09:44:08.922850113 +0000 UTC m=+0.174280041 container attach 883ffdf0faad9addaac2d97394297d268d0fe1fa56858a4675ea24c9065fb624 (image=quay.io/ceph/ceph:v19, name=ecstatic_aryabhata, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Mar  1 04:44:09 np0005634532 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14553 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Mar  1 04:44:09 np0005634532 ecstatic_aryabhata[97886]: 
Mar  1 04:44:09 np0005634532 ecstatic_aryabhata[97886]: [{"networks": ["192.168.122.0/24"], "placement": {"count": 1, "hosts": ["compute-0"]}, "service_name": "alertmanager", "service_type": "alertmanager"}, {"placement": {"host_pattern": "*"}, "service_name": "crash", "service_type": "crash"}, {"networks": ["192.168.122.0/24"], "placement": {"count": 1, "hosts": ["compute-0"]}, "service_name": "grafana", "service_type": "grafana", "spec": {"anonymous_access": true, "protocol": "https"}}, {"placement": {"hosts": ["compute-0", "compute-1", "compute-2"]}, "service_id": "nfs.cephfs", "service_name": "ingress.nfs.cephfs", "service_type": "ingress", "spec": {"backend_service": "nfs.cephfs", "enable_haproxy_protocol": true, "first_virtual_router_id": 50, "frontend_port": 2049, "monitor_port": 9049, "virtual_ip": "192.168.122.2/24"}}, {"placement": {"count": 2}, "service_id": "rgw.default", "service_name": "ingress.rgw.default", "service_type": "ingress", "spec": {"backend_service": "rgw.rgw", "first_virtual_router_id": 50, "frontend_port": 8080, "monitor_port": 8999, "virtual_interface_networks": ["192.168.122.0/24"], "virtual_ip": "192.168.122.2/24"}}, {"placement": {"hosts": ["compute-0", "compute-1", "compute-2"]}, "service_id": "cephfs", "service_name": "mds.cephfs", "service_type": "mds"}, {"placement": {"hosts": ["compute-0", "compute-1", "compute-2"]}, "service_name": "mgr", "service_type": "mgr"}, {"placement": {"hosts": ["compute-0", "compute-1", "compute-2"]}, "service_name": "mon", "service_type": "mon"}, {"placement": {"hosts": ["compute-0", "compute-1", "compute-2"]}, "service_id": "cephfs", "service_name": "nfs.cephfs", "service_type": "nfs", "spec": {"enable_haproxy_protocol": true, "port": 12049}}, {"placement": {"host_pattern": "*"}, "service_name": "node-exporter", "service_type": "node-exporter"}, {"placement": {"hosts": ["compute-0", "compute-1", "compute-2"]}, "service_id": "default_drive_group", "service_name": "osd.default_drive_group", "service_type": "osd", "spec": {"data_devices": {"paths": ["/dev/ceph_vg0/ceph_lv0"]}, "filter_logic": "AND", "objectstore": "bluestore"}}, {"networks": ["192.168.122.0/24"], "placement": {"count": 1, "hosts": ["compute-0"]}, "service_name": "prometheus", "service_type": "prometheus"}, {"networks": ["192.168.122.0/24"], "placement": {"hosts": ["compute-0", "compute-1", "compute-2"]}, "service_id": "rgw", "service_name": "rgw.rgw", "service_type": "rgw", "spec": {"rgw_frontend_port": 8082}}]
Mar  1 04:44:09 np0005634532 systemd[1]: libpod-883ffdf0faad9addaac2d97394297d268d0fe1fa56858a4675ea24c9065fb624.scope: Deactivated successfully.
Mar  1 04:44:09 np0005634532 podman[97871]: 2026-03-01 09:44:09.312178914 +0000 UTC m=+0.563608832 container died 883ffdf0faad9addaac2d97394297d268d0fe1fa56858a4675ea24c9065fb624 (image=quay.io/ceph/ceph:v19, name=ecstatic_aryabhata, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Mar  1 04:44:09 np0005634532 systemd[1]: var-lib-containers-storage-overlay-2ce2887b37eaabff0feba2e1e2faa6c68f55c1a4c3e0826f54fc33ec94615c75-merged.mount: Deactivated successfully.
Mar  1 04:44:09 np0005634532 podman[97871]: 2026-03-01 09:44:09.35429675 +0000 UTC m=+0.605726658 container remove 883ffdf0faad9addaac2d97394297d268d0fe1fa56858a4675ea24c9065fb624 (image=quay.io/ceph/ceph:v19, name=ecstatic_aryabhata, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325)
Mar  1 04:44:09 np0005634532 systemd[1]: libpod-conmon-883ffdf0faad9addaac2d97394297d268d0fe1fa56858a4675ea24c9065fb624.scope: Deactivated successfully.
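(The podman run above wraps `ceph orch ls --export -f json`; the single-line JSON logged at 04:44:09 is the full set of cephadm service specifications for this cluster: alertmanager, crash, grafana, ingress, mds, mgr, mon, nfs, osd, node-exporter, prometheus, and rgw. A minimal Python sketch for inspecting such an export offline; the file name orch_ls_export.json is hypothetical, standing in for the captured stdout:

    import json

    # Hypothetical file holding the logged `ceph orch ls --export -f json` output.
    with open("orch_ls_export.json") as f:
        specs = json.load(f)

    # Each spec carries service_type, service_name, and a placement block --
    # the same keys visible in the logged export.
    for spec in specs:
        placement = spec.get("placement", {})
        where = (placement.get("hosts")
                 or placement.get("host_pattern")
                 or placement.get("count"))
        print(spec["service_type"], spec["service_name"], where)
)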
Mar  1 04:44:09 np0005634532 ceph-mon[75825]: Deploying daemon mds.cephfs.compute-1.okjbfn on compute-1
Mar  1 04:44:09 np0005634532 ceph-osd[84309]: log_channel(cluster) log [DBG] : 2.19 scrub starts
Mar  1 04:44:09 np0005634532 ceph-osd[84309]: log_channel(cluster) log [DBG] : 2.19 scrub ok
Mar  1 04:44:09 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Mar  1 04:44:09 np0005634532 ansible-async_wrapper.py[97313]: Done in kid B.
Mar  1 04:44:09 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' 
Mar  1 04:44:09 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Mar  1 04:44:09 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' 
Mar  1 04:44:09 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Mar  1 04:44:09 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' 
Mar  1 04:44:09 np0005634532 ceph-mgr[76134]: [progress INFO root] complete: finished ev 6441964c-0735-4ce6-a51d-228120a9a656 (Updating mds.cephfs deployment (+3 -> 3))
Mar  1 04:44:09 np0005634532 ceph-mgr[76134]: [progress INFO root] Completed event 6441964c-0735-4ce6-a51d-228120a9a656 (Updating mds.cephfs deployment (+3 -> 3)) in 5 seconds
Mar  1 04:44:09 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mds_join_fs}] v 0)
Mar  1 04:44:09 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' 
Mar  1 04:44:09 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0)
Mar  1 04:44:10 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' 
Mar  1 04:44:10 np0005634532 ceph-mgr[76134]: [progress INFO root] update: starting ev 97066161-7796-40df-860c-a57d5412a9b4 (Updating nfs.cephfs deployment (+3 -> 3))
Mar  1 04:44:10 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Mar  1 04:44:10 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' 
Mar  1 04:44:10 np0005634532 ceph-mgr[76134]: [cephadm INFO cephadm.services.nfs] Creating key for client.nfs.cephfs.0.0.compute-1.sniivf
Mar  1 04:44:10 np0005634532 ceph-mgr[76134]: log_channel(cephadm) log [INF] : Creating key for client.nfs.cephfs.0.0.compute-1.sniivf
Mar  1 04:44:10 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.sniivf", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]} v 0)
Mar  1 04:44:10 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.sniivf", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]: dispatch
Mar  1 04:44:10 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.sniivf", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]': finished
Mar  1 04:44:10 np0005634532 ceph-mgr[76134]: [cephadm INFO root] Ensuring nfs.cephfs.0 is in the ganesha grace table
Mar  1 04:44:10 np0005634532 ceph-mgr[76134]: log_channel(cephadm) log [INF] : Ensuring nfs.cephfs.0 is in the ganesha grace table
Mar  1 04:44:10 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]} v 0)
Mar  1 04:44:10 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]: dispatch
Mar  1 04:44:10 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' cmd='[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]': finished
Mar  1 04:44:10 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Mar  1 04:44:10 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Mar  1 04:44:10 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"} v 0)
Mar  1 04:44:10 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]: dispatch
Mar  1 04:44:10 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e48 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Mar  1 04:44:10 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' cmd='[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]': finished
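(The audit lines above capture cephadm's grace-table bookkeeping for the new NFS daemon: it mints a short-lived client.mgr.nfs.grace.nfs.cephfs key (read on the mons, rwx on the .nfs pool), uses it to ensure nfs.cephfs.0 is listed in the ganesha grace table, then deletes the key. A sketch of the same two auth commands driven from Python via subprocess, with the arguments copied from the audit log; it assumes an admin keyring is available on the host:

    import subprocess

    # Mint the temporary key with the caps shown in the audit log.
    subprocess.run(
        ["ceph", "auth", "get-or-create", "client.mgr.nfs.grace.nfs.cephfs",
         "mon", "allow r", "osd", "allow rwx pool .nfs"],
        check=True,
    )
    # (cephadm updates the ganesha grace table here, then discards the key.)
    subprocess.run(
        ["ceph", "auth", "rm", "client.mgr.nfs.grace.nfs.cephfs"],
        check=True,
    )
)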
Mar  1 04:44:10 np0005634532 python3[97964]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 437b1e74-f995-5d64-af1d-257ce01d77ab -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ps -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Mar  1 04:44:10 np0005634532 ceph-mgr[76134]: [cephadm INFO cephadm.services.nfs] Rados config object exists: conf-nfs.cephfs
Mar  1 04:44:10 np0005634532 ceph-mgr[76134]: log_channel(cephadm) log [INF] : Rados config object exists: conf-nfs.cephfs
Mar  1 04:44:10 np0005634532 ceph-mgr[76134]: [cephadm INFO cephadm.services.nfs] Creating key for client.nfs.cephfs.0.0.compute-1.sniivf-rgw
Mar  1 04:44:10 np0005634532 ceph-mgr[76134]: log_channel(cephadm) log [INF] : Creating key for client.nfs.cephfs.0.0.compute-1.sniivf-rgw
Mar  1 04:44:10 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.sniivf-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]} v 0)
Mar  1 04:44:10 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.sniivf-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Mar  1 04:44:10 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.sniivf-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]': finished
Mar  1 04:44:10 np0005634532 ceph-mgr[76134]: [cephadm WARNING cephadm.services.nfs] Bind address in nfs.cephfs.0.0.compute-1.sniivf's ganesha conf is defaulting to empty
Mar  1 04:44:10 np0005634532 ceph-mgr[76134]: log_channel(cephadm) log [WRN] : Bind address in nfs.cephfs.0.0.compute-1.sniivf's ganesha conf is defaulting to empty
Mar  1 04:44:10 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Mar  1 04:44:10 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Mar  1 04:44:10 np0005634532 ceph-mgr[76134]: [cephadm INFO cephadm.serve] Deploying daemon nfs.cephfs.0.0.compute-1.sniivf on compute-1
Mar  1 04:44:10 np0005634532 ceph-mgr[76134]: log_channel(cephadm) log [INF] : Deploying daemon nfs.cephfs.0.0.compute-1.sniivf on compute-1
Mar  1 04:44:10 np0005634532 podman[97985]: 2026-03-01 09:44:10.346454476 +0000 UTC m=+0.051001408 container create 53cf566afb9186a9283431feba315bd6da22019db3b640fbfc8a01d97b2053cd (image=quay.io/ceph/ceph:v19, name=trusting_blackwell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Mar  1 04:44:10 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v16: 198 pgs: 198 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 15 KiB/s rd, 1.2 KiB/s wr, 9 op/s
Mar  1 04:44:10 np0005634532 systemd[1]: Started libpod-conmon-53cf566afb9186a9283431feba315bd6da22019db3b640fbfc8a01d97b2053cd.scope.
Mar  1 04:44:10 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).mds e7 new map
Mar  1 04:44:10 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).mds e7 print_map#012e7#012btime 2026-03-01T09:44:10:392076+0000#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0115#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112026-03-01T09:43:53.381630+0000#012modified#0112026-03-01T09:44:08.376099+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012max_mds#0111#012in#0110#012up#011{0=24184}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0111#012qdb_cluster#011leader: 24184 members: 24184#012[mds.cephfs.compute-2.gumopp{0:24184} state up:active seq 2 addr [v2:192.168.122.102:6804/1831383419,v1:192.168.122.102:6805/1831383419] compat {c=[1],r=[1],i=[1fff]}]#012 #012 #012Standby daemons:#012 #012[mds.cephfs.compute-0.qvzeqa{-1:14547} state up:standby seq 1 addr [v2:192.168.122.100:6806/2292967636,v1:192.168.122.100:6807/2292967636] compat {c=[1],r=[1],i=[1fff]}]#012[mds.cephfs.compute-1.okjbfn{-1:24173} state up:standby seq 1 addr [v2:192.168.122.101:6804/1043839304,v1:192.168.122.101:6805/1043839304] compat {c=[1],r=[1],i=[1fff]}]
Mar  1 04:44:10 np0005634532 ceph-mon[75825]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.101:6804/1043839304,v1:192.168.122.101:6805/1043839304] up:boot
Mar  1 04:44:10 np0005634532 ceph-mon[75825]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.gumopp=up:active} 2 up:standby
Mar  1 04:44:10 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-1.okjbfn"} v 0)
Mar  1 04:44:10 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-1.okjbfn"}]: dispatch
Mar  1 04:44:10 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).mds e7 all = 0
Mar  1 04:44:10 np0005634532 systemd[1]: Started libcrun container.
Mar  1 04:44:10 np0005634532 podman[97985]: 2026-03-01 09:44:10.321730702 +0000 UTC m=+0.026277664 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Mar  1 04:44:10 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/54fe15f1dfc0e0ea53c01b3c81cec0e59d12fa65918cb9596c4be1370c85496a/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Mar  1 04:44:10 np0005634532 ceph-mon[75825]: from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' 
Mar  1 04:44:10 np0005634532 ceph-mon[75825]: from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' 
Mar  1 04:44:10 np0005634532 ceph-mon[75825]: from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' 
Mar  1 04:44:10 np0005634532 ceph-mon[75825]: from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' 
Mar  1 04:44:10 np0005634532 ceph-mon[75825]: from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' 
Mar  1 04:44:10 np0005634532 ceph-mon[75825]: from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' 
Mar  1 04:44:10 np0005634532 ceph-mon[75825]: from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.sniivf", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]: dispatch
Mar  1 04:44:10 np0005634532 ceph-mon[75825]: from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.sniivf", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]': finished
Mar  1 04:44:10 np0005634532 ceph-mon[75825]: from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]: dispatch
Mar  1 04:44:10 np0005634532 ceph-mon[75825]: from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' cmd='[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]': finished
Mar  1 04:44:10 np0005634532 ceph-mon[75825]: from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]: dispatch
Mar  1 04:44:10 np0005634532 ceph-mon[75825]: from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' cmd='[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]': finished
Mar  1 04:44:10 np0005634532 ceph-mon[75825]: from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.sniivf-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Mar  1 04:44:10 np0005634532 ceph-mon[75825]: from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.0.0.compute-1.sniivf-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]': finished
Mar  1 04:44:10 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/54fe15f1dfc0e0ea53c01b3c81cec0e59d12fa65918cb9596c4be1370c85496a/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Mar  1 04:44:10 np0005634532 podman[97985]: 2026-03-01 09:44:10.440937113 +0000 UTC m=+0.145484085 container init 53cf566afb9186a9283431feba315bd6da22019db3b640fbfc8a01d97b2053cd (image=quay.io/ceph/ceph:v19, name=trusting_blackwell, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Mar  1 04:44:10 np0005634532 podman[97985]: 2026-03-01 09:44:10.448697796 +0000 UTC m=+0.153244718 container start 53cf566afb9186a9283431feba315bd6da22019db3b640fbfc8a01d97b2053cd (image=quay.io/ceph/ceph:v19, name=trusting_blackwell, CEPH_REF=squid, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Mar  1 04:44:10 np0005634532 podman[97985]: 2026-03-01 09:44:10.452455619 +0000 UTC m=+0.157002541 container attach 53cf566afb9186a9283431feba315bd6da22019db3b640fbfc8a01d97b2053cd (image=quay.io/ceph/ceph:v19, name=trusting_blackwell, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1)
Mar  1 04:44:10 np0005634532 ceph-osd[84309]: log_channel(cluster) log [DBG] : 5.17 scrub starts
Mar  1 04:44:10 np0005634532 ceph-osd[84309]: log_channel(cluster) log [DBG] : 5.17 scrub ok
Mar  1 04:44:10 np0005634532 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.14574 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Mar  1 04:44:10 np0005634532 trusting_blackwell[98000]: 
Mar  1 04:44:10 np0005634532 trusting_blackwell[98000]: [{"container_id": "98fa546a64cd", "container_image_digests": ["quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "quay.io/ceph/ceph@sha256:af0c5903e901e329adabe219dfc8d0c3efc1f05102a753902f33ee16c26b6cee"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "cpu_percentage": "0.11%", "created": "2026-03-01T09:41:31.707748Z", "daemon_id": "compute-0", "daemon_name": "crash.compute-0", "daemon_type": "crash", "hostname": "compute-0", "is_active": false, "last_refresh": "2026-03-01T09:43:54.425541Z", "memory_usage": 7786725, "ports": [], "service_name": "crash", "started": "2026-03-01T09:41:31.613846Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-437b1e74-f995-5d64-af1d-257ce01d77ab@crash.compute-0", "version": "19.2.3"}, {"container_id": "c2c907287cf6", "container_image_digests": ["quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "cpu_percentage": "0.44%", "created": "2026-03-01T09:42:06.417776Z", "daemon_id": "compute-1", "daemon_name": "crash.compute-1", "daemon_type": "crash", "hostname": "compute-1", "is_active": false, "last_refresh": "2026-03-01T09:43:54.489710Z", "memory_usage": 7812939, "ports": [], "service_name": "crash", "started": "2026-03-01T09:42:06.321740Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-437b1e74-f995-5d64-af1d-257ce01d77ab@crash.compute-1", "version": "19.2.3"}, {"container_id": "b4b14bb1a7a3", "container_image_digests": ["quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "cpu_percentage": "0.38%", "created": "2026-03-01T09:42:58.501535Z", "daemon_id": "compute-2", "daemon_name": "crash.compute-2", "daemon_type": "crash", "hostname": "compute-2", "is_active": false, "last_refresh": "2026-03-01T09:43:54.127377Z", "memory_usage": 7812939, "ports": [], "service_name": "crash", "started": "2026-03-01T09:42:58.387639Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-437b1e74-f995-5d64-af1d-257ce01d77ab@crash.compute-2", "version": "19.2.3"}, {"daemon_id": "cephfs.compute-0.qvzeqa", "daemon_name": "mds.cephfs.compute-0.qvzeqa", "daemon_type": "mds", "events": ["2026-03-01T09:44:08.264599Z daemon:mds.cephfs.compute-0.qvzeqa [INFO] \"Deployed mds.cephfs.compute-0.qvzeqa on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "ports": [], "service_name": "mds.cephfs", "status": 2, "status_desc": "starting"}, {"daemon_id": "cephfs.compute-1.okjbfn", "daemon_name": "mds.cephfs.compute-1.okjbfn", "daemon_type": "mds", "events": ["2026-03-01T09:44:09.965594Z daemon:mds.cephfs.compute-1.okjbfn [INFO] \"Deployed mds.cephfs.compute-1.okjbfn on host 'compute-1'\""], "hostname": "compute-1", "is_active": false, "ports": [], "service_name": "mds.cephfs", "status": 2, "status_desc": "starting"}, {"daemon_id": "cephfs.compute-2.gumopp", "daemon_name": "mds.cephfs.compute-2.gumopp", 
"daemon_type": "mds", "events": ["2026-03-01T09:44:06.650957Z daemon:mds.cephfs.compute-2.gumopp [INFO] \"Deployed mds.cephfs.compute-2.gumopp on host 'compute-2'\""], "hostname": "compute-2", "is_active": false, "ports": [], "service_name": "mds.cephfs", "status": 2, "status_desc": "starting"}, {"container_id": "676788cabaab", "container_image_digests": ["quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "quay.io/ceph/ceph@sha256:af0c5903e901e329adabe219dfc8d0c3efc1f05102a753902f33ee16c26b6cee"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph:v19", "cpu_percentage": "28.67%", "created": "2026-03-01T09:40:56.897892Z", "daemon_id": "compute-0.ebwufc", "daemon_name": "mgr.compute-0.ebwufc", "daemon_type": "mgr", "hostname": "compute-0", "is_active": false, "last_refresh": "2026-03-01T09:43:54.425468Z", "memory_usage": 541694361, "ports": [9283, 8765], "service_name": "mgr", "started": "2026-03-01T09:40:56.790521Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-437b1e74-f995-5d64-af1d-257ce01d77ab@mgr.compute-0.ebwufc", "version": "19.2.3"}, {"container_id": "605cb4b03447", "container_image_digests": ["quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "cpu_percentage": "36.78%", "created": "2026-03-01T09:42:56.710232Z", "daemon_id": "compute-1.uyojxx", "daemon_name": "mgr.compute-1.uyojxx", "daemon_type": "mgr", "hostname": "compute-1", "is_active": false, "last_refresh": "2026-03-01T09:43:54.490338Z", "memory_usage": 505623347, "ports": [8765], "service_name": "mgr", "started": "2026-03-01T09:42:56.605015Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-437b1e74-f995-5d64-af1d-257ce01d77ab@mgr.compute-1.uyojxx", "version": "19.2.3"}, {"container_id": "b0f08453444e", "container_image_digests": ["quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "cpu_percentage": "36.01%", "created": "2026-03-01T09:42:50.825301Z", "daemon_id": "compute-2.dikzlj", "daemon_name": "mgr.compute-2.dikzlj", "daemon_type": "mgr", "hostname": "compute-2", "is_active": false, "last_refresh": "2026-03-01T09:43:54.127279Z", "memory_usage": 504469913, "ports": [8765], "service_name": "mgr", "started": "2026-03-01T09:42:50.721680Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-437b1e74-f995-5d64-af1d-257ce01d77ab@mgr.compute-2.dikzlj", "version": "19.2.3"}, {"container_id": "6664049ace04", "container_image_digests": ["quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "quay.io/ceph/ceph@sha256:af0c5903e901e329adabe219dfc8d0c3efc1f05102a753902f33ee16c26b6cee"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph:v19", "cpu_percentage": "3.03%", "created": "2026-03-01T09:40:52.882793Z", "daemon_id": "compute-0", "daemon_name": "mon.compute-0", "daemon_type": "mon", "hostname": "compute-0", "is_active": false, "last_refresh": 
"2026-03-01T09:43:54.425362Z", "memory_request": 2147483648, "memory_usage": 61278781, "ports": [], "service_name": "mon", "started": "2026-03-01T09:40:54.958466Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-437b1e74-f995-5d64-af1d-257ce01d77ab@mon.compute-0", "version": "19.2.3"}, {"container_id": "6b94b9d2a2ba", "container_image_digests": ["quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec"], "container_image_id": "aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c", "container_image_name": "quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec", "cpu_percentage": "2.32%", "created": "2026-03-01T09:42:45.821943Z", "daemon_id": "compute-1", "daemon_name": "mon.compute-1", "daemon_type": "mon", "hostname": "compute-1", "is_active": false, "last_refresh": "2026-03-01T09:43:54.490162Z", "memory_request": 2147483648, "memory_usage": 50834964, "ports": [], "service_name": "mon", "started": "2026-03-01T09:42:45.696528Z", "status": 1, "status_des
Mar  1 04:44:10 np0005634532 trusting_blackwell[98000]: : "2026-03-01T09:43:23.902215Z", "daemon_id": "rgw.compute-2.zizzzn", "daemon_name": "rgw.rgw.compute-2.zizzzn", "daemon_type": "rgw", "hostname": "compute-2", "ip": "192.168.122.102", "is_active": false, "last_refresh": "2026-03-01T09:43:54.127565Z", "memory_usage": 101386813, "ports": [8082], "service_name": "rgw.rgw", "started": "2026-03-01T09:43:23.817919Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-437b1e74-f995-5d64-af1d-257ce01d77ab@rgw.rgw.compute-2.zizzzn", "version": "19.2.3"}]
Mar  1 04:44:10 np0005634532 systemd[1]: libpod-53cf566afb9186a9283431feba315bd6da22019db3b640fbfc8a01d97b2053cd.scope: Deactivated successfully.
Mar  1 04:44:10 np0005634532 podman[97985]: 2026-03-01 09:44:10.840191381 +0000 UTC m=+0.544738263 container died 53cf566afb9186a9283431feba315bd6da22019db3b640fbfc8a01d97b2053cd (image=quay.io/ceph/ceph:v19, name=trusting_blackwell, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Mar  1 04:44:10 np0005634532 systemd[1]: var-lib-containers-storage-overlay-54fe15f1dfc0e0ea53c01b3c81cec0e59d12fa65918cb9596c4be1370c85496a-merged.mount: Deactivated successfully.
Mar  1 04:44:10 np0005634532 podman[97985]: 2026-03-01 09:44:10.880047121 +0000 UTC m=+0.584594013 container remove 53cf566afb9186a9283431feba315bd6da22019db3b640fbfc8a01d97b2053cd (image=quay.io/ceph/ceph:v19, name=trusting_blackwell, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Mar  1 04:44:10 np0005634532 systemd[1]: libpod-conmon-53cf566afb9186a9283431feba315bd6da22019db3b640fbfc8a01d97b2053cd.scope: Deactivated successfully.
Mar  1 04:44:11 np0005634532 rsyslogd[1019]: message too long (16383) with configured size 8096, begin of message is: [{"container_id": "98fa546a64cd", "container_image_digests": ["quay.io/ceph/ceph [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
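(The rsyslogd complaint explains why the `orch ps -f json` output above is split and clipped: the JSON ran to 16383 bytes against rsyslog's configured 8096-byte message limit, so the logged copy breaks off mid-record ("status_des") and resumes in a second truncated chunk. Running the command directly returns the complete document; a minimal Python sketch that summarizes it, using only keys visible in the logged fragment (daemon_name, status_desc, version) and assuming an admin keyring on the host:

    import json
    import subprocess

    # Mirrors the podman-wrapped `ceph orch ps -f json` invocation in the log.
    out = subprocess.run(
        ["ceph", "orch", "ps", "-f", "json"],
        capture_output=True, text=True, check=True,
    ).stdout
    for daemon in json.loads(out):
        # Freshly deployed daemons report "starting"; settled ones "running".
        print(daemon["daemon_name"],
              daemon.get("status_desc", "?"),
              daemon.get("version", "-"))
)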
Mar  1 04:44:11 np0005634532 ceph-mon[75825]: Creating key for client.nfs.cephfs.0.0.compute-1.sniivf
Mar  1 04:44:11 np0005634532 ceph-mon[75825]: Ensuring nfs.cephfs.0 is in the ganesha grace table
Mar  1 04:44:11 np0005634532 ceph-mon[75825]: Rados config object exists: conf-nfs.cephfs
Mar  1 04:44:11 np0005634532 ceph-mon[75825]: Creating key for client.nfs.cephfs.0.0.compute-1.sniivf-rgw
Mar  1 04:44:11 np0005634532 ceph-mon[75825]: Bind address in nfs.cephfs.0.0.compute-1.sniivf's ganesha conf is defaulting to empty
Mar  1 04:44:11 np0005634532 ceph-mon[75825]: Deploying daemon nfs.cephfs.0.0.compute-1.sniivf on compute-1
Mar  1 04:44:11 np0005634532 ceph-osd[84309]: log_channel(cluster) log [DBG] : 3.17 scrub starts
Mar  1 04:44:11 np0005634532 ceph-osd[84309]: log_channel(cluster) log [DBG] : 3.17 scrub ok
Mar  1 04:44:11 np0005634532 python3[98063]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 437b1e74-f995-5d64-af1d-257ce01d77ab -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   -s -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Mar  1 04:44:11 np0005634532 podman[98064]: 2026-03-01 09:44:11.916519408 +0000 UTC m=+0.045745457 container create 426d3adefa7353d498333e511e7106bf167de39400325e452823d7eff896b26d (image=quay.io/ceph/ceph:v19, name=frosty_bartik, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1)
Mar  1 04:44:11 np0005634532 systemd[1]: Started libpod-conmon-426d3adefa7353d498333e511e7106bf167de39400325e452823d7eff896b26d.scope.
Mar  1 04:44:11 np0005634532 systemd[1]: Started libcrun container.
Mar  1 04:44:11 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d30263dc56faf242fb2eb81ca1f92a68880bdd6485fbc3fccf02ad58a9ee2fc/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Mar  1 04:44:11 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d30263dc56faf242fb2eb81ca1f92a68880bdd6485fbc3fccf02ad58a9ee2fc/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Mar  1 04:44:11 np0005634532 podman[98064]: 2026-03-01 09:44:11.89687832 +0000 UTC m=+0.026104399 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Mar  1 04:44:11 np0005634532 podman[98064]: 2026-03-01 09:44:11.992199748 +0000 UTC m=+0.121425877 container init 426d3adefa7353d498333e511e7106bf167de39400325e452823d7eff896b26d (image=quay.io/ceph/ceph:v19, name=frosty_bartik, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Mar  1 04:44:11 np0005634532 podman[98064]: 2026-03-01 09:44:11.999021217 +0000 UTC m=+0.128247256 container start 426d3adefa7353d498333e511e7106bf167de39400325e452823d7eff896b26d (image=quay.io/ceph/ceph:v19, name=frosty_bartik, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Mar  1 04:44:12 np0005634532 podman[98064]: 2026-03-01 09:44:12.00234571 +0000 UTC m=+0.131571829 container attach 426d3adefa7353d498333e511e7106bf167de39400325e452823d7eff896b26d (image=quay.io/ceph/ceph:v19, name=frosty_bartik, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Mar  1 04:44:12 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Mar  1 04:44:12 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' 
Mar  1 04:44:12 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Mar  1 04:44:12 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' 
Mar  1 04:44:12 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Mar  1 04:44:12 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' 
Mar  1 04:44:12 np0005634532 ceph-mgr[76134]: [cephadm INFO cephadm.services.nfs] Creating key for client.nfs.cephfs.1.0.compute-2.dqiiuk
Mar  1 04:44:12 np0005634532 ceph-mgr[76134]: log_channel(cephadm) log [INF] : Creating key for client.nfs.cephfs.1.0.compute-2.dqiiuk
Mar  1 04:44:12 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.dqiiuk", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]} v 0)
Mar  1 04:44:12 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.dqiiuk", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]: dispatch
Mar  1 04:44:12 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.dqiiuk", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]': finished
Mar  1 04:44:12 np0005634532 ceph-mgr[76134]: [cephadm INFO root] Ensuring nfs.cephfs.1 is in the ganesha grace table
Mar  1 04:44:12 np0005634532 ceph-mgr[76134]: log_channel(cephadm) log [INF] : Ensuring nfs.cephfs.1 is in the ganesha grace table
Mar  1 04:44:12 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]} v 0)
Mar  1 04:44:12 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]: dispatch
Mar  1 04:44:12 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' cmd='[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]': finished
Mar  1 04:44:12 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Mar  1 04:44:12 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Mar  1 04:44:12 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).mds e8 new map
Mar  1 04:44:12 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).mds e8 print_map#012e8#012btime 2026-03-01T09:44:12:305353+0000#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0118#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112026-03-01T09:43:53.381630+0000#012modified#0112026-03-01T09:44:11.402463+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012max_mds#0111#012in#0110#012up#011{0=24184}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0111#012qdb_cluster#011leader: 24184 members: 24184#012[mds.cephfs.compute-2.gumopp{0:24184} state up:active seq 3 join_fscid=1 addr [v2:192.168.122.102:6804/1831383419,v1:192.168.122.102:6805/1831383419] compat {c=[1],r=[1],i=[1fff]}]#012 #012 #012Standby daemons:#012 #012[mds.cephfs.compute-0.qvzeqa{-1:14547} state up:standby seq 2 join_fscid=1 addr [v2:192.168.122.100:6806/2292967636,v1:192.168.122.100:6807/2292967636] compat {c=[1],r=[1],i=[1fff]}]#012[mds.cephfs.compute-1.okjbfn{-1:24173} state up:standby seq 1 addr [v2:192.168.122.101:6804/1043839304,v1:192.168.122.101:6805/1043839304] compat {c=[1],r=[1],i=[1fff]}]
Mar  1 04:44:12 np0005634532 ceph-mds[97825]: mds.cephfs.compute-0.qvzeqa Updating MDS map to version 8 from mon.0
Mar  1 04:44:12 np0005634532 ceph-mon[75825]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.102:6804/1831383419,v1:192.168.122.102:6805/1831383419] up:active
Mar  1 04:44:12 np0005634532 ceph-mon[75825]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.100:6806/2292967636,v1:192.168.122.100:6807/2292967636] up:standby
Mar  1 04:44:12 np0005634532 ceph-mon[75825]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.gumopp=up:active} 2 up:standby
Mar  1 04:44:12 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v17: 198 pgs: 198 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s wr, 3 op/s
Mar  1 04:44:12 np0005634532 ceph-osd[84309]: log_channel(cluster) log [DBG] : 3.19 scrub starts
Mar  1 04:44:12 np0005634532 ceph-osd[84309]: log_channel(cluster) log [DBG] : 3.19 scrub ok
Mar  1 04:44:12 np0005634532 ceph-mon[75825]: from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' 
Mar  1 04:44:12 np0005634532 ceph-mon[75825]: from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' 
Mar  1 04:44:12 np0005634532 ceph-mon[75825]: from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' 
Mar  1 04:44:12 np0005634532 ceph-mon[75825]: from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.dqiiuk", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]: dispatch
Mar  1 04:44:12 np0005634532 ceph-mon[75825]: from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.dqiiuk", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]': finished
Mar  1 04:44:12 np0005634532 ceph-mon[75825]: from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]: dispatch
Mar  1 04:44:12 np0005634532 ceph-mon[75825]: from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' cmd='[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]': finished
Mar  1 04:44:12 np0005634532 ceph-mgr[76134]: [progress INFO root] Writing back 13 completed events
Mar  1 04:44:12 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Mar  1 04:44:12 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' 
Mar  1 04:44:12 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "status", "format": "json"} v 0)
Mar  1 04:44:12 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2963643612' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Mar  1 04:44:12 np0005634532 frosty_bartik[98080]: 
Mar  1 04:44:12 np0005634532 frosty_bartik[98080]: {"fsid":"437b1e74-f995-5d64-af1d-257ce01d77ab","health":{"status":"HEALTH_OK","checks":{},"mutes":[]},"election_epoch":14,"quorum":[0,1,2],"quorum_names":["compute-0","compute-2","compute-1"],"quorum_age":77,"monmap":{"epoch":3,"min_mon_release_name":"squid","num_mons":3},"osdmap":{"epoch":48,"num_osds":3,"num_up_osds":3,"osd_up_since":1772358197,"num_in_osds":3,"osd_in_since":1772358180,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":198}],"num_pgs":198,"num_pools":12,"num_objects":216,"data_bytes":467025,"bytes_used":88944640,"bytes_avail":64322981888,"bytes_total":64411926528,"read_bytes_sec":15014,"write_bytes_sec":1194,"read_op_per_sec":4,"write_op_per_sec":4},"fsmap":{"epoch":8,"btime":"2026-03-01T09:44:12:305353+0000","id":1,"up":1,"in":1,"max":1,"by_rank":[{"filesystem_id":1,"rank":0,"name":"cephfs.compute-2.gumopp","status":"up:active","gid":24184}],"up:standby":2},"mgrmap":{"available":true,"num_standbys":2,"modules":["cephadm","dashboard","iostat","nfs","restful"],"services":{"dashboard":"http://192.168.122.100:8443/"}},"servicemap":{"epoch":5,"modified":"2026-03-01T09:43:36.363455+0000","services":{"mgr":{"daemons":{"summary":"","compute-0.ebwufc":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"compute-1.uyojxx":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"compute-2.dikzlj":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}},"mon":{"daemons":{"summary":"","compute-0":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"compute-1":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"compute-2":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}},"osd":{"daemons":{"summary":"","0":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"1":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"2":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}},"rgw":{"daemons":{"summary":"","14370":{"start_epoch":4,"start_stamp":"2026-03-01T09:43:35.381950+0000","gid":14370,"addr":"192.168.122.100:0/1811999405","metadata":{"arch":"x86_64","ceph_release":"squid","ceph_version":"ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)","ceph_version_short":"19.2.3","container_hostname":"compute-0","container_image":"quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec","cpu":"AMD EPYC-Rome Processor","distro":"centos","distro_description":"CentOS Stream 9","distro_version":"9","frontend_config#0":"beast endpoint=192.168.122.100:8082","frontend_type#0":"beast","hostname":"compute-0","id":"rgw.compute-0.dvtuyn","kernel_description":"#1 SMP PREEMPT_DYNAMIC Thu Feb 19 10:49:27 UTC 
2026","kernel_version":"5.14.0-686.el9.x86_64","mem_swap_kb":"1048572","mem_total_kb":"7864280","num_handles":"1","os":"Linux","pid":"2","realm_id":"","realm_name":"","zone_id":"cd5c293a-4523-4a5e-898c-09aafdf3802f","zone_name":"default","zonegroup_id":"488aad47-6726-4ab2-b81e-4590056a15ff","zonegroup_name":"default"},"task_status":{}},"24131":{"start_epoch":5,"start_stamp":"2026-03-01T09:43:35.390289+0000","gid":24131,"addr":"192.168.122.101:0/950956334","metadata":{"arch":"x86_64","ceph_release":"squid","ceph_version":"ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)","ceph_version_short":"19.2.3","container_hostname":"compute-1","container_image":"quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec","cpu":"AMD EPYC-Rome Processor","distro":"centos","distro_description":"CentOS Stream 9","distro_version":"9","frontend_config#0":"beast endpoint=192.168.122.101:8082","frontend_type#0":"beast","hostname":"compute-1","id":"rgw.compute-1.wbcorv","kernel_description":"#1 SMP PREEMPT_DYNAMIC Thu Feb 19 10:49:27 UTC 2026","kernel_version":"5.14.0-686.el9.x86_64","mem_swap_kb":"1048572","mem_total_kb":"7864280","num_handles":"1","os":"Linux","pid":"2","realm_id":"","realm_name":"","zone_id":"cd5c293a-4523-4a5e-898c-09aafdf3802f","zone_name":"default","zonegroup_id":"488aad47-6726-4ab2-b81e-4590056a15ff","zonegroup_name":"default"},"task_status":{}},"24145":{"start_epoch":5,"start_stamp":"2026-03-01T09:43:35.390269+0000","gid":24145,"addr":"192.168.122.102:0/2863278829","metadata":{"arch":"x86_64","ceph_release":"squid","ceph_version":"ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)","ceph_version_short":"19.2.3","container_hostname":"compute-2","container_image":"quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec","cpu":"AMD EPYC-Rome Processor","distro":"centos","distro_description":"CentOS Stream 9","distro_version":"9","frontend_config#0":"beast endpoint=192.168.122.102:8082","frontend_type#0":"beast","hostname":"compute-2","id":"rgw.compute-2.zizzzn","kernel_description":"#1 SMP PREEMPT_DYNAMIC Thu Feb 19 10:49:27 UTC 2026","kernel_version":"5.14.0-686.el9.x86_64","mem_swap_kb":"1048572","mem_total_kb":"7864280","num_handles":"1","os":"Linux","pid":"2","realm_id":"","realm_name":"","zone_id":"cd5c293a-4523-4a5e-898c-09aafdf3802f","zone_name":"default","zonegroup_id":"488aad47-6726-4ab2-b81e-4590056a15ff","zonegroup_name":"default"},"task_status":{}}}}}},"progress_events":{"97066161-7796-40df-860c-a57d5412a9b4":{"message":"Updating nfs.cephfs deployment (+3 -> 3) (0s)\n      [............................] ","progress":0,"add_to_ceph_s":true}}}
Mar  1 04:44:12 np0005634532 systemd[1]: libpod-426d3adefa7353d498333e511e7106bf167de39400325e452823d7eff896b26d.scope: Deactivated successfully.
Mar  1 04:44:12 np0005634532 conmon[98080]: conmon 426d3adefa7353d49833 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-426d3adefa7353d498333e511e7106bf167de39400325e452823d7eff896b26d.scope/container/memory.events
Mar  1 04:44:12 np0005634532 podman[98064]: 2026-03-01 09:44:12.515888606 +0000 UTC m=+0.645114675 container died 426d3adefa7353d498333e511e7106bf167de39400325e452823d7eff896b26d (image=quay.io/ceph/ceph:v19, name=frosty_bartik, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Mar  1 04:44:12 np0005634532 systemd[1]: var-lib-containers-storage-overlay-1d30263dc56faf242fb2eb81ca1f92a68880bdd6485fbc3fccf02ad58a9ee2fc-merged.mount: Deactivated successfully.
Mar  1 04:44:12 np0005634532 podman[98064]: 2026-03-01 09:44:12.559768456 +0000 UTC m=+0.688994505 container remove 426d3adefa7353d498333e511e7106bf167de39400325e452823d7eff896b26d (image=quay.io/ceph/ceph:v19, name=frosty_bartik, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Mar  1 04:44:12 np0005634532 systemd[1]: libpod-conmon-426d3adefa7353d498333e511e7106bf167de39400325e452823d7eff896b26d.scope: Deactivated successfully.
Mar  1 04:44:13 np0005634532 ceph-osd[84309]: log_channel(cluster) log [DBG] : 5.1e deep-scrub starts
Mar  1 04:44:13 np0005634532 ceph-osd[84309]: log_channel(cluster) log [DBG] : 5.1e deep-scrub ok
Mar  1 04:44:13 np0005634532 ceph-mon[75825]: Creating key for client.nfs.cephfs.1.0.compute-2.dqiiuk
Mar  1 04:44:13 np0005634532 ceph-mon[75825]: Ensuring nfs.cephfs.1 is in the ganesha grace table
Mar  1 04:44:13 np0005634532 ceph-mon[75825]: from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' 
Mar  1 04:44:13 np0005634532 python3[98158]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 437b1e74-f995-5d64-af1d-257ce01d77ab -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config dump -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Mar  1 04:44:13 np0005634532 podman[98159]: 2026-03-01 09:44:13.595847973 +0000 UTC m=+0.058232338 container create 25195885b8406693c9b137c83ddb73ca7b64b57c57aaa8edbddf6bfdada06278 (image=quay.io/ceph/ceph:v19, name=heuristic_bell, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Mar  1 04:44:13 np0005634532 systemd[1]: Started libpod-conmon-25195885b8406693c9b137c83ddb73ca7b64b57c57aaa8edbddf6bfdada06278.scope.
Mar  1 04:44:13 np0005634532 systemd[1]: Started libcrun container.
Mar  1 04:44:13 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68e1ee0b072f6b503366926a58aa67a266443f952e73fbc717fb3e4752a9eb98/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Mar  1 04:44:13 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68e1ee0b072f6b503366926a58aa67a266443f952e73fbc717fb3e4752a9eb98/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Mar  1 04:44:13 np0005634532 podman[98159]: 2026-03-01 09:44:13.573321503 +0000 UTC m=+0.035705868 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Mar  1 04:44:13 np0005634532 podman[98159]: 2026-03-01 09:44:13.679922031 +0000 UTC m=+0.142306396 container init 25195885b8406693c9b137c83ddb73ca7b64b57c57aaa8edbddf6bfdada06278 (image=quay.io/ceph/ceph:v19, name=heuristic_bell, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Mar  1 04:44:13 np0005634532 podman[98159]: 2026-03-01 09:44:13.684764152 +0000 UTC m=+0.147148527 container start 25195885b8406693c9b137c83ddb73ca7b64b57c57aaa8edbddf6bfdada06278 (image=quay.io/ceph/ceph:v19, name=heuristic_bell, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Mar  1 04:44:13 np0005634532 podman[98159]: 2026-03-01 09:44:13.688572546 +0000 UTC m=+0.150956911 container attach 25195885b8406693c9b137c83ddb73ca7b64b57c57aaa8edbddf6bfdada06278 (image=quay.io/ceph/ceph:v19, name=heuristic_bell, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Mar  1 04:44:14 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0)
Mar  1 04:44:14 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/492823644' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Mar  1 04:44:14 np0005634532 heuristic_bell[98175]: 
Mar  1 04:44:14 np0005634532 systemd[1]: libpod-25195885b8406693c9b137c83ddb73ca7b64b57c57aaa8edbddf6bfdada06278.scope: Deactivated successfully.
Mar  1 04:44:14 np0005634532 heuristic_bell[98175]: [{"section":"global","name":"cluster_network","value":"172.20.0.0/24","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"container_image","value":"quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"global","name":"log_to_file","value":"true","level":"basic","can_update_at_runtime":true,"mask":""},{"section":"global","name":"mon_cluster_log_to_file","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"ms_bind_ipv4","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"ms_bind_ipv6","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"osd_pool_default_size","value":"1","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"public_network","value":"192.168.122.0/24","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_accepted_admin_roles","value":"ResellerAdmin, swiftoperator","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_accepted_roles","value":"member, Member, admin","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_domain","value":"default","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_password","value":"12345678","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_project","value":"service","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_user","value":"swift","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_api_version","value":"3","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_keystone_implicit_tenants","value":"true","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_url","value":"https://keystone-internal.openstack.svc:5000","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_verify_ssl","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attr_name_len","value":"128","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attr_size","value":"1024","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attrs_num_in_req","value":"90","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_s3_auth_use_keystone","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_account_in_url","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_enforce_content_length","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_versioning_enabled","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_trust_forwarded_https","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mon","name":"auth_allo
w_insecure_global_id_reclaim","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mon","name":"mon_warn_on_pool_no_redundancy","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mgr","name":"mgr/cephadm/container_init","value":"True","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/cephadm/migration_current","value":"7","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/cephadm/use_repo_digest","value":"false","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/ALERTMANAGER_API_HOST","value":"http://192.168.122.100:9093","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/GRAFANA_API_PASSWORD","value":"/home/grafana_password.yml","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/GRAFANA_API_URL","value":"http://192.168.122.100:3100","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/GRAFANA_API_USERNAME","value":"admin","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/PROMETHEUS_API_HOST","value":"http://192.168.122.100:9092","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/compute-0.ebwufc/server_addr","value":"192.168.122.100","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/compute-1.uyojxx/server_addr","value":"192.168.122.101","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/compute-2.dikzlj/server_addr","value":"192.168.122.102","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/server_port","value":"8443","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/ssl","value":"false","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/dashboard/ssl_server_port","value":"8443","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/orchestrator/orchestrator","value":"cephadm","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"osd","name":"osd_memory_target_autotune","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mds.cephfs","name":"mds_join_fs","value":"cephfs","level":"basic","can_update_at_runtime":true,"mask":""},{"section":"client.rgw.rgw.compute-0.dvtuyn","name":"rgw_frontends","value":"beast endpoint=192.168.122.100:8082","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"client.rgw.rgw.compute-1.wbcorv","name":"rgw_frontends","value":"beast endpoint=192.168.122.101:8082","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"client.rgw.rgw.compute-2.zizzzn","name":"rgw_frontends","value":"beast endpoint=192.168.122.102:8082","level":"basic","can_update_at_runtime":false,"mask":""}]
Mar  1 04:44:14 np0005634532 podman[98159]: 2026-03-01 09:44:14.034225242 +0000 UTC m=+0.496609597 container died 25195885b8406693c9b137c83ddb73ca7b64b57c57aaa8edbddf6bfdada06278 (image=quay.io/ceph/ceph:v19, name=heuristic_bell, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Mar  1 04:44:14 np0005634532 systemd[1]: var-lib-containers-storage-overlay-68e1ee0b072f6b503366926a58aa67a266443f952e73fbc717fb3e4752a9eb98-merged.mount: Deactivated successfully.
Mar  1 04:44:14 np0005634532 podman[98159]: 2026-03-01 09:44:14.06956744 +0000 UTC m=+0.531951775 container remove 25195885b8406693c9b137c83ddb73ca7b64b57c57aaa8edbddf6bfdada06278 (image=quay.io/ceph/ceph:v19, name=heuristic_bell, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Mar  1 04:44:14 np0005634532 systemd[1]: libpod-conmon-25195885b8406693c9b137c83ddb73ca7b64b57c57aaa8edbddf6bfdada06278.scope: Deactivated successfully.
Mar  1 04:44:14 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v18: 198 pgs: 198 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 1.9 KiB/s wr, 5 op/s
Mar  1 04:44:14 np0005634532 ceph-osd[84309]: log_channel(cluster) log [DBG] : 7.6 scrub starts
Mar  1 04:44:14 np0005634532 ceph-osd[84309]: log_channel(cluster) log [DBG] : 7.6 scrub ok
Mar  1 04:44:14 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).mds e9 new map
Mar  1 04:44:14 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).mds e9 print_map#012e9#012btime 2026-03-01T09:44:14:490390+0000#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0118#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112026-03-01T09:43:53.381630+0000#012modified#0112026-03-01T09:44:11.402463+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2,11=minor log segments,12=quiesce subvolumes}#012max_mds#0111#012in#0110#012up#011{0=24184}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0111#012qdb_cluster#011leader: 24184 members: 24184#012[mds.cephfs.compute-2.gumopp{0:24184} state up:active seq 3 join_fscid=1 addr [v2:192.168.122.102:6804/1831383419,v1:192.168.122.102:6805/1831383419] compat {c=[1],r=[1],i=[1fff]}]#012 #012 #012Standby daemons:#012 #012[mds.cephfs.compute-0.qvzeqa{-1:14547} state up:standby seq 2 join_fscid=1 addr [v2:192.168.122.100:6806/2292967636,v1:192.168.122.100:6807/2292967636] compat {c=[1],r=[1],i=[1fff]}]#012[mds.cephfs.compute-1.okjbfn{-1:24173} state up:standby seq 2 join_fscid=1 addr [v2:192.168.122.101:6804/1043839304,v1:192.168.122.101:6805/1043839304] compat {c=[1],r=[1],i=[1fff]}]
Mar  1 04:44:14 np0005634532 ceph-mon[75825]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.101:6804/1043839304,v1:192.168.122.101:6805/1043839304] up:standby
Mar  1 04:44:14 np0005634532 ceph-mon[75825]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-2.gumopp=up:active} 2 up:standby
Mar  1 04:44:14 np0005634532 python3[98237]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 437b1e74-f995-5d64-af1d-257ce01d77ab -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd get-require-min-compat-client _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Mar  1 04:44:15 np0005634532 podman[98238]: 2026-03-01 09:44:15.030159032 +0000 UTC m=+0.044475766 container create 54b9981057eabb42f4ab57e26f418df1e383833352320e035aade54e356efb7e (image=quay.io/ceph/ceph:v19, name=objective_banzai, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Mar  1 04:44:15 np0005634532 systemd[1]: Started libpod-conmon-54b9981057eabb42f4ab57e26f418df1e383833352320e035aade54e356efb7e.scope.
Mar  1 04:44:15 np0005634532 systemd[1]: Started libcrun container.
Mar  1 04:44:15 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/03df33248cef3796afd1c950b0ccf403855cafc00ee24ddb84d2dec3bb8fe519/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Mar  1 04:44:15 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/03df33248cef3796afd1c950b0ccf403855cafc00ee24ddb84d2dec3bb8fe519/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Mar  1 04:44:15 np0005634532 podman[98238]: 2026-03-01 09:44:15.106314734 +0000 UTC m=+0.120631488 container init 54b9981057eabb42f4ab57e26f418df1e383833352320e035aade54e356efb7e (image=quay.io/ceph/ceph:v19, name=objective_banzai, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325)
Mar  1 04:44:15 np0005634532 podman[98238]: 2026-03-01 09:44:15.013352675 +0000 UTC m=+0.027669429 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Mar  1 04:44:15 np0005634532 podman[98238]: 2026-03-01 09:44:15.111345589 +0000 UTC m=+0.125662323 container start 54b9981057eabb42f4ab57e26f418df1e383833352320e035aade54e356efb7e (image=quay.io/ceph/ceph:v19, name=objective_banzai, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Mar  1 04:44:15 np0005634532 podman[98238]: 2026-03-01 09:44:15.114574039 +0000 UTC m=+0.128890813 container attach 54b9981057eabb42f4ab57e26f418df1e383833352320e035aade54e356efb7e (image=quay.io/ceph/ceph:v19, name=objective_banzai, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Mar  1 04:44:15 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e48 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Mar  1 04:44:15 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"} v 0)
Mar  1 04:44:15 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]: dispatch
Mar  1 04:44:15 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' cmd='[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]': finished
Mar  1 04:44:15 np0005634532 ceph-mgr[76134]: [cephadm INFO cephadm.services.nfs] Rados config object exists: conf-nfs.cephfs
Mar  1 04:44:15 np0005634532 ceph-mgr[76134]: log_channel(cephadm) log [INF] : Rados config object exists: conf-nfs.cephfs
Mar  1 04:44:15 np0005634532 ceph-mgr[76134]: [cephadm INFO cephadm.services.nfs] Creating key for client.nfs.cephfs.1.0.compute-2.dqiiuk-rgw
Mar  1 04:44:15 np0005634532 ceph-mgr[76134]: log_channel(cephadm) log [INF] : Creating key for client.nfs.cephfs.1.0.compute-2.dqiiuk-rgw
Mar  1 04:44:15 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.dqiiuk-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]} v 0)
Mar  1 04:44:15 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.dqiiuk-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Mar  1 04:44:15 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.dqiiuk-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]': finished
Mar  1 04:44:15 np0005634532 ceph-mgr[76134]: [cephadm WARNING cephadm.services.nfs] Bind address in nfs.cephfs.1.0.compute-2.dqiiuk's ganesha conf is defaulting to empty
Mar  1 04:44:15 np0005634532 ceph-mgr[76134]: log_channel(cephadm) log [WRN] : Bind address in nfs.cephfs.1.0.compute-2.dqiiuk's ganesha conf is defaulting to empty
Mar  1 04:44:15 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Mar  1 04:44:15 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Mar  1 04:44:15 np0005634532 ceph-mgr[76134]: [cephadm INFO cephadm.serve] Deploying daemon nfs.cephfs.1.0.compute-2.dqiiuk on compute-2
Mar  1 04:44:15 np0005634532 ceph-mgr[76134]: log_channel(cephadm) log [INF] : Deploying daemon nfs.cephfs.1.0.compute-2.dqiiuk on compute-2
Mar  1 04:44:15 np0005634532 ceph-osd[84309]: log_channel(cluster) log [DBG] : 7.2 scrub starts
Mar  1 04:44:15 np0005634532 ceph-osd[84309]: log_channel(cluster) log [DBG] : 7.2 scrub ok
Mar  1 04:44:15 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd get-require-min-compat-client"} v 0)
Mar  1 04:44:15 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/139826756' entity='client.admin' cmd=[{"prefix": "osd get-require-min-compat-client"}]: dispatch
Mar  1 04:44:15 np0005634532 objective_banzai[98253]: mimic
Mar  1 04:44:15 np0005634532 systemd[1]: libpod-54b9981057eabb42f4ab57e26f418df1e383833352320e035aade54e356efb7e.scope: Deactivated successfully.
Mar  1 04:44:15 np0005634532 podman[98238]: 2026-03-01 09:44:15.483839272 +0000 UTC m=+0.498156006 container died 54b9981057eabb42f4ab57e26f418df1e383833352320e035aade54e356efb7e (image=quay.io/ceph/ceph:v19, name=objective_banzai, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Mar  1 04:44:15 np0005634532 systemd[1]: var-lib-containers-storage-overlay-03df33248cef3796afd1c950b0ccf403855cafc00ee24ddb84d2dec3bb8fe519-merged.mount: Deactivated successfully.
Mar  1 04:44:15 np0005634532 ceph-mon[75825]: from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]: dispatch
Mar  1 04:44:15 np0005634532 ceph-mon[75825]: from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' cmd='[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]': finished
Mar  1 04:44:15 np0005634532 ceph-mon[75825]: from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.dqiiuk-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Mar  1 04:44:15 np0005634532 ceph-mon[75825]: from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.1.0.compute-2.dqiiuk-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]': finished
Mar  1 04:44:15 np0005634532 podman[98238]: 2026-03-01 09:44:15.529144927 +0000 UTC m=+0.543461701 container remove 54b9981057eabb42f4ab57e26f418df1e383833352320e035aade54e356efb7e (image=quay.io/ceph/ceph:v19, name=objective_banzai, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Mar  1 04:44:15 np0005634532 systemd[1]: libpod-conmon-54b9981057eabb42f4ab57e26f418df1e383833352320e035aade54e356efb7e.scope: Deactivated successfully.
Mar  1 04:44:16 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v19: 198 pgs: 198 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 1.9 KiB/s wr, 5 op/s
Mar  1 04:44:16 np0005634532 ceph-osd[84309]: log_channel(cluster) log [DBG] : 7.3 scrub starts
Mar  1 04:44:16 np0005634532 ceph-osd[84309]: log_channel(cluster) log [DBG] : 7.3 scrub ok
Mar  1 04:44:16 np0005634532 python3[98334]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v19 --fsid 437b1e74-f995-5d64-af1d-257ce01d77ab -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   versions -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Mar  1 04:44:16 np0005634532 ceph-mon[75825]: Rados config object exists: conf-nfs.cephfs
Mar  1 04:44:16 np0005634532 ceph-mon[75825]: Creating key for client.nfs.cephfs.1.0.compute-2.dqiiuk-rgw
Mar  1 04:44:16 np0005634532 ceph-mon[75825]: Bind address in nfs.cephfs.1.0.compute-2.dqiiuk's ganesha conf is defaulting to empty
Mar  1 04:44:16 np0005634532 ceph-mon[75825]: Deploying daemon nfs.cephfs.1.0.compute-2.dqiiuk on compute-2
Mar  1 04:44:16 np0005634532 podman[98335]: 2026-03-01 09:44:16.533899325 +0000 UTC m=+0.054492675 container create 6b2e1f39f5edd2eb1f2ee157ee6e7d4372c6d2a15b0260bc8287ebd4aa2f5002 (image=quay.io/ceph/ceph:v19, name=amazing_shamir, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Mar  1 04:44:16 np0005634532 systemd[1]: Started libpod-conmon-6b2e1f39f5edd2eb1f2ee157ee6e7d4372c6d2a15b0260bc8287ebd4aa2f5002.scope.
Mar  1 04:44:16 np0005634532 systemd[1]: Started libcrun container.
Mar  1 04:44:16 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d2ca61bac6f4a735843d2c5be139e23c8272e0c5f6f6af7d11ea5f4f67cb5dec/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Mar  1 04:44:16 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d2ca61bac6f4a735843d2c5be139e23c8272e0c5f6f6af7d11ea5f4f67cb5dec/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Mar  1 04:44:16 np0005634532 podman[98335]: 2026-03-01 09:44:16.513255112 +0000 UTC m=+0.033848272 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Mar  1 04:44:16 np0005634532 podman[98335]: 2026-03-01 09:44:16.612120618 +0000 UTC m=+0.132713838 container init 6b2e1f39f5edd2eb1f2ee157ee6e7d4372c6d2a15b0260bc8287ebd4aa2f5002 (image=quay.io/ceph/ceph:v19, name=amazing_shamir, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Mar  1 04:44:16 np0005634532 podman[98335]: 2026-03-01 09:44:16.617481671 +0000 UTC m=+0.138074801 container start 6b2e1f39f5edd2eb1f2ee157ee6e7d4372c6d2a15b0260bc8287ebd4aa2f5002 (image=quay.io/ceph/ceph:v19, name=amazing_shamir, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Mar  1 04:44:16 np0005634532 podman[98335]: 2026-03-01 09:44:16.620532877 +0000 UTC m=+0.141126057 container attach 6b2e1f39f5edd2eb1f2ee157ee6e7d4372c6d2a15b0260bc8287ebd4aa2f5002 (image=quay.io/ceph/ceph:v19, name=amazing_shamir, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325)
Mar  1 04:44:16 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Mar  1 04:44:16 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' 
Mar  1 04:44:16 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Mar  1 04:44:16 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' 
Mar  1 04:44:16 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Mar  1 04:44:16 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' 
Mar  1 04:44:16 np0005634532 ceph-mgr[76134]: [cephadm INFO cephadm.services.nfs] Creating key for client.nfs.cephfs.2.0.compute-0.ljexyw
Mar  1 04:44:16 np0005634532 ceph-mgr[76134]: log_channel(cephadm) log [INF] : Creating key for client.nfs.cephfs.2.0.compute-0.ljexyw
Mar  1 04:44:16 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.ljexyw", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]} v 0)
Mar  1 04:44:16 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.ljexyw", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]: dispatch
Mar  1 04:44:16 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.ljexyw", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]': finished
Mar  1 04:44:16 np0005634532 ceph-mgr[76134]: [cephadm INFO root] Ensuring nfs.cephfs.2 is in the ganesha grace table
Mar  1 04:44:16 np0005634532 ceph-mgr[76134]: log_channel(cephadm) log [INF] : Ensuring nfs.cephfs.2 is in the ganesha grace table
Mar  1 04:44:16 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]} v 0)
Mar  1 04:44:16 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]: dispatch
Mar  1 04:44:16 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' cmd='[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]': finished
Mar  1 04:44:16 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Mar  1 04:44:16 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Mar  1 04:44:17 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "versions", "format": "json"} v 0)
Mar  1 04:44:17 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1853768717' entity='client.admin' cmd=[{"prefix": "versions", "format": "json"}]: dispatch
Mar  1 04:44:17 np0005634532 amazing_shamir[98351]: 
Mar  1 04:44:17 np0005634532 amazing_shamir[98351]: {"mon":{"ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)":3},"mgr":{"ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)":3},"osd":{"ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)":3},"mds":{"ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)":3},"rgw":{"ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)":3},"overall":{"ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable)":15}}
Mar  1 04:44:17 np0005634532 systemd[1]: libpod-6b2e1f39f5edd2eb1f2ee157ee6e7d4372c6d2a15b0260bc8287ebd4aa2f5002.scope: Deactivated successfully.
Mar  1 04:44:17 np0005634532 podman[98335]: 2026-03-01 09:44:17.103978156 +0000 UTC m=+0.624571276 container died 6b2e1f39f5edd2eb1f2ee157ee6e7d4372c6d2a15b0260bc8287ebd4aa2f5002 (image=quay.io/ceph/ceph:v19, name=amazing_shamir, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Mar  1 04:44:17 np0005634532 systemd[1]: var-lib-containers-storage-overlay-d2ca61bac6f4a735843d2c5be139e23c8272e0c5f6f6af7d11ea5f4f67cb5dec-merged.mount: Deactivated successfully.
Mar  1 04:44:17 np0005634532 podman[98335]: 2026-03-01 09:44:17.145531388 +0000 UTC m=+0.666124508 container remove 6b2e1f39f5edd2eb1f2ee157ee6e7d4372c6d2a15b0260bc8287ebd4aa2f5002 (image=quay.io/ceph/ceph:v19, name=amazing_shamir, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Mar  1 04:44:17 np0005634532 systemd[1]: libpod-conmon-6b2e1f39f5edd2eb1f2ee157ee6e7d4372c6d2a15b0260bc8287ebd4aa2f5002.scope: Deactivated successfully.
Mar  1 04:44:17 np0005634532 ceph-osd[84309]: log_channel(cluster) log [DBG] : 7.4 scrub starts
Mar  1 04:44:17 np0005634532 ceph-osd[84309]: log_channel(cluster) log [DBG] : 7.4 scrub ok
Mar  1 04:44:17 np0005634532 ceph-mon[75825]: from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' 
Mar  1 04:44:17 np0005634532 ceph-mon[75825]: from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' 
Mar  1 04:44:17 np0005634532 ceph-mon[75825]: from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' 
Mar  1 04:44:17 np0005634532 ceph-mon[75825]: from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.ljexyw", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]: dispatch
Mar  1 04:44:17 np0005634532 ceph-mon[75825]: from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.ljexyw", "caps": ["mon", "allow r", "osd", "allow rw pool=.nfs namespace=cephfs"]}]': finished
Mar  1 04:44:17 np0005634532 ceph-mon[75825]: from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]: dispatch
Mar  1 04:44:17 np0005634532 ceph-mon[75825]: from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' cmd='[{"prefix": "auth get-or-create", "entity": "client.mgr.nfs.grace.nfs.cephfs", "caps": ["mon", "allow r", "osd", "allow rwx pool .nfs"]}]': finished
Mar  1 04:44:18 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v20: 198 pgs: 198 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 1.9 KiB/s wr, 5 op/s
Mar  1 04:44:18 np0005634532 ceph-osd[84309]: log_channel(cluster) log [DBG] : 7.e scrub starts
Mar  1 04:44:18 np0005634532 ceph-osd[84309]: log_channel(cluster) log [DBG] : 7.e scrub ok
Mar  1 04:44:18 np0005634532 ceph-mon[75825]: Creating key for client.nfs.cephfs.2.0.compute-0.ljexyw
Mar  1 04:44:18 np0005634532 ceph-mon[75825]: Ensuring nfs.cephfs.2 is in the ganesha grace table
Mar  1 04:44:19 np0005634532 ceph-osd[84309]: log_channel(cluster) log [DBG] : 7.f scrub starts
Mar  1 04:44:19 np0005634532 ceph-osd[84309]: log_channel(cluster) log [DBG] : 7.f scrub ok
Mar  1 04:44:19 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"} v 0)
Mar  1 04:44:19 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]: dispatch
Mar  1 04:44:19 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' cmd='[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]': finished
Mar  1 04:44:20 np0005634532 ceph-mgr[76134]: [cephadm INFO cephadm.services.nfs] Rados config object exists: conf-nfs.cephfs
Mar  1 04:44:20 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.ljexyw-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]} v 0)
Mar  1 04:44:20 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.ljexyw-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Mar  1 04:44:20 np0005634532 ceph-mgr[76134]: log_channel(cephadm) log [INF] : Rados config object exists: conf-nfs.cephfs
Mar  1 04:44:20 np0005634532 ceph-mgr[76134]: [cephadm INFO cephadm.services.nfs] Creating key for client.nfs.cephfs.2.0.compute-0.ljexyw-rgw
Mar  1 04:44:20 np0005634532 ceph-mgr[76134]: log_channel(cephadm) log [INF] : Creating key for client.nfs.cephfs.2.0.compute-0.ljexyw-rgw
Mar  1 04:44:20 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.ljexyw-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]': finished
Mar  1 04:44:20 np0005634532 ceph-mgr[76134]: [cephadm WARNING cephadm.services.nfs] Bind address in nfs.cephfs.2.0.compute-0.ljexyw's ganesha conf is defaulting to empty
Mar  1 04:44:20 np0005634532 ceph-mgr[76134]: log_channel(cephadm) log [WRN] : Bind address in nfs.cephfs.2.0.compute-0.ljexyw's ganesha conf is defaulting to empty
Mar  1 04:44:20 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Mar  1 04:44:20 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Mar  1 04:44:20 np0005634532 ceph-mgr[76134]: [cephadm INFO cephadm.serve] Deploying daemon nfs.cephfs.2.0.compute-0.ljexyw on compute-0
Mar  1 04:44:20 np0005634532 ceph-mgr[76134]: log_channel(cephadm) log [INF] : Deploying daemon nfs.cephfs.2.0.compute-0.ljexyw on compute-0
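Before the deploy, the mgr also dispatches "config generate-minimal-conf" (above) to obtain the stub ceph.conf shipped into the container. The same stub can be fetched by hand; a sketch assuming the ceph CLI is on PATH:

    import subprocess

    # Returns a minimal [global] section (fsid and mon_host) suitable for
    # embedding in a containerized daemon, per the mon command above.
    stub = subprocess.run(
        ["ceph", "config", "generate-minimal-conf"],
        check=True, capture_output=True, text=True,
    ).stdout
    print(stub)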
Mar  1 04:44:20 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e48 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Mar  1 04:44:20 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v21: 198 pgs: 198 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 2.6 KiB/s wr, 7 op/s
Mar  1 04:44:20 np0005634532 ceph-osd[84309]: log_channel(cluster) log [DBG] : 7.8 scrub starts
Mar  1 04:44:20 np0005634532 ceph-osd[84309]: log_channel(cluster) log [DBG] : 7.8 scrub ok
Mar  1 04:44:20 np0005634532 podman[98517]: 2026-03-01 09:44:20.498028155 +0000 UTC m=+0.040364024 container create e75ebc0c11199f07eef319f37bcc71b5cea0e24962d4fc20745d320458e179ce (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_chandrasekhar, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Mar  1 04:44:20 np0005634532 systemd[1]: Started libpod-conmon-e75ebc0c11199f07eef319f37bcc71b5cea0e24962d4fc20745d320458e179ce.scope.
Mar  1 04:44:20 np0005634532 systemd[1]: Started libcrun container.
Mar  1 04:44:20 np0005634532 podman[98517]: 2026-03-01 09:44:20.548904109 +0000 UTC m=+0.091240018 container init e75ebc0c11199f07eef319f37bcc71b5cea0e24962d4fc20745d320458e179ce (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_chandrasekhar, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid)
Mar  1 04:44:20 np0005634532 podman[98517]: 2026-03-01 09:44:20.553261787 +0000 UTC m=+0.095597666 container start e75ebc0c11199f07eef319f37bcc71b5cea0e24962d4fc20745d320458e179ce (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_chandrasekhar, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True)
Mar  1 04:44:20 np0005634532 affectionate_chandrasekhar[98533]: 167 167
Mar  1 04:44:20 np0005634532 podman[98517]: 2026-03-01 09:44:20.557025321 +0000 UTC m=+0.099361210 container attach e75ebc0c11199f07eef319f37bcc71b5cea0e24962d4fc20745d320458e179ce (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_chandrasekhar, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Mar  1 04:44:20 np0005634532 systemd[1]: libpod-e75ebc0c11199f07eef319f37bcc71b5cea0e24962d4fc20745d320458e179ce.scope: Deactivated successfully.
Mar  1 04:44:20 np0005634532 podman[98517]: 2026-03-01 09:44:20.558990209 +0000 UTC m=+0.101326078 container died e75ebc0c11199f07eef319f37bcc71b5cea0e24962d4fc20745d320458e179ce (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_chandrasekhar, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Mar  1 04:44:20 np0005634532 podman[98517]: 2026-03-01 09:44:20.481591307 +0000 UTC m=+0.023927226 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Mar  1 04:44:20 np0005634532 systemd[1]: var-lib-containers-storage-overlay-e9e0a114e43f912b3a97bba3f0dc3614b3949b440859427de5a81c97ef464caa-merged.mount: Deactivated successfully.
Mar  1 04:44:20 np0005634532 podman[98517]: 2026-03-01 09:44:20.594179173 +0000 UTC m=+0.136515062 container remove e75ebc0c11199f07eef319f37bcc71b5cea0e24962d4fc20745d320458e179ce (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_chandrasekhar, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Mar  1 04:44:20 np0005634532 systemd[1]: libpod-conmon-e75ebc0c11199f07eef319f37bcc71b5cea0e24962d4fc20745d320458e179ce.scope: Deactivated successfully.
Mar  1 04:44:20 np0005634532 systemd[1]: Reloading.
Mar  1 04:44:20 np0005634532 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Mar  1 04:44:20 np0005634532 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Mar  1 04:44:20 np0005634532 ceph-mon[75825]: from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]: dispatch
Mar  1 04:44:20 np0005634532 ceph-mon[75825]: from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' cmd='[{"prefix": "auth rm", "entity": "client.mgr.nfs.grace.nfs.cephfs"}]': finished
Mar  1 04:44:20 np0005634532 ceph-mon[75825]: Rados config object exists: conf-nfs.cephfs
Mar  1 04:44:20 np0005634532 ceph-mon[75825]: Creating key for client.nfs.cephfs.2.0.compute-0.ljexyw-rgw
Mar  1 04:44:20 np0005634532 ceph-mon[75825]: from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.ljexyw-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Mar  1 04:44:20 np0005634532 ceph-mon[75825]: from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' cmd='[{"prefix": "auth get-or-create", "entity": "client.nfs.cephfs.2.0.compute-0.ljexyw-rgw", "caps": ["mon", "allow r", "osd", "allow rwx tag rgw *=*"]}]': finished
Mar  1 04:44:20 np0005634532 ceph-mon[75825]: Bind address in nfs.cephfs.2.0.compute-0.ljexyw's ganesha conf is defaulting to empty
Mar  1 04:44:20 np0005634532 ceph-mon[75825]: Deploying daemon nfs.cephfs.2.0.compute-0.ljexyw on compute-0
Mar  1 04:44:20 np0005634532 systemd[1]: Reloading.
Mar  1 04:44:20 np0005634532 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Mar  1 04:44:20 np0005634532 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
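The paired "Reloading." entries bracket cephadm writing the generated unit for the new daemon and reloading systemd before starting it; the generator warnings are routine EL9 noise from the legacy SysV network script. Roughly, with the unit name taken from the log and the exact systemctl invocation an assumption:

    import subprocess

    fsid = "437b1e74-f995-5d64-af1d-257ce01d77ab"
    unit = f"ceph-{fsid}@nfs.cephfs.2.0.compute-0.ljexyw.service"

    subprocess.run(["systemctl", "daemon-reload"], check=True)   # the "Reloading." lines
    subprocess.run(["systemctl", "enable", "--now", unit], check=True)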
Mar  1 04:44:21 np0005634532 systemd[1]: Starting Ceph nfs.cephfs.2.0.compute-0.ljexyw for 437b1e74-f995-5d64-af1d-257ce01d77ab...
Mar  1 04:44:21 np0005634532 podman[98694]: 2026-03-01 09:44:21.411701091 +0000 UTC m=+0.046182258 container create ddbb100a053bd1c5872d5920a93f96a6167721638261082337a0485339967db0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Mar  1 04:44:21 np0005634532 ceph-osd[84309]: log_channel(cluster) log [DBG] : 7.9 scrub starts
Mar  1 04:44:21 np0005634532 ceph-osd[84309]: log_channel(cluster) log [DBG] : 7.9 scrub ok
Mar  1 04:44:21 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66ba47b0fcac64a5f83edb16222d2fde1be569942a3302a74a1b384c737fed06/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Mar  1 04:44:21 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66ba47b0fcac64a5f83edb16222d2fde1be569942a3302a74a1b384c737fed06/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Mar  1 04:44:21 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66ba47b0fcac64a5f83edb16222d2fde1be569942a3302a74a1b384c737fed06/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Mar  1 04:44:21 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66ba47b0fcac64a5f83edb16222d2fde1be569942a3302a74a1b384c737fed06/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.2.0.compute-0.ljexyw-rgw/keyring supports timestamps until 2038 (0x7fffffff)
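The four kernel notices fire as podman bind-mounts the daemon's config files from an XFS volume created without the bigtime feature, so inode timestamps top out in 2038 (0x7fffffff, i.e. the 32-bit epoch limit); they are informational. Whether a given filesystem has bigtime can be checked from the host, e.g. (mount point is illustrative):

    import subprocess

    # xfs_info prints a "bigtime=0|1" flag in its meta-data section.
    out = subprocess.run(["xfs_info", "/var/lib/containers"],
                         check=True, capture_output=True, text=True).stdout
    print([line for line in out.splitlines() if "bigtime" in line])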
Mar  1 04:44:21 np0005634532 podman[98694]: 2026-03-01 09:44:21.480451249 +0000 UTC m=+0.114932446 container init ddbb100a053bd1c5872d5920a93f96a6167721638261082337a0485339967db0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Mar  1 04:44:21 np0005634532 podman[98694]: 2026-03-01 09:44:21.484276274 +0000 UTC m=+0.118757441 container start ddbb100a053bd1c5872d5920a93f96a6167721638261082337a0485339967db0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Mar  1 04:44:21 np0005634532 bash[98694]: ddbb100a053bd1c5872d5920a93f96a6167721638261082337a0485339967db0
Mar  1 04:44:21 np0005634532 podman[98694]: 2026-03-01 09:44:21.396404741 +0000 UTC m=+0.030885918 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Mar  1 04:44:21 np0005634532 systemd[1]: Started Ceph nfs.cephfs.2.0.compute-0.ljexyw for 437b1e74-f995-5d64-af1d-257ce01d77ab.
Mar  1 04:44:21 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[98708]: 01/03/2026 09:44:21 : epoch 69a40a75 : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Mar  1 04:44:21 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[98708]: 01/03/2026 09:44:21 : epoch 69a40a75 : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Mar  1 04:44:21 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[98708]: 01/03/2026 09:44:21 : epoch 69a40a75 : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Mar  1 04:44:21 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[98708]: 01/03/2026 09:44:21 : epoch 69a40a75 : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
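monitoring_init above means this ganesha build exposes metrics on 0.0.0.0:9587. A quick probe from the host, assuming host networking and that the exporter serves Prometheus text format at /metrics (the path is an assumption):

    import urllib.request

    with urllib.request.urlopen("http://127.0.0.1:9587/metrics", timeout=5) as r:
        print(r.read().decode()[:400])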
Mar  1 04:44:21 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[98708]: 01/03/2026 09:44:21 : epoch 69a40a75 : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Mar  1 04:44:21 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[98708]: 01/03/2026 09:44:21 : epoch 69a40a75 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Mar  1 04:44:21 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Mar  1 04:44:21 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[98708]: 01/03/2026 09:44:21 : epoch 69a40a75 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Mar  1 04:44:21 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' 
Mar  1 04:44:21 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Mar  1 04:44:21 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' 
Mar  1 04:44:21 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Mar  1 04:44:21 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[98708]: 01/03/2026 09:44:21 : epoch 69a40a75 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Mar  1 04:44:21 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' 
Mar  1 04:44:21 np0005634532 ceph-mgr[76134]: [progress INFO root] complete: finished ev 97066161-7796-40df-860c-a57d5412a9b4 (Updating nfs.cephfs deployment (+3 -> 3))
Mar  1 04:44:21 np0005634532 ceph-mgr[76134]: [progress INFO root] Completed event 97066161-7796-40df-860c-a57d5412a9b4 (Updating nfs.cephfs deployment (+3 -> 3)) in 12 seconds
Mar  1 04:44:21 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Mar  1 04:44:21 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' 
Mar  1 04:44:21 np0005634532 ceph-mgr[76134]: [progress INFO root] update: starting ev ea261173-e0d6-47fb-b30f-e374945e61fd (Updating ingress.nfs.cephfs deployment (+6 -> 6))
Mar  1 04:44:21 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ingress.nfs.cephfs/monitor_password}] v 0)
Mar  1 04:44:21 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' 
Mar  1 04:44:21 np0005634532 ceph-mgr[76134]: [cephadm INFO cephadm.serve] Deploying daemon haproxy.nfs.cephfs.compute-1.jhikly on compute-1
Mar  1 04:44:21 np0005634532 ceph-mgr[76134]: log_channel(cephadm) log [INF] : Deploying daemon haproxy.nfs.cephfs.compute-1.jhikly on compute-1
Mar  1 04:44:22 np0005634532 ceph-mon[75825]: from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' 
Mar  1 04:44:22 np0005634532 ceph-mon[75825]: from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' 
Mar  1 04:44:22 np0005634532 ceph-mon[75825]: from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' 
Mar  1 04:44:22 np0005634532 ceph-mon[75825]: from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' 
Mar  1 04:44:22 np0005634532 ceph-mon[75825]: from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' 
Mar  1 04:44:22 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v22: 198 pgs: 198 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 1.4 KiB/s wr, 3 op/s
Mar  1 04:44:22 np0005634532 ceph-osd[84309]: log_channel(cluster) log [DBG] : 7.b scrub starts
Mar  1 04:44:22 np0005634532 ceph-osd[84309]: log_channel(cluster) log [DBG] : 7.b scrub ok
Mar  1 04:44:22 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Mar  1 04:44:22 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Mar  1 04:44:22 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Mar  1 04:44:22 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Mar  1 04:44:22 np0005634532 ceph-mgr[76134]: [progress INFO root] Writing back 14 completed events
Mar  1 04:44:22 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Mar  1 04:44:22 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Mar  1 04:44:22 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Mar  1 04:44:22 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' 
Mar  1 04:44:22 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[98708]: 01/03/2026 09:44:22 : epoch 69a40a75 : compute-0 : ganesha.nfsd-2[main] rados_kv_traverse :CLIENT ID :EVENT :Failed to list kv ret=-2
Mar  1 04:44:22 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[98708]: 01/03/2026 09:44:22 : epoch 69a40a75 : compute-0 : ganesha.nfsd-2[main] rados_cluster_read_clids :CLIENT ID :EVENT :Failed to traverse recovery db: -2
Mar  1 04:44:22 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[98708]: 01/03/2026 09:44:22 : epoch 69a40a75 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Mar  1 04:44:22 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[98708]: 01/03/2026 09:44:22 : epoch 69a40a75 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Mar  1 04:44:22 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[98708]: 01/03/2026 09:44:22 : epoch 69a40a75 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Mar  1 04:44:22 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[98708]: 01/03/2026 09:44:22 : epoch 69a40a75 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Mar  1 04:44:22 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[98708]: 01/03/2026 09:44:22 : epoch 69a40a75 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Mar  1 04:44:22 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[98708]: 01/03/2026 09:44:22 : epoch 69a40a75 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Mar  1 04:44:22 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[98708]: 01/03/2026 09:44:22 : epoch 69a40a75 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Mar  1 04:44:22 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[98708]: 01/03/2026 09:44:22 : epoch 69a40a75 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Mar  1 04:44:22 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[98708]: 01/03/2026 09:44:22 : epoch 69a40a75 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Mar  1 04:44:22 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[98708]: 01/03/2026 09:44:22 : epoch 69a40a75 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Mar  1 04:44:22 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[98708]: 01/03/2026 09:44:22 : epoch 69a40a75 : compute-0 : ganesha.nfsd-2[main] rados_cluster_end_grace :CLIENT ID :EVENT :Failed to remove rec-0000000000000003:nfs.cephfs.2: -2
Mar  1 04:44:22 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[98708]: 01/03/2026 09:44:22 : epoch 69a40a75 : compute-0 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Mar  1 04:44:22 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[98708]: 01/03/2026 09:44:22 : epoch 69a40a75 : compute-0 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Mar  1 04:44:22 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[98708]: 01/03/2026 09:44:22 : epoch 69a40a75 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Mar  1 04:44:22 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[98708]: 01/03/2026 09:44:22 : epoch 69a40a75 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Mar  1 04:44:22 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[98708]: 01/03/2026 09:44:22 : epoch 69a40a75 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Mar  1 04:44:22 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[98708]: 01/03/2026 09:44:22 : epoch 69a40a75 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Mar  1 04:44:22 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[98708]: 01/03/2026 09:44:22 : epoch 69a40a75 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Mar  1 04:44:22 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[98708]: 01/03/2026 09:44:22 : epoch 69a40a75 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Mar  1 04:44:22 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[98708]: 01/03/2026 09:44:22 : epoch 69a40a75 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Mar  1 04:44:22 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[98708]: 01/03/2026 09:44:22 : epoch 69a40a75 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Mar  1 04:44:22 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[98708]: 01/03/2026 09:44:22 : epoch 69a40a75 : compute-0 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Mar  1 04:44:22 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[98708]: 01/03/2026 09:44:22 : epoch 69a40a75 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
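The DBUS CRITs are expected in this deployment: ganesha's admin interface wants the system bus, and the container has no /run/dbus/system_bus_socket, so registration fails and the dbus service thread exits a few lines below; the daemon carries on without it. A trivial check from inside the container:

    import os

    # Absent socket -> dbus_bus_get fails exactly as in the CRIT above.
    print(os.path.exists("/run/dbus/system_bus_socket"))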
Mar  1 04:44:22 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[98708]: 01/03/2026 09:44:22 : epoch 69a40a75 : compute-0 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Mar  1 04:44:22 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[98708]: 01/03/2026 09:44:22 : epoch 69a40a75 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Mar  1 04:44:22 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[98708]: 01/03/2026 09:44:22 : epoch 69a40a75 : compute-0 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Mar  1 04:44:22 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[98708]: 01/03/2026 09:44:22 : epoch 69a40a75 : compute-0 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Mar  1 04:44:22 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[98708]: 01/03/2026 09:44:22 : epoch 69a40a75 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
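Likewise the Kerberos chain (no default realm, no usable /etc/krb5.keytab entry) only affects NFSv4 callback credentials; with no krb5 configured for this cluster the daemon proceeds without them. What the container's keytab actually holds can be listed with:

    import subprocess

    # Non-zero exit simply means the keytab is missing or empty.
    subprocess.run(["klist", "-k", "/etc/krb5.keytab"])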
Mar  1 04:44:22 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[98708]: 01/03/2026 09:44:22 : epoch 69a40a75 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Mar  1 04:44:22 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[98708]: 01/03/2026 09:44:22 : epoch 69a40a75 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Mar  1 04:44:22 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[98708]: 01/03/2026 09:44:22 : epoch 69a40a75 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Mar  1 04:44:22 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[98708]: 01/03/2026 09:44:22 : epoch 69a40a75 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Mar  1 04:44:22 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[98708]: 01/03/2026 09:44:22 : epoch 69a40a75 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Mar  1 04:44:22 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[98708]: 01/03/2026 09:44:22 : epoch 69a40a75 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Mar  1 04:44:22 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[98708]: 01/03/2026 09:44:22 : epoch 69a40a75 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Mar  1 04:44:22 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[98708]: 01/03/2026 09:44:22 : epoch 69a40a75 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Mar  1 04:44:22 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[98708]: 01/03/2026 09:44:22 : epoch 69a40a75 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Mar  1 04:44:22 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[98708]: 01/03/2026 09:44:22 : epoch 69a40a75 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
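Startup summary: the -2 (ENOENT) results from rados_kv_traverse and rados_cluster_end_grace just mean there were no prior client recovery records for nfs.cephfs.2, so the 90-second grace period lifts immediately and the server initializes; the "No export entries" warning should clear once exports are created via the RADOS config objects. The shared grace table kept by the rados_cluster backend can be inspected with the ganesha-rados-grace tool; a sketch, with pool and namespace inferred from the entries above and flag spelling per ganesha-rados-grace(8):

    import subprocess

    subprocess.run(
        ["ganesha-rados-grace", "--pool", ".nfs", "--ns", "cephfs", "dump"],
        check=True,
    )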
Mar  1 04:44:23 np0005634532 ceph-mon[75825]: Deploying daemon haproxy.nfs.cephfs.compute-1.jhikly on compute-1
Mar  1 04:44:23 np0005634532 ceph-mon[75825]: from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' 
Mar  1 04:44:23 np0005634532 ceph-osd[84309]: log_channel(cluster) log [DBG] : 7.13 scrub starts
Mar  1 04:44:23 np0005634532 ceph-osd[84309]: log_channel(cluster) log [DBG] : 7.13 scrub ok
Mar  1 04:44:24 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v23: 198 pgs: 198 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 6.1 KiB/s rd, 2.4 KiB/s wr, 8 op/s
Mar  1 04:44:24 np0005634532 ceph-osd[84309]: log_channel(cluster) log [DBG] : 7.10 scrub starts
Mar  1 04:44:24 np0005634532 ceph-osd[84309]: log_channel(cluster) log [DBG] : 7.10 scrub ok
Mar  1 04:44:25 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e48 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
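_set_new_cache_sizes is the mon's periodic cache autotuner repartitioning its memory target; the byte counts it reports are round binary sizes:

    cache_size, inc_full, kv = 1020054731, 348127232, 322961408
    print(round(cache_size / 2**20))        # 973 MiB total cache budget
    print(inc_full // 2**20, kv // 2**20)   # 332 MiB inc/full alloc, 308 MiB for RocksDB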
Mar  1 04:44:25 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Mar  1 04:44:25 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' 
Mar  1 04:44:25 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Mar  1 04:44:25 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' 
Mar  1 04:44:25 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.nfs.cephfs}] v 0)
Mar  1 04:44:25 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' 
Mar  1 04:44:25 np0005634532 ceph-mgr[76134]: [cephadm INFO cephadm.serve] Deploying daemon haproxy.nfs.cephfs.compute-0.wdbjdw on compute-0
Mar  1 04:44:25 np0005634532 ceph-mgr[76134]: log_channel(cephadm) log [INF] : Deploying daemon haproxy.nfs.cephfs.compute-0.wdbjdw on compute-0
Mar  1 04:44:26 np0005634532 ceph-mon[75825]: from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' 
Mar  1 04:44:26 np0005634532 ceph-mon[75825]: from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' 
Mar  1 04:44:26 np0005634532 ceph-mon[75825]: from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' 
Mar  1 04:44:26 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v24: 198 pgs: 198 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 5.2 KiB/s rd, 1.7 KiB/s wr, 6 op/s
Mar  1 04:44:27 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[98708]: 01/03/2026 09:44:27 : epoch 69a40a75 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4524000df0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:44:27 np0005634532 ceph-mon[75825]: Deploying daemon haproxy.nfs.cephfs.compute-0.wdbjdw on compute-0
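The recurring svc_vc_recv EVENTs are ganesha (ntirpc) dropping inbound connections that do not begin with a HAProxy PROXY-protocol preamble; since cephadm fronts this ganesha with the ingress haproxy, it expects one, and bare TCP probes such as load-balancer health checks fail the header parse and are marked dead. For reference, a PROXY v1 preamble is a single CRLF-terminated line sent before the real protocol; the addresses and ports below are illustrative:

    import socket

    header = b"PROXY TCP4 192.168.122.100 192.168.122.2 40000 2049\r\n"
    with socket.create_connection(("192.168.122.2", 2049), timeout=5) as s:
        s.sendall(header)  # normal NFS/RPC traffic would follow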
Mar  1 04:44:28 np0005634532 podman[98857]: 2026-03-01 09:44:28.33266762 +0000 UTC m=+2.009483126 container create bdc5ec5b55126c3585cc1a8cba8261c1846a2db2929a70b1bc491671eb5085d0 (image=quay.io/ceph/haproxy:2.3, name=interesting_agnesi)
Mar  1 04:44:28 np0005634532 podman[98857]: 2026-03-01 09:44:28.313581286 +0000 UTC m=+1.990396802 image pull e85424b0d443f37ddd2dd8a3bb2ef6f18dd352b987723a921b64289023af2914 quay.io/ceph/haproxy:2.3
Mar  1 04:44:28 np0005634532 systemd[1]: Started libpod-conmon-bdc5ec5b55126c3585cc1a8cba8261c1846a2db2929a70b1bc491671eb5085d0.scope.
Mar  1 04:44:28 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v25: 198 pgs: 198 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 5.2 KiB/s rd, 1.7 KiB/s wr, 6 op/s
Mar  1 04:44:28 np0005634532 systemd[1]: Started libcrun container.
Mar  1 04:44:28 np0005634532 podman[98857]: 2026-03-01 09:44:28.401427448 +0000 UTC m=+2.078242994 container init bdc5ec5b55126c3585cc1a8cba8261c1846a2db2929a70b1bc491671eb5085d0 (image=quay.io/ceph/haproxy:2.3, name=interesting_agnesi)
Mar  1 04:44:28 np0005634532 podman[98857]: 2026-03-01 09:44:28.407266994 +0000 UTC m=+2.084082530 container start bdc5ec5b55126c3585cc1a8cba8261c1846a2db2929a70b1bc491671eb5085d0 (image=quay.io/ceph/haproxy:2.3, name=interesting_agnesi)
Mar  1 04:44:28 np0005634532 interesting_agnesi[98981]: 0 0
Mar  1 04:44:28 np0005634532 podman[98857]: 2026-03-01 09:44:28.41155355 +0000 UTC m=+2.088369056 container attach bdc5ec5b55126c3585cc1a8cba8261c1846a2db2929a70b1bc491671eb5085d0 (image=quay.io/ceph/haproxy:2.3, name=interesting_agnesi)
Mar  1 04:44:28 np0005634532 systemd[1]: libpod-bdc5ec5b55126c3585cc1a8cba8261c1846a2db2929a70b1bc491671eb5085d0.scope: Deactivated successfully.
Mar  1 04:44:28 np0005634532 podman[98857]: 2026-03-01 09:44:28.412319519 +0000 UTC m=+2.089135065 container died bdc5ec5b55126c3585cc1a8cba8261c1846a2db2929a70b1bc491671eb5085d0 (image=quay.io/ceph/haproxy:2.3, name=interesting_agnesi)
Mar  1 04:44:28 np0005634532 systemd[1]: var-lib-containers-storage-overlay-6efb568dbf2e23a75f983df28da7d3c9b0f76a1f333403821aa703d2fa710bc1-merged.mount: Deactivated successfully.
Mar  1 04:44:28 np0005634532 podman[98857]: 2026-03-01 09:44:28.456767863 +0000 UTC m=+2.133583369 container remove bdc5ec5b55126c3585cc1a8cba8261c1846a2db2929a70b1bc491671eb5085d0 (image=quay.io/ceph/haproxy:2.3, name=interesting_agnesi)
Mar  1 04:44:28 np0005634532 systemd[1]: libpod-conmon-bdc5ec5b55126c3585cc1a8cba8261c1846a2db2929a70b1bc491671eb5085d0.scope: Deactivated successfully.
Mar  1 04:44:28 np0005634532 systemd[1]: Reloading.
Mar  1 04:44:28 np0005634532 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Mar  1 04:44:28 np0005634532 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Mar  1 04:44:28 np0005634532 systemd[1]: Reloading.
Mar  1 04:44:28 np0005634532 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Mar  1 04:44:28 np0005634532 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Mar  1 04:44:29 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[98708]: 01/03/2026 09:44:29 : epoch 69a40a75 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f451c001c00 fd 37 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:44:29 np0005634532 systemd[1]: Starting Ceph haproxy.nfs.cephfs.compute-0.wdbjdw for 437b1e74-f995-5d64-af1d-257ce01d77ab...
Mar  1 04:44:29 np0005634532 podman[99141]: 2026-03-01 09:44:29.388907728 +0000 UTC m=+0.063604941 container create ae199754250e4490488a372482082794890684d286b81ae5dbba7ee71da86544 (image=quay.io/ceph/haproxy:2.3, name=ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-haproxy-nfs-cephfs-compute-0-wdbjdw)
Mar  1 04:44:29 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ed5fc1aeea0ff64fb98465b5c9afbd23ed8d4fb9cd83bf86acbed7fdf6cabd5/merged/var/lib/haproxy supports timestamps until 2038 (0x7fffffff)
Mar  1 04:44:29 np0005634532 podman[99141]: 2026-03-01 09:44:29.364213995 +0000 UTC m=+0.038911248 image pull e85424b0d443f37ddd2dd8a3bb2ef6f18dd352b987723a921b64289023af2914 quay.io/ceph/haproxy:2.3
Mar  1 04:44:29 np0005634532 podman[99141]: 2026-03-01 09:44:29.459371429 +0000 UTC m=+0.134068652 container init ae199754250e4490488a372482082794890684d286b81ae5dbba7ee71da86544 (image=quay.io/ceph/haproxy:2.3, name=ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-haproxy-nfs-cephfs-compute-0-wdbjdw)
Mar  1 04:44:29 np0005634532 podman[99141]: 2026-03-01 09:44:29.465545132 +0000 UTC m=+0.140242345 container start ae199754250e4490488a372482082794890684d286b81ae5dbba7ee71da86544 (image=quay.io/ceph/haproxy:2.3, name=ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-haproxy-nfs-cephfs-compute-0-wdbjdw)
Mar  1 04:44:29 np0005634532 bash[99141]: ae199754250e4490488a372482082794890684d286b81ae5dbba7ee71da86544
Mar  1 04:44:29 np0005634532 systemd[1]: Started Ceph haproxy.nfs.cephfs.compute-0.wdbjdw for 437b1e74-f995-5d64-af1d-257ce01d77ab.
Mar  1 04:44:29 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-haproxy-nfs-cephfs-compute-0-wdbjdw[99156]: [NOTICE] 059/094429 (2) : New worker #1 (4) forked
Mar  1 04:44:29 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-haproxy-nfs-cephfs-compute-0-wdbjdw[99156]: [WARNING] 059/094429 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 1ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
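haproxy starts with all ganesha backends under check and marks nfs.cephfs.0 DOWN at layer 4: the TCP connect itself was refused, meaning that backend daemon (on another host) is not listening yet; it should recover as the remaining nfs daemons deploy. haproxy's default check is just a connect, equivalent to (backend host and port are assumptions):

    import socket

    def l4_check(host, port, timeout=1.0):
        # "Layer4 connection problem ... Connection refused" == this fails.
        try:
            socket.create_connection((host, port), timeout=timeout).close()
            return True
        except OSError:
            return False

    print(l4_check("compute-1", 12049))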
Mar  1 04:44:29 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Mar  1 04:44:29 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' 
Mar  1 04:44:29 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Mar  1 04:44:29 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' 
Mar  1 04:44:29 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.nfs.cephfs}] v 0)
Mar  1 04:44:29 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' 
Mar  1 04:44:29 np0005634532 ceph-mgr[76134]: [cephadm INFO cephadm.serve] Deploying daemon haproxy.nfs.cephfs.compute-2.cicoar on compute-2
Mar  1 04:44:29 np0005634532 ceph-mgr[76134]: log_channel(cephadm) log [INF] : Deploying daemon haproxy.nfs.cephfs.compute-2.cicoar on compute-2
Mar  1 04:44:30 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e48 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Mar  1 04:44:30 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v26: 198 pgs: 198 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 5.2 KiB/s rd, 1.7 KiB/s wr, 6 op/s
Mar  1 04:44:30 np0005634532 ceph-mon[75825]: from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' 
Mar  1 04:44:30 np0005634532 ceph-mon[75825]: from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' 
Mar  1 04:44:30 np0005634532 ceph-mon[75825]: from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' 
Mar  1 04:44:30 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[98708]: 01/03/2026 09:44:30 : epoch 69a40a75 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4500000b60 fd 37 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:44:31 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[98708]: 01/03/2026 09:44:31 : epoch 69a40a75 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4508000fa0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:44:31 np0005634532 ceph-mon[75825]: Deploying daemon haproxy.nfs.cephfs.compute-2.cicoar on compute-2
Mar  1 04:44:32 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v27: 198 pgs: 198 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 3.7 KiB/s rd, 1023 B/s wr, 4 op/s
Mar  1 04:44:32 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[98708]: 01/03/2026 09:44:32 : epoch 69a40a75 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f450c000d00 fd 37 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:44:33 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[98708]: 01/03/2026 09:44:33 : epoch 69a40a75 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f451c002520 fd 37 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:44:33 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Mar  1 04:44:33 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' 
Mar  1 04:44:33 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Mar  1 04:44:33 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' 
Mar  1 04:44:33 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.nfs.cephfs}] v 0)
Mar  1 04:44:33 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' 
Mar  1 04:44:33 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ingress.nfs.cephfs/keepalived_password}] v 0)
Mar  1 04:44:33 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' 
Mar  1 04:44:33 np0005634532 ceph-mgr[76134]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Mar  1 04:44:33 np0005634532 ceph-mgr[76134]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Mar  1 04:44:33 np0005634532 ceph-mgr[76134]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Mar  1 04:44:33 np0005634532 ceph-mgr[76134]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Mar  1 04:44:33 np0005634532 ceph-mgr[76134]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Mar  1 04:44:33 np0005634532 ceph-mgr[76134]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
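Before placing keepalived, cephadm verifies on each candidate host that some interface carries a subnet containing the ingress virtual IP, which is what these message pairs record (192.168.122.2 within 192.168.122.0/24 on br-ex). The test itself reduces to plain address containment:

    import ipaddress

    vip = ipaddress.ip_address("192.168.122.2")
    net = ipaddress.ip_network("192.168.122.0/24")
    print(vip in net)  # True -> br-ex can carry the VIP on this host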
Mar  1 04:44:33 np0005634532 ceph-mgr[76134]: [cephadm INFO cephadm.serve] Deploying daemon keepalived.nfs.cephfs.compute-1.hvfcuw on compute-1
Mar  1 04:44:33 np0005634532 ceph-mgr[76134]: log_channel(cephadm) log [INF] : Deploying daemon keepalived.nfs.cephfs.compute-1.hvfcuw on compute-1
Mar  1 04:44:34 np0005634532 ceph-mon[75825]: from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' 
Mar  1 04:44:34 np0005634532 ceph-mon[75825]: from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' 
Mar  1 04:44:34 np0005634532 ceph-mon[75825]: from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' 
Mar  1 04:44:34 np0005634532 ceph-mon[75825]: from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' 
Mar  1 04:44:34 np0005634532 ceph-mon[75825]: 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Mar  1 04:44:34 np0005634532 ceph-mon[75825]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Mar  1 04:44:34 np0005634532 ceph-mon[75825]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Mar  1 04:44:34 np0005634532 ceph-mon[75825]: Deploying daemon keepalived.nfs.cephfs.compute-1.hvfcuw on compute-1
Mar  1 04:44:34 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v28: 198 pgs: 198 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 3.9 KiB/s rd, 1023 B/s wr, 5 op/s
Mar  1 04:44:34 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[98708]: 01/03/2026 09:44:34 : epoch 69a40a75 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f45000016a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:44:34 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[98708]: 01/03/2026 09:44:34 : epoch 69a40a75 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4508001ac0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:44:35 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[98708]: 01/03/2026 09:44:35 : epoch 69a40a75 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f450c001820 fd 37 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:44:35 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e48 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Mar  1 04:44:36 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v29: 198 pgs: 198 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Mar  1 04:44:36 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[98708]: 01/03/2026 09:44:36 : epoch 69a40a75 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f451c002520 fd 37 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:44:36 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[98708]: 01/03/2026 09:44:36 : epoch 69a40a75 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f45000016a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:44:37 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[98708]: 01/03/2026 09:44:37 : epoch 69a40a75 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4508001ac0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:44:37 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Mar  1 04:44:37 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' 
Mar  1 04:44:37 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Mar  1 04:44:38 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' 
Mar  1 04:44:38 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.nfs.cephfs}] v 0)
Mar  1 04:44:38 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' 
Mar  1 04:44:38 np0005634532 ceph-mgr[76134]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Mar  1 04:44:38 np0005634532 ceph-mgr[76134]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Mar  1 04:44:38 np0005634532 ceph-mgr[76134]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Mar  1 04:44:38 np0005634532 ceph-mgr[76134]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Mar  1 04:44:38 np0005634532 ceph-mgr[76134]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Mar  1 04:44:38 np0005634532 ceph-mgr[76134]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Mar  1 04:44:38 np0005634532 ceph-mgr[76134]: [cephadm INFO cephadm.serve] Deploying daemon keepalived.nfs.cephfs.compute-2.yufpjd on compute-2
Mar  1 04:44:38 np0005634532 ceph-mgr[76134]: log_channel(cephadm) log [INF] : Deploying daemon keepalived.nfs.cephfs.compute-2.yufpjd on compute-2
Mar  1 04:44:38 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v30: 198 pgs: 198 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Mar  1 04:44:38 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[98708]: 01/03/2026 09:44:38 : epoch 69a40a75 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f450c001820 fd 37 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:44:38 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[98708]: 01/03/2026 09:44:38 : epoch 69a40a75 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f451c002520 fd 37 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:44:39 np0005634532 ceph-mon[75825]: from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' 
Mar  1 04:44:39 np0005634532 ceph-mon[75825]: from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' 
Mar  1 04:44:39 np0005634532 ceph-mon[75825]: from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' 
Mar  1 04:44:39 np0005634532 ceph-mon[75825]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Mar  1 04:44:39 np0005634532 ceph-mon[75825]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Mar  1 04:44:39 np0005634532 ceph-mon[75825]: 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Mar  1 04:44:39 np0005634532 ceph-mon[75825]: Deploying daemon keepalived.nfs.cephfs.compute-2.yufpjd on compute-2
Mar  1 04:44:39 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[98708]: 01/03/2026 09:44:39 : epoch 69a40a75 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f45000016a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:44:40 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e48 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Mar  1 04:44:40 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[98708]: 01/03/2026 09:44:40 : epoch 69a40a75 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Mar  1 04:44:40 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v31: 198 pgs: 198 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Mar  1 04:44:40 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[98708]: 01/03/2026 09:44:40 : epoch 69a40a75 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4508001ac0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:44:40 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[98708]: 01/03/2026 09:44:40 : epoch 69a40a75 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4508001ac0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:44:41 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[98708]: 01/03/2026 09:44:41 : epoch 69a40a75 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f451c002520 fd 37 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:44:42 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Mar  1 04:44:42 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' 
Mar  1 04:44:42 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Mar  1 04:44:42 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' 
Mar  1 04:44:42 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.nfs.cephfs}] v 0)
Mar  1 04:44:42 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' 
Mar  1 04:44:42 np0005634532 ceph-mgr[76134]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Mar  1 04:44:42 np0005634532 ceph-mgr[76134]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Mar  1 04:44:42 np0005634532 ceph-mgr[76134]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Mar  1 04:44:42 np0005634532 ceph-mgr[76134]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Mar  1 04:44:42 np0005634532 ceph-mgr[76134]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Mar  1 04:44:42 np0005634532 ceph-mgr[76134]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Mar  1 04:44:42 np0005634532 ceph-mgr[76134]: [cephadm INFO cephadm.serve] Deploying daemon keepalived.nfs.cephfs.compute-0.qbujzh on compute-0
Mar  1 04:44:42 np0005634532 ceph-mgr[76134]: log_channel(cephadm) log [INF] : Deploying daemon keepalived.nfs.cephfs.compute-0.qbujzh on compute-0
Mar  1 04:44:42 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v32: 198 pgs: 198 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Mar  1 04:44:42 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[98708]: 01/03/2026 09:44:42 : epoch 69a40a75 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4500002b10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:44:42 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[98708]: 01/03/2026 09:44:42 : epoch 69a40a75 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f450c0028c0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:44:43 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[98708]: 01/03/2026 09:44:43 : epoch 69a40a75 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4508003340 fd 37 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:44:43 np0005634532 ceph-mon[75825]: from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' 
Mar  1 04:44:43 np0005634532 ceph-mon[75825]: from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' 
Mar  1 04:44:43 np0005634532 ceph-mon[75825]: from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' 
Mar  1 04:44:43 np0005634532 ceph-mon[75825]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Mar  1 04:44:43 np0005634532 ceph-mon[75825]: 192.168.122.2 is in 192.168.122.0/24 on compute-1 interface br-ex
Mar  1 04:44:43 np0005634532 ceph-mon[75825]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Mar  1 04:44:43 np0005634532 ceph-mon[75825]: Deploying daemon keepalived.nfs.cephfs.compute-0.qbujzh on compute-0
Mar  1 04:44:43 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[98708]: 01/03/2026 09:44:43 : epoch 69a40a75 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Mar  1 04:44:43 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[98708]: 01/03/2026 09:44:43 : epoch 69a40a75 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Mar  1 04:44:44 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v33: 198 pgs: 198 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Mar  1 04:44:44 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[98708]: 01/03/2026 09:44:44 : epoch 69a40a75 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f451c002520 fd 37 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:44:44 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[98708]: 01/03/2026 09:44:44 : epoch 69a40a75 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4500002b10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:44:44 np0005634532 podman[99264]: 2026-03-01 09:44:44.982118513 +0000 UTC m=+2.380536060 container create df560542df4745f3a358f83a847be071385f46adc5105efaed7f96ffb18bef6d (image=quay.io/ceph/keepalived:2.2.4, name=romantic_dhawan, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, io.buildah.version=1.28.2, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=keepalived, com.redhat.component=keepalived-container, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, description=keepalived for Ceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.k8s.display-name=Keepalived on RHEL 9, release=1793, version=2.2.4, distribution-scope=public, build-date=2023-02-22T09:23:20, architecture=x86_64, io.openshift.tags=Ceph keepalived, summary=Provides keepalived on RHEL 9 for Ceph., vcs-type=git)
Mar  1 04:44:45 np0005634532 systemd[1]: Started libpod-conmon-df560542df4745f3a358f83a847be071385f46adc5105efaed7f96ffb18bef6d.scope.
Mar  1 04:44:45 np0005634532 podman[99264]: 2026-03-01 09:44:44.965520385 +0000 UTC m=+2.363937922 image pull 4a3a1ff181d97c6dcfa9138ad76eb99fa2c1e840298461d5a7a56133bc05b9a2 quay.io/ceph/keepalived:2.2.4
Mar  1 04:44:45 np0005634532 systemd[1]: Started libcrun container.
Mar  1 04:44:45 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[98708]: 01/03/2026 09:44:45 : epoch 69a40a75 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f450c0028c0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:44:45 np0005634532 podman[99264]: 2026-03-01 09:44:45.04718765 +0000 UTC m=+2.445605177 container init df560542df4745f3a358f83a847be071385f46adc5105efaed7f96ffb18bef6d (image=quay.io/ceph/keepalived:2.2.4, name=romantic_dhawan, io.openshift.tags=Ceph keepalived, io.k8s.display-name=Keepalived on RHEL 9, vendor=Red Hat, Inc., version=2.2.4, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides keepalived on RHEL 9 for Ceph., release=1793, distribution-scope=public, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, com.redhat.component=keepalived-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.28.2, description=keepalived for Ceph, vcs-type=git, name=keepalived, io.openshift.expose-services=, build-date=2023-02-22T09:23:20, architecture=x86_64)
Mar  1 04:44:45 np0005634532 podman[99264]: 2026-03-01 09:44:45.052264838 +0000 UTC m=+2.450682365 container start df560542df4745f3a358f83a847be071385f46adc5105efaed7f96ffb18bef6d (image=quay.io/ceph/keepalived:2.2.4, name=romantic_dhawan, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-type=git, release=1793, vendor=Red Hat, Inc., build-date=2023-02-22T09:23:20, io.k8s.display-name=Keepalived on RHEL 9, com.redhat.component=keepalived-container, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, distribution-scope=public, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=Ceph keepalived, io.buildah.version=1.28.2, description=keepalived for Ceph, summary=Provides keepalived on RHEL 9 for Ceph., name=keepalived, version=2.2.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=)
Mar  1 04:44:45 np0005634532 podman[99264]: 2026-03-01 09:44:45.056229738 +0000 UTC m=+2.454647285 container attach df560542df4745f3a358f83a847be071385f46adc5105efaed7f96ffb18bef6d (image=quay.io/ceph/keepalived:2.2.4, name=romantic_dhawan, summary=Provides keepalived on RHEL 9 for Ceph., name=keepalived, com.redhat.component=keepalived-container, vendor=Red Hat, Inc., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vcs-type=git, io.buildah.version=1.28.2, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, distribution-scope=public, io.openshift.tags=Ceph keepalived, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=keepalived for Ceph, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.k8s.display-name=Keepalived on RHEL 9, release=1793, version=2.2.4, io.openshift.expose-services=, build-date=2023-02-22T09:23:20, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Mar  1 04:44:45 np0005634532 romantic_dhawan[99364]: 0 0
Mar  1 04:44:45 np0005634532 systemd[1]: libpod-df560542df4745f3a358f83a847be071385f46adc5105efaed7f96ffb18bef6d.scope: Deactivated successfully.
Mar  1 04:44:45 np0005634532 conmon[99364]: conmon df560542df4745f3a358 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-df560542df4745f3a358f83a847be071385f46adc5105efaed7f96ffb18bef6d.scope/container/memory.events
Mar  1 04:44:45 np0005634532 podman[99264]: 2026-03-01 09:44:45.058789492 +0000 UTC m=+2.457207049 container died df560542df4745f3a358f83a847be071385f46adc5105efaed7f96ffb18bef6d (image=quay.io/ceph/keepalived:2.2.4, name=romantic_dhawan, io.k8s.display-name=Keepalived on RHEL 9, release=1793, vendor=Red Hat, Inc., io.openshift.expose-services=, description=keepalived for Ceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=keepalived-container, distribution-scope=public, build-date=2023-02-22T09:23:20, architecture=x86_64, summary=Provides keepalived on RHEL 9 for Ceph., name=keepalived, io.buildah.version=1.28.2, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vcs-type=git, version=2.2.4, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.tags=Ceph keepalived)
Mar  1 04:44:45 np0005634532 systemd[1]: var-lib-containers-storage-overlay-97c7ad7171de952e793d776bba654b37e5438d2afee84dab852bf5555234dc42-merged.mount: Deactivated successfully.
Mar  1 04:44:45 np0005634532 podman[99264]: 2026-03-01 09:44:45.094616124 +0000 UTC m=+2.493033651 container remove df560542df4745f3a358f83a847be071385f46adc5105efaed7f96ffb18bef6d (image=quay.io/ceph/keepalived:2.2.4, name=romantic_dhawan, io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, distribution-scope=public, io.buildah.version=1.28.2, release=1793, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, summary=Provides keepalived on RHEL 9 for Ceph., io.openshift.tags=Ceph keepalived, com.redhat.component=keepalived-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Keepalived on RHEL 9, architecture=x86_64, description=keepalived for Ceph, vcs-type=git, version=2.2.4, vendor=Red Hat, Inc., build-date=2023-02-22T09:23:20, name=keepalived)
Mar  1 04:44:45 np0005634532 systemd[1]: libpod-conmon-df560542df4745f3a358f83a847be071385f46adc5105efaed7f96ffb18bef6d.scope: Deactivated successfully.
Mar  1 04:44:45 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e48 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Mar  1 04:44:45 np0005634532 systemd[1]: Reloading.
Mar  1 04:44:45 np0005634532 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Mar  1 04:44:45 np0005634532 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Mar  1 04:44:45 np0005634532 systemd[1]: Reloading.
Mar  1 04:44:45 np0005634532 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Mar  1 04:44:45 np0005634532 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Mar  1 04:44:45 np0005634532 systemd[1]: Starting Ceph keepalived.nfs.cephfs.compute-0.qbujzh for 437b1e74-f995-5d64-af1d-257ce01d77ab...
Mar  1 04:44:45 np0005634532 podman[99524]: 2026-03-01 09:44:45.935577992 +0000 UTC m=+0.036711075 container create 05be6442ca5291b879ba04bc347755b274cffc38684767a6255c033b9adb2149 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-keepalived-nfs-cephfs-compute-0-qbujzh, com.redhat.component=keepalived-container, release=1793, name=keepalived, distribution-scope=public, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.expose-services=, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=2.2.4, io.buildah.version=1.28.2, description=keepalived for Ceph, architecture=x86_64, summary=Provides keepalived on RHEL 9 for Ceph., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Keepalived on RHEL 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-type=git, io.openshift.tags=Ceph keepalived, build-date=2023-02-22T09:23:20, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9)
Mar  1 04:44:45 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3602e0cc8a909a6541614142681cbd72587c41c732c605fd54f754e1f95848ce/merged/etc/keepalived/keepalived.conf supports timestamps until 2038 (0x7fffffff)
Mar  1 04:44:45 np0005634532 podman[99524]: 2026-03-01 09:44:45.991590081 +0000 UTC m=+0.092723164 container init 05be6442ca5291b879ba04bc347755b274cffc38684767a6255c033b9adb2149 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-keepalived-nfs-cephfs-compute-0-qbujzh, vendor=Red Hat, Inc., io.buildah.version=1.28.2, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, distribution-scope=public, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.expose-services=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, summary=Provides keepalived on RHEL 9 for Ceph., io.openshift.tags=Ceph keepalived, com.redhat.component=keepalived-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=keepalived for Ceph, build-date=2023-02-22T09:23:20, version=2.2.4, io.k8s.display-name=Keepalived on RHEL 9, release=1793, name=keepalived, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-type=git)
Mar  1 04:44:45 np0005634532 podman[99524]: 2026-03-01 09:44:45.995349726 +0000 UTC m=+0.096482789 container start 05be6442ca5291b879ba04bc347755b274cffc38684767a6255c033b9adb2149 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-keepalived-nfs-cephfs-compute-0-qbujzh, io.k8s.display-name=Keepalived on RHEL 9, description=keepalived for Ceph, vendor=Red Hat, Inc., version=2.2.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.28.2, build-date=2023-02-22T09:23:20, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, distribution-scope=public, vcs-type=git, architecture=x86_64, name=keepalived, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=keepalived-container, release=1793, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.expose-services=, summary=Provides keepalived on RHEL 9 for Ceph., io.openshift.tags=Ceph keepalived)
Mar  1 04:44:45 np0005634532 bash[99524]: 05be6442ca5291b879ba04bc347755b274cffc38684767a6255c033b9adb2149
Mar  1 04:44:46 np0005634532 podman[99524]: 2026-03-01 09:44:45.918248545 +0000 UTC m=+0.019381658 image pull 4a3a1ff181d97c6dcfa9138ad76eb99fa2c1e840298461d5a7a56133bc05b9a2 quay.io/ceph/keepalived:2.2.4
Mar  1 04:44:46 np0005634532 systemd[1]: Started Ceph keepalived.nfs.cephfs.compute-0.qbujzh for 437b1e74-f995-5d64-af1d-257ce01d77ab.
Mar  1 04:44:46 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-keepalived-nfs-cephfs-compute-0-qbujzh[99540]: Sun Mar  1 09:44:46 2026: Starting Keepalived v2.2.4 (08/21,2021)
Mar  1 04:44:46 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-keepalived-nfs-cephfs-compute-0-qbujzh[99540]: Sun Mar  1 09:44:46 2026: Running on Linux 5.14.0-686.el9.x86_64 #1 SMP PREEMPT_DYNAMIC Thu Feb 19 10:49:27 UTC 2026 (built for Linux 5.14.0)
Mar  1 04:44:46 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-keepalived-nfs-cephfs-compute-0-qbujzh[99540]: Sun Mar  1 09:44:46 2026: Command line: '/usr/sbin/keepalived' '-n' '-l' '-f' '/etc/keepalived/keepalived.conf'
Mar  1 04:44:46 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-keepalived-nfs-cephfs-compute-0-qbujzh[99540]: Sun Mar  1 09:44:46 2026: Configuration file /etc/keepalived/keepalived.conf
Mar  1 04:44:46 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-keepalived-nfs-cephfs-compute-0-qbujzh[99540]: Sun Mar  1 09:44:46 2026: NOTICE: setting config option max_auto_priority should result in better keepalived performance
Mar  1 04:44:46 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-keepalived-nfs-cephfs-compute-0-qbujzh[99540]: Sun Mar  1 09:44:46 2026: Starting VRRP child process, pid=4
Mar  1 04:44:46 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-keepalived-nfs-cephfs-compute-0-qbujzh[99540]: Sun Mar  1 09:44:46 2026: Startup complete
Mar  1 04:44:46 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-keepalived-nfs-cephfs-compute-0-qbujzh[99540]: Sun Mar  1 09:44:46 2026: (VI_0) Entering BACKUP STATE (init)
Mar  1 04:44:46 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-keepalived-nfs-cephfs-compute-0-qbujzh[99540]: Sun Mar  1 09:44:46 2026: VRRP_Script(check_backend) succeeded
Mar  1 04:44:46 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Mar  1 04:44:46 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' 
Mar  1 04:44:46 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Mar  1 04:44:46 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' 
Mar  1 04:44:46 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.nfs.cephfs}] v 0)
Mar  1 04:44:46 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' 
Mar  1 04:44:46 np0005634532 ceph-mgr[76134]: [progress INFO root] complete: finished ev ea261173-e0d6-47fb-b30f-e374945e61fd (Updating ingress.nfs.cephfs deployment (+6 -> 6))
Mar  1 04:44:46 np0005634532 ceph-mgr[76134]: [progress INFO root] Completed event ea261173-e0d6-47fb-b30f-e374945e61fd (Updating ingress.nfs.cephfs deployment (+6 -> 6)) in 24 seconds
Mar  1 04:44:46 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.nfs.cephfs}] v 0)
Mar  1 04:44:46 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' 
Mar  1 04:44:46 np0005634532 ceph-mgr[76134]: [progress INFO root] update: starting ev fe9d3f82-bea8-4d87-b98a-f500cbafba55 (Updating alertmanager deployment (+1 -> 1))
Mar  1 04:44:46 np0005634532 ceph-mgr[76134]: [cephadm INFO cephadm.serve] Deploying daemon alertmanager.compute-0 on compute-0
Mar  1 04:44:46 np0005634532 ceph-mgr[76134]: log_channel(cephadm) log [INF] : Deploying daemon alertmanager.compute-0 on compute-0
Mar  1 04:44:46 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[98708]: 01/03/2026 09:44:46 : epoch 69a40a75 : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Mar  1 04:44:46 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v34: 198 pgs: 198 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 938 B/s wr, 2 op/s
Mar  1 04:44:46 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[98708]: 01/03/2026 09:44:46 : epoch 69a40a75 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4508003340 fd 37 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:44:46 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[98708]: 01/03/2026 09:44:46 : epoch 69a40a75 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f451c002520 fd 37 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:44:47 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[98708]: 01/03/2026 09:44:47 : epoch 69a40a75 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4500002b10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:44:47 np0005634532 ceph-mon[75825]: from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' 
Mar  1 04:44:47 np0005634532 ceph-mon[75825]: from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' 
Mar  1 04:44:47 np0005634532 ceph-mon[75825]: from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' 
Mar  1 04:44:47 np0005634532 ceph-mon[75825]: from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' 
Mar  1 04:44:47 np0005634532 ceph-mon[75825]: Deploying daemon alertmanager.compute-0 on compute-0
Mar  1 04:44:47 np0005634532 ceph-mgr[76134]: [progress INFO root] Writing back 15 completed events
Mar  1 04:44:47 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Mar  1 04:44:47 np0005634532 podman[99639]: 2026-03-01 09:44:47.962431498 +0000 UTC m=+1.387198667 volume create f36baad1e185274e40735232c1dcc9d5b22b99189650cb8f174a38b3f220424a
Mar  1 04:44:47 np0005634532 podman[99639]: 2026-03-01 09:44:47.973944268 +0000 UTC m=+1.398711437 container create 5ea65e576578ad133747f913d7928e66ae379d2a9f2be1bac84ecf5dda35e443 (image=quay.io/prometheus/alertmanager:v0.25.0, name=naughty_beaver, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Mar  1 04:44:47 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' 
Mar  1 04:44:47 np0005634532 podman[99639]: 2026-03-01 09:44:47.947686767 +0000 UTC m=+1.372453976 image pull c8568f914cd25b2062c44e9f79f9c18da6e3b85fe0c47a12a2191c61426c2b19 quay.io/prometheus/alertmanager:v0.25.0
Mar  1 04:44:48 np0005634532 systemd[1]: Started libpod-conmon-5ea65e576578ad133747f913d7928e66ae379d2a9f2be1bac84ecf5dda35e443.scope.
Mar  1 04:44:48 np0005634532 systemd[1]: Started libcrun container.
Mar  1 04:44:48 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/793e6c83b1f8d2ba4a814e144392883d677bc6099563ec9b048e7bdc02a84ea4/merged/alertmanager supports timestamps until 2038 (0x7fffffff)
Mar  1 04:44:48 np0005634532 podman[99639]: 2026-03-01 09:44:48.082866209 +0000 UTC m=+1.507633398 container init 5ea65e576578ad133747f913d7928e66ae379d2a9f2be1bac84ecf5dda35e443 (image=quay.io/prometheus/alertmanager:v0.25.0, name=naughty_beaver, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Mar  1 04:44:48 np0005634532 podman[99639]: 2026-03-01 09:44:48.089907887 +0000 UTC m=+1.514675076 container start 5ea65e576578ad133747f913d7928e66ae379d2a9f2be1bac84ecf5dda35e443 (image=quay.io/prometheus/alertmanager:v0.25.0, name=naughty_beaver, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Mar  1 04:44:48 np0005634532 naughty_beaver[99774]: 65534 65534
Mar  1 04:44:48 np0005634532 podman[99639]: 2026-03-01 09:44:48.093475246 +0000 UTC m=+1.518242445 container attach 5ea65e576578ad133747f913d7928e66ae379d2a9f2be1bac84ecf5dda35e443 (image=quay.io/prometheus/alertmanager:v0.25.0, name=naughty_beaver, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Mar  1 04:44:48 np0005634532 systemd[1]: libpod-5ea65e576578ad133747f913d7928e66ae379d2a9f2be1bac84ecf5dda35e443.scope: Deactivated successfully.
Mar  1 04:44:48 np0005634532 podman[99639]: 2026-03-01 09:44:48.09481735 +0000 UTC m=+1.519584549 container died 5ea65e576578ad133747f913d7928e66ae379d2a9f2be1bac84ecf5dda35e443 (image=quay.io/prometheus/alertmanager:v0.25.0, name=naughty_beaver, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Mar  1 04:44:48 np0005634532 systemd[1]: var-lib-containers-storage-overlay-793e6c83b1f8d2ba4a814e144392883d677bc6099563ec9b048e7bdc02a84ea4-merged.mount: Deactivated successfully.
Mar  1 04:44:48 np0005634532 podman[99639]: 2026-03-01 09:44:48.142435168 +0000 UTC m=+1.567202327 container remove 5ea65e576578ad133747f913d7928e66ae379d2a9f2be1bac84ecf5dda35e443 (image=quay.io/prometheus/alertmanager:v0.25.0, name=naughty_beaver, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Mar  1 04:44:48 np0005634532 podman[99639]: 2026-03-01 09:44:48.147398453 +0000 UTC m=+1.572165602 volume remove f36baad1e185274e40735232c1dcc9d5b22b99189650cb8f174a38b3f220424a
Mar  1 04:44:48 np0005634532 systemd[1]: libpod-conmon-5ea65e576578ad133747f913d7928e66ae379d2a9f2be1bac84ecf5dda35e443.scope: Deactivated successfully.
Mar  1 04:44:48 np0005634532 podman[99791]: 2026-03-01 09:44:48.212124712 +0000 UTC m=+0.043325222 volume create c65f26d4509974302e99f455eb2280fc58c657acc2fa61148e2d7b2bfdab6885
Mar  1 04:44:48 np0005634532 podman[99791]: 2026-03-01 09:44:48.22396255 +0000 UTC m=+0.055163090 container create a2529fc85eee2cbc61362ba6e7a44a0bfae7f06bd8e18f819d14c9b4d589a749 (image=quay.io/prometheus/alertmanager:v0.25.0, name=gracious_merkle, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Mar  1 04:44:48 np0005634532 systemd[1]: Started libpod-conmon-a2529fc85eee2cbc61362ba6e7a44a0bfae7f06bd8e18f819d14c9b4d589a749.scope.
Mar  1 04:44:48 np0005634532 systemd[1]: Started libcrun container.
Mar  1 04:44:48 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5de23a18870cb0e5ed9301879b3b8000ec5639a9a9218a16ecb806494aa6849/merged/alertmanager supports timestamps until 2038 (0x7fffffff)
Mar  1 04:44:48 np0005634532 podman[99791]: 2026-03-01 09:44:48.197949095 +0000 UTC m=+0.029149645 image pull c8568f914cd25b2062c44e9f79f9c18da6e3b85fe0c47a12a2191c61426c2b19 quay.io/prometheus/alertmanager:v0.25.0
Mar  1 04:44:48 np0005634532 podman[99791]: 2026-03-01 09:44:48.302034165 +0000 UTC m=+0.133234675 container init a2529fc85eee2cbc61362ba6e7a44a0bfae7f06bd8e18f819d14c9b4d589a749 (image=quay.io/prometheus/alertmanager:v0.25.0, name=gracious_merkle, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Mar  1 04:44:48 np0005634532 podman[99791]: 2026-03-01 09:44:48.308714663 +0000 UTC m=+0.139915203 container start a2529fc85eee2cbc61362ba6e7a44a0bfae7f06bd8e18f819d14c9b4d589a749 (image=quay.io/prometheus/alertmanager:v0.25.0, name=gracious_merkle, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Mar  1 04:44:48 np0005634532 gracious_merkle[99807]: 65534 65534
Mar  1 04:44:48 np0005634532 systemd[1]: libpod-a2529fc85eee2cbc61362ba6e7a44a0bfae7f06bd8e18f819d14c9b4d589a749.scope: Deactivated successfully.
Mar  1 04:44:48 np0005634532 podman[99791]: 2026-03-01 09:44:48.312379975 +0000 UTC m=+0.143580495 container attach a2529fc85eee2cbc61362ba6e7a44a0bfae7f06bd8e18f819d14c9b4d589a749 (image=quay.io/prometheus/alertmanager:v0.25.0, name=gracious_merkle, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Mar  1 04:44:48 np0005634532 podman[99791]: 2026-03-01 09:44:48.312589601 +0000 UTC m=+0.143790111 container died a2529fc85eee2cbc61362ba6e7a44a0bfae7f06bd8e18f819d14c9b4d589a749 (image=quay.io/prometheus/alertmanager:v0.25.0, name=gracious_merkle, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Mar  1 04:44:48 np0005634532 systemd[1]: var-lib-containers-storage-overlay-d5de23a18870cb0e5ed9301879b3b8000ec5639a9a9218a16ecb806494aa6849-merged.mount: Deactivated successfully.
Mar  1 04:44:48 np0005634532 podman[99791]: 2026-03-01 09:44:48.352875415 +0000 UTC m=+0.184075955 container remove a2529fc85eee2cbc61362ba6e7a44a0bfae7f06bd8e18f819d14c9b4d589a749 (image=quay.io/prometheus/alertmanager:v0.25.0, name=gracious_merkle, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Mar  1 04:44:48 np0005634532 podman[99791]: 2026-03-01 09:44:48.358176338 +0000 UTC m=+0.189376868 volume remove c65f26d4509974302e99f455eb2280fc58c657acc2fa61148e2d7b2bfdab6885
Mar  1 04:44:48 np0005634532 systemd[1]: libpod-conmon-a2529fc85eee2cbc61362ba6e7a44a0bfae7f06bd8e18f819d14c9b4d589a749.scope: Deactivated successfully.
Mar  1 04:44:48 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v35: 198 pgs: 198 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 938 B/s wr, 2 op/s
Mar  1 04:44:48 np0005634532 systemd[1]: Reloading.
Mar  1 04:44:48 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[98708]: 01/03/2026 09:44:48 : epoch 69a40a75 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4500002b10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:44:48 np0005634532 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Mar  1 04:44:48 np0005634532 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Mar  1 04:44:48 np0005634532 systemd[1]: Reloading.
Mar  1 04:44:48 np0005634532 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Mar  1 04:44:48 np0005634532 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Mar  1 04:44:48 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[98708]: 01/03/2026 09:44:48 : epoch 69a40a75 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4508003340 fd 37 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:44:48 np0005634532 systemd[1]: Starting Ceph alertmanager.compute-0 for 437b1e74-f995-5d64-af1d-257ce01d77ab...
Mar  1 04:44:49 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[98708]: 01/03/2026 09:44:49 : epoch 69a40a75 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f451c002520 fd 37 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:44:49 np0005634532 podman[99965]: 2026-03-01 09:44:49.084965832 +0000 UTC m=+0.019657116 image pull c8568f914cd25b2062c44e9f79f9c18da6e3b85fe0c47a12a2191c61426c2b19 quay.io/prometheus/alertmanager:v0.25.0
Mar  1 04:44:49 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-keepalived-nfs-cephfs-compute-0-qbujzh[99540]: Sun Mar  1 09:44:49 2026: (VI_0) Entering MASTER STATE
Mar  1 04:44:50 np0005634532 podman[99965]: 2026-03-01 09:44:50.038040531 +0000 UTC m=+0.972731735 volume create 7cdffed30ca1aee034e76c0481a56cbd47d16ba7182d3c6adb2e13bd6ca648d7
Mar  1 04:44:50 np0005634532 ceph-mon[75825]: from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' 
Mar  1 04:44:50 np0005634532 podman[99965]: 2026-03-01 09:44:50.050650698 +0000 UTC m=+0.985342142 container create 79aaca671f71fae62bc8768d70f996bd09d03a5082fcac359db10cb2ffb3e479 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Mar  1 04:44:50 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86f4aa5d32b62ff6304caebfb4474293eea7bd9a80057d2bb3c785a664cdfaa6/merged/alertmanager supports timestamps until 2038 (0x7fffffff)
Mar  1 04:44:50 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86f4aa5d32b62ff6304caebfb4474293eea7bd9a80057d2bb3c785a664cdfaa6/merged/etc/alertmanager supports timestamps until 2038 (0x7fffffff)
Mar  1 04:44:50 np0005634532 podman[99965]: 2026-03-01 09:44:50.120924487 +0000 UTC m=+1.055615701 container init 79aaca671f71fae62bc8768d70f996bd09d03a5082fcac359db10cb2ffb3e479 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Mar  1 04:44:50 np0005634532 podman[99965]: 2026-03-01 09:44:50.125451311 +0000 UTC m=+1.060142515 container start 79aaca671f71fae62bc8768d70f996bd09d03a5082fcac359db10cb2ffb3e479 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Mar  1 04:44:50 np0005634532 bash[99965]: 79aaca671f71fae62bc8768d70f996bd09d03a5082fcac359db10cb2ffb3e479
Mar  1 04:44:50 np0005634532 systemd[1]: Started Ceph alertmanager.compute-0 for 437b1e74-f995-5d64-af1d-257ce01d77ab.
Mar  1 04:44:50 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[99981]: ts=2026-03-01T09:44:50.163Z caller=main.go:240 level=info msg="Starting Alertmanager" version="(version=0.25.0, branch=HEAD, revision=258fab7cdd551f2cf251ed0348f0ad7289aee789)"
Mar  1 04:44:50 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[99981]: ts=2026-03-01T09:44:50.163Z caller=main.go:241 level=info build_context="(go=go1.19.4, user=root@abe866dd5717, date=20221222-14:51:36)"
Mar  1 04:44:50 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e48 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Mar  1 04:44:50 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[99981]: ts=2026-03-01T09:44:50.173Z caller=cluster.go:185 level=info component=cluster msg="setting advertise address explicitly" addr=192.168.122.100 port=9094
Mar  1 04:44:50 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[99981]: ts=2026-03-01T09:44:50.175Z caller=cluster.go:681 level=info component=cluster msg="Waiting for gossip to settle..." interval=2s
Mar  1 04:44:50 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Mar  1 04:44:50 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' 
Mar  1 04:44:50 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Mar  1 04:44:50 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' 
Mar  1 04:44:50 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.alertmanager}] v 0)
Mar  1 04:44:50 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[99981]: ts=2026-03-01T09:44:50.219Z caller=coordinator.go:113 level=info component=configuration msg="Loading configuration file" file=/etc/alertmanager/alertmanager.yml
Mar  1 04:44:50 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[99981]: ts=2026-03-01T09:44:50.219Z caller=coordinator.go:126 level=info component=configuration msg="Completed loading of configuration file" file=/etc/alertmanager/alertmanager.yml
Mar  1 04:44:50 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' 
Mar  1 04:44:50 np0005634532 ceph-mgr[76134]: [progress INFO root] complete: finished ev fe9d3f82-bea8-4d87-b98a-f500cbafba55 (Updating alertmanager deployment (+1 -> 1))
Mar  1 04:44:50 np0005634532 ceph-mgr[76134]: [progress INFO root] Completed event fe9d3f82-bea8-4d87-b98a-f500cbafba55 (Updating alertmanager deployment (+1 -> 1)) in 4 seconds
Mar  1 04:44:50 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[99981]: ts=2026-03-01T09:44:50.226Z caller=tls_config.go:232 level=info msg="Listening on" address=192.168.122.100:9093
Mar  1 04:44:50 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[99981]: ts=2026-03-01T09:44:50.226Z caller=tls_config.go:235 level=info msg="TLS is disabled." http2=false address=192.168.122.100:9093
Mar  1 04:44:50 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.alertmanager}] v 0)
Mar  1 04:44:50 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' 
Mar  1 04:44:50 np0005634532 ceph-mgr[76134]: [progress INFO root] update: starting ev 8ca44d0c-401f-4afc-abe1-91483b14902e (Updating grafana deployment (+1 -> 1))
Mar  1 04:44:50 np0005634532 ceph-mgr[76134]: [cephadm INFO cephadm.services.monitoring] Regenerating cephadm self-signed grafana TLS certificates
Mar  1 04:44:50 np0005634532 ceph-mgr[76134]: log_channel(cephadm) log [INF] : Regenerating cephadm self-signed grafana TLS certificates
Mar  1 04:44:50 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cert_store.cert.grafana_cert}] v 0)
Mar  1 04:44:50 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' 
Mar  1 04:44:50 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cert_store.key.grafana_key}] v 0)
Mar  1 04:44:50 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' 
Mar  1 04:44:50 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v36: 198 pgs: 198 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Mar  1 04:44:50 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"} v 0)
Mar  1 04:44:50 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch
Mar  1 04:44:50 np0005634532 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch
Mar  1 04:44:50 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/GRAFANA_API_SSL_VERIFY}] v 0)
Mar  1 04:44:50 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' 
Mar  1 04:44:50 np0005634532 ceph-mgr[76134]: [cephadm INFO cephadm.serve] Deploying daemon grafana.compute-0 on compute-0
Mar  1 04:44:50 np0005634532 ceph-mgr[76134]: log_channel(cephadm) log [INF] : Deploying daemon grafana.compute-0 on compute-0
Mar  1 04:44:50 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[98708]: 01/03/2026 09:44:50 : epoch 69a40a75 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4500002b10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:44:50 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[98708]: 01/03/2026 09:44:50 : epoch 69a40a75 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f450c0035d0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:44:51 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[98708]: 01/03/2026 09:44:51 : epoch 69a40a75 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4500002b10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:44:51 np0005634532 ceph-mon[75825]: from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' 
Mar  1 04:44:51 np0005634532 ceph-mon[75825]: from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' 
Mar  1 04:44:51 np0005634532 ceph-mon[75825]: from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' 
Mar  1 04:44:51 np0005634532 ceph-mon[75825]: from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' 
Mar  1 04:44:51 np0005634532 ceph-mon[75825]: Regenerating cephadm self-signed grafana TLS certificates
Mar  1 04:44:51 np0005634532 ceph-mon[75825]: from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' 
Mar  1 04:44:51 np0005634532 ceph-mon[75825]: from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' 
Mar  1 04:44:51 np0005634532 ceph-mon[75825]: from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "dashboard set-grafana-api-ssl-verify", "value": "false"}]: dispatch
Mar  1 04:44:51 np0005634532 ceph-mon[75825]: from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' 
Mar  1 04:44:51 np0005634532 ceph-mon[75825]: Deploying daemon grafana.compute-0 on compute-0
Mar  1 04:44:51 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-haproxy-nfs-cephfs-compute-0-wdbjdw[99156]: [WARNING] 059/094451 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Mar  1 04:44:52 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[99981]: ts=2026-03-01T09:44:52.175Z caller=cluster.go:706 level=info component=cluster msg="gossip not settled" polls=0 before=0 now=1 elapsed=2.00001506s
Mar  1 04:44:52 np0005634532 ceph-mgr[76134]: [balancer INFO root] Optimize plan auto_2026-03-01_09:44:52
Mar  1 04:44:52 np0005634532 ceph-mgr[76134]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Mar  1 04:44:52 np0005634532 ceph-mgr[76134]: [balancer INFO root] do_upmap
Mar  1 04:44:52 np0005634532 ceph-mgr[76134]: [balancer INFO root] pools ['.mgr', 'default.rgw.control', 'default.rgw.log', 'volumes', '.nfs', 'default.rgw.meta', '.rgw.root', 'cephfs.cephfs.meta', 'vms', 'backups', 'cephfs.cephfs.data', 'images']
Mar  1 04:44:52 np0005634532 ceph-mgr[76134]: [balancer INFO root] prepared 0/10 upmap changes
Mar  1 04:44:52 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v37: 198 pgs: 198 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 938 B/s wr, 2 op/s
Mar  1 04:44:52 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] _maybe_adjust
Mar  1 04:44:52 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 04:44:52 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Mar  1 04:44:52 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 04:44:52 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Mar  1 04:44:52 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 04:44:52 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Mar  1 04:44:52 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 04:44:52 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Mar  1 04:44:52 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 04:44:52 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Mar  1 04:44:52 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 04:44:52 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Mar  1 04:44:52 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 04:44:52 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Mar  1 04:44:52 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 04:44:52 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 1)
Mar  1 04:44:52 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 04:44:52 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 1)
Mar  1 04:44:52 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 04:44:52 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Mar  1 04:44:52 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 04:44:52 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 0.0 of space, bias 4.0, pg target 0.0 quantized to 32 (current 1)
Mar  1 04:44:52 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 04:44:52 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 1)
Mar  1 04:44:52 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"} v 0)
Mar  1 04:44:52 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]: dispatch
Mar  1 04:44:52 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[98708]: 01/03/2026 09:44:52 : epoch 69a40a75 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f451c002520 fd 37 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:44:52 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Mar  1 04:44:52 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Mar  1 04:44:52 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Mar  1 04:44:52 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Mar  1 04:44:52 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Mar  1 04:44:52 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Mar  1 04:44:52 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Mar  1 04:44:52 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: vms, start_after=
Mar  1 04:44:52 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: volumes, start_after=
Mar  1 04:44:52 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: backups, start_after=
Mar  1 04:44:52 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: images, start_after=
Mar  1 04:44:52 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Mar  1 04:44:52 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: vms, start_after=
Mar  1 04:44:52 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: volumes, start_after=
Mar  1 04:44:52 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: backups, start_after=
Mar  1 04:44:52 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: images, start_after=
Mar  1 04:44:52 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[98708]: 01/03/2026 09:44:52 : epoch 69a40a75 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f451c002520 fd 37 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:44:52 np0005634532 ceph-mgr[76134]: [progress INFO root] Writing back 16 completed events
Mar  1 04:44:52 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Mar  1 04:44:53 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' 
Mar  1 04:44:53 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[98708]: 01/03/2026 09:44:53 : epoch 69a40a75 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f451c002520 fd 37 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:44:53 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e48 do_prune osdmap full prune enabled
Mar  1 04:44:53 np0005634532 ceph-mon[75825]: from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]: dispatch
Mar  1 04:44:53 np0005634532 ceph-mon[75825]: from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' 
Mar  1 04:44:53 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]': finished
Mar  1 04:44:53 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e49 e49: 3 total, 3 up, 3 in
Mar  1 04:44:53 np0005634532 ceph-mon[75825]: log_channel(cluster) log [DBG] : osdmap e49: 3 total, 3 up, 3 in
Mar  1 04:44:53 np0005634532 ceph-mgr[76134]: [progress INFO root] update: starting ev eec2c686-d262-4b84-8d28-88e88b949505 (PG autoscaler increasing pool 8 PGs from 1 to 32)
Mar  1 04:44:53 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"} v 0)
Mar  1 04:44:53 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]: dispatch
Mar  1 04:44:54 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v39: 198 pgs: 198 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 307 B/s rd, 102 B/s wr, 0 op/s
Mar  1 04:44:54 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"} v 0)
Mar  1 04:44:54 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]: dispatch
Mar  1 04:44:54 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e49 do_prune osdmap full prune enabled
Mar  1 04:44:54 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]': finished
Mar  1 04:44:54 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]': finished
Mar  1 04:44:54 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e50 e50: 3 total, 3 up, 3 in
Mar  1 04:44:54 np0005634532 ceph-mon[75825]: log_channel(cluster) log [DBG] : osdmap e50: 3 total, 3 up, 3 in
Mar  1 04:44:54 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[98708]: 01/03/2026 09:44:54 : epoch 69a40a75 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4504000b60 fd 37 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:44:54 np0005634532 ceph-mgr[76134]: [progress INFO root] update: starting ev 34adb3a8-f3d1-4879-8d35-c6b84757a3dd (PG autoscaler increasing pool 9 PGs from 1 to 32)
Mar  1 04:44:54 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"} v 0)
Mar  1 04:44:54 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]: dispatch
Mar  1 04:44:54 np0005634532 ceph-mon[75825]: from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]': finished
Mar  1 04:44:54 np0005634532 ceph-mon[75825]: from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]: dispatch
Mar  1 04:44:54 np0005634532 ceph-mon[75825]: from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]: dispatch
Mar  1 04:44:54 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[98708]: 01/03/2026 09:44:54 : epoch 69a40a75 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f450c0035d0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:44:55 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[98708]: 01/03/2026 09:44:55 : epoch 69a40a75 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4500003c10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:44:55 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e50 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Mar  1 04:44:55 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e50 do_prune osdmap full prune enabled
Mar  1 04:44:55 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]': finished
Mar  1 04:44:55 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e51 e51: 3 total, 3 up, 3 in
Mar  1 04:44:55 np0005634532 ceph-mon[75825]: log_channel(cluster) log [DBG] : osdmap e51: 3 total, 3 up, 3 in
Mar  1 04:44:55 np0005634532 ceph-mgr[76134]: [progress INFO root] update: starting ev e4ed44bd-481a-4236-80da-78d3673aa257 (PG autoscaler increasing pool 10 PGs from 1 to 32)
Mar  1 04:44:55 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"} v 0)
Mar  1 04:44:55 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]: dispatch
Mar  1 04:44:55 np0005634532 ceph-mon[75825]: from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]': finished
Mar  1 04:44:55 np0005634532 ceph-mon[75825]: from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]': finished
Mar  1 04:44:55 np0005634532 ceph-mon[75825]: from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]: dispatch
Mar  1 04:44:55 np0005634532 ceph-mon[75825]: from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]': finished
Mar  1 04:44:55 np0005634532 ceph-mon[75825]: from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]: dispatch
Mar  1 04:44:56 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v42: 229 pgs: 31 unknown, 198 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 B/s wr, 0 op/s
Mar  1 04:44:56 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"} v 0)
Mar  1 04:44:56 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]: dispatch
Mar  1 04:44:56 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"} v 0)
Mar  1 04:44:56 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]: dispatch
Mar  1 04:44:56 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[98708]: 01/03/2026 09:44:56 : epoch 69a40a75 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f451c002520 fd 37 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:44:56 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e51 do_prune osdmap full prune enabled
Mar  1 04:44:56 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[98708]: 01/03/2026 09:44:56 : epoch 69a40a75 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4500003c10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:44:56 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]': finished
Mar  1 04:44:56 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]': finished
Mar  1 04:44:56 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]': finished
Mar  1 04:44:56 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e52 e52: 3 total, 3 up, 3 in
Mar  1 04:44:56 np0005634532 ceph-mon[75825]: log_channel(cluster) log [DBG] : osdmap e52: 3 total, 3 up, 3 in
Mar  1 04:44:56 np0005634532 ceph-mgr[76134]: [progress INFO root] update: starting ev 876e188a-d172-4ddb-8f93-76ba245b11cc (PG autoscaler increasing pool 11 PGs from 1 to 32)
Mar  1 04:44:56 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num", "val": "32"} v 0)
Mar  1 04:44:56 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num", "val": "32"}]: dispatch
Mar  1 04:44:56 np0005634532 ceph-mon[75825]: from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]: dispatch
Mar  1 04:44:56 np0005634532 ceph-mon[75825]: from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]: dispatch
Mar  1 04:44:56 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 50 pg[8.0( v 34'9 (0'0,34'9] local-lis/les=33/34 n=6 ec=33/33 lis/c=33/33 les/c/f=34/34/0 sis=50 pruub=13.036564827s) [0] r=0 lpr=50 pi=[33,50)/1 crt=34'9 lcod 34'8 mlcod 34'8 active pruub 169.769332886s@ mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Mar  1 04:44:56 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 52 pg[9.0( v 42'1010 (0'0,42'1010] local-lis/les=35/36 n=178 ec=35/35 lis/c=35/35 les/c/f=36/36/0 sis=52 pruub=15.053071022s) [0] r=0 lpr=52 pi=[35,52)/1 crt=42'1010 lcod 42'1009 mlcod 42'1009 active pruub 171.786148071s@ mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Mar  1 04:44:56 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 52 pg[8.0( v 34'9 lc 0'0 (0'0,34'9] local-lis/les=33/34 n=0 ec=33/33 lis/c=33/33 les/c/f=34/34/0 sis=50 pruub=13.036564827s) [0] r=0 lpr=50 pi=[33,50)/1 crt=34'9 lcod 34'8 mlcod 0'0 unknown pruub 169.769332886s@ mbc={}] state<Start>: transitioning to Primary
Mar  1 04:44:57 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 52 pg[9.0( v 42'1010 lc 0'0 (0'0,42'1010] local-lis/les=35/36 n=5 ec=35/35 lis/c=35/35 les/c/f=36/36/0 sis=52 pruub=15.053071022s) [0] r=0 lpr=52 pi=[35,52)/1 crt=42'1010 lcod 42'1009 mlcod 0'0 unknown pruub 171.786148071s@ mbc={}] state<Start>: transitioning to Primary
Mar  1 04:44:57 np0005634532 ceph-osd[84309]: bluestore(/var/lib/ceph/osd/ceph-0).collection(9.0_head 0x55d025223200) operator()   moving buffer(0x55d02415fe28 space 0x55d02406ad10 0x0~1000 clean)
Mar  1 04:44:57 np0005634532 ceph-osd[84309]: bluestore(/var/lib/ceph/osd/ceph-0).collection(9.0_head 0x55d025223200) operator()   moving buffer(0x55d024130ac8 space 0x55d024005460 0x0~1000 clean)
Mar  1 04:44:57 np0005634532 ceph-osd[84309]: bluestore(/var/lib/ceph/osd/ceph-0).collection(9.0_head 0x55d025223200) operator()   moving buffer(0x55d02417a0c8 space 0x55d02406b530 0x0~1000 clean)
Mar  1 04:44:57 np0005634532 ceph-osd[84309]: bluestore(/var/lib/ceph/osd/ceph-0).collection(9.0_head 0x55d025223200) operator()   moving buffer(0x55d0241276a8 space 0x55d02406aaa0 0x0~1000 clean)
Mar  1 04:44:57 np0005634532 ceph-osd[84309]: bluestore(/var/lib/ceph/osd/ceph-0).collection(9.0_head 0x55d025223200) operator()   moving buffer(0x55d0241267a8 space 0x55d023f1da10 0x0~1000 clean)
Mar  1 04:44:57 np0005634532 ceph-osd[84309]: bluestore(/var/lib/ceph/osd/ceph-0).collection(9.0_head 0x55d025223200) operator()   moving buffer(0x55d022d89388 space 0x55d0241a60e0 0x0~1000 clean)
Mar  1 04:44:57 np0005634532 ceph-osd[84309]: bluestore(/var/lib/ceph/osd/ceph-0).collection(9.0_head 0x55d025223200) operator()   moving buffer(0x55d024149e28 space 0x55d023ff8010 0x0~1000 clean)
Mar  1 04:44:57 np0005634532 ceph-osd[84309]: bluestore(/var/lib/ceph/osd/ceph-0).collection(9.0_head 0x55d025223200) operator()   moving buffer(0x55d024127888 space 0x55d0240056d0 0x0~1000 clean)
Mar  1 04:44:57 np0005634532 ceph-osd[84309]: bluestore(/var/lib/ceph/osd/ceph-0).collection(9.0_head 0x55d025223200) operator()   moving buffer(0x55d024127f68 space 0x55d024005390 0x0~1000 clean)
Mar  1 04:44:57 np0005634532 ceph-osd[84309]: bluestore(/var/lib/ceph/osd/ceph-0).collection(9.0_head 0x55d025223200) operator()   moving buffer(0x55d02417ade8 space 0x55d024005940 0x0~1000 clean)
Mar  1 04:44:57 np0005634532 ceph-osd[84309]: bluestore(/var/lib/ceph/osd/ceph-0).collection(9.0_head 0x55d025223200) operator()   moving buffer(0x55d024130708 space 0x55d02406b390 0x0~1000 clean)
Mar  1 04:44:57 np0005634532 ceph-osd[84309]: bluestore(/var/lib/ceph/osd/ceph-0).collection(9.0_head 0x55d025223200) operator()   moving buffer(0x55d024149748 space 0x55d02406b050 0x0~1000 clean)
Mar  1 04:44:57 np0005634532 ceph-osd[84309]: bluestore(/var/lib/ceph/osd/ceph-0).collection(9.0_head 0x55d025223200) operator()   moving buffer(0x55d024127568 space 0x55d0240057a0 0x0~1000 clean)
Mar  1 04:44:57 np0005634532 ceph-osd[84309]: bluestore(/var/lib/ceph/osd/ceph-0).collection(9.0_head 0x55d025223200) operator()   moving buffer(0x55d023e94fc8 space 0x55d0241a7390 0x0~1000 clean)
Mar  1 04:44:57 np0005634532 ceph-osd[84309]: bluestore(/var/lib/ceph/osd/ceph-0).collection(9.0_head 0x55d025223200) operator()   moving buffer(0x55d024126fc8 space 0x55d024005600 0x0~1000 clean)
Mar  1 04:44:57 np0005634532 ceph-osd[84309]: bluestore(/var/lib/ceph/osd/ceph-0).collection(9.0_head 0x55d025223200) operator()   moving buffer(0x55d023f4aac8 space 0x55d02406bae0 0x0~1000 clean)
Mar  1 04:44:57 np0005634532 ceph-osd[84309]: bluestore(/var/lib/ceph/osd/ceph-0).collection(9.0_head 0x55d025223200) operator()   moving buffer(0x55d024149568 space 0x55d023ff80e0 0x0~1000 clean)
Mar  1 04:44:57 np0005634532 ceph-osd[84309]: bluestore(/var/lib/ceph/osd/ceph-0).collection(9.0_head 0x55d025223200) operator()   moving buffer(0x55d0241485c8 space 0x55d024005870 0x0~1000 clean)
Mar  1 04:44:57 np0005634532 ceph-osd[84309]: bluestore(/var/lib/ceph/osd/ceph-0).collection(9.0_head 0x55d025223200) operator()   moving buffer(0x55d024130b68 space 0x55d02406a4f0 0x0~1000 clean)
Mar  1 04:44:57 np0005634532 ceph-osd[84309]: bluestore(/var/lib/ceph/osd/ceph-0).collection(9.0_head 0x55d025223200) operator()   moving buffer(0x55d02419ba68 space 0x55d02406b2c0 0x0~1000 clean)
Mar  1 04:44:57 np0005634532 ceph-osd[84309]: bluestore(/var/lib/ceph/osd/ceph-0).collection(9.0_head 0x55d025223200) operator()   moving buffer(0x55d0241319c8 space 0x55d02406b6d0 0x0~1000 clean)
Mar  1 04:44:57 np0005634532 ceph-osd[84309]: bluestore(/var/lib/ceph/osd/ceph-0).collection(9.0_head 0x55d025223200) operator()   moving buffer(0x55d024126988 space 0x55d024005530 0x0~1000 clean)
Mar  1 04:44:57 np0005634532 ceph-osd[84309]: bluestore(/var/lib/ceph/osd/ceph-0).collection(9.0_head 0x55d025223200) operator()   moving buffer(0x55d023f057e8 space 0x55d0241a72c0 0x0~1000 clean)
Mar  1 04:44:57 np0005634532 ceph-osd[84309]: bluestore(/var/lib/ceph/osd/ceph-0).collection(9.0_head 0x55d025223200) operator()   moving buffer(0x55d02415f748 space 0x55d0241a71f0 0x0~1000 clean)
Mar  1 04:44:57 np0005634532 ceph-osd[84309]: bluestore(/var/lib/ceph/osd/ceph-0).collection(9.0_head 0x55d025223200) operator()   moving buffer(0x55d024148988 space 0x55d023ff8c40 0x0~1000 clean)
Mar  1 04:44:57 np0005634532 ceph-osd[84309]: bluestore(/var/lib/ceph/osd/ceph-0).collection(9.0_head 0x55d025223200) operator()   moving buffer(0x55d024126168 space 0x55d023e931f0 0x0~1000 clean)
Mar  1 04:44:57 np0005634532 ceph-osd[84309]: bluestore(/var/lib/ceph/osd/ceph-0).collection(9.0_head 0x55d025223200) operator()   moving buffer(0x55d024131b08 space 0x55d02406aeb0 0x0~1000 clean)
Mar  1 04:44:57 np0005634532 ceph-osd[84309]: bluestore(/var/lib/ceph/osd/ceph-0).collection(9.0_head 0x55d025223200) operator()   moving buffer(0x55d0241483e8 space 0x55d023ff8de0 0x0~1000 clean)
Mar  1 04:44:57 np0005634532 ceph-osd[84309]: bluestore(/var/lib/ceph/osd/ceph-0).collection(9.0_head 0x55d025223200) operator()   moving buffer(0x55d024168668 space 0x55d02406b940 0x0~1000 clean)
Mar  1 04:44:57 np0005634532 ceph-osd[84309]: bluestore(/var/lib/ceph/osd/ceph-0).collection(9.0_head 0x55d025223200) operator()   moving buffer(0x55d024149b08 space 0x55d02406a690 0x0~1000 clean)
Mar  1 04:44:57 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 52 pg[8.5( v 34'9 lc 0'0 (0'0,34'9] local-lis/les=33/34 n=1 ec=50/33 lis/c=33/33 les/c/f=34/34/0 sis=50) [0] r=0 lpr=50 pi=[33,50)/1 crt=34'9 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Mar  1 04:44:57 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 52 pg[8.6( v 34'9 lc 0'0 (0'0,34'9] local-lis/les=33/34 n=1 ec=50/33 lis/c=33/33 les/c/f=34/34/0 sis=50) [0] r=0 lpr=50 pi=[33,50)/1 crt=34'9 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Mar  1 04:44:57 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 52 pg[8.2( v 34'9 lc 0'0 (0'0,34'9] local-lis/les=33/34 n=1 ec=50/33 lis/c=33/33 les/c/f=34/34/0 sis=50) [0] r=0 lpr=50 pi=[33,50)/1 crt=34'9 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Mar  1 04:44:57 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 52 pg[8.3( v 34'9 lc 0'0 (0'0,34'9] local-lis/les=33/34 n=1 ec=50/33 lis/c=33/33 les/c/f=34/34/0 sis=50) [0] r=0 lpr=50 pi=[33,50)/1 crt=34'9 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Mar  1 04:44:57 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 52 pg[8.1( v 34'9 (0'0,34'9] local-lis/les=33/34 n=1 ec=50/33 lis/c=33/33 les/c/f=34/34/0 sis=50) [0] r=0 lpr=50 pi=[33,50)/1 crt=34'9 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Mar  1 04:44:57 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 52 pg[8.9( v 34'9 lc 0'0 (0'0,34'9] local-lis/les=33/34 n=0 ec=50/33 lis/c=33/33 les/c/f=34/34/0 sis=50) [0] r=0 lpr=50 pi=[33,50)/1 crt=34'9 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Mar  1 04:44:57 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 52 pg[8.7( v 34'9 lc 0'0 (0'0,34'9] local-lis/les=33/34 n=0 ec=50/33 lis/c=33/33 les/c/f=34/34/0 sis=50) [0] r=0 lpr=50 pi=[33,50)/1 crt=34'9 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Mar  1 04:44:57 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 52 pg[8.8( v 34'9 lc 0'0 (0'0,34'9] local-lis/les=33/34 n=0 ec=50/33 lis/c=33/33 les/c/f=34/34/0 sis=50) [0] r=0 lpr=50 pi=[33,50)/1 crt=34'9 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Mar  1 04:44:57 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 52 pg[8.4( v 34'9 lc 0'0 (0'0,34'9] local-lis/les=33/34 n=1 ec=50/33 lis/c=33/33 les/c/f=34/34/0 sis=50) [0] r=0 lpr=50 pi=[33,50)/1 crt=34'9 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Mar  1 04:44:57 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 52 pg[8.14( v 34'9 lc 0'0 (0'0,34'9] local-lis/les=33/34 n=0 ec=50/33 lis/c=33/33 les/c/f=34/34/0 sis=50) [0] r=0 lpr=50 pi=[33,50)/1 crt=34'9 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Mar  1 04:44:57 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 52 pg[8.15( v 34'9 lc 0'0 (0'0,34'9] local-lis/les=33/34 n=0 ec=50/33 lis/c=33/33 les/c/f=34/34/0 sis=50) [0] r=0 lpr=50 pi=[33,50)/1 crt=34'9 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Mar  1 04:44:57 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 52 pg[8.16( v 34'9 lc 0'0 (0'0,34'9] local-lis/les=33/34 n=0 ec=50/33 lis/c=33/33 les/c/f=34/34/0 sis=50) [0] r=0 lpr=50 pi=[33,50)/1 crt=34'9 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Mar  1 04:44:57 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 52 pg[8.17( v 34'9 lc 0'0 (0'0,34'9] local-lis/les=33/34 n=0 ec=50/33 lis/c=33/33 les/c/f=34/34/0 sis=50) [0] r=0 lpr=50 pi=[33,50)/1 crt=34'9 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Mar  1 04:44:57 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 52 pg[8.18( v 34'9 lc 0'0 (0'0,34'9] local-lis/les=33/34 n=0 ec=50/33 lis/c=33/33 les/c/f=34/34/0 sis=50) [0] r=0 lpr=50 pi=[33,50)/1 crt=34'9 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Mar  1 04:44:57 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 52 pg[8.19( v 34'9 lc 0'0 (0'0,34'9] local-lis/les=33/34 n=0 ec=50/33 lis/c=33/33 les/c/f=34/34/0 sis=50) [0] r=0 lpr=50 pi=[33,50)/1 crt=34'9 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Mar  1 04:44:57 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 52 pg[8.1b( v 34'9 lc 0'0 (0'0,34'9] local-lis/les=33/34 n=0 ec=50/33 lis/c=33/33 les/c/f=34/34/0 sis=50) [0] r=0 lpr=50 pi=[33,50)/1 crt=34'9 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Mar  1 04:44:57 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 52 pg[8.1d( v 34'9 lc 0'0 (0'0,34'9] local-lis/les=33/34 n=0 ec=50/33 lis/c=33/33 les/c/f=34/34/0 sis=50) [0] r=0 lpr=50 pi=[33,50)/1 crt=34'9 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Mar  1 04:44:57 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 52 pg[8.1c( v 34'9 lc 0'0 (0'0,34'9] local-lis/les=33/34 n=0 ec=50/33 lis/c=33/33 les/c/f=34/34/0 sis=50) [0] r=0 lpr=50 pi=[33,50)/1 crt=34'9 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Mar  1 04:44:57 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 52 pg[8.1e( v 34'9 lc 0'0 (0'0,34'9] local-lis/les=33/34 n=0 ec=50/33 lis/c=33/33 les/c/f=34/34/0 sis=50) [0] r=0 lpr=50 pi=[33,50)/1 crt=34'9 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Mar  1 04:44:57 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 52 pg[8.1f( v 34'9 lc 0'0 (0'0,34'9] local-lis/les=33/34 n=0 ec=50/33 lis/c=33/33 les/c/f=34/34/0 sis=50) [0] r=0 lpr=50 pi=[33,50)/1 crt=34'9 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Mar  1 04:44:57 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 52 pg[8.a( v 34'9 lc 0'0 (0'0,34'9] local-lis/les=33/34 n=0 ec=50/33 lis/c=33/33 les/c/f=34/34/0 sis=50) [0] r=0 lpr=50 pi=[33,50)/1 crt=34'9 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Mar  1 04:44:57 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 52 pg[8.b( v 34'9 lc 0'0 (0'0,34'9] local-lis/les=33/34 n=0 ec=50/33 lis/c=33/33 les/c/f=34/34/0 sis=50) [0] r=0 lpr=50 pi=[33,50)/1 crt=34'9 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Mar  1 04:44:57 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 52 pg[8.c( v 34'9 lc 0'0 (0'0,34'9] local-lis/les=33/34 n=0 ec=50/33 lis/c=33/33 les/c/f=34/34/0 sis=50) [0] r=0 lpr=50 pi=[33,50)/1 crt=34'9 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Mar  1 04:44:57 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 52 pg[8.d( v 34'9 lc 0'0 (0'0,34'9] local-lis/les=33/34 n=0 ec=50/33 lis/c=33/33 les/c/f=34/34/0 sis=50) [0] r=0 lpr=50 pi=[33,50)/1 crt=34'9 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Mar  1 04:44:57 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 52 pg[8.e( v 34'9 lc 0'0 (0'0,34'9] local-lis/les=33/34 n=0 ec=50/33 lis/c=33/33 les/c/f=34/34/0 sis=50) [0] r=0 lpr=50 pi=[33,50)/1 crt=34'9 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Mar  1 04:44:57 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 52 pg[8.f( v 34'9 lc 0'0 (0'0,34'9] local-lis/les=33/34 n=0 ec=50/33 lis/c=33/33 les/c/f=34/34/0 sis=50) [0] r=0 lpr=50 pi=[33,50)/1 crt=34'9 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Mar  1 04:44:57 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 52 pg[8.10( v 34'9 lc 0'0 (0'0,34'9] local-lis/les=33/34 n=0 ec=50/33 lis/c=33/33 les/c/f=34/34/0 sis=50) [0] r=0 lpr=50 pi=[33,50)/1 crt=34'9 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Mar  1 04:44:57 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 52 pg[8.11( v 34'9 lc 0'0 (0'0,34'9] local-lis/les=33/34 n=0 ec=50/33 lis/c=33/33 les/c/f=34/34/0 sis=50) [0] r=0 lpr=50 pi=[33,50)/1 crt=34'9 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Mar  1 04:44:57 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 52 pg[8.12( v 34'9 lc 0'0 (0'0,34'9] local-lis/les=33/34 n=0 ec=50/33 lis/c=33/33 les/c/f=34/34/0 sis=50) [0] r=0 lpr=50 pi=[33,50)/1 crt=34'9 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Mar  1 04:44:57 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 52 pg[8.13( v 34'9 lc 0'0 (0'0,34'9] local-lis/les=33/34 n=0 ec=50/33 lis/c=33/33 les/c/f=34/34/0 sis=50) [0] r=0 lpr=50 pi=[33,50)/1 crt=34'9 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Mar  1 04:44:57 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 52 pg[8.1a( v 34'9 lc 0'0 (0'0,34'9] local-lis/les=33/34 n=0 ec=50/33 lis/c=33/33 les/c/f=34/34/0 sis=50) [0] r=0 lpr=50 pi=[33,50)/1 crt=34'9 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Mar  1 04:44:57 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[98708]: 01/03/2026 09:44:57 : epoch 69a40a75 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f450c0035d0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:44:57 np0005634532 podman[100093]: 2026-03-01 09:44:57.151337114 +0000 UTC m=+6.110816981 container create b437baacb0d6e6a21b831a1b4e347219bad3a41a3db69bfd30a44e61635071d6 (image=quay.io/ceph/grafana:10.4.0, name=xenodochial_liskov, maintainer=Grafana Labs <hello@grafana.com>)
Mar  1 04:44:57 np0005634532 podman[100093]: 2026-03-01 09:44:57.127911274 +0000 UTC m=+6.087391181 image pull c8b91775d855b99270fc5d22f3c6737e8cca01ef4c25c8b0362295e0746fa39b quay.io/ceph/grafana:10.4.0
Mar  1 04:44:57 np0005634532 systemd[1]: Started libpod-conmon-b437baacb0d6e6a21b831a1b4e347219bad3a41a3db69bfd30a44e61635071d6.scope.
Mar  1 04:44:57 np0005634532 systemd[1]: Started libcrun container.
Mar  1 04:44:57 np0005634532 podman[100093]: 2026-03-01 09:44:57.218755271 +0000 UTC m=+6.178235118 container init b437baacb0d6e6a21b831a1b4e347219bad3a41a3db69bfd30a44e61635071d6 (image=quay.io/ceph/grafana:10.4.0, name=xenodochial_liskov, maintainer=Grafana Labs <hello@grafana.com>)
Mar  1 04:44:57 np0005634532 podman[100093]: 2026-03-01 09:44:57.225684315 +0000 UTC m=+6.185164182 container start b437baacb0d6e6a21b831a1b4e347219bad3a41a3db69bfd30a44e61635071d6 (image=quay.io/ceph/grafana:10.4.0, name=xenodochial_liskov, maintainer=Grafana Labs <hello@grafana.com>)
Mar  1 04:44:57 np0005634532 podman[100093]: 2026-03-01 09:44:57.229190883 +0000 UTC m=+6.188670750 container attach b437baacb0d6e6a21b831a1b4e347219bad3a41a3db69bfd30a44e61635071d6 (image=quay.io/ceph/grafana:10.4.0, name=xenodochial_liskov, maintainer=Grafana Labs <hello@grafana.com>)
Mar  1 04:44:57 np0005634532 xenodochial_liskov[100315]: 472 0
Mar  1 04:44:57 np0005634532 systemd[1]: libpod-b437baacb0d6e6a21b831a1b4e347219bad3a41a3db69bfd30a44e61635071d6.scope: Deactivated successfully.
Mar  1 04:44:57 np0005634532 podman[100093]: 2026-03-01 09:44:57.230298711 +0000 UTC m=+6.189778558 container died b437baacb0d6e6a21b831a1b4e347219bad3a41a3db69bfd30a44e61635071d6 (image=quay.io/ceph/grafana:10.4.0, name=xenodochial_liskov, maintainer=Grafana Labs <hello@grafana.com>)
Mar  1 04:44:57 np0005634532 systemd[1]: var-lib-containers-storage-overlay-875331cb2aecb5453f94b69435c739a2270365e15beeddcb9a6b531f5e6c803b-merged.mount: Deactivated successfully.
Mar  1 04:44:57 np0005634532 podman[100093]: 2026-03-01 09:44:57.282817563 +0000 UTC m=+6.242297400 container remove b437baacb0d6e6a21b831a1b4e347219bad3a41a3db69bfd30a44e61635071d6 (image=quay.io/ceph/grafana:10.4.0, name=xenodochial_liskov, maintainer=Grafana Labs <hello@grafana.com>)
Mar  1 04:44:57 np0005634532 systemd[1]: libpod-conmon-b437baacb0d6e6a21b831a1b4e347219bad3a41a3db69bfd30a44e61635071d6.scope: Deactivated successfully.
Mar  1 04:44:57 np0005634532 podman[100332]: 2026-03-01 09:44:57.375227079 +0000 UTC m=+0.066822353 container create 19256fabe63dc068ef5dcb496f999922161b79d711a6f0668bc934e285c80b7f (image=quay.io/ceph/grafana:10.4.0, name=inspiring_greider, maintainer=Grafana Labs <hello@grafana.com>)
Mar  1 04:44:57 np0005634532 systemd[1]: Started libpod-conmon-19256fabe63dc068ef5dcb496f999922161b79d711a6f0668bc934e285c80b7f.scope.
Mar  1 04:44:57 np0005634532 systemd[1]: Started libcrun container.
Mar  1 04:44:57 np0005634532 podman[100332]: 2026-03-01 09:44:57.347116392 +0000 UTC m=+0.038711676 image pull c8b91775d855b99270fc5d22f3c6737e8cca01ef4c25c8b0362295e0746fa39b quay.io/ceph/grafana:10.4.0
Mar  1 04:44:57 np0005634532 podman[100332]: 2026-03-01 09:44:57.454296229 +0000 UTC m=+0.145891503 container init 19256fabe63dc068ef5dcb496f999922161b79d711a6f0668bc934e285c80b7f (image=quay.io/ceph/grafana:10.4.0, name=inspiring_greider, maintainer=Grafana Labs <hello@grafana.com>)
Mar  1 04:44:57 np0005634532 podman[100332]: 2026-03-01 09:44:57.461155682 +0000 UTC m=+0.152750946 container start 19256fabe63dc068ef5dcb496f999922161b79d711a6f0668bc934e285c80b7f (image=quay.io/ceph/grafana:10.4.0, name=inspiring_greider, maintainer=Grafana Labs <hello@grafana.com>)
Mar  1 04:44:57 np0005634532 podman[100332]: 2026-03-01 09:44:57.464384093 +0000 UTC m=+0.155979367 container attach 19256fabe63dc068ef5dcb496f999922161b79d711a6f0668bc934e285c80b7f (image=quay.io/ceph/grafana:10.4.0, name=inspiring_greider, maintainer=Grafana Labs <hello@grafana.com>)
Mar  1 04:44:57 np0005634532 inspiring_greider[100348]: 472 0
Mar  1 04:44:57 np0005634532 systemd[1]: libpod-19256fabe63dc068ef5dcb496f999922161b79d711a6f0668bc934e285c80b7f.scope: Deactivated successfully.
Mar  1 04:44:57 np0005634532 podman[100332]: 2026-03-01 09:44:57.465574903 +0000 UTC m=+0.157170167 container died 19256fabe63dc068ef5dcb496f999922161b79d711a6f0668bc934e285c80b7f (image=quay.io/ceph/grafana:10.4.0, name=inspiring_greider, maintainer=Grafana Labs <hello@grafana.com>)
Mar  1 04:44:57 np0005634532 systemd[1]: var-lib-containers-storage-overlay-1eed5a2bec8550ac9392c0416f29878a03b9601fef141448b2b609bc2ef9879f-merged.mount: Deactivated successfully.
Mar  1 04:44:57 np0005634532 podman[100332]: 2026-03-01 09:44:57.504331349 +0000 UTC m=+0.195926613 container remove 19256fabe63dc068ef5dcb496f999922161b79d711a6f0668bc934e285c80b7f (image=quay.io/ceph/grafana:10.4.0, name=inspiring_greider, maintainer=Grafana Labs <hello@grafana.com>)
Mar  1 04:44:57 np0005634532 systemd[1]: libpod-conmon-19256fabe63dc068ef5dcb496f999922161b79d711a6f0668bc934e285c80b7f.scope: Deactivated successfully.
Mar  1 04:44:57 np0005634532 systemd[1]: Reloading.
Mar  1 04:44:57 np0005634532 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Mar  1 04:44:57 np0005634532 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Mar  1 04:44:57 np0005634532 systemd[1]: Reloading.
Mar  1 04:44:57 np0005634532 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Mar  1 04:44:57 np0005634532 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Mar  1 04:44:57 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e52 do_prune osdmap full prune enabled
Mar  1 04:44:57 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' cmd='[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num", "val": "32"}]': finished
Mar  1 04:44:57 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e53 e53: 3 total, 3 up, 3 in
Mar  1 04:44:57 np0005634532 ceph-mon[75825]: log_channel(cluster) log [DBG] : osdmap e53: 3 total, 3 up, 3 in
Mar  1 04:44:57 np0005634532 ceph-mgr[76134]: [progress INFO root] update: starting ev 9a1d9d22-ac55-47eb-95a6-de4b21ef3016 (PG autoscaler increasing pool 12 PGs from 1 to 32)
Mar  1 04:44:57 np0005634532 ceph-mgr[76134]: [progress INFO root] complete: finished ev eec2c686-d262-4b84-8d28-88e88b949505 (PG autoscaler increasing pool 8 PGs from 1 to 32)
Mar  1 04:44:57 np0005634532 ceph-mgr[76134]: [progress INFO root] Completed event eec2c686-d262-4b84-8d28-88e88b949505 (PG autoscaler increasing pool 8 PGs from 1 to 32) in 5 seconds
Mar  1 04:44:57 np0005634532 ceph-mgr[76134]: [progress INFO root] complete: finished ev 34adb3a8-f3d1-4879-8d35-c6b84757a3dd (PG autoscaler increasing pool 9 PGs from 1 to 32)
Mar  1 04:44:57 np0005634532 ceph-mgr[76134]: [progress INFO root] Completed event 34adb3a8-f3d1-4879-8d35-c6b84757a3dd (PG autoscaler increasing pool 9 PGs from 1 to 32) in 3 seconds
Mar  1 04:44:57 np0005634532 ceph-mgr[76134]: [progress INFO root] complete: finished ev e4ed44bd-481a-4236-80da-78d3673aa257 (PG autoscaler increasing pool 10 PGs from 1 to 32)
Mar  1 04:44:57 np0005634532 ceph-mgr[76134]: [progress INFO root] Completed event e4ed44bd-481a-4236-80da-78d3673aa257 (PG autoscaler increasing pool 10 PGs from 1 to 32) in 2 seconds
Mar  1 04:44:57 np0005634532 ceph-mgr[76134]: [progress INFO root] complete: finished ev 876e188a-d172-4ddb-8f93-76ba245b11cc (PG autoscaler increasing pool 11 PGs from 1 to 32)
Mar  1 04:44:57 np0005634532 ceph-mgr[76134]: [progress INFO root] Completed event 876e188a-d172-4ddb-8f93-76ba245b11cc (PG autoscaler increasing pool 11 PGs from 1 to 32) in 1 seconds
Mar  1 04:44:57 np0005634532 ceph-mgr[76134]: [progress INFO root] complete: finished ev 9a1d9d22-ac55-47eb-95a6-de4b21ef3016 (PG autoscaler increasing pool 12 PGs from 1 to 32)
Mar  1 04:44:57 np0005634532 ceph-mgr[76134]: [progress INFO root] Completed event 9a1d9d22-ac55-47eb-95a6-de4b21ef3016 (PG autoscaler increasing pool 12 PGs from 1 to 32) in 0 seconds
Mar  1 04:44:57 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 53 pg[9.15( v 42'1010 lc 0'0 (0'0,42'1010] local-lis/les=35/36 n=5 ec=52/35 lis/c=35/35 les/c/f=36/36/0 sis=52) [0] r=0 lpr=52 pi=[35,52)/1 crt=42'1010 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Mar  1 04:44:57 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 53 pg[9.14( v 42'1010 lc 0'0 (0'0,42'1010] local-lis/les=35/36 n=5 ec=52/35 lis/c=35/35 les/c/f=36/36/0 sis=52) [0] r=0 lpr=52 pi=[35,52)/1 crt=42'1010 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Mar  1 04:44:57 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 53 pg[9.16( v 42'1010 lc 0'0 (0'0,42'1010] local-lis/les=35/36 n=5 ec=52/35 lis/c=35/35 les/c/f=36/36/0 sis=52) [0] r=0 lpr=52 pi=[35,52)/1 crt=42'1010 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Mar  1 04:44:57 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 53 pg[9.11( v 42'1010 lc 0'0 (0'0,42'1010] local-lis/les=35/36 n=6 ec=52/35 lis/c=35/35 les/c/f=36/36/0 sis=52) [0] r=0 lpr=52 pi=[35,52)/1 crt=42'1010 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Mar  1 04:44:57 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 53 pg[9.10( v 42'1010 lc 0'0 (0'0,42'1010] local-lis/les=35/36 n=6 ec=52/35 lis/c=35/35 les/c/f=36/36/0 sis=52) [0] r=0 lpr=52 pi=[35,52)/1 crt=42'1010 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Mar  1 04:44:57 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 53 pg[9.17( v 42'1010 lc 0'0 (0'0,42'1010] local-lis/les=35/36 n=5 ec=52/35 lis/c=35/35 les/c/f=36/36/0 sis=52) [0] r=0 lpr=52 pi=[35,52)/1 crt=42'1010 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Mar  1 04:44:57 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 53 pg[9.3( v 42'1010 lc 0'0 (0'0,42'1010] local-lis/les=35/36 n=6 ec=52/35 lis/c=35/35 les/c/f=36/36/0 sis=52) [0] r=0 lpr=52 pi=[35,52)/1 crt=42'1010 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Mar  1 04:44:57 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 53 pg[9.2( v 42'1010 lc 0'0 (0'0,42'1010] local-lis/les=35/36 n=6 ec=52/35 lis/c=35/35 les/c/f=36/36/0 sis=52) [0] r=0 lpr=52 pi=[35,52)/1 crt=42'1010 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Mar  1 04:44:57 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 53 pg[9.e( v 42'1010 lc 0'0 (0'0,42'1010] local-lis/les=35/36 n=6 ec=52/35 lis/c=35/35 les/c/f=36/36/0 sis=52) [0] r=0 lpr=52 pi=[35,52)/1 crt=42'1010 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Mar  1 04:44:57 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 53 pg[9.9( v 42'1010 lc 0'0 (0'0,42'1010] local-lis/les=35/36 n=6 ec=52/35 lis/c=35/35 les/c/f=36/36/0 sis=52) [0] r=0 lpr=52 pi=[35,52)/1 crt=42'1010 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Mar  1 04:44:57 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 53 pg[9.8( v 42'1010 lc 0'0 (0'0,42'1010] local-lis/les=35/36 n=6 ec=52/35 lis/c=35/35 les/c/f=36/36/0 sis=52) [0] r=0 lpr=52 pi=[35,52)/1 crt=42'1010 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Mar  1 04:44:57 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 53 pg[9.b( v 42'1010 lc 0'0 (0'0,42'1010] local-lis/les=35/36 n=6 ec=52/35 lis/c=35/35 les/c/f=36/36/0 sis=52) [0] r=0 lpr=52 pi=[35,52)/1 crt=42'1010 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Mar  1 04:44:57 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 53 pg[9.f( v 42'1010 lc 0'0 (0'0,42'1010] local-lis/les=35/36 n=6 ec=52/35 lis/c=35/35 les/c/f=36/36/0 sis=52) [0] r=0 lpr=52 pi=[35,52)/1 crt=42'1010 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Mar  1 04:44:57 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 53 pg[9.c( v 42'1010 lc 0'0 (0'0,42'1010] local-lis/les=35/36 n=6 ec=52/35 lis/c=35/35 les/c/f=36/36/0 sis=52) [0] r=0 lpr=52 pi=[35,52)/1 crt=42'1010 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Mar  1 04:44:57 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 53 pg[9.d( v 42'1010 lc 0'0 (0'0,42'1010] local-lis/les=35/36 n=6 ec=52/35 lis/c=35/35 les/c/f=36/36/0 sis=52) [0] r=0 lpr=52 pi=[35,52)/1 crt=42'1010 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Mar  1 04:44:57 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 53 pg[9.a( v 42'1010 lc 0'0 (0'0,42'1010] local-lis/les=35/36 n=6 ec=52/35 lis/c=35/35 les/c/f=36/36/0 sis=52) [0] r=0 lpr=52 pi=[35,52)/1 crt=42'1010 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Mar  1 04:44:57 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 53 pg[9.1( v 42'1010 lc 0'0 (0'0,42'1010] local-lis/les=35/36 n=6 ec=52/35 lis/c=35/35 les/c/f=36/36/0 sis=52) [0] r=0 lpr=52 pi=[35,52)/1 crt=42'1010 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Mar  1 04:44:57 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 53 pg[9.6( v 42'1010 lc 0'0 (0'0,42'1010] local-lis/les=35/36 n=6 ec=52/35 lis/c=35/35 les/c/f=36/36/0 sis=52) [0] r=0 lpr=52 pi=[35,52)/1 crt=42'1010 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Mar  1 04:44:57 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 53 pg[9.7( v 42'1010 lc 0'0 (0'0,42'1010] local-lis/les=35/36 n=6 ec=52/35 lis/c=35/35 les/c/f=36/36/0 sis=52) [0] r=0 lpr=52 pi=[35,52)/1 crt=42'1010 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Mar  1 04:44:57 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 53 pg[9.4( v 42'1010 lc 0'0 (0'0,42'1010] local-lis/les=35/36 n=6 ec=52/35 lis/c=35/35 les/c/f=36/36/0 sis=52) [0] r=0 lpr=52 pi=[35,52)/1 crt=42'1010 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Mar  1 04:44:57 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 53 pg[9.5( v 42'1010 lc 0'0 (0'0,42'1010] local-lis/les=35/36 n=6 ec=52/35 lis/c=35/35 les/c/f=36/36/0 sis=52) [0] r=0 lpr=52 pi=[35,52)/1 crt=42'1010 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Mar  1 04:44:57 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 53 pg[9.1a( v 42'1010 lc 0'0 (0'0,42'1010] local-lis/les=35/36 n=5 ec=52/35 lis/c=35/35 les/c/f=36/36/0 sis=52) [0] r=0 lpr=52 pi=[35,52)/1 crt=42'1010 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Mar  1 04:44:57 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 53 pg[9.1b( v 42'1010 lc 0'0 (0'0,42'1010] local-lis/les=35/36 n=5 ec=52/35 lis/c=35/35 les/c/f=36/36/0 sis=52) [0] r=0 lpr=52 pi=[35,52)/1 crt=42'1010 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Mar  1 04:44:57 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 53 pg[9.18( v 42'1010 lc 0'0 (0'0,42'1010] local-lis/les=35/36 n=5 ec=52/35 lis/c=35/35 les/c/f=36/36/0 sis=52) [0] r=0 lpr=52 pi=[35,52)/1 crt=42'1010 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Mar  1 04:44:57 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 53 pg[9.19( v 42'1010 lc 0'0 (0'0,42'1010] local-lis/les=35/36 n=5 ec=52/35 lis/c=35/35 les/c/f=36/36/0 sis=52) [0] r=0 lpr=52 pi=[35,52)/1 crt=42'1010 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Mar  1 04:44:57 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 53 pg[9.1e( v 42'1010 lc 0'0 (0'0,42'1010] local-lis/les=35/36 n=5 ec=52/35 lis/c=35/35 les/c/f=36/36/0 sis=52) [0] r=0 lpr=52 pi=[35,52)/1 crt=42'1010 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Mar  1 04:44:57 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 53 pg[9.1f( v 42'1010 lc 0'0 (0'0,42'1010] local-lis/les=35/36 n=5 ec=52/35 lis/c=35/35 les/c/f=36/36/0 sis=52) [0] r=0 lpr=52 pi=[35,52)/1 crt=42'1010 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Mar  1 04:44:57 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 53 pg[9.1c( v 42'1010 lc 0'0 (0'0,42'1010] local-lis/les=35/36 n=5 ec=52/35 lis/c=35/35 les/c/f=36/36/0 sis=52) [0] r=0 lpr=52 pi=[35,52)/1 crt=42'1010 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Mar  1 04:44:57 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 53 pg[9.1d( v 42'1010 lc 0'0 (0'0,42'1010] local-lis/les=35/36 n=5 ec=52/35 lis/c=35/35 les/c/f=36/36/0 sis=52) [0] r=0 lpr=52 pi=[35,52)/1 crt=42'1010 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Mar  1 04:44:57 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 53 pg[9.12( v 42'1010 lc 0'0 (0'0,42'1010] local-lis/les=35/36 n=6 ec=52/35 lis/c=35/35 les/c/f=36/36/0 sis=52) [0] r=0 lpr=52 pi=[35,52)/1 crt=42'1010 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Mar  1 04:44:57 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 53 pg[9.13( v 42'1010 lc 0'0 (0'0,42'1010] local-lis/les=35/36 n=5 ec=52/35 lis/c=35/35 les/c/f=36/36/0 sis=52) [0] r=0 lpr=52 pi=[35,52)/1 crt=42'1010 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Mar  1 04:44:57 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 53 pg[8.14( v 34'9 (0'0,34'9] local-lis/les=50/53 n=0 ec=50/33 lis/c=33/33 les/c/f=34/34/0 sis=50) [0] r=0 lpr=50 pi=[33,50)/1 crt=34'9 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Mar  1 04:44:57 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 53 pg[8.15( v 34'9 (0'0,34'9] local-lis/les=50/53 n=0 ec=50/33 lis/c=33/33 les/c/f=34/34/0 sis=50) [0] r=0 lpr=50 pi=[33,50)/1 crt=34'9 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Mar  1 04:44:57 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 53 pg[8.17( v 34'9 (0'0,34'9] local-lis/les=50/53 n=0 ec=50/33 lis/c=33/33 les/c/f=34/34/0 sis=50) [0] r=0 lpr=50 pi=[33,50)/1 crt=34'9 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Mar  1 04:44:57 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 53 pg[9.15( v 42'1010 (0'0,42'1010] local-lis/les=52/53 n=5 ec=52/35 lis/c=35/35 les/c/f=36/36/0 sis=52) [0] r=0 lpr=52 pi=[35,52)/1 crt=42'1010 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Mar  1 04:44:57 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 53 pg[8.16( v 34'9 (0'0,34'9] local-lis/les=50/53 n=0 ec=50/33 lis/c=33/33 les/c/f=34/34/0 sis=50) [0] r=0 lpr=50 pi=[33,50)/1 crt=34'9 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Mar  1 04:44:57 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 53 pg[8.10( v 34'9 (0'0,34'9] local-lis/les=50/53 n=0 ec=50/33 lis/c=33/33 les/c/f=34/34/0 sis=50) [0] r=0 lpr=50 pi=[33,50)/1 crt=34'9 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Mar  1 04:44:57 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 53 pg[9.11( v 42'1010 (0'0,42'1010] local-lis/les=52/53 n=6 ec=52/35 lis/c=35/35 les/c/f=36/36/0 sis=52) [0] r=0 lpr=52 pi=[35,52)/1 crt=42'1010 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Mar  1 04:44:57 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 53 pg[9.14( v 42'1010 (0'0,42'1010] local-lis/les=52/53 n=5 ec=52/35 lis/c=35/35 les/c/f=36/36/0 sis=52) [0] r=0 lpr=52 pi=[35,52)/1 crt=42'1010 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Mar  1 04:44:57 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 53 pg[8.11( v 34'9 (0'0,34'9] local-lis/les=50/53 n=0 ec=50/33 lis/c=33/33 les/c/f=34/34/0 sis=50) [0] r=0 lpr=50 pi=[33,50)/1 crt=34'9 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Mar  1 04:44:57 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 53 pg[9.16( v 42'1010 (0'0,42'1010] local-lis/les=52/53 n=5 ec=52/35 lis/c=35/35 les/c/f=36/36/0 sis=52) [0] r=0 lpr=52 pi=[35,52)/1 crt=42'1010 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Mar  1 04:44:57 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 53 pg[8.2( v 34'9 (0'0,34'9] local-lis/les=50/53 n=1 ec=50/33 lis/c=33/33 les/c/f=34/34/0 sis=50) [0] r=0 lpr=50 pi=[33,50)/1 crt=34'9 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Mar  1 04:44:57 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 53 pg[9.10( v 42'1010 (0'0,42'1010] local-lis/les=52/53 n=6 ec=52/35 lis/c=35/35 les/c/f=36/36/0 sis=52) [0] r=0 lpr=52 pi=[35,52)/1 crt=42'1010 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Mar  1 04:44:57 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 53 pg[8.3( v 34'9 (0'0,34'9] local-lis/les=50/53 n=1 ec=50/33 lis/c=33/33 les/c/f=34/34/0 sis=50) [0] r=0 lpr=50 pi=[33,50)/1 crt=34'9 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Mar  1 04:44:57 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 53 pg[9.3( v 42'1010 (0'0,42'1010] local-lis/les=52/53 n=6 ec=52/35 lis/c=35/35 les/c/f=36/36/0 sis=52) [0] r=0 lpr=52 pi=[35,52)/1 crt=42'1010 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Mar  1 04:44:57 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 53 pg[9.2( v 42'1010 (0'0,42'1010] local-lis/les=52/53 n=6 ec=52/35 lis/c=35/35 les/c/f=36/36/0 sis=52) [0] r=0 lpr=52 pi=[35,52)/1 crt=42'1010 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Mar  1 04:44:57 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 53 pg[8.f( v 34'9 (0'0,34'9] local-lis/les=50/53 n=0 ec=50/33 lis/c=33/33 les/c/f=34/34/0 sis=50) [0] r=0 lpr=50 pi=[33,50)/1 crt=34'9 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Mar  1 04:44:57 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 53 pg[9.e( v 42'1010 (0'0,42'1010] local-lis/les=52/53 n=6 ec=52/35 lis/c=35/35 les/c/f=36/36/0 sis=52) [0] r=0 lpr=52 pi=[35,52)/1 crt=42'1010 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Mar  1 04:44:57 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 53 pg[8.9( v 34'9 (0'0,34'9] local-lis/les=50/53 n=0 ec=50/33 lis/c=33/33 les/c/f=34/34/0 sis=50) [0] r=0 lpr=50 pi=[33,50)/1 crt=34'9 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Mar  1 04:44:57 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 53 pg[9.9( v 42'1010 (0'0,42'1010] local-lis/les=52/53 n=6 ec=52/35 lis/c=35/35 les/c/f=36/36/0 sis=52) [0] r=0 lpr=52 pi=[35,52)/1 crt=42'1010 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Mar  1 04:44:57 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 53 pg[9.8( v 42'1010 (0'0,42'1010] local-lis/les=52/53 n=6 ec=52/35 lis/c=35/35 les/c/f=36/36/0 sis=52) [0] r=0 lpr=52 pi=[35,52)/1 crt=42'1010 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Mar  1 04:44:57 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 53 pg[9.b( v 42'1010 (0'0,42'1010] local-lis/les=52/53 n=6 ec=52/35 lis/c=35/35 les/c/f=36/36/0 sis=52) [0] r=0 lpr=52 pi=[35,52)/1 crt=42'1010 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Mar  1 04:44:57 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 53 pg[8.8( v 34'9 (0'0,34'9] local-lis/les=50/53 n=0 ec=50/33 lis/c=33/33 les/c/f=34/34/0 sis=50) [0] r=0 lpr=50 pi=[33,50)/1 crt=34'9 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Mar  1 04:44:57 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 53 pg[9.17( v 42'1010 (0'0,42'1010] local-lis/les=52/53 n=5 ec=52/35 lis/c=35/35 les/c/f=36/36/0 sis=52) [0] r=0 lpr=52 pi=[35,52)/1 crt=42'1010 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Mar  1 04:44:57 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 53 pg[8.a( v 34'9 (0'0,34'9] local-lis/les=50/53 n=0 ec=50/33 lis/c=33/33 les/c/f=34/34/0 sis=50) [0] r=0 lpr=50 pi=[33,50)/1 crt=34'9 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Mar  1 04:44:57 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 53 pg[8.e( v 34'9 (0'0,34'9] local-lis/les=50/53 n=0 ec=50/33 lis/c=33/33 les/c/f=34/34/0 sis=50) [0] r=0 lpr=50 pi=[33,50)/1 crt=34'9 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Mar  1 04:44:57 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 53 pg[8.d( v 34'9 (0'0,34'9] local-lis/les=50/53 n=0 ec=50/33 lis/c=33/33 les/c/f=34/34/0 sis=50) [0] r=0 lpr=50 pi=[33,50)/1 crt=34'9 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Mar  1 04:44:57 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 53 pg[8.c( v 34'9 (0'0,34'9] local-lis/les=50/53 n=0 ec=50/33 lis/c=33/33 les/c/f=34/34/0 sis=50) [0] r=0 lpr=50 pi=[33,50)/1 crt=34'9 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Mar  1 04:44:57 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 53 pg[9.c( v 42'1010 (0'0,42'1010] local-lis/les=52/53 n=6 ec=52/35 lis/c=35/35 les/c/f=36/36/0 sis=52) [0] r=0 lpr=52 pi=[35,52)/1 crt=42'1010 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Mar  1 04:44:57 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 53 pg[9.d( v 42'1010 (0'0,42'1010] local-lis/les=52/53 n=6 ec=52/35 lis/c=35/35 les/c/f=36/36/0 sis=52) [0] r=0 lpr=52 pi=[35,52)/1 crt=42'1010 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Mar  1 04:44:57 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 53 pg[9.f( v 42'1010 (0'0,42'1010] local-lis/les=52/53 n=6 ec=52/35 lis/c=35/35 les/c/f=36/36/0 sis=52) [0] r=0 lpr=52 pi=[35,52)/1 crt=42'1010 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Mar  1 04:44:57 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 53 pg[9.a( v 42'1010 (0'0,42'1010] local-lis/les=52/53 n=6 ec=52/35 lis/c=35/35 les/c/f=36/36/0 sis=52) [0] r=0 lpr=52 pi=[35,52)/1 crt=42'1010 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Mar  1 04:44:57 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 53 pg[8.b( v 34'9 (0'0,34'9] local-lis/les=50/53 n=0 ec=50/33 lis/c=33/33 les/c/f=34/34/0 sis=50) [0] r=0 lpr=50 pi=[33,50)/1 crt=34'9 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Mar  1 04:44:57 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 53 pg[8.1( v 34'9 (0'0,34'9] local-lis/les=50/53 n=1 ec=50/33 lis/c=33/33 les/c/f=34/34/0 sis=50) [0] r=0 lpr=50 pi=[33,50)/1 crt=34'9 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Mar  1 04:44:57 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 53 pg[8.0( v 34'9 (0'0,34'9] local-lis/les=50/53 n=0 ec=33/33 lis/c=33/33 les/c/f=34/34/0 sis=50) [0] r=0 lpr=50 pi=[33,50)/1 crt=34'9 lcod 34'8 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Mar  1 04:44:57 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 53 pg[9.0( v 42'1010 (0'0,42'1010] local-lis/les=52/53 n=5 ec=35/35 lis/c=35/35 les/c/f=36/36/0 sis=52) [0] r=0 lpr=52 pi=[35,52)/1 crt=42'1010 lcod 42'1009 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Mar  1 04:44:57 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 53 pg[9.6( v 42'1010 (0'0,42'1010] local-lis/les=52/53 n=6 ec=52/35 lis/c=35/35 les/c/f=36/36/0 sis=52) [0] r=0 lpr=52 pi=[35,52)/1 crt=42'1010 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Mar  1 04:44:57 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 53 pg[9.1( v 42'1010 (0'0,42'1010] local-lis/les=52/53 n=6 ec=52/35 lis/c=35/35 les/c/f=36/36/0 sis=52) [0] r=0 lpr=52 pi=[35,52)/1 crt=42'1010 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Mar  1 04:44:57 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 53 pg[8.7( v 34'9 (0'0,34'9] local-lis/les=50/53 n=0 ec=50/33 lis/c=33/33 les/c/f=34/34/0 sis=50) [0] r=0 lpr=50 pi=[33,50)/1 crt=34'9 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Mar  1 04:44:57 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 53 pg[8.5( v 34'9 (0'0,34'9] local-lis/les=50/53 n=1 ec=50/33 lis/c=33/33 les/c/f=34/34/0 sis=50) [0] r=0 lpr=50 pi=[33,50)/1 crt=34'9 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Mar  1 04:44:57 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 53 pg[8.4( v 34'9 (0'0,34'9] local-lis/les=50/53 n=1 ec=50/33 lis/c=33/33 les/c/f=34/34/0 sis=50) [0] r=0 lpr=50 pi=[33,50)/1 crt=34'9 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Mar  1 04:44:57 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 53 pg[9.7( v 42'1010 (0'0,42'1010] local-lis/les=52/53 n=6 ec=52/35 lis/c=35/35 les/c/f=36/36/0 sis=52) [0] r=0 lpr=52 pi=[35,52)/1 crt=42'1010 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Mar  1 04:44:57 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 53 pg[9.5( v 42'1010 (0'0,42'1010] local-lis/les=52/53 n=6 ec=52/35 lis/c=35/35 les/c/f=36/36/0 sis=52) [0] r=0 lpr=52 pi=[35,52)/1 crt=42'1010 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Mar  1 04:44:57 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 53 pg[8.1b( v 34'9 (0'0,34'9] local-lis/les=50/53 n=0 ec=50/33 lis/c=33/33 les/c/f=34/34/0 sis=50) [0] r=0 lpr=50 pi=[33,50)/1 crt=34'9 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Mar  1 04:44:57 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 53 pg[9.1a( v 42'1010 (0'0,42'1010] local-lis/les=52/53 n=5 ec=52/35 lis/c=35/35 les/c/f=36/36/0 sis=52) [0] r=0 lpr=52 pi=[35,52)/1 crt=42'1010 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Mar  1 04:44:57 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 53 pg[9.4( v 42'1010 (0'0,42'1010] local-lis/les=52/53 n=6 ec=52/35 lis/c=35/35 les/c/f=36/36/0 sis=52) [0] r=0 lpr=52 pi=[35,52)/1 crt=42'1010 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Mar  1 04:44:57 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 53 pg[8.1a( v 34'9 (0'0,34'9] local-lis/les=50/53 n=0 ec=50/33 lis/c=33/33 les/c/f=34/34/0 sis=50) [0] r=0 lpr=50 pi=[33,50)/1 crt=34'9 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Mar  1 04:44:57 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 53 pg[8.19( v 34'9 (0'0,34'9] local-lis/les=50/53 n=0 ec=50/33 lis/c=33/33 les/c/f=34/34/0 sis=50) [0] r=0 lpr=50 pi=[33,50)/1 crt=34'9 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Mar  1 04:44:57 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 53 pg[9.1b( v 42'1010 (0'0,42'1010] local-lis/les=52/53 n=5 ec=52/35 lis/c=35/35 les/c/f=36/36/0 sis=52) [0] r=0 lpr=52 pi=[35,52)/1 crt=42'1010 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Mar  1 04:44:57 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 53 pg[8.18( v 34'9 (0'0,34'9] local-lis/les=50/53 n=0 ec=50/33 lis/c=33/33 les/c/f=34/34/0 sis=50) [0] r=0 lpr=50 pi=[33,50)/1 crt=34'9 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Mar  1 04:44:57 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 53 pg[9.18( v 42'1010 (0'0,42'1010] local-lis/les=52/53 n=5 ec=52/35 lis/c=35/35 les/c/f=36/36/0 sis=52) [0] r=0 lpr=52 pi=[35,52)/1 crt=42'1010 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Mar  1 04:44:57 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 53 pg[9.19( v 42'1010 (0'0,42'1010] local-lis/les=52/53 n=5 ec=52/35 lis/c=35/35 les/c/f=36/36/0 sis=52) [0] r=0 lpr=52 pi=[35,52)/1 crt=42'1010 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Mar  1 04:44:57 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 53 pg[8.1f( v 34'9 (0'0,34'9] local-lis/les=50/53 n=0 ec=50/33 lis/c=33/33 les/c/f=34/34/0 sis=50) [0] r=0 lpr=50 pi=[33,50)/1 crt=34'9 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Mar  1 04:44:57 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 53 pg[8.6( v 34'9 (0'0,34'9] local-lis/les=50/53 n=1 ec=50/33 lis/c=33/33 les/c/f=34/34/0 sis=50) [0] r=0 lpr=50 pi=[33,50)/1 crt=34'9 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Mar  1 04:44:57 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 53 pg[8.1e( v 34'9 (0'0,34'9] local-lis/les=50/53 n=0 ec=50/33 lis/c=33/33 les/c/f=34/34/0 sis=50) [0] r=0 lpr=50 pi=[33,50)/1 crt=34'9 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Mar  1 04:44:57 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 53 pg[9.1e( v 42'1010 (0'0,42'1010] local-lis/les=52/53 n=5 ec=52/35 lis/c=35/35 les/c/f=36/36/0 sis=52) [0] r=0 lpr=52 pi=[35,52)/1 crt=42'1010 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Mar  1 04:44:57 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 53 pg[8.1d( v 34'9 (0'0,34'9] local-lis/les=50/53 n=0 ec=50/33 lis/c=33/33 les/c/f=34/34/0 sis=50) [0] r=0 lpr=50 pi=[33,50)/1 crt=34'9 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Mar  1 04:44:57 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 53 pg[8.1c( v 34'9 (0'0,34'9] local-lis/les=50/53 n=0 ec=50/33 lis/c=33/33 les/c/f=34/34/0 sis=50) [0] r=0 lpr=50 pi=[33,50)/1 crt=34'9 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Mar  1 04:44:57 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 53 pg[9.1c( v 42'1010 (0'0,42'1010] local-lis/les=52/53 n=5 ec=52/35 lis/c=35/35 les/c/f=36/36/0 sis=52) [0] r=0 lpr=52 pi=[35,52)/1 crt=42'1010 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Mar  1 04:44:57 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 53 pg[9.1d( v 42'1010 (0'0,42'1010] local-lis/les=52/53 n=5 ec=52/35 lis/c=35/35 les/c/f=36/36/0 sis=52) [0] r=0 lpr=52 pi=[35,52)/1 crt=42'1010 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Mar  1 04:44:57 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 53 pg[9.1f( v 42'1010 (0'0,42'1010] local-lis/les=52/53 n=5 ec=52/35 lis/c=35/35 les/c/f=36/36/0 sis=52) [0] r=0 lpr=52 pi=[35,52)/1 crt=42'1010 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Mar  1 04:44:57 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 53 pg[8.13( v 34'9 (0'0,34'9] local-lis/les=50/53 n=0 ec=50/33 lis/c=33/33 les/c/f=34/34/0 sis=50) [0] r=0 lpr=50 pi=[33,50)/1 crt=34'9 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Mar  1 04:44:57 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 53 pg[8.12( v 34'9 (0'0,34'9] local-lis/les=50/53 n=0 ec=50/33 lis/c=33/33 les/c/f=34/34/0 sis=50) [0] r=0 lpr=50 pi=[33,50)/1 crt=34'9 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Mar  1 04:44:57 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 53 pg[9.13( v 42'1010 (0'0,42'1010] local-lis/les=52/53 n=5 ec=52/35 lis/c=35/35 les/c/f=36/36/0 sis=52) [0] r=0 lpr=52 pi=[35,52)/1 crt=42'1010 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Mar  1 04:44:57 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 53 pg[9.12( v 42'1010 (0'0,42'1010] local-lis/les=52/53 n=6 ec=52/35 lis/c=35/35 les/c/f=36/36/0 sis=52) [0] r=0 lpr=52 pi=[35,52)/1 crt=42'1010 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Mar  1 04:44:57 np0005634532 ceph-mon[75825]: from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]': finished
Mar  1 04:44:57 np0005634532 ceph-mon[75825]: from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]': finished
Mar  1 04:44:57 np0005634532 ceph-mon[75825]: from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]': finished
Mar  1 04:44:57 np0005634532 ceph-mon[75825]: from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num", "val": "32"}]: dispatch
Mar  1 04:44:57 np0005634532 ceph-mon[75825]: from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' cmd='[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num", "val": "32"}]': finished
Mar  1 04:44:58 np0005634532 ceph-mgr[76134]: [progress INFO root] Writing back 21 completed events
Mar  1 04:44:58 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Mar  1 04:44:58 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' 
Mar  1 04:44:58 np0005634532 python3[100435]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v19 --fsid 437b1e74-f995-5d64-af1d-257ce01d77ab -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   user info --uid openstack _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Mar  1 04:44:58 np0005634532 ceph-mgr[76134]: [progress WARNING root] Starting Global Recovery Event,93 pgs not in active + clean state
Mar  1 04:44:58 np0005634532 podman[100481]: 2026-03-01 09:44:58.094601346 +0000 UTC m=+0.055887018 container create 4d4b2b46a8120d767914df3185a9c1ccce1c05fa960714d7006361c8f3ddaf04 (image=quay.io/ceph/ceph:v19, name=competent_hodgkin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Mar  1 04:44:58 np0005634532 systemd[1]: Starting Ceph grafana.compute-0 for 437b1e74-f995-5d64-af1d-257ce01d77ab...
Mar  1 04:44:58 np0005634532 systemd[1]: Started libpod-conmon-4d4b2b46a8120d767914df3185a9c1ccce1c05fa960714d7006361c8f3ddaf04.scope.
Mar  1 04:44:58 np0005634532 systemd[1]: Started libcrun container.
Mar  1 04:44:58 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/828347524d462f679895b0c9828ad8bb3d9ccbe9ccd5479299642b0101ba4291/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Mar  1 04:44:58 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/828347524d462f679895b0c9828ad8bb3d9ccbe9ccd5479299642b0101ba4291/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Mar  1 04:44:58 np0005634532 podman[100481]: 2026-03-01 09:44:58.158827583 +0000 UTC m=+0.120113295 container init 4d4b2b46a8120d767914df3185a9c1ccce1c05fa960714d7006361c8f3ddaf04 (image=quay.io/ceph/ceph:v19, name=competent_hodgkin, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Mar  1 04:44:58 np0005634532 podman[100481]: 2026-03-01 09:44:58.16866451 +0000 UTC m=+0.129950192 container start 4d4b2b46a8120d767914df3185a9c1ccce1c05fa960714d7006361c8f3ddaf04 (image=quay.io/ceph/ceph:v19, name=competent_hodgkin, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Mar  1 04:44:58 np0005634532 podman[100481]: 2026-03-01 09:44:58.075455894 +0000 UTC m=+0.036741586 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Mar  1 04:44:58 np0005634532 podman[100481]: 2026-03-01 09:44:58.17261984 +0000 UTC m=+0.133905522 container attach 4d4b2b46a8120d767914df3185a9c1ccce1c05fa960714d7006361c8f3ddaf04 (image=quay.io/ceph/ceph:v19, name=competent_hodgkin, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Mar  1 04:44:58 np0005634532 competent_hodgkin[100501]: could not fetch user info: no user info saved
Mar  1 04:44:58 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v45: 291 pgs: 93 unknown, 198 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Mar  1 04:44:58 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num_actual", "val": "32"} v 0)
Mar  1 04:44:58 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num_actual", "val": "32"}]: dispatch
Mar  1 04:44:58 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"} v 0)
Mar  1 04:44:58 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Mar  1 04:44:58 np0005634532 podman[100622]: 2026-03-01 09:44:58.387607851 +0000 UTC m=+0.059800176 container create b49a0763a78d98627ed91050fb560d2f12730abc25668f8d4e65a84ba776d2c6 (image=quay.io/ceph/grafana:10.4.0, name=ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Mar  1 04:44:58 np0005634532 systemd[1]: libpod-4d4b2b46a8120d767914df3185a9c1ccce1c05fa960714d7006361c8f3ddaf04.scope: Deactivated successfully.
Mar  1 04:44:58 np0005634532 podman[100481]: 2026-03-01 09:44:58.404205589 +0000 UTC m=+0.365491281 container died 4d4b2b46a8120d767914df3185a9c1ccce1c05fa960714d7006361c8f3ddaf04 (image=quay.io/ceph/ceph:v19, name=competent_hodgkin, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid)
Mar  1 04:44:58 np0005634532 systemd[1]: var-lib-containers-storage-overlay-828347524d462f679895b0c9828ad8bb3d9ccbe9ccd5479299642b0101ba4291-merged.mount: Deactivated successfully.
Mar  1 04:44:58 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99ceb8d7cf309e93945076692841b1546b08a254d35d1ff0a9291efa622af710/merged/etc/grafana/certs supports timestamps until 2038 (0x7fffffff)
Mar  1 04:44:58 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99ceb8d7cf309e93945076692841b1546b08a254d35d1ff0a9291efa622af710/merged/etc/grafana/grafana.ini supports timestamps until 2038 (0x7fffffff)
Mar  1 04:44:58 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99ceb8d7cf309e93945076692841b1546b08a254d35d1ff0a9291efa622af710/merged/etc/grafana/provisioning/dashboards supports timestamps until 2038 (0x7fffffff)
Mar  1 04:44:58 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99ceb8d7cf309e93945076692841b1546b08a254d35d1ff0a9291efa622af710/merged/var/lib/grafana/grafana.db supports timestamps until 2038 (0x7fffffff)
Mar  1 04:44:58 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99ceb8d7cf309e93945076692841b1546b08a254d35d1ff0a9291efa622af710/merged/etc/grafana/provisioning/datasources supports timestamps until 2038 (0x7fffffff)
Mar  1 04:44:58 np0005634532 podman[100622]: 2026-03-01 09:44:58.357317579 +0000 UTC m=+0.029509984 image pull c8b91775d855b99270fc5d22f3c6737e8cca01ef4c25c8b0362295e0746fa39b quay.io/ceph/grafana:10.4.0
Mar  1 04:44:58 np0005634532 podman[100481]: 2026-03-01 09:44:58.457571122 +0000 UTC m=+0.418856834 container remove 4d4b2b46a8120d767914df3185a9c1ccce1c05fa960714d7006361c8f3ddaf04 (image=quay.io/ceph/ceph:v19, name=competent_hodgkin, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Mar  1 04:44:58 np0005634532 podman[100622]: 2026-03-01 09:44:58.47019777 +0000 UTC m=+0.142390165 container init b49a0763a78d98627ed91050fb560d2f12730abc25668f8d4e65a84ba776d2c6 (image=quay.io/ceph/grafana:10.4.0, name=ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Mar  1 04:44:58 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[98708]: 01/03/2026 09:44:58 : epoch 69a40a75 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f450c0035d0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:44:58 np0005634532 podman[100622]: 2026-03-01 09:44:58.476786856 +0000 UTC m=+0.148979191 container start b49a0763a78d98627ed91050fb560d2f12730abc25668f8d4e65a84ba776d2c6 (image=quay.io/ceph/grafana:10.4.0, name=ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Mar  1 04:44:58 np0005634532 bash[100622]: b49a0763a78d98627ed91050fb560d2f12730abc25668f8d4e65a84ba776d2c6
Mar  1 04:44:58 np0005634532 systemd[1]: Started Ceph grafana.compute-0 for 437b1e74-f995-5d64-af1d-257ce01d77ab.
Mar  1 04:44:58 np0005634532 systemd[1]: libpod-conmon-4d4b2b46a8120d767914df3185a9c1ccce1c05fa960714d7006361c8f3ddaf04.scope: Deactivated successfully.
Mar  1 04:44:58 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Mar  1 04:44:58 np0005634532 ceph-osd[84309]: log_channel(cluster) log [DBG] : 8.14 scrub starts
Mar  1 04:44:58 np0005634532 ceph-osd[84309]: log_channel(cluster) log [DBG] : 8.14 scrub ok
Mar  1 04:44:58 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' 
Mar  1 04:44:58 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Mar  1 04:44:58 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' 
Mar  1 04:44:58 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.grafana}] v 0)
Mar  1 04:44:58 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' 
Mar  1 04:44:58 np0005634532 ceph-mgr[76134]: [progress INFO root] complete: finished ev 8ca44d0c-401f-4afc-abe1-91483b14902e (Updating grafana deployment (+1 -> 1))
Mar  1 04:44:58 np0005634532 ceph-mgr[76134]: [progress INFO root] Completed event 8ca44d0c-401f-4afc-abe1-91483b14902e (Updating grafana deployment (+1 -> 1)) in 8 seconds
Mar  1 04:44:58 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.grafana}] v 0)
Mar  1 04:44:58 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' 
Mar  1 04:44:58 np0005634532 ceph-mgr[76134]: [progress INFO root] update: starting ev 664d5842-c7f8-4a7a-aadf-0426bc25da5d (Updating ingress.rgw.default deployment (+4 -> 4))
Mar  1 04:44:58 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ingress.rgw.default/monitor_password}] v 0)
Mar  1 04:44:58 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' 
Mar  1 04:44:58 np0005634532 ceph-mgr[76134]: [cephadm INFO cephadm.serve] Deploying daemon haproxy.rgw.default.compute-0.hyuwxv on compute-0
Mar  1 04:44:58 np0005634532 ceph-mgr[76134]: log_channel(cephadm) log [INF] : Deploying daemon haproxy.rgw.default.compute-0.hyuwxv on compute-0
Mar  1 04:44:58 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=settings t=2026-03-01T09:44:58.731541468Z level=info msg="Starting Grafana" version=10.4.0 commit=03f502a94d17f7dc4e6c34acdf8428aedd986e4c branch=HEAD compiled=2026-03-01T09:44:58Z
Mar  1 04:44:58 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=settings t=2026-03-01T09:44:58.731825855Z level=info msg="Config loaded from" file=/usr/share/grafana/conf/defaults.ini
Mar  1 04:44:58 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=settings t=2026-03-01T09:44:58.731832575Z level=info msg="Config loaded from" file=/etc/grafana/grafana.ini
Mar  1 04:44:58 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=settings t=2026-03-01T09:44:58.731837066Z level=info msg="Config overridden from command line" arg="default.paths.data=/var/lib/grafana"
Mar  1 04:44:58 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=settings t=2026-03-01T09:44:58.731840446Z level=info msg="Config overridden from command line" arg="default.paths.logs=/var/log/grafana"
Mar  1 04:44:58 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=settings t=2026-03-01T09:44:58.731843526Z level=info msg="Config overridden from command line" arg="default.paths.plugins=/var/lib/grafana/plugins"
Mar  1 04:44:58 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=settings t=2026-03-01T09:44:58.731846606Z level=info msg="Config overridden from command line" arg="default.paths.provisioning=/etc/grafana/provisioning"
Mar  1 04:44:58 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=settings t=2026-03-01T09:44:58.731849736Z level=info msg="Config overridden from command line" arg="default.log.mode=console"
Mar  1 04:44:58 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=settings t=2026-03-01T09:44:58.731853136Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_DATA=/var/lib/grafana"
Mar  1 04:44:58 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=settings t=2026-03-01T09:44:58.731856826Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_LOGS=/var/log/grafana"
Mar  1 04:44:58 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=settings t=2026-03-01T09:44:58.731859866Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PLUGINS=/var/lib/grafana/plugins"
Mar  1 04:44:58 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=settings t=2026-03-01T09:44:58.731862986Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PROVISIONING=/etc/grafana/provisioning"
Mar  1 04:44:58 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=settings t=2026-03-01T09:44:58.731866096Z level=info msg=Target target=[all]
Mar  1 04:44:58 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=settings t=2026-03-01T09:44:58.731873346Z level=info msg="Path Home" path=/usr/share/grafana
Mar  1 04:44:58 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=settings t=2026-03-01T09:44:58.731876377Z level=info msg="Path Data" path=/var/lib/grafana
Mar  1 04:44:58 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=settings t=2026-03-01T09:44:58.731879357Z level=info msg="Path Logs" path=/var/log/grafana
Mar  1 04:44:58 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=settings t=2026-03-01T09:44:58.731882377Z level=info msg="Path Plugins" path=/var/lib/grafana/plugins
Mar  1 04:44:58 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=settings t=2026-03-01T09:44:58.731885377Z level=info msg="Path Provisioning" path=/etc/grafana/provisioning
Mar  1 04:44:58 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=settings t=2026-03-01T09:44:58.731888327Z level=info msg="App mode production"
Mar  1 04:44:58 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=sqlstore t=2026-03-01T09:44:58.732316588Z level=info msg="Connecting to DB" dbtype=sqlite3
Mar  1 04:44:58 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=sqlstore t=2026-03-01T09:44:58.732329348Z level=warn msg="SQLite database file has broader permissions than it should" path=/var/lib/grafana/grafana.db mode=-rw-r--r-- expected=-rw-r-----
Mar  1 04:44:58 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:58.733047186Z level=info msg="Starting DB migrations"
Mar  1 04:44:58 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:58.733934648Z level=info msg="Executing migration" id="create migration_log table"
Mar  1 04:44:58 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:58.734763389Z level=info msg="Migration successfully executed" id="create migration_log table" duration=828.441µs
Mar  1 04:44:58 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:58.738212216Z level=info msg="Executing migration" id="create user table"
Mar  1 04:44:58 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:58.738747799Z level=info msg="Migration successfully executed" id="create user table" duration=535.583µs
Mar  1 04:44:58 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:58.740794901Z level=info msg="Executing migration" id="add unique index user.login"
Mar  1 04:44:58 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:58.741294164Z level=info msg="Migration successfully executed" id="add unique index user.login" duration=499.243µs
Mar  1 04:44:58 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:58.743972091Z level=info msg="Executing migration" id="add unique index user.email"
Mar  1 04:44:58 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:58.744487094Z level=info msg="Migration successfully executed" id="add unique index user.email" duration=516.313µs
Mar  1 04:44:58 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:58.746069104Z level=info msg="Executing migration" id="drop index UQE_user_login - v1"
Mar  1 04:44:58 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:58.746566286Z level=info msg="Migration successfully executed" id="drop index UQE_user_login - v1" duration=497.222µs
Mar  1 04:44:58 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:58.748252849Z level=info msg="Executing migration" id="drop index UQE_user_email - v1"
Mar  1 04:44:58 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:58.748720691Z level=info msg="Migration successfully executed" id="drop index UQE_user_email - v1" duration=439.641µs
Mar  1 04:44:58 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:58.750348842Z level=info msg="Executing migration" id="Rename table user to user_v1 - v1"
Mar  1 04:44:58 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:58.751938121Z level=info msg="Migration successfully executed" id="Rename table user to user_v1 - v1" duration=1.58974ms
Mar  1 04:44:58 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:58.753986363Z level=info msg="Executing migration" id="create user table v2"
Mar  1 04:44:58 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:58.754528617Z level=info msg="Migration successfully executed" id="create user table v2" duration=542.244µs
Mar  1 04:44:58 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:58.756475606Z level=info msg="Executing migration" id="create index UQE_user_login - v2"
Mar  1 04:44:58 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:58.756946328Z level=info msg="Migration successfully executed" id="create index UQE_user_login - v2" duration=470.682µs
Mar  1 04:44:58 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:58.760368094Z level=info msg="Executing migration" id="create index UQE_user_email - v2"
Mar  1 04:44:58 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:58.760843546Z level=info msg="Migration successfully executed" id="create index UQE_user_email - v2" duration=475.252µs
Mar  1 04:44:58 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:58.762753264Z level=info msg="Executing migration" id="copy data_source v1 to v2"
Mar  1 04:44:58 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:58.76302111Z level=info msg="Migration successfully executed" id="copy data_source v1 to v2" duration=267.766µs
Mar  1 04:44:58 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:58.764385145Z level=info msg="Executing migration" id="Drop old table user_v1"
Mar  1 04:44:58 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:58.764725673Z level=info msg="Migration successfully executed" id="Drop old table user_v1" duration=340.508µs
Mar  1 04:44:58 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:58.766102378Z level=info msg="Executing migration" id="Add column help_flags1 to user table"
Mar  1 04:44:58 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:58.766821786Z level=info msg="Migration successfully executed" id="Add column help_flags1 to user table" duration=719.018µs
Mar  1 04:44:58 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:58.768277273Z level=info msg="Executing migration" id="Update user table charset"
Mar  1 04:44:58 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:58.768305143Z level=info msg="Migration successfully executed" id="Update user table charset" duration=29.03µs
Mar  1 04:44:58 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:58.77057075Z level=info msg="Executing migration" id="Add last_seen_at column to user"
Mar  1 04:44:58 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:58.771938545Z level=info msg="Migration successfully executed" id="Add last_seen_at column to user" duration=1.365595ms
Mar  1 04:44:58 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:58.773683269Z level=info msg="Executing migration" id="Add missing user data"
Mar  1 04:44:58 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:58.773952606Z level=info msg="Migration successfully executed" id="Add missing user data" duration=268.846µs
Mar  1 04:44:58 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:58.775969856Z level=info msg="Executing migration" id="Add is_disabled column to user"
Mar  1 04:44:58 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:58.777455524Z level=info msg="Migration successfully executed" id="Add is_disabled column to user" duration=1.485358ms
Mar  1 04:44:58 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:58.779425793Z level=info msg="Executing migration" id="Add index user.login/user.email"
Mar  1 04:44:58 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:58.780363987Z level=info msg="Migration successfully executed" id="Add index user.login/user.email" duration=937.374µs
Mar  1 04:44:58 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:58.781951837Z level=info msg="Executing migration" id="Add is_service_account column to user"
Mar  1 04:44:58 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:58.783443605Z level=info msg="Migration successfully executed" id="Add is_service_account column to user" duration=1.490987ms
Mar  1 04:44:58 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:58.785207879Z level=info msg="Executing migration" id="Update is_service_account column to nullable"
Mar  1 04:44:58 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:58.796498513Z level=info msg="Migration successfully executed" id="Update is_service_account column to nullable" duration=11.288844ms
Mar  1 04:44:58 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:58.798511354Z level=info msg="Executing migration" id="Add uid column to user"
Mar  1 04:44:58 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:58.79995935Z level=info msg="Migration successfully executed" id="Add uid column to user" duration=1.447766ms
Mar  1 04:44:58 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:58.801840558Z level=info msg="Executing migration" id="Update uid column values for users"
Mar  1 04:44:58 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:58.802106694Z level=info msg="Migration successfully executed" id="Update uid column values for users" duration=265.606µs
Mar  1 04:44:58 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:58.803950101Z level=info msg="Executing migration" id="Add unique index user_uid"
Mar  1 04:44:58 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:58.804878104Z level=info msg="Migration successfully executed" id="Add unique index user_uid" duration=929.363µs
Mar  1 04:44:58 np0005634532 python3[100699]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v19 --fsid 437b1e74-f995-5d64-af1d-257ce01d77ab -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   user create --uid="openstack" --display-name "openstack" _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Mar  1 04:44:58 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:58.807015628Z level=info msg="Executing migration" id="create temp user table v1-7"
Mar  1 04:44:58 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:58.808064024Z level=info msg="Migration successfully executed" id="create temp user table v1-7" duration=1.047826ms
Mar  1 04:44:58 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:58.810421524Z level=info msg="Executing migration" id="create index IDX_temp_user_email - v1-7"
Mar  1 04:44:58 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:58.81106921Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_email - v1-7" duration=647.206µs
Mar  1 04:44:58 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:58.812762402Z level=info msg="Executing migration" id="create index IDX_temp_user_org_id - v1-7"
Mar  1 04:44:58 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:58.813381978Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_org_id - v1-7" duration=620.846µs
Mar  1 04:44:58 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:58.814895636Z level=info msg="Executing migration" id="create index IDX_temp_user_code - v1-7"
Mar  1 04:44:58 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:58.815511072Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_code - v1-7" duration=615.276µs
Mar  1 04:44:58 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:58.817206134Z level=info msg="Executing migration" id="create index IDX_temp_user_status - v1-7"
Mar  1 04:44:58 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:58.81783191Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_status - v1-7" duration=625.616µs
Mar  1 04:44:58 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:58.819496392Z level=info msg="Executing migration" id="Update temp_user table charset"
Mar  1 04:44:58 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:58.819523103Z level=info msg="Migration successfully executed" id="Update temp_user table charset" duration=27.621µs
Mar  1 04:44:58 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:58.821053031Z level=info msg="Executing migration" id="drop index IDX_temp_user_email - v1"
Mar  1 04:44:58 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:58.821686027Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_email - v1" duration=632.896µs
Mar  1 04:44:58 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:58.823134994Z level=info msg="Executing migration" id="drop index IDX_temp_user_org_id - v1"
Mar  1 04:44:58 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:58.823744829Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_org_id - v1" duration=607.115µs
Mar  1 04:44:58 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:58.825307948Z level=info msg="Executing migration" id="drop index IDX_temp_user_code - v1"
Mar  1 04:44:58 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:58.825921984Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_code - v1" duration=613.656µs
Mar  1 04:44:58 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:58.827597956Z level=info msg="Executing migration" id="drop index IDX_temp_user_status - v1"
Mar  1 04:44:58 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:58.828194091Z level=info msg="Migration successfully executed" id="drop index IDX_temp_user_status - v1" duration=595.695µs
Mar  1 04:44:58 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:58.829817582Z level=info msg="Executing migration" id="Rename table temp_user to temp_user_tmp_qwerty - v1"
Mar  1 04:44:58 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:58.83292302Z level=info msg="Migration successfully executed" id="Rename table temp_user to temp_user_tmp_qwerty - v1" duration=3.103078ms
Mar  1 04:44:58 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:58.835147066Z level=info msg="Executing migration" id="create temp_user v2"
Mar  1 04:44:58 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[98708]: 01/03/2026 09:44:58 : epoch 69a40a75 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f450c0035d0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:44:58 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:58.835887095Z level=info msg="Migration successfully executed" id="create temp_user v2" duration=739.938µs
Mar  1 04:44:58 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:58.838283334Z level=info msg="Executing migration" id="create index IDX_temp_user_email - v2"
Mar  1 04:44:58 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:58.8389128Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_email - v2" duration=629.316µs
Mar  1 04:44:58 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:58.840667244Z level=info msg="Executing migration" id="create index IDX_temp_user_org_id - v2"
Mar  1 04:44:58 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:58.84132857Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_org_id - v2" duration=660.816µs
Mar  1 04:44:58 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:58.843342651Z level=info msg="Executing migration" id="create index IDX_temp_user_code - v2"
Mar  1 04:44:58 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:58.844026328Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_code - v2" duration=680.887µs
Mar  1 04:44:58 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:58.845787833Z level=info msg="Executing migration" id="create index IDX_temp_user_status - v2"
Mar  1 04:44:58 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:58.846452849Z level=info msg="Migration successfully executed" id="create index IDX_temp_user_status - v2" duration=664.476µs
Mar  1 04:44:58 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:58.849834945Z level=info msg="Executing migration" id="copy temp_user v1 to v2"
Mar  1 04:44:58 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:58.850289156Z level=info msg="Migration successfully executed" id="copy temp_user v1 to v2" duration=456.872µs
Mar  1 04:44:58 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:58.851890446Z level=info msg="Executing migration" id="drop temp_user_tmp_qwerty"
Mar  1 04:44:58 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:58.852324797Z level=info msg="Migration successfully executed" id="drop temp_user_tmp_qwerty" duration=432.211µs
Mar  1 04:44:58 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:58.853775564Z level=info msg="Executing migration" id="Set created for temp users that will otherwise prematurely expire"
Mar  1 04:44:58 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:58.854053281Z level=info msg="Migration successfully executed" id="Set created for temp users that will otherwise prematurely expire" duration=277.197µs
Mar  1 04:44:58 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:58.855916128Z level=info msg="Executing migration" id="create star table"
Mar  1 04:44:58 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:58.856534913Z level=info msg="Migration successfully executed" id="create star table" duration=618.115µs
Mar  1 04:44:58 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:58.860303458Z level=info msg="Executing migration" id="add unique index star.user_id_dashboard_id"
Mar  1 04:44:58 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:58.861227221Z level=info msg="Migration successfully executed" id="add unique index star.user_id_dashboard_id" duration=922.343µs
Mar  1 04:44:58 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:58.863211621Z level=info msg="Executing migration" id="create org table v1"
Mar  1 04:44:58 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:58.864316419Z level=info msg="Migration successfully executed" id="create org table v1" duration=1.103708ms
Mar  1 04:44:58 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:58.866284499Z level=info msg="Executing migration" id="create index UQE_org_name - v1"
Mar  1 04:44:58 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:58.86713622Z level=info msg="Migration successfully executed" id="create index UQE_org_name - v1" duration=852.051µs
Mar  1 04:44:58 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:58.869145701Z level=info msg="Executing migration" id="create org_user table v1"
Mar  1 04:44:58 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:58.86992267Z level=info msg="Migration successfully executed" id="create org_user table v1" duration=774.569µs
Mar  1 04:44:58 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:58.871866109Z level=info msg="Executing migration" id="create index IDX_org_user_org_id - v1"
Mar  1 04:44:58 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:58.872848264Z level=info msg="Migration successfully executed" id="create index IDX_org_user_org_id - v1" duration=987.415µs
Mar  1 04:44:58 np0005634532 podman[100750]: 2026-03-01 09:44:58.874453914 +0000 UTC m=+0.050705656 container create 549defc04e33066541eddb969270eea89fecb3059b8fbd647d192c61d8afd222 (image=quay.io/ceph/ceph:v19, name=dazzling_swartz, CEPH_REF=squid, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, OSD_FLAVOR=default)
Mar  1 04:44:58 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:58.874967677Z level=info msg="Executing migration" id="create index UQE_org_user_org_id_user_id - v1"
Mar  1 04:44:58 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:58.875758237Z level=info msg="Migration successfully executed" id="create index UQE_org_user_org_id_user_id - v1" duration=790.45µs
Mar  1 04:44:58 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:58.877430769Z level=info msg="Executing migration" id="create index IDX_org_user_user_id - v1"
Mar  1 04:44:58 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:58.878423044Z level=info msg="Migration successfully executed" id="create index IDX_org_user_user_id - v1" duration=991.925µs
Mar  1 04:44:58 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:58.88025324Z level=info msg="Executing migration" id="Update org table charset"
Mar  1 04:44:58 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:58.880271011Z level=info msg="Migration successfully executed" id="Update org table charset" duration=17.131µs
Mar  1 04:44:58 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:58.882101297Z level=info msg="Executing migration" id="Update org_user table charset"
Mar  1 04:44:58 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:58.882130917Z level=info msg="Migration successfully executed" id="Update org_user table charset" duration=30.58µs
Mar  1 04:44:58 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:58.884064556Z level=info msg="Executing migration" id="Migrate all Read Only Viewers to Viewers"
Mar  1 04:44:58 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:58.884266791Z level=info msg="Migration successfully executed" id="Migrate all Read Only Viewers to Viewers" duration=202.125µs
Mar  1 04:44:58 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:58.886959569Z level=info msg="Executing migration" id="create dashboard table"
Mar  1 04:44:58 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:58.887921573Z level=info msg="Migration successfully executed" id="create dashboard table" duration=961.714µs
Mar  1 04:44:58 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:58.889692048Z level=info msg="Executing migration" id="add index dashboard.account_id"
Mar  1 04:44:58 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:58.89098173Z level=info msg="Migration successfully executed" id="add index dashboard.account_id" duration=1.289812ms
Mar  1 04:44:58 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:58.892762755Z level=info msg="Executing migration" id="add unique index dashboard_account_id_slug"
Mar  1 04:44:58 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:58.893625757Z level=info msg="Migration successfully executed" id="add unique index dashboard_account_id_slug" duration=863.002µs
Mar  1 04:44:58 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:58.895305679Z level=info msg="Executing migration" id="create dashboard_tag table"
Mar  1 04:44:58 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:58.895984376Z level=info msg="Migration successfully executed" id="create dashboard_tag table" duration=678.977µs
Mar  1 04:44:58 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:58.897761491Z level=info msg="Executing migration" id="add unique index dashboard_tag.dasboard_id_term"
Mar  1 04:44:58 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:58.898634633Z level=info msg="Migration successfully executed" id="add unique index dashboard_tag.dasboard_id_term" duration=872.352µs
Mar  1 04:44:58 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:58.900791567Z level=info msg="Executing migration" id="drop index UQE_dashboard_tag_dashboard_id_term - v1"
Mar  1 04:44:58 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:58.901841224Z level=info msg="Migration successfully executed" id="drop index UQE_dashboard_tag_dashboard_id_term - v1" duration=1.051236ms
Mar  1 04:44:58 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:58.903789583Z level=info msg="Executing migration" id="Rename table dashboard to dashboard_v1 - v1"
Mar  1 04:44:58 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:58.910513542Z level=info msg="Migration successfully executed" id="Rename table dashboard to dashboard_v1 - v1" duration=6.724189ms
Mar  1 04:44:58 np0005634532 systemd[1]: Started libpod-conmon-549defc04e33066541eddb969270eea89fecb3059b8fbd647d192c61d8afd222.scope.
Mar  1 04:44:58 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:58.912563303Z level=info msg="Executing migration" id="create dashboard v2"
Mar  1 04:44:58 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:58.913778764Z level=info msg="Migration successfully executed" id="create dashboard v2" duration=1.214411ms
Mar  1 04:44:58 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:58.915705523Z level=info msg="Executing migration" id="create index IDX_dashboard_org_id - v2"
Mar  1 04:44:58 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:58.916459612Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_org_id - v2" duration=753.879µs
Mar  1 04:44:58 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:58.920100763Z level=info msg="Executing migration" id="create index UQE_dashboard_org_id_slug - v2"
Mar  1 04:44:58 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:58.921067017Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_org_id_slug - v2" duration=966.164µs
Mar  1 04:44:58 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:58.922675628Z level=info msg="Executing migration" id="copy dashboard v1 to v2"
Mar  1 04:44:58 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:58.923083368Z level=info msg="Migration successfully executed" id="copy dashboard v1 to v2" duration=408.28µs
Mar  1 04:44:58 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:58.9247263Z level=info msg="Executing migration" id="drop table dashboard_v1"
Mar  1 04:44:58 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:58.926075184Z level=info msg="Migration successfully executed" id="drop table dashboard_v1" duration=1.346113ms
Mar  1 04:44:58 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:58.927895129Z level=info msg="Executing migration" id="alter dashboard.data to mediumtext v1"
Mar  1 04:44:58 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:58.928025483Z level=info msg="Migration successfully executed" id="alter dashboard.data to mediumtext v1" duration=130.254µs
Mar  1 04:44:58 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:58.930180587Z level=info msg="Executing migration" id="Add column updated_by in dashboard - v2"
Mar  1 04:44:58 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:58.932088655Z level=info msg="Migration successfully executed" id="Add column updated_by in dashboard - v2" duration=1.904328ms
Mar  1 04:44:58 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:58.934303131Z level=info msg="Executing migration" id="Add column created_by in dashboard - v2"
Mar  1 04:44:58 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:58.936521897Z level=info msg="Migration successfully executed" id="Add column created_by in dashboard - v2" duration=2.218345ms
Mar  1 04:44:58 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:58.938726282Z level=info msg="Executing migration" id="Add column gnetId in dashboard"
Mar  1 04:44:58 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:58.940478956Z level=info msg="Migration successfully executed" id="Add column gnetId in dashboard" duration=1.752084ms
Mar  1 04:44:58 np0005634532 systemd[1]: Started libcrun container.
Mar  1 04:44:58 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:58.942470526Z level=info msg="Executing migration" id="Add index for gnetId in dashboard"
Mar  1 04:44:58 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:58.943297547Z level=info msg="Migration successfully executed" id="Add index for gnetId in dashboard" duration=826.181µs
Mar  1 04:44:58 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fdc797d01f335dec3332561252956382f280d9fbb13606e44c98e06b1669ab88/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Mar  1 04:44:58 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fdc797d01f335dec3332561252956382f280d9fbb13606e44c98e06b1669ab88/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
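[annotation] The two kernel lines above are XFS warning that these overlay-backed mounts store inode timestamps as 32-bit seconds (likely a filesystem created without the XFS bigtime feature), so they cap out at 0x7fffffff. A quick check of where the 2038 figure comes from:

    from datetime import datetime, timezone

    # 0x7fffffff is the largest 32-bit signed time_t value.
    limit = 0x7FFFFFFF  # 2147483647 seconds since the Unix epoch
    print(datetime.fromtimestamp(limit, tz=timezone.utc))
    # 2038-01-19 03:14:07+00:00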
Mar  1 04:44:58 np0005634532 podman[100750]: 2026-03-01 09:44:58.85443632 +0000 UTC m=+0.030688062 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Mar  1 04:44:58 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:58.955501444Z level=info msg="Executing migration" id="Add column plugin_id in dashboard"
Mar  1 04:44:58 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:58.95731913Z level=info msg="Migration successfully executed" id="Add column plugin_id in dashboard" duration=1.819326ms
Mar  1 04:44:58 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:58.959488235Z level=info msg="Executing migration" id="Add index for plugin_id in dashboard"
Mar  1 04:44:58 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:58.960441279Z level=info msg="Migration successfully executed" id="Add index for plugin_id in dashboard" duration=953.854µs
Mar  1 04:44:58 np0005634532 podman[100750]: 2026-03-01 09:44:58.961932426 +0000 UTC m=+0.138184228 container init 549defc04e33066541eddb969270eea89fecb3059b8fbd647d192c61d8afd222 (image=quay.io/ceph/ceph:v19, name=dazzling_swartz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Mar  1 04:44:58 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:58.962197693Z level=info msg="Executing migration" id="Add index for dashboard_id in dashboard_tag"
Mar  1 04:44:58 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:58.963150447Z level=info msg="Migration successfully executed" id="Add index for dashboard_id in dashboard_tag" duration=950.764µs
Mar  1 04:44:58 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:58.96528006Z level=info msg="Executing migration" id="Update dashboard table charset"
Mar  1 04:44:58 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:58.965345412Z level=info msg="Migration successfully executed" id="Update dashboard table charset" duration=66.982µs
Mar  1 04:44:58 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:58.967163498Z level=info msg="Executing migration" id="Update dashboard_tag table charset"
Mar  1 04:44:58 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:58.967195659Z level=info msg="Migration successfully executed" id="Update dashboard_tag table charset" duration=33.601µs
Mar  1 04:44:58 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:58.969022665Z level=info msg="Executing migration" id="Add column folder_id in dashboard"
Mar  1 04:44:58 np0005634532 podman[100750]: 2026-03-01 09:44:58.970138303 +0000 UTC m=+0.146390055 container start 549defc04e33066541eddb969270eea89fecb3059b8fbd647d192c61d8afd222 (image=quay.io/ceph/ceph:v19, name=dazzling_swartz, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid)
Mar  1 04:44:58 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:58.971262771Z level=info msg="Migration successfully executed" id="Add column folder_id in dashboard" duration=2.239766ms
Mar  1 04:44:58 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:58.973804525Z level=info msg="Executing migration" id="Add column isFolder in dashboard"
Mar  1 04:44:58 np0005634532 podman[100750]: 2026-03-01 09:44:58.974129853 +0000 UTC m=+0.150381605 container attach 549defc04e33066541eddb969270eea89fecb3059b8fbd647d192c61d8afd222 (image=quay.io/ceph/ceph:v19, name=dazzling_swartz, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
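[annotation] Interleaved with the migrator output, podman[100750] walks container 549defc04e33... through its normal lifecycle: create, then init, start, and attach. A small sketch that regroups such interleaved journal lines per container, assuming only the event format shown above:

    import re
    from collections import defaultdict

    # "container <event> <64-hex id>" as in the podman lines above.
    EVENT = re.compile(r'container (?P<event>create|init|start|attach) (?P<cid>[0-9a-f]{64})')

    def lifecycles(lines):
        """Map container id -> ordered list of lifecycle events seen in the log."""
        seq = defaultdict(list)
        for line in lines:
            m = EVENT.search(line)
            if m:
                seq[m.group("cid")].append(m.group("event"))
        return seq

    # lifecycles(open("messages"))["549defc04e33..."]  ->  ['create', 'init', 'start', 'attach']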
Mar  1 04:44:58 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:58.97637576Z level=info msg="Migration successfully executed" id="Add column isFolder in dashboard" duration=2.570875ms
Mar  1 04:44:58 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:58.978930784Z level=info msg="Executing migration" id="Add column has_acl in dashboard"
Mar  1 04:44:58 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:58.981600981Z level=info msg="Migration successfully executed" id="Add column has_acl in dashboard" duration=2.670207ms
Mar  1 04:44:58 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:58.984197477Z level=info msg="Executing migration" id="Add column uid in dashboard"
Mar  1 04:44:58 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:58.986692609Z level=info msg="Migration successfully executed" id="Add column uid in dashboard" duration=2.494093ms
Mar  1 04:44:58 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:58.988975817Z level=info msg="Executing migration" id="Update uid column values in dashboard"
Mar  1 04:44:58 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:58.989248974Z level=info msg="Migration successfully executed" id="Update uid column values in dashboard" duration=273.447µs
Mar  1 04:44:58 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:58.991166362Z level=info msg="Executing migration" id="Add unique index dashboard_org_id_uid"
Mar  1 04:44:58 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:58.992124436Z level=info msg="Migration successfully executed" id="Add unique index dashboard_org_id_uid" duration=960.684µs
Mar  1 04:44:58 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:58.993897161Z level=info msg="Executing migration" id="Remove unique index org_id_slug"
Mar  1 04:44:58 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:58.994729532Z level=info msg="Migration successfully executed" id="Remove unique index org_id_slug" duration=826.41µs
Mar  1 04:44:58 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:58.996699581Z level=info msg="Executing migration" id="Update dashboard title length"
Mar  1 04:44:58 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:58.996740512Z level=info msg="Migration successfully executed" id="Update dashboard title length" duration=42.621µs
Mar  1 04:44:58 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:58.998567818Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_title_folder_id"
Mar  1 04:44:58 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:58.999576224Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_title_folder_id" duration=1.008036ms
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.001747368Z level=info msg="Executing migration" id="create dashboard_provisioning"
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.00260827Z level=info msg="Migration successfully executed" id="create dashboard_provisioning" duration=857.232µs
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.00460472Z level=info msg="Executing migration" id="Rename table dashboard_provisioning to dashboard_provisioning_tmp_qwerty - v1"
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.01054867Z level=info msg="Migration successfully executed" id="Rename table dashboard_provisioning to dashboard_provisioning_tmp_qwerty - v1" duration=5.93561ms
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.013141265Z level=info msg="Executing migration" id="create dashboard_provisioning v2"
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.01411584Z level=info msg="Migration successfully executed" id="create dashboard_provisioning v2" duration=972.574µs
Mar  1 04:44:59 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e53 do_prune osdmap full prune enabled
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.016151131Z level=info msg="Executing migration" id="create index IDX_dashboard_provisioning_dashboard_id - v2"
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.016850328Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_provisioning_dashboard_id - v2" duration=699.167µs
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.018559701Z level=info msg="Executing migration" id="create index IDX_dashboard_provisioning_dashboard_id_name - v2"
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.019244459Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_provisioning_dashboard_id_name - v2" duration=684.148µs
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.022025589Z level=info msg="Executing migration" id="copy dashboard_provisioning v1 to v2"
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.022256474Z level=info msg="Migration successfully executed" id="copy dashboard_provisioning v1 to v2" duration=231.275µs
Mar  1 04:44:59 np0005634532 ceph-mon[75825]: from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' 
Mar  1 04:44:59 np0005634532 ceph-mon[75825]: from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num_actual", "val": "32"}]: dispatch
Mar  1 04:44:59 np0005634532 ceph-mon[75825]: from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Mar  1 04:44:59 np0005634532 ceph-mon[75825]: from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' 
Mar  1 04:44:59 np0005634532 ceph-mon[75825]: from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' 
Mar  1 04:44:59 np0005634532 ceph-mon[75825]: from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' 
Mar  1 04:44:59 np0005634532 ceph-mon[75825]: from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' 
Mar  1 04:44:59 np0005634532 ceph-mon[75825]: from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' 
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.024202323Z level=info msg="Executing migration" id="drop dashboard_provisioning_tmp_qwerty"
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.024666015Z level=info msg="Migration successfully executed" id="drop dashboard_provisioning_tmp_qwerty" duration=464.042µs
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.026302636Z level=info msg="Executing migration" id="Add check_sum column"
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.028160903Z level=info msg="Migration successfully executed" id="Add check_sum column" duration=1.857807ms
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.030192974Z level=info msg="Executing migration" id="Add index for dashboard_title"
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.030857151Z level=info msg="Migration successfully executed" id="Add index for dashboard_title" duration=664.787µs
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.032525663Z level=info msg="Executing migration" id="delete tags for deleted dashboards"
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.032649296Z level=info msg="Migration successfully executed" id="delete tags for deleted dashboards" duration=124.093µs
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.034312778Z level=info msg="Executing migration" id="delete stars for deleted dashboards"
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.034428581Z level=info msg="Migration successfully executed" id="delete stars for deleted dashboards" duration=115.993µs
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.036410691Z level=info msg="Executing migration" id="Add index for dashboard_is_folder"
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.036982415Z level=info msg="Migration successfully executed" id="Add index for dashboard_is_folder" duration=571.864µs
Mar  1 04:44:59 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' cmd='[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num_actual", "val": "32"}]': finished
Mar  1 04:44:59 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]': finished
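[annotation] The audit pair above closes the loop on the two `osd pool set ... pg_num_actual` commands dispatched by mgr.14463 a moment earlier: each command is logged once at dispatch and again when it finishes, with the command itself embedded as a JSON array. A throwaway extractor, assuming the quoting conventions visible in these lines (dispatch lines carry bare cmd=[...], finished lines quote it as cmd='[...]'):

    import json
    import re

    AUDIT = re.compile(r"cmd='?(?P<cmd>\[.*\])'?: (?P<phase>dispatch|finished)")

    def audit_events(lines):
        """Yield (phase, parsed command list) for each mon audit line."""
        for line in lines:
            m = AUDIT.search(line)
            if m:
                yield m.group("phase"), json.loads(m.group("cmd"))

    # Each parsed command is a list like:
    # [{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num_actual", "val": "32"}]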
Mar  1 04:44:59 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e54 e54: 3 total, 3 up, 3 in
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.039495448Z level=info msg="Executing migration" id="Add isPublic for dashboard"
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.04114842Z level=info msg="Migration successfully executed" id="Add isPublic for dashboard" duration=1.653222ms
Mar  1 04:44:59 np0005634532 ceph-mon[75825]: log_channel(cluster) log [DBG] : osdmap e54: 3 total, 3 up, 3 in
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.043585331Z level=info msg="Executing migration" id="create data_source table"
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.044293289Z level=info msg="Migration successfully executed" id="create data_source table" duration=708.068µs
Mar  1 04:44:59 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 54 pg[11.0( empty local-lis/les=39/40 n=0 ec=39/39 lis/c=39/39 les/c/f=40/40/0 sis=54 pruub=9.031906128s) [0] r=0 lpr=54 pi=[39,54)/1 crt=0'0 mlcod 0'0 active pruub 167.829910278s@ mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Mar  1 04:44:59 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 54 pg[11.0( empty local-lis/les=39/40 n=0 ec=39/39 lis/c=39/39 les/c/f=40/40/0 sis=54 pruub=9.031906128s) [0] r=0 lpr=54 pi=[39,54)/1 crt=0'0 mlcod 0'0 unknown pruub 167.829910278s@ mbc={}] state<Start>: transitioning to Primary
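[annotation] osd.0's two lines above are the expected reaction to the new osdmap (e54): pg 11.0 restarts its peering interval and re-enters the Primary state, and since both the up and acting sets stayed [0] -> [0], no data needs to move. Intervals where the acting set actually changes are the ones that can trigger recovery or backfill; a filter sketch over lines in this format:

    import re

    # "up [0] -> [0], acting [0] -> [0]" as in the PeeringState line above.
    INTERVAL = re.compile(r"up \[(?P<up_from>[^\]]*)\] -> \[(?P<up_to>[^\]]*)\], "
                          r"acting \[(?P<act_from>[^\]]*)\] -> \[(?P<act_to>[^\]]*)\]")

    def real_changes(lines):
        """Yield only the interval-change lines where up or acting actually changed."""
        for line in lines:
            m = INTERVAL.search(line)
            if m and (m.group("up_from") != m.group("up_to")
                      or m.group("act_from") != m.group("act_to")):
                yield line.rstrip()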
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[98708]: 01/03/2026 09:44:59 : epoch 69a40a75 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4500003c10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.060980079Z level=info msg="Executing migration" id="add index data_source.account_id"
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.062023945Z level=info msg="Migration successfully executed" id="add index data_source.account_id" duration=1.046796ms
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.065181855Z level=info msg="Executing migration" id="add unique index data_source.account_id_name"
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.066064267Z level=info msg="Migration successfully executed" id="add unique index data_source.account_id_name" duration=884.902µs
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.069231637Z level=info msg="Executing migration" id="drop index IDX_data_source_account_id - v1"
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.070081008Z level=info msg="Migration successfully executed" id="drop index IDX_data_source_account_id - v1" duration=851.801µs
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.075983227Z level=info msg="Executing migration" id="drop index UQE_data_source_account_id_name - v1"
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.076878049Z level=info msg="Migration successfully executed" id="drop index UQE_data_source_account_id_name - v1" duration=898.932µs
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.08604342Z level=info msg="Executing migration" id="Rename table data_source to data_source_v1 - v1"
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.094248897Z level=info msg="Migration successfully executed" id="Rename table data_source to data_source_v1 - v1" duration=8.205236ms
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.096647217Z level=info msg="Executing migration" id="create data_source table v2"
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.098109254Z level=info msg="Migration successfully executed" id="create data_source table v2" duration=1.478838ms
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.100473493Z level=info msg="Executing migration" id="create index IDX_data_source_org_id - v2"
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.101486849Z level=info msg="Migration successfully executed" id="create index IDX_data_source_org_id - v2" duration=1.013086ms
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.10350692Z level=info msg="Executing migration" id="create index UQE_data_source_org_id_name - v2"
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.104436533Z level=info msg="Migration successfully executed" id="create index UQE_data_source_org_id_name - v2" duration=929.533µs
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.106264659Z level=info msg="Executing migration" id="Drop old table data_source_v1 #2"
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.106844574Z level=info msg="Migration successfully executed" id="Drop old table data_source_v1 #2" duration=578.474µs
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.108614268Z level=info msg="Executing migration" id="Add column with_credentials"
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.110987248Z level=info msg="Migration successfully executed" id="Add column with_credentials" duration=2.37241ms
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.113019329Z level=info msg="Executing migration" id="Add secure json data column"
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.115494561Z level=info msg="Migration successfully executed" id="Add secure json data column" duration=2.496503ms
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.117564563Z level=info msg="Executing migration" id="Update data_source table charset"
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.117594134Z level=info msg="Migration successfully executed" id="Update data_source table charset" duration=30.561µs
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.119329298Z level=info msg="Executing migration" id="Update initial version to 1"
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.119494022Z level=info msg="Migration successfully executed" id="Update initial version to 1" duration=165.064µs
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.121263547Z level=info msg="Executing migration" id="Add read_only data column"
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.122936059Z level=info msg="Migration successfully executed" id="Add read_only data column" duration=1.669073ms
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.124831996Z level=info msg="Executing migration" id="Migrate logging ds to loki ds"
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.125011731Z level=info msg="Migration successfully executed" id="Migrate logging ds to loki ds" duration=179.745µs
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.126756595Z level=info msg="Executing migration" id="Update json_data with nulls"
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.126907059Z level=info msg="Migration successfully executed" id="Update json_data with nulls" duration=150.444µs
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.128640372Z level=info msg="Executing migration" id="Add uid column"
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.130360496Z level=info msg="Migration successfully executed" id="Add uid column" duration=1.719824ms
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.132016847Z level=info msg="Executing migration" id="Update uid value"
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.132186371Z level=info msg="Migration successfully executed" id="Update uid value" duration=169.714µs
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.133844653Z level=info msg="Executing migration" id="Add unique index datasource_org_id_uid"
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.13452083Z level=info msg="Migration successfully executed" id="Add unique index datasource_org_id_uid" duration=675.727µs
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.136414178Z level=info msg="Executing migration" id="add unique index datasource_org_id_is_default"
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.137118016Z level=info msg="Migration successfully executed" id="add unique index datasource_org_id_is_default" duration=703.368µs
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.138833209Z level=info msg="Executing migration" id="create api_key table"
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.139574317Z level=info msg="Migration successfully executed" id="create api_key table" duration=740.748µs
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.141570648Z level=info msg="Executing migration" id="add index api_key.account_id"
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.142210534Z level=info msg="Migration successfully executed" id="add index api_key.account_id" duration=640.096µs
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.146266606Z level=info msg="Executing migration" id="add index api_key.key"
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.147187299Z level=info msg="Migration successfully executed" id="add index api_key.key" duration=921.103µs
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.148837611Z level=info msg="Executing migration" id="add index api_key.account_id_name"
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.149578009Z level=info msg="Migration successfully executed" id="add index api_key.account_id_name" duration=740.258µs
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.151225611Z level=info msg="Executing migration" id="drop index IDX_api_key_account_id - v1"
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.151950619Z level=info msg="Migration successfully executed" id="drop index IDX_api_key_account_id - v1" duration=724.768µs
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.153514398Z level=info msg="Executing migration" id="drop index UQE_api_key_key - v1"
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.154197625Z level=info msg="Migration successfully executed" id="drop index UQE_api_key_key - v1" duration=680.387µs
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.156223096Z level=info msg="Executing migration" id="drop index UQE_api_key_account_id_name - v1"
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.156937955Z level=info msg="Migration successfully executed" id="drop index UQE_api_key_account_id_name - v1" duration=714.718µs
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.15915151Z level=info msg="Executing migration" id="Rename table api_key to api_key_v1 - v1"
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.163740886Z level=info msg="Migration successfully executed" id="Rename table api_key to api_key_v1 - v1" duration=4.589366ms
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.165546861Z level=info msg="Executing migration" id="create api_key table v2"
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.166518036Z level=info msg="Migration successfully executed" id="create api_key table v2" duration=971.025µs
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.168227629Z level=info msg="Executing migration" id="create index IDX_api_key_org_id - v2"
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.168861585Z level=info msg="Migration successfully executed" id="create index IDX_api_key_org_id - v2" duration=634.386µs
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.173487071Z level=info msg="Executing migration" id="create index UQE_api_key_key - v2"
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.17426009Z level=info msg="Migration successfully executed" id="create index UQE_api_key_key - v2" duration=774.059µs
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.175893352Z level=info msg="Executing migration" id="create index UQE_api_key_org_id_name - v2"
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.176678491Z level=info msg="Migration successfully executed" id="create index UQE_api_key_org_id_name - v2" duration=784.58µs
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.178161819Z level=info msg="Executing migration" id="copy api_key v1 to v2"
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.178475197Z level=info msg="Migration successfully executed" id="copy api_key v1 to v2" duration=313.618µs
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.179908583Z level=info msg="Executing migration" id="Drop old table api_key_v1"
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.180443586Z level=info msg="Migration successfully executed" id="Drop old table api_key_v1" duration=532.733µs
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.181945124Z level=info msg="Executing migration" id="Update api_key table charset"
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.182065567Z level=info msg="Migration successfully executed" id="Update api_key table charset" duration=125.413µs
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.183660237Z level=info msg="Executing migration" id="Add expires to api_key table"
Mar  1 04:44:59 np0005634532 podman[100886]: 2026-03-01 09:44:59.183590395 +0000 UTC m=+0.048390769 container create 30f74851a32f0512e8f0ab424c1b365cacccdafdd925cabb5889c619a6065bc5 (image=quay.io/ceph/haproxy:2.3, name=gallant_brahmagupta)
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.185439872Z level=info msg="Migration successfully executed" id="Add expires to api_key table" duration=1.777355ms
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.18816319Z level=info msg="Executing migration" id="Add service account foreign key"
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.189799132Z level=info msg="Migration successfully executed" id="Add service account foreign key" duration=1.635462ms
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.19171899Z level=info msg="Executing migration" id="set service account foreign key to nil if 0"
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.191847153Z level=info msg="Migration successfully executed" id="set service account foreign key to nil if 0" duration=112.253µs
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.195872774Z level=info msg="Executing migration" id="Add last_used_at to api_key table"
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.19767798Z level=info msg="Migration successfully executed" id="Add last_used_at to api_key table" duration=1.805216ms
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.199143747Z level=info msg="Executing migration" id="Add is_revoked column to api_key table"
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.200898591Z level=info msg="Migration successfully executed" id="Add is_revoked column to api_key table" duration=1.750984ms
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.202259765Z level=info msg="Executing migration" id="create dashboard_snapshot table v4"
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.202927012Z level=info msg="Migration successfully executed" id="create dashboard_snapshot table v4" duration=667.227µs
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.204469161Z level=info msg="Executing migration" id="drop table dashboard_snapshot_v4 #1"
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.204984284Z level=info msg="Migration successfully executed" id="drop table dashboard_snapshot_v4 #1" duration=514.843µs
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.206775459Z level=info msg="Executing migration" id="create dashboard_snapshot table v5 #2"
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.207473486Z level=info msg="Migration successfully executed" id="create dashboard_snapshot table v5 #2" duration=698.387µs
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.209084987Z level=info msg="Executing migration" id="create index UQE_dashboard_snapshot_key - v5"
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.209736163Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_snapshot_key - v5" duration=650.936µs
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.211137469Z level=info msg="Executing migration" id="create index UQE_dashboard_snapshot_delete_key - v5"
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.211781465Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_snapshot_delete_key - v5" duration=646.216µs
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.213242142Z level=info msg="Executing migration" id="create index IDX_dashboard_snapshot_user_id - v5"
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.213843537Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_snapshot_user_id - v5" duration=601.685µs
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.215615291Z level=info msg="Executing migration" id="alter dashboard_snapshot to mediumtext v2"
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.215674723Z level=info msg="Migration successfully executed" id="alter dashboard_snapshot to mediumtext v2" duration=59.472µs
Mar  1 04:44:59 np0005634532 systemd[1]: Started libpod-conmon-30f74851a32f0512e8f0ab424c1b365cacccdafdd925cabb5889c619a6065bc5.scope.
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.217801986Z level=info msg="Executing migration" id="Update dashboard_snapshot table charset"
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.217828757Z level=info msg="Migration successfully executed" id="Update dashboard_snapshot table charset" duration=30.251µs
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.219193451Z level=info msg="Executing migration" id="Add column external_delete_url to dashboard_snapshots table"
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.221629013Z level=info msg="Migration successfully executed" id="Add column external_delete_url to dashboard_snapshots table" duration=2.434692ms
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.223054989Z level=info msg="Executing migration" id="Add encrypted dashboard json column"
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.224993507Z level=info msg="Migration successfully executed" id="Add encrypted dashboard json column" duration=1.938198ms
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.226429924Z level=info msg="Executing migration" id="Change dashboard_encrypted column to MEDIUMBLOB"
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.226945007Z level=info msg="Migration successfully executed" id="Change dashboard_encrypted column to MEDIUMBLOB" duration=515.593µs
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.228632419Z level=info msg="Executing migration" id="create quota table v1"
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.229263455Z level=info msg="Migration successfully executed" id="create quota table v1" duration=630.316µs
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.232458505Z level=info msg="Executing migration" id="create index UQE_quota_org_id_user_id_target - v1"
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.233147773Z level=info msg="Migration successfully executed" id="create index UQE_quota_org_id_user_id_target - v1" duration=688.958µs
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.234565248Z level=info msg="Executing migration" id="Update quota table charset"
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.234580599Z level=info msg="Migration successfully executed" id="Update quota table charset" duration=15.901µs
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.236132388Z level=info msg="Executing migration" id="create plugin_setting table"
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.236679002Z level=info msg="Migration successfully executed" id="create plugin_setting table" duration=547.074µs
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.238045186Z level=info msg="Executing migration" id="create index UQE_plugin_setting_org_id_plugin_id - v1"
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.238663112Z level=info msg="Migration successfully executed" id="create index UQE_plugin_setting_org_id_plugin_id - v1" duration=616.425µs
Mar  1 04:44:59 np0005634532 systemd[1]: Started libcrun container.
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.240352674Z level=info msg="Executing migration" id="Add column plugin_version to plugin_settings"
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.242400016Z level=info msg="Migration successfully executed" id="Add column plugin_version to plugin_settings" duration=2.047052ms
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.243887153Z level=info msg="Executing migration" id="Update plugin_setting table charset"
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.243909834Z level=info msg="Migration successfully executed" id="Update plugin_setting table charset" duration=25.091µs
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.245556795Z level=info msg="Executing migration" id="create session table"
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.246189651Z level=info msg="Migration successfully executed" id="create session table" duration=633.316µs
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.247773531Z level=info msg="Executing migration" id="Drop old table playlist table"
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.248076498Z level=info msg="Migration successfully executed" id="Drop old table playlist table" duration=303.127µs
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.249786172Z level=info msg="Executing migration" id="Drop old table playlist_item table"
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.249861003Z level=info msg="Migration successfully executed" id="Drop old table playlist_item table" duration=74.871µs
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.251533706Z level=info msg="Executing migration" id="create playlist table v2"
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.252204532Z level=info msg="Migration successfully executed" id="create playlist table v2" duration=670.657µs
Mar  1 04:44:59 np0005634532 podman[100886]: 2026-03-01 09:44:59.159324605 +0000 UTC m=+0.024125029 image pull e85424b0d443f37ddd2dd8a3bb2ef6f18dd352b987723a921b64289023af2914 quay.io/ceph/haproxy:2.3
Mar  1 04:44:59 np0005634532 podman[100886]: 2026-03-01 09:44:59.25411459 +0000 UTC m=+0.118915024 container init 30f74851a32f0512e8f0ab424c1b365cacccdafdd925cabb5889c619a6065bc5 (image=quay.io/ceph/haproxy:2.3, name=gallant_brahmagupta)
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.255550077Z level=info msg="Executing migration" id="create playlist item table v2"
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.256676455Z level=info msg="Migration successfully executed" id="create playlist item table v2" duration=1.126159ms
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.2584713Z level=info msg="Executing migration" id="Update playlist table charset"
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.258501691Z level=info msg="Migration successfully executed" id="Update playlist table charset" duration=28.891µs
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.260397899Z level=info msg="Executing migration" id="Update playlist_item table charset"
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.260420769Z level=info msg="Migration successfully executed" id="Update playlist_item table charset" duration=22.44µs
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.262087681Z level=info msg="Executing migration" id="Add playlist column created_at"
Mar  1 04:44:59 np0005634532 podman[100886]: 2026-03-01 09:44:59.262399659 +0000 UTC m=+0.127200033 container start 30f74851a32f0512e8f0ab424c1b365cacccdafdd925cabb5889c619a6065bc5 (image=quay.io/ceph/haproxy:2.3, name=gallant_brahmagupta)
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.264202544Z level=info msg="Migration successfully executed" id="Add playlist column created_at" duration=2.114473ms
Mar  1 04:44:59 np0005634532 podman[100886]: 2026-03-01 09:44:59.265588959 +0000 UTC m=+0.130389333 container attach 30f74851a32f0512e8f0ab424c1b365cacccdafdd925cabb5889c619a6065bc5 (image=quay.io/ceph/haproxy:2.3, name=gallant_brahmagupta)
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.265681402Z level=info msg="Executing migration" id="Add playlist column updated_at"
Mar  1 04:44:59 np0005634532 gallant_brahmagupta[100910]: 0 0
Mar  1 04:44:59 np0005634532 systemd[1]: libpod-30f74851a32f0512e8f0ab424c1b365cacccdafdd925cabb5889c619a6065bc5.scope: Deactivated successfully.
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.267765554Z level=info msg="Migration successfully executed" id="Add playlist column updated_at" duration=2.085992ms
Mar  1 04:44:59 np0005634532 podman[100886]: 2026-03-01 09:44:59.268489832 +0000 UTC m=+0.133290206 container died 30f74851a32f0512e8f0ab424c1b365cacccdafdd925cabb5889c619a6065bc5 (image=quay.io/ceph/haproxy:2.3, name=gallant_brahmagupta)
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.269265672Z level=info msg="Executing migration" id="drop preferences table v2"
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.269342214Z level=info msg="Migration successfully executed" id="drop preferences table v2" duration=76.682µs
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.271198571Z level=info msg="Executing migration" id="drop preferences table v3"
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.271268762Z level=info msg="Migration successfully executed" id="drop preferences table v3" duration=70.502µs
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.27277029Z level=info msg="Executing migration" id="create preferences table v3"
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.273384945Z level=info msg="Migration successfully executed" id="create preferences table v3" duration=614.685µs
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.274984746Z level=info msg="Executing migration" id="Update preferences table charset"
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.275029767Z level=info msg="Migration successfully executed" id="Update preferences table charset" duration=46.701µs
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.277478809Z level=info msg="Executing migration" id="Add column team_id in preferences"
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.279688874Z level=info msg="Migration successfully executed" id="Add column team_id in preferences" duration=2.209745ms
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.281416758Z level=info msg="Executing migration" id="Update team_id column values in preferences"
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.281541171Z level=info msg="Migration successfully executed" id="Update team_id column values in preferences" duration=124.483µs
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.282981417Z level=info msg="Executing migration" id="Add column week_start in preferences"
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.286036554Z level=info msg="Migration successfully executed" id="Add column week_start in preferences" duration=3.050977ms
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.287684755Z level=info msg="Executing migration" id="Add column preferences.json_data"
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.290503516Z level=info msg="Migration successfully executed" id="Add column preferences.json_data" duration=2.816931ms
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.292212289Z level=info msg="Executing migration" id="alter preferences.json_data to mediumtext v1"
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.292265661Z level=info msg="Migration successfully executed" id="alter preferences.json_data to mediumtext v1" duration=53.512µs
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.294081266Z level=info msg="Executing migration" id="Add preferences index org_id"
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.294763074Z level=info msg="Migration successfully executed" id="Add preferences index org_id" duration=682.228µs
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.296519198Z level=info msg="Executing migration" id="Add preferences index user_id"
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.297282327Z level=info msg="Migration successfully executed" id="Add preferences index user_id" duration=762.959µs
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.299166284Z level=info msg="Executing migration" id="create alert table v1"
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.300249562Z level=info msg="Migration successfully executed" id="create alert table v1" duration=1.082598ms
Mar  1 04:44:59 np0005634532 systemd[1]: var-lib-containers-storage-overlay-050581baaa14496809222e9e2926192570c717a95147ce5a9f25379154ef0270-merged.mount: Deactivated successfully.
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.302424756Z level=info msg="Executing migration" id="add index alert org_id & id "
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.30335931Z level=info msg="Migration successfully executed" id="add index alert org_id & id " duration=934.254µs
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.30533525Z level=info msg="Executing migration" id="add index alert state"
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.305929005Z level=info msg="Migration successfully executed" id="add index alert state" duration=593.245µs
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.308692054Z level=info msg="Executing migration" id="add index alert dashboard_id"
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.30931033Z level=info msg="Migration successfully executed" id="add index alert dashboard_id" duration=616.076µs
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.311240128Z level=info msg="Executing migration" id="Create alert_rule_tag table v1"
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.31172091Z level=info msg="Migration successfully executed" id="Create alert_rule_tag table v1" duration=478.332µs
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.313103735Z level=info msg="Executing migration" id="Add unique index alert_rule_tag.alert_id_tag_id"
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.313742711Z level=info msg="Migration successfully executed" id="Add unique index alert_rule_tag.alert_id_tag_id" duration=638.466µs
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.315310111Z level=info msg="Executing migration" id="drop index UQE_alert_rule_tag_alert_id_tag_id - v1"
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.315923726Z level=info msg="Migration successfully executed" id="drop index UQE_alert_rule_tag_alert_id_tag_id - v1" duration=613.365µs
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.317569108Z level=info msg="Executing migration" id="Rename table alert_rule_tag to alert_rule_tag_v1 - v1"
Mar  1 04:44:59 np0005634532 podman[100886]: 2026-03-01 09:44:59.319418304 +0000 UTC m=+0.184218688 container remove 30f74851a32f0512e8f0ab424c1b365cacccdafdd925cabb5889c619a6065bc5 (image=quay.io/ceph/haproxy:2.3, name=gallant_brahmagupta)
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.325332973Z level=info msg="Migration successfully executed" id="Rename table alert_rule_tag to alert_rule_tag_v1 - v1" duration=7.763245ms
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.327020626Z level=info msg="Executing migration" id="Create alert_rule_tag table v2"
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.327571959Z level=info msg="Migration successfully executed" id="Create alert_rule_tag table v2" duration=552.044µs
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.329229411Z level=info msg="Executing migration" id="create index UQE_alert_rule_tag_alert_id_tag_id - Add unique index alert_rule_tag.alert_id_tag_id V2"
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.329879908Z level=info msg="Migration successfully executed" id="create index UQE_alert_rule_tag_alert_id_tag_id - Add unique index alert_rule_tag.alert_id_tag_id V2" duration=648.766µs
Mar  1 04:44:59 np0005634532 systemd[1]: libpod-conmon-30f74851a32f0512e8f0ab424c1b365cacccdafdd925cabb5889c619a6065bc5.scope: Deactivated successfully.
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.332983236Z level=info msg="Executing migration" id="copy alert_rule_tag v1 to v2"
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.333257683Z level=info msg="Migration successfully executed" id="copy alert_rule_tag v1 to v2" duration=274.616µs
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.335372846Z level=info msg="Executing migration" id="drop table alert_rule_tag_v1"
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.335831907Z level=info msg="Migration successfully executed" id="drop table alert_rule_tag_v1" duration=459.561µs
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.337799517Z level=info msg="Executing migration" id="create alert_notification table v1"
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.338392002Z level=info msg="Migration successfully executed" id="create alert_notification table v1" duration=592.275µs
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.340600167Z level=info msg="Executing migration" id="Add column is_default"
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.343033009Z level=info msg="Migration successfully executed" id="Add column is_default" duration=2.432841ms
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.34506286Z level=info msg="Executing migration" id="Add column frequency"
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.347603024Z level=info msg="Migration successfully executed" id="Add column frequency" duration=2.539874ms
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.349415199Z level=info msg="Executing migration" id="Add column send_reminder"
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.352447766Z level=info msg="Migration successfully executed" id="Add column send_reminder" duration=3.032567ms
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.354080637Z level=info msg="Executing migration" id="Add column disable_resolve_message"
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.356745364Z level=info msg="Migration successfully executed" id="Add column disable_resolve_message" duration=2.664547ms
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.358639811Z level=info msg="Executing migration" id="add index alert_notification org_id & name"
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.359291918Z level=info msg="Migration successfully executed" id="add index alert_notification org_id & name" duration=652.497µs
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.36138205Z level=info msg="Executing migration" id="Update alert table charset"
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.361431432Z level=info msg="Migration successfully executed" id="Update alert table charset" duration=50.122µs
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.363129824Z level=info msg="Executing migration" id="Update alert_notification table charset"
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.363179616Z level=info msg="Migration successfully executed" id="Update alert_notification table charset" duration=50.382µs
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.364988011Z level=info msg="Executing migration" id="create notification_journal table v1"
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.365622067Z level=info msg="Migration successfully executed" id="create notification_journal table v1" duration=633.346µs
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.369105095Z level=info msg="Executing migration" id="add index notification_journal org_id & alert_id & notifier_id"
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.36972609Z level=info msg="Migration successfully executed" id="add index notification_journal org_id & alert_id & notifier_id" duration=620.875µs
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.371261399Z level=info msg="Executing migration" id="drop alert_notification_journal"
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.371872374Z level=info msg="Migration successfully executed" id="drop alert_notification_journal" duration=610.845µs
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.373534246Z level=info msg="Executing migration" id="create alert_notification_state table v1"
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.374116061Z level=info msg="Migration successfully executed" id="create alert_notification_state table v1" duration=581.745µs
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.375923916Z level=info msg="Executing migration" id="add index alert_notification_state org_id & alert_id & notifier_id"
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.376561352Z level=info msg="Migration successfully executed" id="add index alert_notification_state org_id & alert_id & notifier_id" duration=637.466µs
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.378080031Z level=info msg="Executing migration" id="Add for to alert table"
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.380978274Z level=info msg="Migration successfully executed" id="Add for to alert table" duration=2.896363ms
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.38281251Z level=info msg="Executing migration" id="Add column uid in alert_notification"
Mar  1 04:44:59 np0005634532 systemd[1]: Reloading.
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.385913068Z level=info msg="Migration successfully executed" id="Add column uid in alert_notification" duration=3.096638ms
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.387815896Z level=info msg="Executing migration" id="Update uid column values in alert_notification"
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.388071412Z level=info msg="Migration successfully executed" id="Update uid column values in alert_notification" duration=255.696µs
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.389614081Z level=info msg="Executing migration" id="Add unique index alert_notification_org_id_uid"
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.390263727Z level=info msg="Migration successfully executed" id="Add unique index alert_notification_org_id_uid" duration=649.586µs
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.391832717Z level=info msg="Executing migration" id="Remove unique index org_id_name"
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.392478003Z level=info msg="Migration successfully executed" id="Remove unique index org_id_name" duration=645.226µs
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.39433223Z level=info msg="Executing migration" id="Add column secure_settings in alert_notification"
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.396933395Z level=info msg="Migration successfully executed" id="Add column secure_settings in alert_notification" duration=2.601355ms
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.398447283Z level=info msg="Executing migration" id="alter alert.settings to mediumtext"
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.398528375Z level=info msg="Migration successfully executed" id="alter alert.settings to mediumtext" duration=81.652µs
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.399923221Z level=info msg="Executing migration" id="Add non-unique index alert_notification_state_alert_id"
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.400557656Z level=info msg="Migration successfully executed" id="Add non-unique index alert_notification_state_alert_id" duration=634.126µs
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.402104675Z level=info msg="Executing migration" id="Add non-unique index alert_rule_tag_alert_id"
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.402785213Z level=info msg="Migration successfully executed" id="Add non-unique index alert_rule_tag_alert_id" duration=682.717µs
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.404485305Z level=info msg="Executing migration" id="Drop old annotation table v4"
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.404608648Z level=info msg="Migration successfully executed" id="Drop old annotation table v4" duration=124.063µs
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.40624815Z level=info msg="Executing migration" id="create annotation table v5"
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.406914336Z level=info msg="Migration successfully executed" id="create annotation table v5" duration=666.246µs
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.408560838Z level=info msg="Executing migration" id="add index annotation 0 v3"
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.409213004Z level=info msg="Migration successfully executed" id="add index annotation 0 v3" duration=652.056µs
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.410983249Z level=info msg="Executing migration" id="add index annotation 1 v3"
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.411608805Z level=info msg="Migration successfully executed" id="add index annotation 1 v3" duration=625.496µs
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.413155734Z level=info msg="Executing migration" id="add index annotation 2 v3"
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.413765009Z level=info msg="Migration successfully executed" id="add index annotation 2 v3" duration=609.175µs
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.415643106Z level=info msg="Executing migration" id="add index annotation 3 v3"
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.416398815Z level=info msg="Migration successfully executed" id="add index annotation 3 v3" duration=757.039µs
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.41859029Z level=info msg="Executing migration" id="add index annotation 4 v3"
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.419295528Z level=info msg="Migration successfully executed" id="add index annotation 4 v3" duration=705.098µs
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.420899038Z level=info msg="Executing migration" id="Update annotation table charset"
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.42094764Z level=info msg="Migration successfully executed" id="Update annotation table charset" duration=49.042µs
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.422507079Z level=info msg="Executing migration" id="Add column region_id to annotation table"
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.425424832Z level=info msg="Migration successfully executed" id="Add column region_id to annotation table" duration=2.918893ms
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.4269063Z level=info msg="Executing migration" id="Drop category_id index"
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.427540626Z level=info msg="Migration successfully executed" id="Drop category_id index" duration=634.496µs
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.429370432Z level=info msg="Executing migration" id="Add column tags to annotation table"
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.432207643Z level=info msg="Migration successfully executed" id="Add column tags to annotation table" duration=2.836571ms
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.433915616Z level=info msg="Executing migration" id="Create annotation_tag table v2"
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.434445959Z level=info msg="Migration successfully executed" id="Create annotation_tag table v2" duration=530.243µs
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.436257235Z level=info msg="Executing migration" id="Add unique index annotation_tag.annotation_id_tag_id"
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.436903191Z level=info msg="Migration successfully executed" id="Add unique index annotation_tag.annotation_id_tag_id" duration=645.806µs
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.438515732Z level=info msg="Executing migration" id="drop index UQE_annotation_tag_annotation_id_tag_id - v2"
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.439205829Z level=info msg="Migration successfully executed" id="drop index UQE_annotation_tag_annotation_id_tag_id - v2" duration=689.827µs
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.441044656Z level=info msg="Executing migration" id="Rename table annotation_tag to annotation_tag_v2 - v2"
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.448821441Z level=info msg="Migration successfully executed" id="Rename table annotation_tag to annotation_tag_v2 - v2" duration=7.778196ms
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.450388351Z level=info msg="Executing migration" id="Create annotation_tag table v3"
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.450913424Z level=info msg="Migration successfully executed" id="Create annotation_tag table v3" duration=525.003µs
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.452666588Z level=info msg="Executing migration" id="create index UQE_annotation_tag_annotation_id_tag_id - Add unique index annotation_tag.annotation_id_tag_id V3"
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.453347285Z level=info msg="Migration successfully executed" id="create index UQE_annotation_tag_annotation_id_tag_id - Add unique index annotation_tag.annotation_id_tag_id V3" duration=680.477µs
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.455018047Z level=info msg="Executing migration" id="copy annotation_tag v2 to v3"
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.455259683Z level=info msg="Migration successfully executed" id="copy annotation_tag v2 to v3" duration=241.576µs
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.456827963Z level=info msg="Executing migration" id="drop table annotation_tag_v2"
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.457274734Z level=info msg="Migration successfully executed" id="drop table annotation_tag_v2" duration=447.011µs
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.458845834Z level=info msg="Executing migration" id="Update alert annotations and set TEXT to empty"
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.459021568Z level=info msg="Migration successfully executed" id="Update alert annotations and set TEXT to empty" duration=173.594µs
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.460984277Z level=info msg="Executing migration" id="Add created time to annotation table"
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.467507042Z level=info msg="Migration successfully executed" id="Add created time to annotation table" duration=6.512544ms
Mar  1 04:44:59 np0005634532 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.471287507Z level=info msg="Executing migration" id="Add updated time to annotation table"
Mar  1 04:44:59 np0005634532 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.477249027Z level=info msg="Migration successfully executed" id="Add updated time to annotation table" duration=5.95895ms
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.480270993Z level=info msg="Executing migration" id="Add index for created in annotation table"
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.481482413Z level=info msg="Migration successfully executed" id="Add index for created in annotation table" duration=1.21191ms
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.483477044Z level=info msg="Executing migration" id="Add index for updated in annotation table"
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.48453967Z level=info msg="Migration successfully executed" id="Add index for updated in annotation table" duration=1.060536ms
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.486241743Z level=info msg="Executing migration" id="Convert existing annotations from seconds to milliseconds"
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.48652677Z level=info msg="Migration successfully executed" id="Convert existing annotations from seconds to milliseconds" duration=288.527µs
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.488257804Z level=info msg="Executing migration" id="Add epoch_end column"
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.492362267Z level=info msg="Migration successfully executed" id="Add epoch_end column" duration=4.104483ms
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.494132622Z level=info msg="Executing migration" id="Add index for epoch_end"
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.495098956Z level=info msg="Migration successfully executed" id="Add index for epoch_end" duration=965.974µs
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.496767408Z level=info msg="Executing migration" id="Make epoch_end the same as epoch"
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.496952613Z level=info msg="Migration successfully executed" id="Make epoch_end the same as epoch" duration=185.445µs
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.498561373Z level=info msg="Executing migration" id="Move region to single row"
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.498981804Z level=info msg="Migration successfully executed" id="Move region to single row" duration=420.241µs
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.500896552Z level=info msg="Executing migration" id="Remove index org_id_epoch from annotation table"
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.501830226Z level=info msg="Migration successfully executed" id="Remove index org_id_epoch from annotation table" duration=933.493µs
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.503500928Z level=info msg="Executing migration" id="Remove index org_id_dashboard_id_panel_id_epoch from annotation table"
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.504428631Z level=info msg="Migration successfully executed" id="Remove index org_id_dashboard_id_panel_id_epoch from annotation table" duration=927.833µs
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.506101703Z level=info msg="Executing migration" id="Add index for org_id_dashboard_id_epoch_end_epoch on annotation table"
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.507067267Z level=info msg="Migration successfully executed" id="Add index for org_id_dashboard_id_epoch_end_epoch on annotation table" duration=965.164µs
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.508817011Z level=info msg="Executing migration" id="Add index for org_id_epoch_end_epoch on annotation table"
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.509768775Z level=info msg="Migration successfully executed" id="Add index for org_id_epoch_end_epoch on annotation table" duration=951.294µs
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.511418687Z level=info msg="Executing migration" id="Remove index org_id_epoch_epoch_end from annotation table"
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.512284379Z level=info msg="Migration successfully executed" id="Remove index org_id_epoch_epoch_end from annotation table" duration=865.492µs
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.514060203Z level=info msg="Executing migration" id="Add index for alert_id on annotation table"
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.514893044Z level=info msg="Migration successfully executed" id="Add index for alert_id on annotation table" duration=832.601µs
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.516804713Z level=info msg="Executing migration" id="Increase tags column to length 4096"
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.516866904Z level=info msg="Migration successfully executed" id="Increase tags column to length 4096" duration=62.952µs
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.520555997Z level=info msg="Executing migration" id="create test_data table"
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.521408058Z level=info msg="Migration successfully executed" id="create test_data table" duration=849.751µs
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.526442065Z level=info msg="Executing migration" id="create dashboard_version table v1"
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.527272436Z level=info msg="Migration successfully executed" id="create dashboard_version table v1" duration=831.061µs
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.528971949Z level=info msg="Executing migration" id="add index dashboard_version.dashboard_id"
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.52983144Z level=info msg="Migration successfully executed" id="add index dashboard_version.dashboard_id" duration=858.961µs
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.532062847Z level=info msg="Executing migration" id="add unique index dashboard_version.dashboard_id and dashboard_version.version"
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.532945079Z level=info msg="Migration successfully executed" id="add unique index dashboard_version.dashboard_id and dashboard_version.version" duration=882.062µs
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.53457154Z level=info msg="Executing migration" id="Set dashboard version to 1 where 0"
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.534744874Z level=info msg="Migration successfully executed" id="Set dashboard version to 1 where 0" duration=171.904µs
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.536450307Z level=info msg="Executing migration" id="save existing dashboard data in dashboard_version table v1"
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.536797896Z level=info msg="Migration successfully executed" id="save existing dashboard data in dashboard_version table v1" duration=347.659µs
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.538230492Z level=info msg="Executing migration" id="alter dashboard_version.data to mediumtext v1"
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.538291113Z level=info msg="Migration successfully executed" id="alter dashboard_version.data to mediumtext v1" duration=62.681µs
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.540070048Z level=info msg="Executing migration" id="create team table"
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.540810727Z level=info msg="Migration successfully executed" id="create team table" duration=740.249µs
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.542782306Z level=info msg="Executing migration" id="add index team.org_id"
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.543808052Z level=info msg="Migration successfully executed" id="add index team.org_id" duration=1.025326ms
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.545505065Z level=info msg="Executing migration" id="add unique index team_org_id_name"
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.546425648Z level=info msg="Migration successfully executed" id="add unique index team_org_id_name" duration=920.153µs
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.54810597Z level=info msg="Executing migration" id="Add column uid in team"
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.552422849Z level=info msg="Migration successfully executed" id="Add column uid in team" duration=4.316379ms
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.554015339Z level=info msg="Executing migration" id="Update uid column values in team"
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.554184093Z level=info msg="Migration successfully executed" id="Update uid column values in team" duration=169.074µs
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.555773633Z level=info msg="Executing migration" id="Add unique index team_org_id_uid"
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.556664846Z level=info msg="Migration successfully executed" id="Add unique index team_org_id_uid" duration=890.873µs
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.558379559Z level=info msg="Executing migration" id="create team member table"
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.559158919Z level=info msg="Migration successfully executed" id="create team member table" duration=779.069µs
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.560697737Z level=info msg="Executing migration" id="add index team_member.org_id"
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.56160756Z level=info msg="Migration successfully executed" id="add index team_member.org_id" duration=909.523µs
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.563045926Z level=info msg="Executing migration" id="add unique index team_member_org_id_team_id_user_id"
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.563946629Z level=info msg="Migration successfully executed" id="add unique index team_member_org_id_team_id_user_id" duration=900.283µs
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.565732404Z level=info msg="Executing migration" id="add index team_member.team_id"
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.566608156Z level=info msg="Migration successfully executed" id="add index team_member.team_id" duration=875.662µs
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.568415211Z level=info msg="Executing migration" id="Add column email to team table"
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.573150731Z level=info msg="Migration successfully executed" id="Add column email to team table" duration=4.73518ms
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.574887474Z level=info msg="Executing migration" id="Add column external to team_member table"
Mar  1 04:44:59 np0005634532 ceph-osd[84309]: log_channel(cluster) log [DBG] : 9.15 scrub starts
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.579645514Z level=info msg="Migration successfully executed" id="Add column external to team_member table" duration=4.75726ms
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.582107016Z level=info msg="Executing migration" id="Add column permission to team_member table"
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.586781244Z level=info msg="Migration successfully executed" id="Add column permission to team_member table" duration=4.673388ms
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.588583489Z level=info msg="Executing migration" id="create dashboard acl table"
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.589730438Z level=info msg="Migration successfully executed" id="create dashboard acl table" duration=1.146339ms
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.591511013Z level=info msg="Executing migration" id="add index dashboard_acl_dashboard_id"
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.592463227Z level=info msg="Migration successfully executed" id="add index dashboard_acl_dashboard_id" duration=952.264µs
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.594117098Z level=info msg="Executing migration" id="add unique index dashboard_acl_dashboard_id_user_id"
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.595243117Z level=info msg="Migration successfully executed" id="add unique index dashboard_acl_dashboard_id_user_id" duration=1.122899ms
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.598729075Z level=info msg="Executing migration" id="add unique index dashboard_acl_dashboard_id_team_id"
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.599636737Z level=info msg="Migration successfully executed" id="add unique index dashboard_acl_dashboard_id_team_id" duration=907.552µs
Mar  1 04:44:59 np0005634532 ceph-osd[84309]: log_channel(cluster) log [DBG] : 9.15 scrub ok
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.601774541Z level=info msg="Executing migration" id="add index dashboard_acl_user_id"
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.602647173Z level=info msg="Migration successfully executed" id="add index dashboard_acl_user_id" duration=873.092µs
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.604144911Z level=info msg="Executing migration" id="add index dashboard_acl_team_id"
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.605078594Z level=info msg="Migration successfully executed" id="add index dashboard_acl_team_id" duration=933.253µs
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.606843479Z level=info msg="Executing migration" id="add index dashboard_acl_org_id_role"
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.607776562Z level=info msg="Migration successfully executed" id="add index dashboard_acl_org_id_role" duration=932.593µs
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.609441684Z level=info msg="Executing migration" id="add index dashboard_permission"
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.610348727Z level=info msg="Migration successfully executed" id="add index dashboard_permission" duration=906.843µs
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.611800714Z level=info msg="Executing migration" id="save default acl rules in dashboard_acl table"
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.612306796Z level=info msg="Migration successfully executed" id="save default acl rules in dashboard_acl table" duration=505.282µs
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.614348428Z level=info msg="Executing migration" id="delete acl rules for deleted dashboards and folders"
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.614598104Z level=info msg="Migration successfully executed" id="delete acl rules for deleted dashboards and folders" duration=249.816µs
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.616323177Z level=info msg="Executing migration" id="create tag table"
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.61721634Z level=info msg="Migration successfully executed" id="create tag table" duration=892.663µs
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.6187916Z level=info msg="Executing migration" id="add index tag.key_value"
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.619665102Z level=info msg="Migration successfully executed" id="add index tag.key_value" duration=875.532µs
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.62117689Z level=info msg="Executing migration" id="create login attempt table"
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.621956079Z level=info msg="Migration successfully executed" id="create login attempt table" duration=778.63µs
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.623461987Z level=info msg="Executing migration" id="add index login_attempt.username"
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.624434692Z level=info msg="Migration successfully executed" id="add index login_attempt.username" duration=972.075µs
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.626211026Z level=info msg="Executing migration" id="drop index IDX_login_attempt_username - v1"
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.627077388Z level=info msg="Migration successfully executed" id="drop index IDX_login_attempt_username - v1" duration=865.562µs
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.629039647Z level=info msg="Executing migration" id="Rename table login_attempt to login_attempt_tmp_qwerty - v1"
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.641162423Z level=info msg="Migration successfully executed" id="Rename table login_attempt to login_attempt_tmp_qwerty - v1" duration=12.116895ms
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.643582114Z level=info msg="Executing migration" id="create login_attempt v2"
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.644215099Z level=info msg="Migration successfully executed" id="create login_attempt v2" duration=629.746µs
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.646017295Z level=info msg="Executing migration" id="create index IDX_login_attempt_username - v2"
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.646837085Z level=info msg="Migration successfully executed" id="create index IDX_login_attempt_username - v2" duration=820.64µs
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.648571029Z level=info msg="Executing migration" id="copy login_attempt v1 to v2"
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.648778704Z level=info msg="Migration successfully executed" id="copy login_attempt v1 to v2" duration=211.745µs
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.650102498Z level=info msg="Executing migration" id="drop login_attempt_tmp_qwerty"
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.650521858Z level=info msg="Migration successfully executed" id="drop login_attempt_tmp_qwerty" duration=419.26µs
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.652071267Z level=info msg="Executing migration" id="create user auth table"
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.652739244Z level=info msg="Migration successfully executed" id="create user auth table" duration=668.597µs
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.654254482Z level=info msg="Executing migration" id="create index IDX_user_auth_auth_module_auth_id - v1"
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.654882838Z level=info msg="Migration successfully executed" id="create index IDX_user_auth_auth_module_auth_id - v1" duration=628.136µs
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.656725814Z level=info msg="Executing migration" id="alter user_auth.auth_id to length 190"
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.656769125Z level=info msg="Migration successfully executed" id="alter user_auth.auth_id to length 190" duration=43.661µs
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.658255693Z level=info msg="Executing migration" id="Add OAuth access token to user_auth"
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.661685949Z level=info msg="Migration successfully executed" id="Add OAuth access token to user_auth" duration=3.429596ms
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.663452414Z level=info msg="Executing migration" id="Add OAuth refresh token to user_auth"
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.666716606Z level=info msg="Migration successfully executed" id="Add OAuth refresh token to user_auth" duration=3.263852ms
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.668169772Z level=info msg="Executing migration" id="Add OAuth token type to user_auth"
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.671426244Z level=info msg="Migration successfully executed" id="Add OAuth token type to user_auth" duration=3.256352ms
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.672873381Z level=info msg="Executing migration" id="Add OAuth expiry to user_auth"
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.676193784Z level=info msg="Migration successfully executed" id="Add OAuth expiry to user_auth" duration=3.320113ms
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.677685902Z level=info msg="Executing migration" id="Add index to user_id column in user_auth"
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.678310938Z level=info msg="Migration successfully executed" id="Add index to user_id column in user_auth" duration=622.216µs
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.68000897Z level=info msg="Executing migration" id="Add OAuth ID token to user_auth"
Mar  1 04:44:59 np0005634532 systemd[1]: Reloading.
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.683340384Z level=info msg="Migration successfully executed" id="Add OAuth ID token to user_auth" duration=3.346774ms
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.684905404Z level=info msg="Executing migration" id="create server_lock table"
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.685459708Z level=info msg="Migration successfully executed" id="create server_lock table" duration=554.134µs
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.688374661Z level=info msg="Executing migration" id="add index server_lock.operation_uid"
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.688986096Z level=info msg="Migration successfully executed" id="add index server_lock.operation_uid" duration=611.035µs
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.692420923Z level=info msg="Executing migration" id="create user auth token table"
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.692987907Z level=info msg="Migration successfully executed" id="create user auth token table" duration=566.744µs
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.694373882Z level=info msg="Executing migration" id="add unique index user_auth_token.auth_token"
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.694986567Z level=info msg="Migration successfully executed" id="add unique index user_auth_token.auth_token" duration=612.715µs
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.696362562Z level=info msg="Executing migration" id="add unique index user_auth_token.prev_auth_token"
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.696978037Z level=info msg="Migration successfully executed" id="add unique index user_auth_token.prev_auth_token" duration=615.355µs
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.698356242Z level=info msg="Executing migration" id="add index user_auth_token.user_id"
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.699042199Z level=info msg="Migration successfully executed" id="add index user_auth_token.user_id" duration=685.667µs
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.700516467Z level=info msg="Executing migration" id="Add revoked_at to the user auth token"
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.704057486Z level=info msg="Migration successfully executed" id="Add revoked_at to the user auth token" duration=3.538709ms
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.705683557Z level=info msg="Executing migration" id="add index user_auth_token.revoked_at"
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.706337633Z level=info msg="Migration successfully executed" id="add index user_auth_token.revoked_at" duration=654.016µs
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.707933583Z level=info msg="Executing migration" id="create cache_data table"
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.708537468Z level=info msg="Migration successfully executed" id="create cache_data table" duration=603.735µs
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.710088137Z level=info msg="Executing migration" id="add unique index cache_data.cache_key"
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.710706393Z level=info msg="Migration successfully executed" id="add unique index cache_data.cache_key" duration=618.016µs
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.712107198Z level=info msg="Executing migration" id="create short_url table v1"
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.712711114Z level=info msg="Migration successfully executed" id="create short_url table v1" duration=604.015µs
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.714294703Z level=info msg="Executing migration" id="add index short_url.org_id-uid"
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.71495257Z level=info msg="Migration successfully executed" id="add index short_url.org_id-uid" duration=680.447µs
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.716325394Z level=info msg="Executing migration" id="alter table short_url alter column created_by type to bigint"
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.716367125Z level=info msg="Migration successfully executed" id="alter table short_url alter column created_by type to bigint" duration=42.351µs
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.717663328Z level=info msg="Executing migration" id="delete alert_definition table"
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.71771936Z level=info msg="Migration successfully executed" id="delete alert_definition table" duration=56.332µs
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.719119925Z level=info msg="Executing migration" id="recreate alert_definition table"
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.71971099Z level=info msg="Migration successfully executed" id="recreate alert_definition table" duration=590.785µs
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.721234898Z level=info msg="Executing migration" id="add index in alert_definition on org_id and title columns"
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.722276824Z level=info msg="Migration successfully executed" id="add index in alert_definition on org_id and title columns" duration=1.038966ms
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.724142301Z level=info msg="Executing migration" id="add index in alert_definition on org_id and uid columns"
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.725209758Z level=info msg="Migration successfully executed" id="add index in alert_definition on org_id and uid columns" duration=1.067227ms
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.72688859Z level=info msg="Executing migration" id="alter alert_definition table data column to mediumtext in mysql"
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.726953052Z level=info msg="Migration successfully executed" id="alter alert_definition table data column to mediumtext in mysql" duration=67.262µs
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.728810319Z level=info msg="Executing migration" id="drop index in alert_definition on org_id and title columns"
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.729781813Z level=info msg="Migration successfully executed" id="drop index in alert_definition on org_id and title columns" duration=971.084µs
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.735284062Z level=info msg="Executing migration" id="drop index in alert_definition on org_id and uid columns"
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.736379069Z level=info msg="Migration successfully executed" id="drop index in alert_definition on org_id and uid columns" duration=1.097587ms
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.738179655Z level=info msg="Executing migration" id="add unique index in alert_definition on org_id and title columns"
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.739281472Z level=info msg="Migration successfully executed" id="add unique index in alert_definition on org_id and title columns" duration=1.101547ms
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.741531029Z level=info msg="Executing migration" id="add unique index in alert_definition on org_id and uid columns"
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.742727939Z level=info msg="Migration successfully executed" id="add unique index in alert_definition on org_id and uid columns" duration=1.19651ms
Mar  1 04:44:59 np0005634532 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.745107369Z level=info msg="Executing migration" id="Add column paused in alert_definition"
Mar  1 04:44:59 np0005634532 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.752788422Z level=info msg="Migration successfully executed" id="Add column paused in alert_definition" duration=7.680933ms
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.754873525Z level=info msg="Executing migration" id="drop alert_definition table"
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.755658424Z level=info msg="Migration successfully executed" id="drop alert_definition table" duration=785.639µs
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.757424109Z level=info msg="Executing migration" id="delete alert_definition_version table"
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.75748477Z level=info msg="Migration successfully executed" id="delete alert_definition_version table" duration=60.741µs
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.7590545Z level=info msg="Executing migration" id="recreate alert_definition_version table"
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.759750098Z level=info msg="Migration successfully executed" id="recreate alert_definition_version table" duration=695.087µs
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.761392049Z level=info msg="Executing migration" id="add index in alert_definition_version table on alert_definition_id and version columns"
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.762147778Z level=info msg="Migration successfully executed" id="add index in alert_definition_version table on alert_definition_id and version columns" duration=753.439µs
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.763871511Z level=info msg="Executing migration" id="add index in alert_definition_version table on alert_definition_uid and version columns"
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.764576039Z level=info msg="Migration successfully executed" id="add index in alert_definition_version table on alert_definition_uid and version columns" duration=704.148µs
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.768016026Z level=info msg="Executing migration" id="alter alert_definition_version table data column to mediumtext in mysql"
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.768061627Z level=info msg="Migration successfully executed" id="alter alert_definition_version table data column to mediumtext in mysql" duration=46.192µs
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.769747839Z level=info msg="Executing migration" id="drop alert_definition_version table"
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.770423666Z level=info msg="Migration successfully executed" id="drop alert_definition_version table" duration=675.677µs
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.772356985Z level=info msg="Executing migration" id="create alert_instance table"
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.773055542Z level=info msg="Migration successfully executed" id="create alert_instance table" duration=698.127µs
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.774690304Z level=info msg="Executing migration" id="add index in alert_instance table on def_org_id, def_uid and current_state columns"
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.775395371Z level=info msg="Migration successfully executed" id="add index in alert_instance table on def_org_id, def_uid and current_state columns" duration=704.838µs
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.777087424Z level=info msg="Executing migration" id="add index in alert_instance table on def_org_id, current_state columns"
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.777902954Z level=info msg="Migration successfully executed" id="add index in alert_instance table on def_org_id, current_state columns" duration=817.08µs
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.781613218Z level=info msg="Executing migration" id="add column current_state_end to alert_instance"
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.786800898Z level=info msg="Migration successfully executed" id="add column current_state_end to alert_instance" duration=5.18707ms
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.788666765Z level=info msg="Executing migration" id="remove index def_org_id, def_uid, current_state on alert_instance"
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.789460075Z level=info msg="Migration successfully executed" id="remove index def_org_id, def_uid, current_state on alert_instance" duration=793.57µs
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.791117397Z level=info msg="Executing migration" id="remove index def_org_id, current_state on alert_instance"
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.791891887Z level=info msg="Migration successfully executed" id="remove index def_org_id, current_state on alert_instance" duration=774.55µs
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.793983919Z level=info msg="Executing migration" id="rename def_org_id to rule_org_id in alert_instance"
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.819800579Z level=info msg="Migration successfully executed" id="rename def_org_id to rule_org_id in alert_instance" duration=25.81159ms
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.832600591Z level=info msg="Executing migration" id="rename def_uid to rule_uid in alert_instance"
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.856224316Z level=info msg="Migration successfully executed" id="rename def_uid to rule_uid in alert_instance" duration=23.620745ms
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.858297728Z level=info msg="Executing migration" id="add index rule_org_id, rule_uid, current_state on alert_instance"
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.859099618Z level=info msg="Migration successfully executed" id="add index rule_org_id, rule_uid, current_state on alert_instance" duration=801.88µs
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.86077714Z level=info msg="Executing migration" id="add index rule_org_id, current_state on alert_instance"
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.861480388Z level=info msg="Migration successfully executed" id="add index rule_org_id, current_state on alert_instance" duration=703.488µs
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.863224312Z level=info msg="Executing migration" id="add current_reason column related to current_state"
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.867409077Z level=info msg="Migration successfully executed" id="add current_reason column related to current_state" duration=4.184645ms
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.869015218Z level=info msg="Executing migration" id="add result_fingerprint column to alert_instance"
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.872844174Z level=info msg="Migration successfully executed" id="add result_fingerprint column to alert_instance" duration=3.829046ms
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.876189298Z level=info msg="Executing migration" id="create alert_rule table"
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.876860015Z level=info msg="Migration successfully executed" id="create alert_rule table" duration=670.607µs
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.879068041Z level=info msg="Executing migration" id="add index in alert_rule on org_id and title columns"
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.87984173Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id and title columns" duration=772.969µs
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.882355383Z level=info msg="Executing migration" id="add index in alert_rule on org_id and uid columns"
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.883190334Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id and uid columns" duration=835.191µs
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.886151349Z level=info msg="Executing migration" id="add index in alert_rule on org_id, namespace_uid, group_uid columns"
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.886895598Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, namespace_uid, group_uid columns" duration=743.969µs
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.889041362Z level=info msg="Executing migration" id="alter alert_rule table data column to mediumtext in mysql"
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.889088963Z level=info msg="Migration successfully executed" id="alter alert_rule table data column to mediumtext in mysql" duration=49.521µs
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.890514079Z level=info msg="Executing migration" id="add column for to alert_rule"
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.894582191Z level=info msg="Migration successfully executed" id="add column for to alert_rule" duration=4.064772ms
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.896222203Z level=info msg="Executing migration" id="add column annotations to alert_rule"
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.900337146Z level=info msg="Migration successfully executed" id="add column annotations to alert_rule" duration=4.115284ms
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.901849874Z level=info msg="Executing migration" id="add column labels to alert_rule"
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.905753402Z level=info msg="Migration successfully executed" id="add column labels to alert_rule" duration=3.903208ms
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.90722643Z level=info msg="Executing migration" id="remove unique index from alert_rule on org_id, title columns"
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.907901036Z level=info msg="Migration successfully executed" id="remove unique index from alert_rule on org_id, title columns" duration=674.207µs
Mar  1 04:44:59 np0005634532 systemd[1]: Starting Ceph haproxy.rgw.default.compute-0.hyuwxv for 437b1e74-f995-5d64-af1d-257ce01d77ab...
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.909313032Z level=info msg="Executing migration" id="add index in alert_rule on org_id, namespase_uid and title columns"
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.91001069Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, namespase_uid and title columns" duration=682.277µs
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.911364874Z level=info msg="Executing migration" id="add dashboard_uid column to alert_rule"
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.915292553Z level=info msg="Migration successfully executed" id="add dashboard_uid column to alert_rule" duration=3.926888ms
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.916633946Z level=info msg="Executing migration" id="add panel_id column to alert_rule"
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.920493053Z level=info msg="Migration successfully executed" id="add panel_id column to alert_rule" duration=3.858687ms
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.922049333Z level=info msg="Executing migration" id="add index in alert_rule on org_id, dashboard_uid and panel_id columns"
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.922788341Z level=info msg="Migration successfully executed" id="add index in alert_rule on org_id, dashboard_uid and panel_id columns" duration=738.468µs
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.924432553Z level=info msg="Executing migration" id="add rule_group_idx column to alert_rule"
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.928503575Z level=info msg="Migration successfully executed" id="add rule_group_idx column to alert_rule" duration=4.066482ms
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.930065744Z level=info msg="Executing migration" id="add is_paused column to alert_rule table"
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.934199488Z level=info msg="Migration successfully executed" id="add is_paused column to alert_rule table" duration=4.136204ms
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.935780398Z level=info msg="Executing migration" id="fix is_paused column for alert_rule table"
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.93583495Z level=info msg="Migration successfully executed" id="fix is_paused column for alert_rule table" duration=54.782µs
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.937496361Z level=info msg="Executing migration" id="create alert_rule_version table"
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.938500767Z level=info msg="Migration successfully executed" id="create alert_rule_version table" duration=1.003946ms
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.939993084Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_uid and version columns"
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.940888497Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_uid and version columns" duration=894.883µs
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.942446886Z level=info msg="Executing migration" id="add index in alert_rule_version table on rule_org_id, rule_namespace_uid and rule_group columns"
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.94338136Z level=info msg="Migration successfully executed" id="add index in alert_rule_version table on rule_org_id, rule_namespace_uid and rule_group columns" duration=935.833µs
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.945112363Z level=info msg="Executing migration" id="alter alert_rule_version table data column to mediumtext in mysql"
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.945166905Z level=info msg="Migration successfully executed" id="alter alert_rule_version table data column to mediumtext in mysql" duration=55.252µs
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.946804206Z level=info msg="Executing migration" id="add column for to alert_rule_version"
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.951132225Z level=info msg="Migration successfully executed" id="add column for to alert_rule_version" duration=4.327139ms
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.953068933Z level=info msg="Executing migration" id="add column annotations to alert_rule_version"
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.957537536Z level=info msg="Migration successfully executed" id="add column annotations to alert_rule_version" duration=4.470813ms
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.959039974Z level=info msg="Executing migration" id="add column labels to alert_rule_version"
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.963229879Z level=info msg="Migration successfully executed" id="add column labels to alert_rule_version" duration=4.188535ms
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.964899211Z level=info msg="Executing migration" id="add rule_group_idx column to alert_rule_version"
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.969118287Z level=info msg="Migration successfully executed" id="add rule_group_idx column to alert_rule_version" duration=4.217476ms
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.970604785Z level=info msg="Executing migration" id="add is_paused column to alert_rule_versions table"
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.97477608Z level=info msg="Migration successfully executed" id="add is_paused column to alert_rule_versions table" duration=4.170645ms
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.976354129Z level=info msg="Executing migration" id="fix is_paused column for alert_rule_version table"
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.976403811Z level=info msg="Migration successfully executed" id="fix is_paused column for alert_rule_version table" duration=49.862µs
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.977914119Z level=info msg="Executing migration" id=create_alert_configuration_table
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.978511094Z level=info msg="Migration successfully executed" id=create_alert_configuration_table duration=597.165µs
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.980264728Z level=info msg="Executing migration" id="Add column default in alert_configuration"
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.984509275Z level=info msg="Migration successfully executed" id="Add column default in alert_configuration" duration=4.243477ms
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.986543296Z level=info msg="Executing migration" id="alert alert_configuration alertmanager_configuration column from TEXT to MEDIUMTEXT if mysql"
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.986599247Z level=info msg="Migration successfully executed" id="alert alert_configuration alertmanager_configuration column from TEXT to MEDIUMTEXT if mysql" duration=55.931µs
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.988348151Z level=info msg="Executing migration" id="add column org_id in alert_configuration"
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.992976958Z level=info msg="Migration successfully executed" id="add column org_id in alert_configuration" duration=4.628747ms
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.994912077Z level=info msg="Executing migration" id="add index in alert_configuration table on org_id column"
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.995682016Z level=info msg="Migration successfully executed" id="add index in alert_configuration table on org_id column" duration=769.619µs
Mar  1 04:44:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:44:59.997240895Z level=info msg="Executing migration" id="add configuration_hash column to alert_configuration"
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.001696867Z level=info msg="Migration successfully executed" id="add configuration_hash column to alert_configuration" duration=4.455632ms
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.003170514Z level=info msg="Executing migration" id=create_ngalert_configuration_table
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.003729188Z level=info msg="Migration successfully executed" id=create_ngalert_configuration_table duration=558.384µs
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.005189685Z level=info msg="Executing migration" id="add index in ngalert_configuration on org_id column"
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.005849972Z level=info msg="Migration successfully executed" id="add index in ngalert_configuration on org_id column" duration=660.057µs
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.007250047Z level=info msg="Executing migration" id="add column send_alerts_to in ngalert_configuration"
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.011532415Z level=info msg="Migration successfully executed" id="add column send_alerts_to in ngalert_configuration" duration=4.281418ms
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.013145386Z level=info msg="Executing migration" id="create provenance_type table"
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.01370032Z level=info msg="Migration successfully executed" id="create provenance_type table" duration=555.413µs
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.016951951Z level=info msg="Executing migration" id="add index to uniquify (record_key, record_type, org_id) columns"
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.01767734Z level=info msg="Migration successfully executed" id="add index to uniquify (record_key, record_type, org_id) columns" duration=725.129µs
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.019102236Z level=info msg="Executing migration" id="create alert_image table"
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.01968119Z level=info msg="Migration successfully executed" id="create alert_image table" duration=578.325µs
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.021098506Z level=info msg="Executing migration" id="add unique index on token to alert_image table"
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.022115561Z level=info msg="Migration successfully executed" id="add unique index on token to alert_image table" duration=1.016805ms
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.023701191Z level=info msg="Executing migration" id="support longer URLs in alert_image table"
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.023751342Z level=info msg="Migration successfully executed" id="support longer URLs in alert_image table" duration=50.461µs
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.025516297Z level=info msg="Executing migration" id=create_alert_configuration_history_table
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.026177614Z level=info msg="Migration successfully executed" id=create_alert_configuration_history_table duration=661.047µs
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.02764301Z level=info msg="Executing migration" id="drop non-unique orgID index on alert_configuration"
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.028329818Z level=info msg="Migration successfully executed" id="drop non-unique orgID index on alert_configuration" duration=686.618µs
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.029869166Z level=info msg="Executing migration" id="drop unique orgID index on alert_configuration if exists"
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.030135913Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop unique orgID index on alert_configuration if exists"
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.031908078Z level=info msg="Executing migration" id="extract alertmanager configuration history to separate table"
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.032271127Z level=info msg="Migration successfully executed" id="extract alertmanager configuration history to separate table" duration=360.709µs
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.03397474Z level=info msg="Executing migration" id="add unique index on orgID to alert_configuration"
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.034884313Z level=info msg="Migration successfully executed" id="add unique index on orgID to alert_configuration" duration=870.072µs
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.036554425Z level=info msg="Executing migration" id="add last_applied column to alert_configuration_history"
Mar  1 04:45:00 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e54 do_prune osdmap full prune enabled
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.042233678Z level=info msg="Migration successfully executed" id="add last_applied column to alert_configuration_history" duration=5.677013ms
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.044028663Z level=info msg="Executing migration" id="create library_element table v1"
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.044826593Z level=info msg="Migration successfully executed" id="create library_element table v1" duration=798.38µs
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.046433233Z level=info msg="Executing migration" id="add index library_element org_id-folder_id-name-kind"
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.047214913Z level=info msg="Migration successfully executed" id="add index library_element org_id-folder_id-name-kind" duration=783.18µs
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.048778992Z level=info msg="Executing migration" id="create library_element_connection table v1"
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.049384818Z level=info msg="Migration successfully executed" id="create library_element_connection table v1" duration=609.006µs
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.051046649Z level=info msg="Executing migration" id="add index library_element_connection element_id-kind-connection_id"
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.051830339Z level=info msg="Migration successfully executed" id="add index library_element_connection element_id-kind-connection_id" duration=783.66µs
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.053412489Z level=info msg="Executing migration" id="add unique index library_element org_id_uid"
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.054145908Z level=info msg="Migration successfully executed" id="add unique index library_element org_id_uid" duration=733.399µs
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.055700477Z level=info msg="Executing migration" id="increase max description length to 2048"
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.055728277Z level=info msg="Migration successfully executed" id="increase max description length to 2048" duration=28.47µs
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.057112762Z level=info msg="Executing migration" id="alter library_element model to mediumtext"
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.057171764Z level=info msg="Migration successfully executed" id="alter library_element model to mediumtext" duration=59.742µs
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.058585719Z level=info msg="Executing migration" id="clone move dashboard alerts to unified alerting"
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.058834346Z level=info msg="Migration successfully executed" id="clone move dashboard alerts to unified alerting" duration=248.727µs
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.060700253Z level=info msg="Executing migration" id="create data_keys table"
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.061627546Z level=info msg="Migration successfully executed" id="create data_keys table" duration=926.994µs
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.063236716Z level=info msg="Executing migration" id="create secrets table"
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.063976135Z level=info msg="Migration successfully executed" id="create secrets table" duration=736.659µs
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.065886423Z level=info msg="Executing migration" id="rename data_keys name column to id"
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.093398176Z level=info msg="Migration successfully executed" id="rename data_keys name column to id" duration=27.506382ms
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.095276273Z level=info msg="Executing migration" id="add name column into data_keys"
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.100155916Z level=info msg="Migration successfully executed" id="add name column into data_keys" duration=4.878403ms
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.102267479Z level=info msg="Executing migration" id="copy data_keys id column values into name"
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.102391912Z level=info msg="Migration successfully executed" id="copy data_keys id column values into name" duration=128.373µs
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.10470265Z level=info msg="Executing migration" id="rename data_keys name column to label"
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.130639593Z level=info msg="Migration successfully executed" id="rename data_keys name column to label" duration=25.925143ms
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.133403032Z level=info msg="Executing migration" id="rename data_keys id column back to name"
Mar  1 04:45:00 np0005634532 ceph-mon[75825]: Deploying daemon haproxy.rgw.default.compute-0.hyuwxv on compute-0
Mar  1 04:45:00 np0005634532 ceph-mon[75825]: from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' cmd='[{"prefix": "osd pool set", "pool": ".nfs", "var": "pg_num_actual", "val": "32"}]': finished
Mar  1 04:45:00 np0005634532 ceph-mon[75825]: from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]': finished
Mar  1 04:45:00 np0005634532 podman[101072]: 2026-03-01 09:45:00.151811956 +0000 UTC m=+0.051051406 container create e65c62b66fc35c55106d577d435d67138ecac163d435cd587895a0b6a52cc955 (image=quay.io/ceph/haproxy:2.3, name=ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-haproxy-rgw-default-compute-0-hyuwxv)
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.162593867Z level=info msg="Migration successfully executed" id="rename data_keys id column back to name" duration=29.183905ms
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.165413528Z level=info msg="Executing migration" id="create kv_store table v1"
Mar  1 04:45:00 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e55 e55: 3 total, 3 up, 3 in
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.16626387Z level=info msg="Migration successfully executed" id="create kv_store table v1" duration=848.312µs
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.169747577Z level=info msg="Executing migration" id="add index kv_store.org_id-namespace-key"
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.170796514Z level=info msg="Migration successfully executed" id="add index kv_store.org_id-namespace-key" duration=1.048727ms
Mar  1 04:45:00 np0005634532 ceph-mon[75825]: log_channel(cluster) log [DBG] : osdmap e55: 3 total, 3 up, 3 in
Mar  1 04:45:00 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 55 pg[11.17( empty local-lis/les=39/40 n=0 ec=54/39 lis/c=39/39 les/c/f=40/40/0 sis=54) [0] r=0 lpr=54 pi=[39,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Mar  1 04:45:00 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 55 pg[11.16( empty local-lis/les=39/40 n=0 ec=54/39 lis/c=39/39 les/c/f=40/40/0 sis=54) [0] r=0 lpr=54 pi=[39,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Mar  1 04:45:00 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 55 pg[11.15( empty local-lis/les=39/40 n=0 ec=54/39 lis/c=39/39 les/c/f=40/40/0 sis=54) [0] r=0 lpr=54 pi=[39,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Mar  1 04:45:00 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 55 pg[11.14( empty local-lis/les=39/40 n=0 ec=54/39 lis/c=39/39 les/c/f=40/40/0 sis=54) [0] r=0 lpr=54 pi=[39,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Mar  1 04:45:00 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 55 pg[11.13( empty local-lis/les=39/40 n=0 ec=54/39 lis/c=39/39 les/c/f=40/40/0 sis=54) [0] r=0 lpr=54 pi=[39,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Mar  1 04:45:00 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e55 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Mar  1 04:45:00 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 55 pg[11.12( empty local-lis/les=39/40 n=0 ec=54/39 lis/c=39/39 les/c/f=40/40/0 sis=54) [0] r=0 lpr=54 pi=[39,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Mar  1 04:45:00 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 55 pg[11.1( empty local-lis/les=39/40 n=0 ec=54/39 lis/c=39/39 les/c/f=40/40/0 sis=54) [0] r=0 lpr=54 pi=[39,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Mar  1 04:45:00 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 55 pg[11.b( empty local-lis/les=39/40 n=0 ec=54/39 lis/c=39/39 les/c/f=40/40/0 sis=54) [0] r=0 lpr=54 pi=[39,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Mar  1 04:45:00 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 55 pg[11.a( empty local-lis/les=39/40 n=0 ec=54/39 lis/c=39/39 les/c/f=40/40/0 sis=54) [0] r=0 lpr=54 pi=[39,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Mar  1 04:45:00 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 55 pg[11.c( empty local-lis/les=39/40 n=0 ec=54/39 lis/c=39/39 les/c/f=40/40/0 sis=54) [0] r=0 lpr=54 pi=[39,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Mar  1 04:45:00 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 55 pg[11.9( empty local-lis/les=39/40 n=0 ec=54/39 lis/c=39/39 les/c/f=40/40/0 sis=54) [0] r=0 lpr=54 pi=[39,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Mar  1 04:45:00 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 55 pg[11.d( empty local-lis/les=39/40 n=0 ec=54/39 lis/c=39/39 les/c/f=40/40/0 sis=54) [0] r=0 lpr=54 pi=[39,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Mar  1 04:45:00 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 55 pg[11.f( empty local-lis/les=39/40 n=0 ec=54/39 lis/c=39/39 les/c/f=40/40/0 sis=54) [0] r=0 lpr=54 pi=[39,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Mar  1 04:45:00 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 55 pg[11.e( empty local-lis/les=39/40 n=0 ec=54/39 lis/c=39/39 les/c/f=40/40/0 sis=54) [0] r=0 lpr=54 pi=[39,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Mar  1 04:45:00 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 55 pg[11.8( empty local-lis/les=39/40 n=0 ec=54/39 lis/c=39/39 les/c/f=40/40/0 sis=54) [0] r=0 lpr=54 pi=[39,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Mar  1 04:45:00 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 55 pg[11.2( empty local-lis/les=39/40 n=0 ec=54/39 lis/c=39/39 les/c/f=40/40/0 sis=54) [0] r=0 lpr=54 pi=[39,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.174473446Z level=info msg="Executing migration" id="update dashboard_uid and panel_id from existing annotations"
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.174881826Z level=info msg="Migration successfully executed" id="update dashboard_uid and panel_id from existing annotations" duration=410.4µs
Mar  1 04:45:00 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 55 pg[11.3( empty local-lis/les=39/40 n=0 ec=54/39 lis/c=39/39 les/c/f=40/40/0 sis=54) [0] r=0 lpr=54 pi=[39,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Mar  1 04:45:00 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 55 pg[11.4( empty local-lis/les=39/40 n=0 ec=54/39 lis/c=39/39 les/c/f=40/40/0 sis=54) [0] r=0 lpr=54 pi=[39,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Mar  1 04:45:00 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 55 pg[11.5( empty local-lis/les=39/40 n=0 ec=54/39 lis/c=39/39 les/c/f=40/40/0 sis=54) [0] r=0 lpr=54 pi=[39,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Mar  1 04:45:00 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 55 pg[11.6( empty local-lis/les=39/40 n=0 ec=54/39 lis/c=39/39 les/c/f=40/40/0 sis=54) [0] r=0 lpr=54 pi=[39,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Mar  1 04:45:00 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 55 pg[11.7( empty local-lis/les=39/40 n=0 ec=54/39 lis/c=39/39 les/c/f=40/40/0 sis=54) [0] r=0 lpr=54 pi=[39,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Mar  1 04:45:00 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 55 pg[11.18( empty local-lis/les=39/40 n=0 ec=54/39 lis/c=39/39 les/c/f=40/40/0 sis=54) [0] r=0 lpr=54 pi=[39,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Mar  1 04:45:00 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 55 pg[11.19( empty local-lis/les=39/40 n=0 ec=54/39 lis/c=39/39 les/c/f=40/40/0 sis=54) [0] r=0 lpr=54 pi=[39,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Mar  1 04:45:00 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 55 pg[11.1a( empty local-lis/les=39/40 n=0 ec=54/39 lis/c=39/39 les/c/f=40/40/0 sis=54) [0] r=0 lpr=54 pi=[39,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Mar  1 04:45:00 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 55 pg[11.1b( empty local-lis/les=39/40 n=0 ec=54/39 lis/c=39/39 les/c/f=40/40/0 sis=54) [0] r=0 lpr=54 pi=[39,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Mar  1 04:45:00 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 55 pg[11.1c( empty local-lis/les=39/40 n=0 ec=54/39 lis/c=39/39 les/c/f=40/40/0 sis=54) [0] r=0 lpr=54 pi=[39,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Mar  1 04:45:00 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 55 pg[11.1e( empty local-lis/les=39/40 n=0 ec=54/39 lis/c=39/39 les/c/f=40/40/0 sis=54) [0] r=0 lpr=54 pi=[39,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Mar  1 04:45:00 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 55 pg[11.1f( empty local-lis/les=39/40 n=0 ec=54/39 lis/c=39/39 les/c/f=40/40/0 sis=54) [0] r=0 lpr=54 pi=[39,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Mar  1 04:45:00 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 55 pg[11.1d( empty local-lis/les=39/40 n=0 ec=54/39 lis/c=39/39 les/c/f=40/40/0 sis=54) [0] r=0 lpr=54 pi=[39,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Mar  1 04:45:00 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 55 pg[11.11( empty local-lis/les=39/40 n=0 ec=54/39 lis/c=39/39 les/c/f=40/40/0 sis=54) [0] r=0 lpr=54 pi=[39,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Mar  1 04:45:00 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 55 pg[11.10( empty local-lis/les=39/40 n=0 ec=54/39 lis/c=39/39 les/c/f=40/40/0 sis=54) [0] r=0 lpr=54 pi=[39,54)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.176924468Z level=info msg="Executing migration" id="create permission table"
Mar  1 04:45:00 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 55 pg[11.17( empty local-lis/les=54/55 n=0 ec=54/39 lis/c=39/39 les/c/f=40/40/0 sis=54) [0] r=0 lpr=54 pi=[39,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Mar  1 04:45:00 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 55 pg[11.16( empty local-lis/les=54/55 n=0 ec=54/39 lis/c=39/39 les/c/f=40/40/0 sis=54) [0] r=0 lpr=54 pi=[39,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Mar  1 04:45:00 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 55 pg[11.14( empty local-lis/les=54/55 n=0 ec=54/39 lis/c=39/39 les/c/f=40/40/0 sis=54) [0] r=0 lpr=54 pi=[39,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Mar  1 04:45:00 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 55 pg[11.15( empty local-lis/les=54/55 n=0 ec=54/39 lis/c=39/39 les/c/f=40/40/0 sis=54) [0] r=0 lpr=54 pi=[39,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Mar  1 04:45:00 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 55 pg[11.13( empty local-lis/les=54/55 n=0 ec=54/39 lis/c=39/39 les/c/f=40/40/0 sis=54) [0] r=0 lpr=54 pi=[39,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.177855001Z level=info msg="Migration successfully executed" id="create permission table" duration=926.923µs
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[99981]: ts=2026-03-01T09:45:00.178Z caller=cluster.go:698 level=info component=cluster msg="gossip settled; proceeding" elapsed=10.003190451s
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.179471682Z level=info msg="Executing migration" id="add unique index permission.role_id"
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.180400065Z level=info msg="Migration successfully executed" id="add unique index permission.role_id" duration=928.133µs
Mar  1 04:45:00 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 55 pg[11.1( empty local-lis/les=54/55 n=0 ec=54/39 lis/c=39/39 les/c/f=40/40/0 sis=54) [0] r=0 lpr=54 pi=[39,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Mar  1 04:45:00 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 55 pg[11.a( empty local-lis/les=54/55 n=0 ec=54/39 lis/c=39/39 les/c/f=40/40/0 sis=54) [0] r=0 lpr=54 pi=[39,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Mar  1 04:45:00 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 55 pg[11.0( empty local-lis/les=54/55 n=0 ec=39/39 lis/c=39/39 les/c/f=40/40/0 sis=54) [0] r=0 lpr=54 pi=[39,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Mar  1 04:45:00 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 55 pg[11.c( empty local-lis/les=54/55 n=0 ec=54/39 lis/c=39/39 les/c/f=40/40/0 sis=54) [0] r=0 lpr=54 pi=[39,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Mar  1 04:45:00 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 55 pg[11.12( empty local-lis/les=54/55 n=0 ec=54/39 lis/c=39/39 les/c/f=40/40/0 sis=54) [0] r=0 lpr=54 pi=[39,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Mar  1 04:45:00 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 55 pg[11.d( empty local-lis/les=54/55 n=0 ec=54/39 lis/c=39/39 les/c/f=40/40/0 sis=54) [0] r=0 lpr=54 pi=[39,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Mar  1 04:45:00 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 55 pg[11.9( empty local-lis/les=54/55 n=0 ec=54/39 lis/c=39/39 les/c/f=40/40/0 sis=54) [0] r=0 lpr=54 pi=[39,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Mar  1 04:45:00 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 55 pg[11.b( empty local-lis/les=54/55 n=0 ec=54/39 lis/c=39/39 les/c/f=40/40/0 sis=54) [0] r=0 lpr=54 pi=[39,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Mar  1 04:45:00 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 55 pg[11.e( empty local-lis/les=54/55 n=0 ec=54/39 lis/c=39/39 les/c/f=40/40/0 sis=54) [0] r=0 lpr=54 pi=[39,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Mar  1 04:45:00 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 55 pg[11.f( empty local-lis/les=54/55 n=0 ec=54/39 lis/c=39/39 les/c/f=40/40/0 sis=54) [0] r=0 lpr=54 pi=[39,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Mar  1 04:45:00 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 55 pg[11.2( empty local-lis/les=54/55 n=0 ec=54/39 lis/c=39/39 les/c/f=40/40/0 sis=54) [0] r=0 lpr=54 pi=[39,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Mar  1 04:45:00 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 55 pg[11.3( empty local-lis/les=54/55 n=0 ec=54/39 lis/c=39/39 les/c/f=40/40/0 sis=54) [0] r=0 lpr=54 pi=[39,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Mar  1 04:45:00 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 55 pg[11.5( empty local-lis/les=54/55 n=0 ec=54/39 lis/c=39/39 les/c/f=40/40/0 sis=54) [0] r=0 lpr=54 pi=[39,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Mar  1 04:45:00 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 55 pg[11.4( empty local-lis/les=54/55 n=0 ec=54/39 lis/c=39/39 les/c/f=40/40/0 sis=54) [0] r=0 lpr=54 pi=[39,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Mar  1 04:45:00 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 55 pg[11.7( empty local-lis/les=54/55 n=0 ec=54/39 lis/c=39/39 les/c/f=40/40/0 sis=54) [0] r=0 lpr=54 pi=[39,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Mar  1 04:45:00 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 55 pg[11.6( empty local-lis/les=54/55 n=0 ec=54/39 lis/c=39/39 les/c/f=40/40/0 sis=54) [0] r=0 lpr=54 pi=[39,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Mar  1 04:45:00 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 55 pg[11.19( empty local-lis/les=54/55 n=0 ec=54/39 lis/c=39/39 les/c/f=40/40/0 sis=54) [0] r=0 lpr=54 pi=[39,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Mar  1 04:45:00 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 55 pg[11.18( empty local-lis/les=54/55 n=0 ec=54/39 lis/c=39/39 les/c/f=40/40/0 sis=54) [0] r=0 lpr=54 pi=[39,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Mar  1 04:45:00 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 55 pg[11.1b( empty local-lis/les=54/55 n=0 ec=54/39 lis/c=39/39 les/c/f=40/40/0 sis=54) [0] r=0 lpr=54 pi=[39,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Mar  1 04:45:00 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 55 pg[11.8( empty local-lis/les=54/55 n=0 ec=54/39 lis/c=39/39 les/c/f=40/40/0 sis=54) [0] r=0 lpr=54 pi=[39,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Mar  1 04:45:00 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 55 pg[11.1a( empty local-lis/les=54/55 n=0 ec=54/39 lis/c=39/39 les/c/f=40/40/0 sis=54) [0] r=0 lpr=54 pi=[39,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Mar  1 04:45:00 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 55 pg[11.1e( empty local-lis/les=54/55 n=0 ec=54/39 lis/c=39/39 les/c/f=40/40/0 sis=54) [0] r=0 lpr=54 pi=[39,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Mar  1 04:45:00 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 55 pg[11.1c( empty local-lis/les=54/55 n=0 ec=54/39 lis/c=39/39 les/c/f=40/40/0 sis=54) [0] r=0 lpr=54 pi=[39,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Mar  1 04:45:00 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 55 pg[11.11( empty local-lis/les=54/55 n=0 ec=54/39 lis/c=39/39 les/c/f=40/40/0 sis=54) [0] r=0 lpr=54 pi=[39,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Mar  1 04:45:00 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 55 pg[11.1d( empty local-lis/les=54/55 n=0 ec=54/39 lis/c=39/39 les/c/f=40/40/0 sis=54) [0] r=0 lpr=54 pi=[39,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Mar  1 04:45:00 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 55 pg[11.10( empty local-lis/les=54/55 n=0 ec=54/39 lis/c=39/39 les/c/f=40/40/0 sis=54) [0] r=0 lpr=54 pi=[39,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Mar  1 04:45:00 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 55 pg[11.1f( empty local-lis/les=54/55 n=0 ec=54/39 lis/c=39/39 les/c/f=40/40/0 sis=54) [0] r=0 lpr=54 pi=[39,54)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.181898403Z level=info msg="Executing migration" id="add unique index role_id_action_scope"
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.182847357Z level=info msg="Migration successfully executed" id="add unique index role_id_action_scope" duration=948.754µs
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.186962201Z level=info msg="Executing migration" id="create role table"
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.187642008Z level=info msg="Migration successfully executed" id="create role table" duration=679.617µs
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.193619578Z level=info msg="Executing migration" id="add column display_name"
Mar  1 04:45:00 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f471ec19139ba87ab5685a93442f28af9c190825b85d04a5ac61bcb444c07e8/merged/var/lib/haproxy supports timestamps until 2038 (0x7fffffff)
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.199274171Z level=info msg="Migration successfully executed" id="add column display_name" duration=5.653252ms
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.201293381Z level=info msg="Executing migration" id="add column group_name"
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.206211085Z level=info msg="Migration successfully executed" id="add column group_name" duration=4.916974ms
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.208203965Z level=info msg="Executing migration" id="add index role.org_id"
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.208933694Z level=info msg="Migration successfully executed" id="add index role.org_id" duration=730.299µs
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.210716208Z level=info msg="Executing migration" id="add unique index role_org_id_name"
Mar  1 04:45:00 np0005634532 podman[101072]: 2026-03-01 09:45:00.211142209 +0000 UTC m=+0.110381639 container init e65c62b66fc35c55106d577d435d67138ecac163d435cd587895a0b6a52cc955 (image=quay.io/ceph/haproxy:2.3, name=ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-haproxy-rgw-default-compute-0-hyuwxv)
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.211688603Z level=info msg="Migration successfully executed" id="add unique index role_org_id_name" duration=971.995µs
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.213270903Z level=info msg="Executing migration" id="add index role_org_id_uid"
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.214254348Z level=info msg="Migration successfully executed" id="add index role_org_id_uid" duration=983.454µs
Mar  1 04:45:00 np0005634532 podman[101072]: 2026-03-01 09:45:00.214978926 +0000 UTC m=+0.114218346 container start e65c62b66fc35c55106d577d435d67138ecac163d435cd587895a0b6a52cc955 (image=quay.io/ceph/haproxy:2.3, name=ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-haproxy-rgw-default-compute-0-hyuwxv)
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.216011622Z level=info msg="Executing migration" id="create team role table"
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.21672185Z level=info msg="Migration successfully executed" id="create team role table" duration=710.408µs
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.221151691Z level=info msg="Executing migration" id="add index team_role.org_id"
Mar  1 04:45:00 np0005634532 bash[101072]: e65c62b66fc35c55106d577d435d67138ecac163d435cd587895a0b6a52cc955
Mar  1 04:45:00 np0005634532 podman[101072]: 2026-03-01 09:45:00.130643113 +0000 UTC m=+0.029882563 image pull e85424b0d443f37ddd2dd8a3bb2ef6f18dd352b987723a921b64289023af2914 quay.io/ceph/haproxy:2.3
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.222322211Z level=info msg="Migration successfully executed" id="add index team_role.org_id" duration=1.17176ms
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.224012133Z level=info msg="Executing migration" id="add unique index team_role_org_id_team_id_role_id"
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-haproxy-rgw-default-compute-0-hyuwxv[101086]: [NOTICE] 059/094500 (2) : New worker #1 (4) forked
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.224973027Z level=info msg="Migration successfully executed" id="add unique index team_role_org_id_team_id_role_id" duration=960.714µs
Mar  1 04:45:00 np0005634532 systemd[1]: Started Ceph haproxy.rgw.default.compute-0.hyuwxv for 437b1e74-f995-5d64-af1d-257ce01d77ab.
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.226941887Z level=info msg="Executing migration" id="add index team_role.team_id"
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.227951852Z level=info msg="Migration successfully executed" id="add index team_role.team_id" duration=1.008235ms
Mar  1 04:45:00 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.230074586Z level=info msg="Executing migration" id="create user role table"
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.230809424Z level=info msg="Migration successfully executed" id="create user role table" duration=734.608µs
Mar  1 04:45:00 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 04:45:00 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:45:00.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.233242285Z level=info msg="Executing migration" id="add index user_role.org_id"
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.234940158Z level=info msg="Migration successfully executed" id="add index user_role.org_id" duration=1.697933ms
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.240710643Z level=info msg="Executing migration" id="add unique index user_role_org_id_user_id_role_id"
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.241884403Z level=info msg="Migration successfully executed" id="add unique index user_role_org_id_user_id_role_id" duration=1.17227ms
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.243689808Z level=info msg="Executing migration" id="add index user_role.user_id"
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.244654273Z level=info msg="Migration successfully executed" id="add index user_role.user_id" duration=964.595µs
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.246243643Z level=info msg="Executing migration" id="create builtin role table"
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.247253458Z level=info msg="Migration successfully executed" id="create builtin role table" duration=1.009225ms
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.249096194Z level=info msg="Executing migration" id="add index builtin_role.role_id"
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.249934376Z level=info msg="Migration successfully executed" id="add index builtin_role.role_id" duration=839.172µs
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.251445994Z level=info msg="Executing migration" id="add index builtin_role.name"
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.252291275Z level=info msg="Migration successfully executed" id="add index builtin_role.name" duration=846.421µs
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.253653969Z level=info msg="Executing migration" id="Add column org_id to builtin_role table"
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.259427235Z level=info msg="Migration successfully executed" id="Add column org_id to builtin_role table" duration=5.767855ms
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.261477106Z level=info msg="Executing migration" id="add index builtin_role.org_id"
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.262485021Z level=info msg="Migration successfully executed" id="add index builtin_role.org_id" duration=1.008875ms
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.264299177Z level=info msg="Executing migration" id="add unique index builtin_role_org_id_role_id_role"
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.265172709Z level=info msg="Migration successfully executed" id="add unique index builtin_role_org_id_role_id_role" duration=873.052µs
Mar  1 04:45:00 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.266735279Z level=info msg="Executing migration" id="Remove unique index role_org_id_uid"
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.267626181Z level=info msg="Migration successfully executed" id="Remove unique index role_org_id_uid" duration=888.453µs
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.269250932Z level=info msg="Executing migration" id="add unique index role.uid"
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.26998201Z level=info msg="Migration successfully executed" id="add unique index role.uid" duration=730.768µs
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.272964185Z level=info msg="Executing migration" id="create seed assignment table"
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.273615192Z level=info msg="Migration successfully executed" id="create seed assignment table" duration=650.617µs
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.2767321Z level=info msg="Executing migration" id="add unique index builtin_role_role_name"
Mar  1 04:45:00 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' 
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.277740465Z level=info msg="Migration successfully executed" id="add unique index builtin_role_role_name" duration=1.008015ms
Mar  1 04:45:00 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.279761566Z level=info msg="Executing migration" id="add column hidden to role table"
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.285344307Z level=info msg="Migration successfully executed" id="add column hidden to role table" duration=5.581751ms
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.286765533Z level=info msg="Executing migration" id="permission kind migration"
Mar  1 04:45:00 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' 
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.294742273Z level=info msg="Migration successfully executed" id="permission kind migration" duration=7.97704ms
Mar  1 04:45:00 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.rgw.default}] v 0)
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.296674192Z level=info msg="Executing migration" id="permission attribute migration"
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.304585871Z level=info msg="Migration successfully executed" id="permission attribute migration" duration=7.916169ms
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.306226123Z level=info msg="Executing migration" id="permission identifier migration"
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.314151782Z level=info msg="Migration successfully executed" id="permission identifier migration" duration=7.92233ms
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.315607779Z level=info msg="Executing migration" id="add permission identifier index"
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.316736037Z level=info msg="Migration successfully executed" id="add permission identifier index" duration=1.128258ms
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.318694706Z level=info msg="Executing migration" id="add permission action scope role_id index"
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.319904397Z level=info msg="Migration successfully executed" id="add permission action scope role_id index" duration=1.208551ms
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.321577629Z level=info msg="Executing migration" id="remove permission role_id action scope index"
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.322646366Z level=info msg="Migration successfully executed" id="remove permission role_id action scope index" duration=1.068657ms
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.324147964Z level=info msg="Executing migration" id="create query_history table v1"
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.325051456Z level=info msg="Migration successfully executed" id="create query_history table v1" duration=903.113µs
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.326559864Z level=info msg="Executing migration" id="add index query_history.org_id-created_by-datasource_uid"
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.327658582Z level=info msg="Migration successfully executed" id="add index query_history.org_id-created_by-datasource_uid" duration=1.098098ms
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.329301253Z level=info msg="Executing migration" id="alter table query_history alter column created_by type to bigint"
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.329364405Z level=info msg="Migration successfully executed" id="alter table query_history alter column created_by type to bigint" duration=64.012µs
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.330883423Z level=info msg="Executing migration" id="rbac disabled migrator"
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.330913424Z level=info msg="Migration successfully executed" id="rbac disabled migrator" duration=30.681µs
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.332563945Z level=info msg="Executing migration" id="teams permissions migration"
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.332976556Z level=info msg="Migration successfully executed" id="teams permissions migration" duration=412.921µs
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.335364486Z level=info msg="Executing migration" id="dashboard permissions"
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.33593606Z level=info msg="Migration successfully executed" id="dashboard permissions" duration=572.514µs
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.337696695Z level=info msg="Executing migration" id="dashboard permissions uid scopes"
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.338149076Z level=info msg="Migration successfully executed" id="dashboard permissions uid scopes" duration=450.511µs
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.344400363Z level=info msg="Executing migration" id="drop managed folder create actions"
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.344545397Z level=info msg="Migration successfully executed" id="drop managed folder create actions" duration=145.444µs
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.346758123Z level=info msg="Executing migration" id="alerting notification permissions"
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.347241925Z level=info msg="Migration successfully executed" id="alerting notification permissions" duration=484.132µs
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.350320422Z level=info msg="Executing migration" id="create query_history_star table v1"
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.351847281Z level=info msg="Migration successfully executed" id="create query_history_star table v1" duration=1.531459ms
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.353891342Z level=info msg="Executing migration" id="add index query_history.user_id-query_uid"
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.355705528Z level=info msg="Migration successfully executed" id="add index query_history.user_id-query_uid" duration=1.814476ms
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.357867652Z level=info msg="Executing migration" id="add column org_id in query_history_star"
Mar  1 04:45:00 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' 
Mar  1 04:45:00 np0005634532 ceph-mgr[76134]: [cephadm INFO cephadm.serve] Deploying daemon haproxy.rgw.default.compute-2.frcifw on compute-2
Mar  1 04:45:00 np0005634532 ceph-mgr[76134]: log_channel(cephadm) log [INF] : Deploying daemon haproxy.rgw.default.compute-2.frcifw on compute-2
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.371511256Z level=info msg="Migration successfully executed" id="add column org_id in query_history_star" duration=13.642954ms
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.373789433Z level=info msg="Executing migration" id="alter table query_history_star_mig column user_id type to bigint"
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.373893406Z level=info msg="Migration successfully executed" id="alter table query_history_star_mig column user_id type to bigint" duration=105.613µs
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.375962008Z level=info msg="Executing migration" id="create correlation table v1"
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.377775063Z level=info msg="Migration successfully executed" id="create correlation table v1" duration=1.812085ms
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.383329693Z level=info msg="Executing migration" id="add index correlations.uid"
Mar  1 04:45:00 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v48: 353 pgs: 62 unknown, 291 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 3.7 KiB/s rd, 3 op/s
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.385271112Z level=info msg="Migration successfully executed" id="add index correlations.uid" duration=1.940679ms
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.387467507Z level=info msg="Executing migration" id="add index correlations.source_uid"
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.389349985Z level=info msg="Migration successfully executed" id="add index correlations.source_uid" duration=1.885378ms
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.391224902Z level=info msg="Executing migration" id="add correlation config column"
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.404891996Z level=info msg="Migration successfully executed" id="add correlation config column" duration=13.666054ms
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.406874876Z level=info msg="Executing migration" id="drop index IDX_correlation_uid - v1"
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.40861956Z level=info msg="Migration successfully executed" id="drop index IDX_correlation_uid - v1" duration=1.746734ms
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.410502727Z level=info msg="Executing migration" id="drop index IDX_correlation_source_uid - v1"
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.412149279Z level=info msg="Migration successfully executed" id="drop index IDX_correlation_source_uid - v1" duration=1.646232ms
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.413980975Z level=info msg="Executing migration" id="Rename table correlation to correlation_tmp_qwerty - v1"
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.443965919Z level=info msg="Migration successfully executed" id="Rename table correlation to correlation_tmp_qwerty - v1" duration=29.985364ms
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.446068872Z level=info msg="Executing migration" id="create correlation v2"
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.44716862Z level=info msg="Migration successfully executed" id="create correlation v2" duration=1.101918ms
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.449019037Z level=info msg="Executing migration" id="create index IDX_correlation_uid - v2"
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.450053813Z level=info msg="Migration successfully executed" id="create index IDX_correlation_uid - v2" duration=1.034196ms
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.451666203Z level=info msg="Executing migration" id="create index IDX_correlation_source_uid - v2"
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.452763651Z level=info msg="Migration successfully executed" id="create index IDX_correlation_source_uid - v2" duration=1.097658ms
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.454604687Z level=info msg="Executing migration" id="create index IDX_correlation_org_id - v2"
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.455658704Z level=info msg="Migration successfully executed" id="create index IDX_correlation_org_id - v2" duration=1.053777ms
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.457633913Z level=info msg="Executing migration" id="copy correlation v1 to v2"
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.458036024Z level=info msg="Migration successfully executed" id="copy correlation v1 to v2" duration=401.901µs
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.459898921Z level=info msg="Executing migration" id="drop correlation_tmp_qwerty"
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.460692481Z level=info msg="Migration successfully executed" id="drop correlation_tmp_qwerty" duration=793.229µs
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.462512086Z level=info msg="Executing migration" id="add provisioning column"
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.470123348Z level=info msg="Migration successfully executed" id="add provisioning column" duration=7.610652ms
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.47180882Z level=info msg="Executing migration" id="create entity_events table"
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.472632791Z level=info msg="Migration successfully executed" id="create entity_events table" duration=825.211µs
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[98708]: 01/03/2026 09:45:00 : epoch 69a40a75 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f45040016a0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.474621341Z level=info msg="Executing migration" id="create dashboard public config v1"
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.475639447Z level=info msg="Migration successfully executed" id="create dashboard public config v1" duration=1.018116ms
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.477664318Z level=info msg="Executing migration" id="drop index UQE_dashboard_public_config_uid - v1"
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.478100199Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop index UQE_dashboard_public_config_uid - v1"
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.480111569Z level=info msg="Executing migration" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v1"
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.480513809Z level=warn msg="Skipping migration: Already executed, but not recorded in migration log" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v1"
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.482342955Z level=info msg="Executing migration" id="Drop old dashboard public config table"
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.483096954Z level=info msg="Migration successfully executed" id="Drop old dashboard public config table" duration=753.309µs
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.486351006Z level=info msg="Executing migration" id="recreate dashboard public config v1"
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.48731001Z level=info msg="Migration successfully executed" id="recreate dashboard public config v1" duration=958.704µs
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.489490905Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_uid - v1"
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.490590003Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_uid - v1" duration=1.098128ms
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.492502701Z level=info msg="Executing migration" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v1"
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.493563128Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v1" duration=1.060077ms
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.49525511Z level=info msg="Executing migration" id="drop index UQE_dashboard_public_config_uid - v2"
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.496268016Z level=info msg="Migration successfully executed" id="drop index UQE_dashboard_public_config_uid - v2" duration=1.012616ms
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.498196464Z level=info msg="Executing migration" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v2"
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.49919354Z level=info msg="Migration successfully executed" id="drop index IDX_dashboard_public_config_org_id_dashboard_uid - v2" duration=997.006µs
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.500979745Z level=info msg="Executing migration" id="Drop public config table"
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.501732133Z level=info msg="Migration successfully executed" id="Drop public config table" duration=752.178µs
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.503482227Z level=info msg="Executing migration" id="Recreate dashboard public config v2"
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.504501953Z level=info msg="Migration successfully executed" id="Recreate dashboard public config v2" duration=1.019246ms
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.50637277Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_uid - v2"
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.507442357Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_uid - v2" duration=1.069267ms
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.509292584Z level=info msg="Executing migration" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v2"
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.510383031Z level=info msg="Migration successfully executed" id="create index IDX_dashboard_public_config_org_id_dashboard_uid - v2" duration=1.090307ms
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.512128315Z level=info msg="Executing migration" id="create index UQE_dashboard_public_config_access_token - v2"
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.513175582Z level=info msg="Migration successfully executed" id="create index UQE_dashboard_public_config_access_token - v2" duration=1.047106ms
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.514870214Z level=info msg="Executing migration" id="Rename table dashboard_public_config to dashboard_public - v2"
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.540085799Z level=info msg="Migration successfully executed" id="Rename table dashboard_public_config to dashboard_public - v2" duration=25.215025ms
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.542232303Z level=info msg="Executing migration" id="add annotations_enabled column"
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.550276835Z level=info msg="Migration successfully executed" id="add annotations_enabled column" duration=8.044312ms
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.551954888Z level=info msg="Executing migration" id="add time_selection_enabled column"
Mar  1 04:45:00 np0005634532 ceph-osd[84309]: log_channel(cluster) log [DBG] : 8.16 scrub starts
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.559759634Z level=info msg="Migration successfully executed" id="add time_selection_enabled column" duration=7.804047ms
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.56158524Z level=info msg="Executing migration" id="delete orphaned public dashboards"
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.561796375Z level=info msg="Migration successfully executed" id="delete orphaned public dashboards" duration=211.045µs
Mar  1 04:45:00 np0005634532 ceph-osd[84309]: log_channel(cluster) log [DBG] : 8.16 scrub ok
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.563650002Z level=info msg="Executing migration" id="add share column"
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.57150565Z level=info msg="Migration successfully executed" id="add share column" duration=7.853998ms
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.573169472Z level=info msg="Executing migration" id="backfill empty share column fields with default of public"
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.573317595Z level=info msg="Migration successfully executed" id="backfill empty share column fields with default of public" duration=148.733µs
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.574895445Z level=info msg="Executing migration" id="create file table"
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.575599833Z level=info msg="Migration successfully executed" id="create file table" duration=704.258µs
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.577182863Z level=info msg="Executing migration" id="file table idx: path natural pk"
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.577926761Z level=info msg="Migration successfully executed" id="file table idx: path natural pk" duration=743.578µs
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.579285226Z level=info msg="Executing migration" id="file table idx: parent_folder_path_hash fast folder retrieval"
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.580047395Z level=info msg="Migration successfully executed" id="file table idx: parent_folder_path_hash fast folder retrieval" duration=761.989µs
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.58144661Z level=info msg="Executing migration" id="create file_meta table"
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.582034755Z level=info msg="Migration successfully executed" id="create file_meta table" duration=587.745µs
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.585133203Z level=info msg="Executing migration" id="file table idx: path key"
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.585886702Z level=info msg="Migration successfully executed" id="file table idx: path key" duration=753.149µs
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.587479802Z level=info msg="Executing migration" id="set path collation in file table"
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.587530043Z level=info msg="Migration successfully executed" id="set path collation in file table" duration=50.671µs
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.589180675Z level=info msg="Executing migration" id="migrate contents column to mediumblob for MySQL"
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.589222966Z level=info msg="Migration successfully executed" id="migrate contents column to mediumblob for MySQL" duration=42.481µs
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.590700273Z level=info msg="Executing migration" id="managed permissions migration"
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.591083162Z level=info msg="Migration successfully executed" id="managed permissions migration" duration=383.159µs
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.592608611Z level=info msg="Executing migration" id="managed folder permissions alert actions migration"
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.592753915Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions migration" duration=145.583µs
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.59455304Z level=info msg="Executing migration" id="RBAC action name migrator"
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.595476893Z level=info msg="Migration successfully executed" id="RBAC action name migrator" duration=923.873µs
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.597216777Z level=info msg="Executing migration" id="Add UID column to playlist"
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.60292321Z level=info msg="Migration successfully executed" id="Add UID column to playlist" duration=5.706183ms
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.604872549Z level=info msg="Executing migration" id="Update uid column values in playlist"
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.604988132Z level=info msg="Migration successfully executed" id="Update uid column values in playlist" duration=115.603µs
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.607913046Z level=info msg="Executing migration" id="Add index for uid in playlist"
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.608745927Z level=info msg="Migration successfully executed" id="Add index for uid in playlist" duration=832.651µs
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.610212774Z level=info msg="Executing migration" id="update group index for alert rules"
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.610483841Z level=info msg="Migration successfully executed" id="update group index for alert rules" duration=271.367µs
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.6128431Z level=info msg="Executing migration" id="managed folder permissions alert actions repeated migration"
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.612985564Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions repeated migration" duration=140.404µs
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.614469061Z level=info msg="Executing migration" id="admin only folder/dashboard permission"
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.614784489Z level=info msg="Migration successfully executed" id="admin only folder/dashboard permission" duration=315.308µs
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.61640105Z level=info msg="Executing migration" id="add action column to seed_assignment"
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.622101093Z level=info msg="Migration successfully executed" id="add action column to seed_assignment" duration=5.699683ms
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.623718064Z level=info msg="Executing migration" id="add scope column to seed_assignment"
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.629374186Z level=info msg="Migration successfully executed" id="add scope column to seed_assignment" duration=5.655892ms
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.630895065Z level=info msg="Executing migration" id="remove unique index builtin_role_role_name before nullable update"
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.631734996Z level=info msg="Migration successfully executed" id="remove unique index builtin_role_role_name before nullable update" duration=839.452µs
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.63347446Z level=info msg="Executing migration" id="update seed_assignment role_name column to nullable"
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.69784875Z level=info msg="Migration successfully executed" id="update seed_assignment role_name column to nullable" duration=64.368521ms
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.699904452Z level=info msg="Executing migration" id="add unique index builtin_role_name back"
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.700775103Z level=info msg="Migration successfully executed" id="add unique index builtin_role_name back" duration=870.631µs
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.702307792Z level=info msg="Executing migration" id="add unique index builtin_role_action_scope"
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.703093802Z level=info msg="Migration successfully executed" id="add unique index builtin_role_action_scope" duration=785.5µs
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.704748833Z level=info msg="Executing migration" id="add primary key to seed_assigment"
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.723979727Z level=info msg="Migration successfully executed" id="add primary key to seed_assigment" duration=19.226104ms
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.725973488Z level=info msg="Executing migration" id="add origin column to seed_assignment"
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.732141453Z level=info msg="Migration successfully executed" id="add origin column to seed_assignment" duration=6.170985ms
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.734100832Z level=info msg="Executing migration" id="add origin to plugin seed_assignment"
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.734354859Z level=info msg="Migration successfully executed" id="add origin to plugin seed_assignment" duration=253.907µs
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.736193665Z level=info msg="Executing migration" id="prevent seeding OnCall access"
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.736344079Z level=info msg="Migration successfully executed" id="prevent seeding OnCall access" duration=149.724µs
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.738120803Z level=info msg="Executing migration" id="managed folder permissions alert actions repeated fixed migration"
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.738296728Z level=info msg="Migration successfully executed" id="managed folder permissions alert actions repeated fixed migration" duration=175.635µs
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.740281368Z level=info msg="Executing migration" id="managed folder permissions library panel actions migration"
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.740445382Z level=info msg="Migration successfully executed" id="managed folder permissions library panel actions migration" duration=162.594µs
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.742151145Z level=info msg="Executing migration" id="migrate external alertmanagers to datsourcse"
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.742331209Z level=info msg="Migration successfully executed" id="migrate external alertmanagers to datsourcse" duration=179.444µs
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.744167026Z level=info msg="Executing migration" id="create folder table"
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.744950015Z level=info msg="Migration successfully executed" id="create folder table" duration=782.889µs
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.74671421Z level=info msg="Executing migration" id="Add index for parent_uid"
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.747671984Z level=info msg="Migration successfully executed" id="Add index for parent_uid" duration=957.414µs
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.749202732Z level=info msg="Executing migration" id="Add unique index for folder.uid and folder.org_id"
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.750083285Z level=info msg="Migration successfully executed" id="Add unique index for folder.uid and folder.org_id" duration=880.482µs
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.751563512Z level=info msg="Executing migration" id="Update folder title length"
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.751589302Z level=info msg="Migration successfully executed" id="Update folder title length" duration=26.45µs
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.753099531Z level=info msg="Executing migration" id="Add unique index for folder.title and folder.parent_uid"
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.753950312Z level=info msg="Migration successfully executed" id="Add unique index for folder.title and folder.parent_uid" duration=851.912µs
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.755765898Z level=info msg="Executing migration" id="Remove unique index for folder.title and folder.parent_uid"
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.75664156Z level=info msg="Migration successfully executed" id="Remove unique index for folder.title and folder.parent_uid" duration=876.622µs
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.760056516Z level=info msg="Executing migration" id="Add unique index for title, parent_uid, and org_id"
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.760976239Z level=info msg="Migration successfully executed" id="Add unique index for title, parent_uid, and org_id" duration=919.443µs
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.762637701Z level=info msg="Executing migration" id="Sync dashboard and folder table"
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.762990629Z level=info msg="Migration successfully executed" id="Sync dashboard and folder table" duration=352.398µs
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.764483547Z level=info msg="Executing migration" id="Remove ghost folders from the folder table"
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.764714333Z level=info msg="Migration successfully executed" id="Remove ghost folders from the folder table" duration=230.586µs
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.76818186Z level=info msg="Executing migration" id="Remove unique index UQE_folder_uid_org_id"
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.769074763Z level=info msg="Migration successfully executed" id="Remove unique index UQE_folder_uid_org_id" duration=892.303µs
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.770685143Z level=info msg="Executing migration" id="Add unique index UQE_folder_org_id_uid"
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.771576035Z level=info msg="Migration successfully executed" id="Add unique index UQE_folder_org_id_uid" duration=890.242µs
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.773096294Z level=info msg="Executing migration" id="Remove unique index UQE_folder_title_parent_uid_org_id"
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.774083199Z level=info msg="Migration successfully executed" id="Remove unique index UQE_folder_title_parent_uid_org_id" duration=986.125µs
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.775780121Z level=info msg="Executing migration" id="Add unique index UQE_folder_org_id_parent_uid_title"
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.776836808Z level=info msg="Migration successfully executed" id="Add unique index UQE_folder_org_id_parent_uid_title" duration=1.056767ms
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.778406187Z level=info msg="Executing migration" id="Remove index IDX_folder_parent_uid_org_id"
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.779386772Z level=info msg="Migration successfully executed" id="Remove index IDX_folder_parent_uid_org_id" duration=980.375µs
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.781112976Z level=info msg="Executing migration" id="create anon_device table"
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.781919226Z level=info msg="Migration successfully executed" id="create anon_device table" duration=806.07µs
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.78368678Z level=info msg="Executing migration" id="add unique index anon_device.device_id"
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.784813029Z level=info msg="Migration successfully executed" id="add unique index anon_device.device_id" duration=1.129599ms
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.786488231Z level=info msg="Executing migration" id="add index anon_device.updated_at"
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.787564538Z level=info msg="Migration successfully executed" id="add index anon_device.updated_at" duration=1.076057ms
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.789015054Z level=info msg="Executing migration" id="create signing_key table"
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.789843535Z level=info msg="Migration successfully executed" id="create signing_key table" duration=827.851µs
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.791394724Z level=info msg="Executing migration" id="add unique index signing_key.key_id"
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.792433121Z level=info msg="Migration successfully executed" id="add unique index signing_key.key_id" duration=1.037416ms
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.793751124Z level=info msg="Executing migration" id="set legacy alert migration status in kvstore"
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.794854151Z level=info msg="Migration successfully executed" id="set legacy alert migration status in kvstore" duration=1.102977ms
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.796456932Z level=info msg="Executing migration" id="migrate record of created folders during legacy migration to kvstore"
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.796719878Z level=info msg="Migration successfully executed" id="migrate record of created folders during legacy migration to kvstore" duration=263.376µs
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.798317579Z level=info msg="Executing migration" id="Add folder_uid for dashboard"
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.806112015Z level=info msg="Migration successfully executed" id="Add folder_uid for dashboard" duration=7.793276ms
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.807823538Z level=info msg="Executing migration" id="Populate dashboard folder_uid column"
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.808456244Z level=info msg="Migration successfully executed" id="Populate dashboard folder_uid column" duration=633.786µs
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.809960532Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_folder_uid_title"
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.811056689Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_folder_uid_title" duration=1.095777ms
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.812443694Z level=info msg="Executing migration" id="Delete unique index for dashboard_org_id_folder_id_title"
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.813405578Z level=info msg="Migration successfully executed" id="Delete unique index for dashboard_org_id_folder_id_title" duration=961.454µs
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.814973278Z level=info msg="Executing migration" id="Delete unique index for dashboard_org_id_folder_uid_title"
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.81585685Z level=info msg="Migration successfully executed" id="Delete unique index for dashboard_org_id_folder_uid_title" duration=883.832µs
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.817610824Z level=info msg="Executing migration" id="Add unique index for dashboard_org_id_folder_uid_title_is_folder"
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.818552308Z level=info msg="Migration successfully executed" id="Add unique index for dashboard_org_id_folder_uid_title_is_folder" duration=943.544µs
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.819875481Z level=info msg="Executing migration" id="Restore index for dashboard_org_id_folder_id_title"
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.820809135Z level=info msg="Migration successfully executed" id="Restore index for dashboard_org_id_folder_id_title" duration=933.244µs
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.822256301Z level=info msg="Executing migration" id="create sso_setting table"
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.823087112Z level=info msg="Migration successfully executed" id="create sso_setting table" duration=830.091µs
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.826040456Z level=info msg="Executing migration" id="copy kvstore migration status to each org"
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.826650402Z level=info msg="Migration successfully executed" id="copy kvstore migration status to each org" duration=610.336µs
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.828130009Z level=info msg="Executing migration" id="add back entry for orgid=0 migrated status"
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.828344214Z level=info msg="Migration successfully executed" id="add back entry for orgid=0 migrated status" duration=214.625µs
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.829865683Z level=info msg="Executing migration" id="alter kv_store.value to longtext"
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.829914304Z level=info msg="Migration successfully executed" id="alter kv_store.value to longtext" duration=49.131µs
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.83135574Z level=info msg="Executing migration" id="add notification_settings column to alert_rule table"
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[98708]: 01/03/2026 09:45:00 : epoch 69a40a75 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f451c002520 fd 37 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.838766707Z level=info msg="Migration successfully executed" id="add notification_settings column to alert_rule table" duration=7.408927ms
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.840569082Z level=info msg="Executing migration" id="add notification_settings column to alert_rule_version table"
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.847111767Z level=info msg="Migration successfully executed" id="add notification_settings column to alert_rule_version table" duration=6.542055ms
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.848892232Z level=info msg="Executing migration" id="removing scope from alert.instances:read action migration"
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.849190579Z level=info msg="Migration successfully executed" id="removing scope from alert.instances:read action migration" duration=298.507µs
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=migrator t=2026-03-01T09:45:00.851459606Z level=info msg="migrations completed" performed=547 skipped=0 duration=2.117558259s
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=sqlstore t=2026-03-01T09:45:00.852524773Z level=info msg="Created default organization"
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=secrets t=2026-03-01T09:45:00.854629236Z level=info msg="Envelope encryption state" enabled=true currentprovider=secretKey.v1
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=plugin.store t=2026-03-01T09:45:00.878721242Z level=info msg="Loading plugins..."
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=local.finder t=2026-03-01T09:45:00.918950375Z level=warn msg="Skipping finding plugins as directory does not exist" path=/usr/share/grafana/plugins-bundled
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=plugin.store t=2026-03-01T09:45:00.918979936Z level=info msg="Plugins loaded" count=55 duration=40.259214ms
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=query_data t=2026-03-01T09:45:00.921493779Z level=info msg="Query Service initialization"
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=live.push_http t=2026-03-01T09:45:00.92469303Z level=info msg="Live Push Gateway initialization"
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=ngalert.migration t=2026-03-01T09:45:00.928641599Z level=info msg=Starting
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=ngalert.migration t=2026-03-01T09:45:00.929050299Z level=info msg="Applying transition" currentType=Legacy desiredType=UnifiedAlerting cleanOnDowngrade=false cleanOnUpgrade=false
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=ngalert.migration orgID=1 t=2026-03-01T09:45:00.929440269Z level=info msg="Migrating alerts for organisation"
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=ngalert.migration orgID=1 t=2026-03-01T09:45:00.930074975Z level=info msg="Alerts found to migrate" alerts=0
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=ngalert.migration t=2026-03-01T09:45:00.93145313Z level=info msg="Completed alerting migration"
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=ngalert.state.manager t=2026-03-01T09:45:00.947405901Z level=info msg="Running in alternative execution of Error/NoData mode"
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=infra.usagestats.collector t=2026-03-01T09:45:00.949170696Z level=info msg="registering usage stat providers" usageStatsProvidersLen=2
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=provisioning.datasources t=2026-03-01T09:45:00.950095249Z level=info msg="inserting datasource from configuration" name=Loki uid=P8E80F9AEF21F6940
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=provisioning.alerting t=2026-03-01T09:45:00.959581428Z level=info msg="starting to provision alerting"
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=provisioning.alerting t=2026-03-01T09:45:00.959602388Z level=info msg="finished to provision alerting"
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=grafanaStorageLogger t=2026-03-01T09:45:00.959866885Z level=info msg="Storage starting"
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=ngalert.state.manager t=2026-03-01T09:45:00.960230464Z level=info msg="Warming state cache for startup"
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=ngalert.multiorg.alertmanager t=2026-03-01T09:45:00.960391678Z level=info msg="Starting MultiOrg Alertmanager"
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=http.server t=2026-03-01T09:45:00.964384329Z level=info msg="HTTP Server TLS settings" MinTLSVersion=TLS1.2 configuredciphers=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_256_CBC_SHA
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=http.server t=2026-03-01T09:45:00.964732317Z level=info msg="HTTP Server Listen" address=192.168.122.100:3000 protocol=https subUrl= socket=
Mar  1 04:45:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=sqlstore.transactions t=2026-03-01T09:45:00.979184971Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=0 code="database is locked"
Mar  1 04:45:01 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=sqlstore.transactions t=2026-03-01T09:45:01.024775819Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=1 code="database is locked"
Mar  1 04:45:01 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=ngalert.state.manager t=2026-03-01T09:45:01.033415876Z level=info msg="State cache has been initialized" states=0 duration=73.182092ms
Mar  1 04:45:01 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=ngalert.scheduler t=2026-03-01T09:45:01.033460147Z level=info msg="Starting scheduler" tickInterval=10s maxAttempts=1
Mar  1 04:45:01 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=ticker t=2026-03-01T09:45:01.033511609Z level=info msg=starting first_tick=2026-03-01T09:45:10Z
Mar  1 04:45:01 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=provisioning.dashboard t=2026-03-01T09:45:01.038217097Z level=info msg="starting to provision dashboards"
Mar  1 04:45:01 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=sqlstore.transactions t=2026-03-01T09:45:01.055787979Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=2 code="database is locked"
Mar  1 04:45:01 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=grafana.update.checker t=2026-03-01T09:45:01.056910977Z level=info msg="Update check succeeded" duration=96.421257ms
Mar  1 04:45:01 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[98708]: 01/03/2026 09:45:01 : epoch 69a40a75 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f450c0035d0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:45:01 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=plugins.update.checker t=2026-03-01T09:45:01.06058375Z level=info msg="Update check succeeded" duration=81.193324ms
Mar  1 04:45:01 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=sqlstore.transactions t=2026-03-01T09:45:01.066652683Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=3 code="database is locked"
Mar  1 04:45:01 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=sqlstore.transactions t=2026-03-01T09:45:01.077778513Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=4 code="database is locked"
Mar  1 04:45:01 np0005634532 dazzling_swartz[100765]: {
Mar  1 04:45:01 np0005634532 dazzling_swartz[100765]:    "user_id": "openstack",
Mar  1 04:45:01 np0005634532 dazzling_swartz[100765]:    "display_name": "openstack",
Mar  1 04:45:01 np0005634532 dazzling_swartz[100765]:    "email": "",
Mar  1 04:45:01 np0005634532 dazzling_swartz[100765]:    "suspended": 0,
Mar  1 04:45:01 np0005634532 dazzling_swartz[100765]:    "max_buckets": 1000,
Mar  1 04:45:01 np0005634532 dazzling_swartz[100765]:    "subusers": [],
Mar  1 04:45:01 np0005634532 dazzling_swartz[100765]:    "keys": [
Mar  1 04:45:01 np0005634532 dazzling_swartz[100765]:        {
Mar  1 04:45:01 np0005634532 dazzling_swartz[100765]:            "user": "openstack",
Mar  1 04:45:01 np0005634532 dazzling_swartz[100765]:            "access_key": "9HO75SFE13O2BR6B4ERA",
Mar  1 04:45:01 np0005634532 dazzling_swartz[100765]:            "secret_key": "WdGV0WeNfjW6vi7GG8pyfoYoMcAf2UbNBqdGdDTd",
Mar  1 04:45:01 np0005634532 dazzling_swartz[100765]:            "active": true,
Mar  1 04:45:01 np0005634532 dazzling_swartz[100765]:            "create_date": "2026-03-01T09:45:00.183626Z"
Mar  1 04:45:01 np0005634532 dazzling_swartz[100765]:        }
Mar  1 04:45:01 np0005634532 dazzling_swartz[100765]:    ],
Mar  1 04:45:01 np0005634532 dazzling_swartz[100765]:    "swift_keys": [],
Mar  1 04:45:01 np0005634532 dazzling_swartz[100765]:    "caps": [],
Mar  1 04:45:01 np0005634532 dazzling_swartz[100765]:    "op_mask": "read, write, delete",
Mar  1 04:45:01 np0005634532 dazzling_swartz[100765]:    "default_placement": "",
Mar  1 04:45:01 np0005634532 dazzling_swartz[100765]:    "default_storage_class": "",
Mar  1 04:45:01 np0005634532 dazzling_swartz[100765]:    "placement_tags": [],
Mar  1 04:45:01 np0005634532 dazzling_swartz[100765]:    "bucket_quota": {
Mar  1 04:45:01 np0005634532 dazzling_swartz[100765]:        "enabled": false,
Mar  1 04:45:01 np0005634532 dazzling_swartz[100765]:        "check_on_raw": false,
Mar  1 04:45:01 np0005634532 dazzling_swartz[100765]:        "max_size": -1,
Mar  1 04:45:01 np0005634532 dazzling_swartz[100765]:        "max_size_kb": 0,
Mar  1 04:45:01 np0005634532 dazzling_swartz[100765]:        "max_objects": -1
Mar  1 04:45:01 np0005634532 dazzling_swartz[100765]:    },
Mar  1 04:45:01 np0005634532 dazzling_swartz[100765]:    "user_quota": {
Mar  1 04:45:01 np0005634532 dazzling_swartz[100765]:        "enabled": false,
Mar  1 04:45:01 np0005634532 dazzling_swartz[100765]:        "check_on_raw": false,
Mar  1 04:45:01 np0005634532 dazzling_swartz[100765]:        "max_size": -1,
Mar  1 04:45:01 np0005634532 dazzling_swartz[100765]:        "max_size_kb": 0,
Mar  1 04:45:01 np0005634532 dazzling_swartz[100765]:        "max_objects": -1
Mar  1 04:45:01 np0005634532 dazzling_swartz[100765]:    },
Mar  1 04:45:01 np0005634532 dazzling_swartz[100765]:    "temp_url_keys": [],
Mar  1 04:45:01 np0005634532 dazzling_swartz[100765]:    "type": "rgw",
Mar  1 04:45:01 np0005634532 dazzling_swartz[100765]:    "mfa_ids": [],
Mar  1 04:45:01 np0005634532 dazzling_swartz[100765]:    "account_id": "",
Mar  1 04:45:01 np0005634532 dazzling_swartz[100765]:    "path": "/",
Mar  1 04:45:01 np0005634532 dazzling_swartz[100765]:    "create_date": "2026-03-01T09:45:00.182859Z",
Mar  1 04:45:01 np0005634532 dazzling_swartz[100765]:    "tags": [],
Mar  1 04:45:01 np0005634532 dazzling_swartz[100765]:    "group_ids": []
Mar  1 04:45:01 np0005634532 dazzling_swartz[100765]: }
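The JSON record above is an RGW user dump for the `openstack` user — the short-lived `dazzling_swartz` container is, by all appearances, a one-off `radosgw-admin` run launched by cephadm, and the S3 credentials it minted live in the record's `keys` array. A minimal sketch for pulling those credentials out, assuming the block has been captured to a hypothetical local file `rgw_user.json`:

```python
import json

# Sketch: parse an RGW user record like the one logged above and print the
# active S3 credential pair(s) from its "keys" array.
# Assumption: the JSON block was saved to rgw_user.json (hypothetical path).
with open("rgw_user.json") as f:
    user = json.load(f)

print("user:", user["user_id"])
for key in user["keys"]:
    if key.get("active", True):
        print("  access_key:", key["access_key"])
        print("  secret_key:", key["secret_key"])
```

Note that these are live credentials: a journal that captures `radosgw-admin` output like this should be handled as sensitive material.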
Mar  1 04:45:01 np0005634532 systemd[1]: libpod-549defc04e33066541eddb969270eea89fecb3059b8fbd647d192c61d8afd222.scope: Deactivated successfully.
Mar  1 04:45:01 np0005634532 podman[100750]: 2026-03-01 09:45:01.191189817 +0000 UTC m=+2.367441559 container died 549defc04e33066541eddb969270eea89fecb3059b8fbd647d192c61d8afd222 (image=quay.io/ceph/ceph:v19, name=dazzling_swartz, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Mar  1 04:45:01 np0005634532 ceph-mon[75825]: from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' 
Mar  1 04:45:01 np0005634532 ceph-mon[75825]: from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' 
Mar  1 04:45:01 np0005634532 ceph-mon[75825]: from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' 
Mar  1 04:45:01 np0005634532 ceph-mon[75825]: Deploying daemon haproxy.rgw.default.compute-2.frcifw on compute-2
Mar  1 04:45:01 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=grafana-apiserver t=2026-03-01T09:45:01.317269021Z level=info msg="Adding GroupVersion playlist.grafana.app v0alpha1 to ResourceManager"
Mar  1 04:45:01 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=grafana-apiserver t=2026-03-01T09:45:01.317653591Z level=info msg="Adding GroupVersion featuretoggle.grafana.app v0alpha1 to ResourceManager"
Mar  1 04:45:01 np0005634532 ceph-osd[84309]: log_channel(cluster) log [DBG] : 8.15 scrub starts
Mar  1 04:45:01 np0005634532 ceph-osd[84309]: log_channel(cluster) log [DBG] : 8.15 scrub ok
Mar  1 04:45:01 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=provisioning.dashboard t=2026-03-01T09:45:01.770573021Z level=info msg="finished to provision dashboards"
Mar  1 04:45:01 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:45:01 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 04:45:01 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:45:01.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
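Each radosgw request appears as a start/done pair plus a beast access-log line in a fixed shape: client, user, timestamp, request line, status, bytes, and a trailing `latency=` field. The recurring anonymous `HEAD /` probes from 192.168.122.100/102 look like load-balancer health checks against the RGW frontend, which is why they return 200 with zero bytes. A minimal sketch for extracting per-request latency from a saved excerpt, assuming the lines were written to a hypothetical `radosgw.log`:

```python
import re

# Sketch: parse beast access-log lines (like the HEAD / probes above) and
# report client, request, status, and latency per request.
# Assumption: the journal excerpt was saved to radosgw.log (hypothetical).
BEAST_RE = re.compile(
    r'beast: \S+: (?P<client>\S+) - (?P<user>\S+) \[(?P<ts>[^\]]+)\] '
    r'"(?P<req>[^"]+)" (?P<status>\d+) (?P<bytes>\d+) .*latency=(?P<lat>[\d.]+)s'
)

with open("radosgw.log") as f:
    for line in f:
        m = BEAST_RE.search(line)
        if m:
            print(m["client"], repr(m["req"]), m["status"], f'{float(m["lat"]):.6f}s')
```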
Mar  1 04:45:01 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Mar  1 04:45:01 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' 
Mar  1 04:45:01 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Mar  1 04:45:01 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' 
Mar  1 04:45:01 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.rgw.default}] v 0)
Mar  1 04:45:01 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' 
Mar  1 04:45:01 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ingress.rgw.default/keepalived_password}] v 0)
Mar  1 04:45:01 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' 
Mar  1 04:45:01 np0005634532 ceph-mgr[76134]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Mar  1 04:45:01 np0005634532 ceph-mgr[76134]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Mar  1 04:45:01 np0005634532 ceph-mgr[76134]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Mar  1 04:45:01 np0005634532 ceph-mgr[76134]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Mar  1 04:45:01 np0005634532 ceph-mgr[76134]: [cephadm INFO cephadm.serve] Deploying daemon keepalived.rgw.default.compute-0.kwkquz on compute-0
Mar  1 04:45:01 np0005634532 ceph-mgr[76134]: log_channel(cephadm) log [INF] : Deploying daemon keepalived.rgw.default.compute-0.kwkquz on compute-0
Mar  1 04:45:02 np0005634532 systemd[1]: var-lib-containers-storage-overlay-fdc797d01f335dec3332561252956382f280d9fbb13606e44c98e06b1669ab88-merged.mount: Deactivated successfully.
Mar  1 04:45:02 np0005634532 podman[100750]: 2026-03-01 09:45:02.059859872 +0000 UTC m=+3.236111604 container remove 549defc04e33066541eddb969270eea89fecb3059b8fbd647d192c61d8afd222 (image=quay.io/ceph/ceph:v19, name=dazzling_swartz, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Mar  1 04:45:02 np0005634532 systemd[1]: libpod-conmon-549defc04e33066541eddb969270eea89fecb3059b8fbd647d192c61d8afd222.scope: Deactivated successfully.
Mar  1 04:45:02 np0005634532 ceph-mon[75825]: from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' 
Mar  1 04:45:02 np0005634532 ceph-mon[75825]: from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' 
Mar  1 04:45:02 np0005634532 ceph-mon[75825]: from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' 
Mar  1 04:45:02 np0005634532 ceph-mon[75825]: from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' 
Mar  1 04:45:02 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:45:02 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:45:02 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:45:02.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:45:02 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v49: 353 pgs: 62 unknown, 291 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 2.8 KiB/s rd, 2 op/s
Mar  1 04:45:02 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[98708]: 01/03/2026 09:45:02 : epoch 69a40a75 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4500003c10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:45:02 np0005634532 podman[101232]: 2026-03-01 09:45:02.540053738 +0000 UTC m=+0.058980366 container create 9d383dceaa83dd618e66359792f6316b6b6ab2543818a47ed5d9730c2cf15a23 (image=quay.io/ceph/keepalived:2.2.4, name=boring_dewdney, summary=Provides keepalived on RHEL 9 for Ceph., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, description=keepalived for Ceph, vcs-type=git, name=keepalived, io.buildah.version=1.28.2, io.openshift.tags=Ceph keepalived, io.k8s.display-name=Keepalived on RHEL 9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, com.redhat.component=keepalived-container, version=2.2.4, distribution-scope=public, release=1793, build-date=2023-02-22T09:23:20, architecture=x86_64)
Mar  1 04:45:02 np0005634532 ceph-osd[84309]: log_channel(cluster) log [DBG] : 8.17 scrub starts
Mar  1 04:45:02 np0005634532 ceph-osd[84309]: log_channel(cluster) log [DBG] : 8.17 scrub ok
Mar  1 04:45:02 np0005634532 systemd[1]: Started libpod-conmon-9d383dceaa83dd618e66359792f6316b6b6ab2543818a47ed5d9730c2cf15a23.scope.
Mar  1 04:45:02 np0005634532 podman[101232]: 2026-03-01 09:45:02.517183772 +0000 UTC m=+0.036110410 image pull 4a3a1ff181d97c6dcfa9138ad76eb99fa2c1e840298461d5a7a56133bc05b9a2 quay.io/ceph/keepalived:2.2.4
Mar  1 04:45:02 np0005634532 systemd[1]: Started libcrun container.
Mar  1 04:45:02 np0005634532 podman[101232]: 2026-03-01 09:45:02.632651739 +0000 UTC m=+0.151578377 container init 9d383dceaa83dd618e66359792f6316b6b6ab2543818a47ed5d9730c2cf15a23 (image=quay.io/ceph/keepalived:2.2.4, name=boring_dewdney, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.k8s.display-name=Keepalived on RHEL 9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., name=keepalived, io.openshift.tags=Ceph keepalived, build-date=2023-02-22T09:23:20, io.openshift.expose-services=, description=keepalived for Ceph, io.buildah.version=1.28.2, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vcs-type=git, version=2.2.4, release=1793, com.redhat.component=keepalived-container, architecture=x86_64, distribution-scope=public, summary=Provides keepalived on RHEL 9 for Ceph.)
Mar  1 04:45:02 np0005634532 podman[101232]: 2026-03-01 09:45:02.640477666 +0000 UTC m=+0.159404294 container start 9d383dceaa83dd618e66359792f6316b6b6ab2543818a47ed5d9730c2cf15a23 (image=quay.io/ceph/keepalived:2.2.4, name=boring_dewdney, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.28.2, version=2.2.4, summary=Provides keepalived on RHEL 9 for Ceph., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.display-name=Keepalived on RHEL 9, com.redhat.component=keepalived-container, build-date=2023-02-22T09:23:20, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.openshift.expose-services=, description=keepalived for Ceph, vcs-type=git, io.openshift.tags=Ceph keepalived, architecture=x86_64, vendor=Red Hat, Inc., release=1793, name=keepalived)
Mar  1 04:45:02 np0005634532 podman[101232]: 2026-03-01 09:45:02.644561888 +0000 UTC m=+0.163488576 container attach 9d383dceaa83dd618e66359792f6316b6b6ab2543818a47ed5d9730c2cf15a23 (image=quay.io/ceph/keepalived:2.2.4, name=boring_dewdney, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.openshift.expose-services=, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, summary=Provides keepalived on RHEL 9 for Ceph., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vendor=Red Hat, Inc., description=keepalived for Ceph, release=1793, build-date=2023-02-22T09:23:20, version=2.2.4, io.k8s.display-name=Keepalived on RHEL 9, name=keepalived, com.redhat.component=keepalived-container, io.buildah.version=1.28.2, io.openshift.tags=Ceph keepalived, architecture=x86_64)
Mar  1 04:45:02 np0005634532 boring_dewdney[101255]: 0 0
Mar  1 04:45:02 np0005634532 systemd[1]: libpod-9d383dceaa83dd618e66359792f6316b6b6ab2543818a47ed5d9730c2cf15a23.scope: Deactivated successfully.
Mar  1 04:45:02 np0005634532 podman[101232]: 2026-03-01 09:45:02.652400116 +0000 UTC m=+0.171326744 container died 9d383dceaa83dd618e66359792f6316b6b6ab2543818a47ed5d9730c2cf15a23 (image=quay.io/ceph/keepalived:2.2.4, name=boring_dewdney, build-date=2023-02-22T09:23:20, version=2.2.4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, description=keepalived for Ceph, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, distribution-scope=public, summary=Provides keepalived on RHEL 9 for Ceph., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, release=1793, com.redhat.component=keepalived-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., name=keepalived, io.openshift.tags=Ceph keepalived, io.k8s.display-name=Keepalived on RHEL 9, io.openshift.expose-services=, io.buildah.version=1.28.2)
Mar  1 04:45:02 np0005634532 systemd[1]: var-lib-containers-storage-overlay-fc4f18751f9f45be8cb2f2145db73484cbf5640b88bbdd6d7d50ab8250608206-merged.mount: Deactivated successfully.
Mar  1 04:45:02 np0005634532 python3[101239]: ansible-ansible.builtin.get_url Invoked with url=http://192.168.122.100:8443 dest=/tmp/dash_response mode=0644 validate_certs=False force=False http_agent=ansible-httpget use_proxy=True force_basic_auth=False use_gssapi=False backup=False checksum= timeout=10 unredirected_headers=[] decompress=True use_netrc=True unsafe_writes=False url_username=None url_password=NOT_LOGGING_PARAMETER client_cert=None client_key=None headers=None tmp_dest=None ciphers=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:45:02 np0005634532 podman[101232]: 2026-03-01 09:45:02.704781054 +0000 UTC m=+0.223707662 container remove 9d383dceaa83dd618e66359792f6316b6b6ab2543818a47ed5d9730c2cf15a23 (image=quay.io/ceph/keepalived:2.2.4, name=boring_dewdney, release=1793, description=keepalived for Ceph, io.k8s.display-name=Keepalived on RHEL 9, summary=Provides keepalived on RHEL 9 for Ceph., io.openshift.tags=Ceph keepalived, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.28.2, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, build-date=2023-02-22T09:23:20, version=2.2.4, name=keepalived, com.redhat.component=keepalived-container, distribution-scope=public, vcs-type=git, io.openshift.expose-services=, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Mar  1 04:45:02 np0005634532 systemd[1]: libpod-conmon-9d383dceaa83dd618e66359792f6316b6b6ab2543818a47ed5d9730c2cf15a23.scope: Deactivated successfully.
Mar  1 04:45:02 np0005634532 systemd[1]: Reloading.
Mar  1 04:45:02 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[98708]: 01/03/2026 09:45:02 : epoch 69a40a75 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4504002050 fd 37 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:45:02 np0005634532 ceph-mgr[76134]: [dashboard INFO request] [192.168.122.100:49198] [GET] [200] [0.113s] [6.3K] [d84ac8c4-8ddb-43cf-b758-69d9f7e19fc3] /
Mar  1 04:45:02 np0005634532 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Mar  1 04:45:02 np0005634532 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Mar  1 04:45:03 np0005634532 ceph-mgr[76134]: [progress INFO root] Writing back 22 completed events
Mar  1 04:45:03 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Mar  1 04:45:03 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' 
Mar  1 04:45:03 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[98708]: 01/03/2026 09:45:03 : epoch 69a40a75 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f451c002520 fd 37 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:45:03 np0005634532 systemd[1]: Reloading.
Mar  1 04:45:03 np0005634532 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Mar  1 04:45:03 np0005634532 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Mar  1 04:45:03 np0005634532 ceph-mon[75825]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Mar  1 04:45:03 np0005634532 ceph-mon[75825]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Mar  1 04:45:03 np0005634532 ceph-mon[75825]: Deploying daemon keepalived.rgw.default.compute-0.kwkquz on compute-0
Mar  1 04:45:03 np0005634532 ceph-mon[75825]: from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' 
Mar  1 04:45:03 np0005634532 systemd[1]: Starting Ceph keepalived.rgw.default.compute-0.kwkquz for 437b1e74-f995-5d64-af1d-257ce01d77ab...
Mar  1 04:45:03 np0005634532 python3[101376]: ansible-ansible.builtin.get_url Invoked with url=http://192.168.122.100:8443 dest=/tmp/dash_http_response mode=0644 validate_certs=False username=VALUE_SPECIFIED_IN_NO_LOG_PARAMETER password=NOT_LOGGING_PARAMETER url_username=VALUE_SPECIFIED_IN_NO_LOG_PARAMETER url_password=NOT_LOGGING_PARAMETER force=False http_agent=ansible-httpget use_proxy=True force_basic_auth=False use_gssapi=False backup=False checksum= timeout=10 unredirected_headers=[] decompress=True use_netrc=True unsafe_writes=False client_cert=None client_key=None headers=None tmp_dest=None ciphers=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:45:03 np0005634532 ceph-mgr[76134]: [dashboard INFO request] [192.168.122.100:49200] [GET] [200] [0.003s] [6.3K] [c700d22e-1ff2-4002-8f34-17a6945c7824] /
Mar  1 04:45:03 np0005634532 ceph-osd[84309]: log_channel(cluster) log [DBG] : 8.10 scrub starts
Mar  1 04:45:03 np0005634532 ceph-osd[84309]: log_channel(cluster) log [DBG] : 8.10 scrub ok
Mar  1 04:45:03 np0005634532 podman[101439]: 2026-03-01 09:45:03.61998813 +0000 UTC m=+0.059829257 container create f7603ce39b76f31b902318d26daa96a7ba7edea2cb943a4b7df2b4a21a185206 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-keepalived-rgw-default-compute-0-kwkquz, com.redhat.component=keepalived-container, io.buildah.version=1.28.2, vcs-type=git, distribution-scope=public, name=keepalived, io.openshift.tags=Ceph keepalived, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1793, vendor=Red Hat, Inc., io.k8s.display-name=Keepalived on RHEL 9, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, build-date=2023-02-22T09:23:20, summary=Provides keepalived on RHEL 9 for Ceph., io.openshift.expose-services=, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=keepalived for Ceph, version=2.2.4)
Mar  1 04:45:03 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/64aa3c1dc198b6c62dcd0ef2ca8d40c5ddaba785ef885c478924b94dea02658b/merged/etc/keepalived/keepalived.conf supports timestamps until 2038 (0x7fffffff)
Mar  1 04:45:03 np0005634532 podman[101439]: 2026-03-01 09:45:03.679938719 +0000 UTC m=+0.119779916 container init f7603ce39b76f31b902318d26daa96a7ba7edea2cb943a4b7df2b4a21a185206 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-keepalived-rgw-default-compute-0-kwkquz, architecture=x86_64, io.buildah.version=1.28.2, vcs-type=git, name=keepalived, io.k8s.display-name=Keepalived on RHEL 9, version=2.2.4, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.tags=Ceph keepalived, description=keepalived for Ceph, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1793, summary=Provides keepalived on RHEL 9 for Ceph., build-date=2023-02-22T09:23:20, distribution-scope=public, com.redhat.component=keepalived-container, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793)
Mar  1 04:45:03 np0005634532 podman[101439]: 2026-03-01 09:45:03.686108495 +0000 UTC m=+0.125949652 container start f7603ce39b76f31b902318d26daa96a7ba7edea2cb943a4b7df2b4a21a185206 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-keepalived-rgw-default-compute-0-kwkquz, io.openshift.tags=Ceph keepalived, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=keepalived, version=2.2.4, summary=Provides keepalived on RHEL 9 for Ceph., release=1793, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, build-date=2023-02-22T09:23:20, io.buildah.version=1.28.2, vcs-type=git, distribution-scope=public, description=keepalived for Ceph, architecture=x86_64, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.component=keepalived-container, vendor=Red Hat, Inc., vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.openshift.expose-services=, io.k8s.display-name=Keepalived on RHEL 9)
Mar  1 04:45:03 np0005634532 bash[101439]: f7603ce39b76f31b902318d26daa96a7ba7edea2cb943a4b7df2b4a21a185206
Mar  1 04:45:03 np0005634532 podman[101439]: 2026-03-01 09:45:03.599844683 +0000 UTC m=+0.039685830 image pull 4a3a1ff181d97c6dcfa9138ad76eb99fa2c1e840298461d5a7a56133bc05b9a2 quay.io/ceph/keepalived:2.2.4
Mar  1 04:45:03 np0005634532 systemd[1]: Started Ceph keepalived.rgw.default.compute-0.kwkquz for 437b1e74-f995-5d64-af1d-257ce01d77ab.
Mar  1 04:45:03 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-keepalived-rgw-default-compute-0-kwkquz[101453]: Sun Mar  1 09:45:03 2026: Starting Keepalived v2.2.4 (08/21,2021)
Mar  1 04:45:03 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-keepalived-rgw-default-compute-0-kwkquz[101453]: Sun Mar  1 09:45:03 2026: Running on Linux 5.14.0-686.el9.x86_64 #1 SMP PREEMPT_DYNAMIC Thu Feb 19 10:49:27 UTC 2026 (built for Linux 5.14.0)
Mar  1 04:45:03 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-keepalived-rgw-default-compute-0-kwkquz[101453]: Sun Mar  1 09:45:03 2026: Command line: '/usr/sbin/keepalived' '-n' '-l' '-f' '/etc/keepalived/keepalived.conf'
Mar  1 04:45:03 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-keepalived-rgw-default-compute-0-kwkquz[101453]: Sun Mar  1 09:45:03 2026: Configuration file /etc/keepalived/keepalived.conf
Mar  1 04:45:03 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-keepalived-rgw-default-compute-0-kwkquz[101453]: Sun Mar  1 09:45:03 2026: Failed to bind to process monitoring socket - errno 98 - Address already in use
Mar  1 04:45:03 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-keepalived-rgw-default-compute-0-kwkquz[101453]: Sun Mar  1 09:45:03 2026: NOTICE: setting config option max_auto_priority should result in better keepalived performance
Mar  1 04:45:03 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-keepalived-rgw-default-compute-0-kwkquz[101453]: Sun Mar  1 09:45:03 2026: Starting VRRP child process, pid=4
Mar  1 04:45:03 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-keepalived-rgw-default-compute-0-kwkquz[101453]: Sun Mar  1 09:45:03 2026: Startup complete
Mar  1 04:45:03 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-keepalived-rgw-default-compute-0-kwkquz[101453]: Sun Mar  1 09:45:03 2026: (VI_0) Entering BACKUP STATE (init)
Mar  1 04:45:03 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-keepalived-nfs-cephfs-compute-0-qbujzh[99540]: Sun Mar  1 09:45:03 2026: (VI_0) Entering BACKUP STATE
Mar  1 04:45:03 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-keepalived-rgw-default-compute-0-kwkquz[101453]: Sun Mar  1 09:45:03 2026: VRRP_Script(check_backend) succeeded
Mar  1 04:45:03 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Mar  1 04:45:03 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' 
Mar  1 04:45:03 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Mar  1 04:45:03 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' 
Mar  1 04:45:03 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.rgw.default}] v 0)
Mar  1 04:45:03 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:45:03 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:45:03 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:45:03.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:45:03 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' 
Mar  1 04:45:03 np0005634532 ceph-mgr[76134]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Mar  1 04:45:03 np0005634532 ceph-mgr[76134]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Mar  1 04:45:03 np0005634532 ceph-mgr[76134]: [cephadm INFO cephadm.services.ingress] 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Mar  1 04:45:03 np0005634532 ceph-mgr[76134]: log_channel(cephadm) log [INF] : 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Mar  1 04:45:03 np0005634532 ceph-mgr[76134]: [cephadm INFO cephadm.serve] Deploying daemon keepalived.rgw.default.compute-2.rsugdm on compute-2
Mar  1 04:45:03 np0005634532 ceph-mgr[76134]: log_channel(cephadm) log [INF] : Deploying daemon keepalived.rgw.default.compute-2.rsugdm on compute-2
Mar  1 04:45:04 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:45:04 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000026s ======
Mar  1 04:45:04 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:45:04.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Mar  1 04:45:04 np0005634532 ceph-mon[75825]: from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' 
Mar  1 04:45:04 np0005634532 ceph-mon[75825]: from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' 
Mar  1 04:45:04 np0005634532 ceph-mon[75825]: from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' 
Mar  1 04:45:04 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-keepalived-nfs-cephfs-compute-0-qbujzh[99540]: Sun Mar  1 09:45:04 2026: (VI_0) Entering MASTER STATE
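Two keepalived instances now run on this host — one for the nfs-cephfs ingress and the freshly deployed one for rgw.default — each managing its own VRRP instance named VI_0. The earlier errno 98 ("Address already in use") on the process-monitoring socket is likely just the second instance losing the race for that shared socket; it reaches "Startup complete" regardless. The rgw instance comes up in BACKUP while the nfs instance wins its election and moves to MASTER. A minimal sketch that folds such transition lines into a last-known state per unit, reading a journal excerpt on stdin:

```python
import re
import sys

# Sketch: reduce VRRP transition lines (the "(VI_0) Entering ... STATE"
# messages above) to the last-known state of each keepalived unit.
STATE_RE = re.compile(r"(keepalived\S*)\[\d+\]:.*\(VI_0\) Entering (\w+) STATE")

state = {}
for line in sys.stdin:
    m = STATE_RE.search(line)
    if m:
        state[m.group(1)] = m.group(2)

for unit, st in sorted(state.items()):
    print(f"{unit}: {st}")
```

Fed this section of the journal, it would report the nfs-cephfs unit as MASTER and the rgw.default unit as BACKUP.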
Mar  1 04:45:04 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v50: 353 pgs: 353 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Mar  1 04:45:04 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": ".nfs", "var": "pgp_num_actual", "val": "32"} v 0)
Mar  1 04:45:04 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd pool set", "pool": ".nfs", "var": "pgp_num_actual", "val": "32"}]: dispatch
Mar  1 04:45:04 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"} v 0)
Mar  1 04:45:04 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]: dispatch
Mar  1 04:45:04 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"} v 0)
Mar  1 04:45:04 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]: dispatch
Mar  1 04:45:04 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"} v 0)
Mar  1 04:45:04 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]: dispatch
Mar  1 04:45:04 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"} v 0)
Mar  1 04:45:04 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
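The burst above appears to be the mgr's placement-group adjustment machinery catching up: for each pool it dispatches an `osd pool set <pool> pgp_num_actual <n>` mon command, and the matching audit entries record the exact arguments. The same tuning could be replayed (or verified) from the CLI; a minimal sketch, taking pools and values verbatim from the audit lines, and assuming it runs on a host with an admin keyring for this cluster:

```python
import subprocess

# Sketch: replay the pgp_num_actual adjustments dispatched above via the
# ceph CLI. Pool names and values are copied verbatim from the audit log.
# Assumption: run where `ceph` has an admin keyring for this cluster.
pools = {
    ".nfs": 32,
    ".rgw.root": 32,
    "default.rgw.control": 32,
    "default.rgw.log": 2,
    "default.rgw.meta": 32,
}
for pool, pgp in pools.items():
    subprocess.run(
        ["ceph", "osd", "pool", "set", pool, "pgp_num_actual", str(pgp)],
        check=True,
    )
```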
Mar  1 04:45:04 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[98708]: 01/03/2026 09:45:04 : epoch 69a40a75 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f450c0035d0 fd 37 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:45:04 np0005634532 ceph-osd[84309]: log_channel(cluster) log [DBG] : 9.14 scrub starts
Mar  1 04:45:04 np0005634532 ceph-osd[84309]: log_channel(cluster) log [DBG] : 9.14 scrub ok
Mar  1 04:45:04 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[98708]: 01/03/2026 09:45:04 : epoch 69a40a75 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4500003c10 fd 37 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:45:05 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[98708]: 01/03/2026 09:45:05 : epoch 69a40a75 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4504002050 fd 37 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:45:05 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e55 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Mar  1 04:45:05 np0005634532 ceph-mon[75825]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #18. Immutable memtables: 0.
Mar  1 04:45:05 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-09:45:05.189087) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Mar  1 04:45:05 np0005634532 ceph-mon[75825]: rocksdb: [db/flush_job.cc:856] [default] [JOB 3] Flushing memtable with next log file: 18
Mar  1 04:45:05 np0005634532 ceph-mon[75825]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772358305189194, "job": 3, "event": "flush_started", "num_memtables": 1, "num_entries": 6977, "num_deletes": 258, "total_data_size": 13827646, "memory_usage": 14554808, "flush_reason": "Manual Compaction"}
Mar  1 04:45:05 np0005634532 ceph-mon[75825]: rocksdb: [db/flush_job.cc:885] [default] [JOB 3] Level-0 flush table #19: started
Mar  1 04:45:05 np0005634532 ceph-mon[75825]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772358305245824, "cf_name": "default", "job": 3, "event": "table_file_creation", "file_number": 19, "file_size": 12389615, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 142, "largest_seqno": 7114, "table_properties": {"data_size": 12364525, "index_size": 15824, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8197, "raw_key_size": 79044, "raw_average_key_size": 24, "raw_value_size": 12302154, "raw_average_value_size": 3777, "num_data_blocks": 697, "num_entries": 3257, "num_filter_entries": 3257, "num_deletions": 258, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772358055, "oldest_key_time": 1772358055, "file_creation_time": 1772358305, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d85a3bc5-3dc5-432f-9fab-fa926ce32d3d", "db_session_id": "FJWJGIYC2V5ZEQGX709M", "orig_file_number": 19, "seqno_to_time_mapping": "N/A"}}
Mar  1 04:45:05 np0005634532 ceph-mon[75825]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 3] Flush lasted 56797 microseconds, and 24802 cpu microseconds.
Mar  1 04:45:05 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-09:45:05.245890) [db/flush_job.cc:967] [default] [JOB 3] Level-0 flush table #19: 12389615 bytes OK
Mar  1 04:45:05 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-09:45:05.245909) [db/memtable_list.cc:519] [default] Level-0 commit table #19 started
Mar  1 04:45:05 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-09:45:05.247219) [db/memtable_list.cc:722] [default] Level-0 commit table #19: memtable #1 done
Mar  1 04:45:05 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-09:45:05.247232) EVENT_LOG_v1 {"time_micros": 1772358305247228, "job": 3, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [3, 0, 0, 0, 0, 0, 0], "immutable_memtables": 0}
Mar  1 04:45:05 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-09:45:05.247250) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: files[3 0 0 0 0 0 0] max score 0.75
Mar  1 04:45:05 np0005634532 ceph-mon[75825]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 3] Try to delete WAL files size 13796117, prev total WAL file size 13796117, number of live WAL files 2.
Mar  1 04:45:05 np0005634532 ceph-mon[75825]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000014.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Mar  1 04:45:05 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-09:45:05.248813) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6B760030' seq:72057594037927935, type:22 .. '6B7600323538' seq:0, type:0; will stop at (end)
Mar  1 04:45:05 np0005634532 ceph-mon[75825]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 4] Compacting 3@0 files to L6, score -1.00
Mar  1 04:45:05 np0005634532 ceph-mon[75825]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 3 Base level 0, inputs: [19(11MB) 13(57KB) 8(1944B)]
Mar  1 04:45:05 np0005634532 ceph-mon[75825]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772358305248930, "job": 4, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [19, 13, 8], "score": -1, "input_data_size": 12450041, "oldest_snapshot_seqno": -1}
Mar  1 04:45:05 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e55 do_prune osdmap full prune enabled
Mar  1 04:45:05 np0005634532 ceph-mon[75825]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 4] Generated table #20: 3072 keys, 12431895 bytes, temperature: kUnknown
Mar  1 04:45:05 np0005634532 ceph-mon[75825]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772358305309756, "cf_name": "default", "job": 4, "event": "table_file_creation", "file_number": 20, "file_size": 12431895, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12407197, "index_size": 15949, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 7685, "raw_key_size": 77876, "raw_average_key_size": 25, "raw_value_size": 12346487, "raw_average_value_size": 4019, "num_data_blocks": 702, "num_entries": 3072, "num_filter_entries": 3072, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772358052, "oldest_key_time": 0, "file_creation_time": 1772358305, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d85a3bc5-3dc5-432f-9fab-fa926ce32d3d", "db_session_id": "FJWJGIYC2V5ZEQGX709M", "orig_file_number": 20, "seqno_to_time_mapping": "N/A"}}
Mar  1 04:45:05 np0005634532 ceph-mon[75825]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Mar  1 04:45:05 np0005634532 ceph-mon[75825]: 192.168.122.2 is in 192.168.122.0/24 on compute-2 interface br-ex
Mar  1 04:45:05 np0005634532 ceph-mon[75825]: 192.168.122.2 is in 192.168.122.0/24 on compute-0 interface br-ex
Mar  1 04:45:05 np0005634532 ceph-mon[75825]: Deploying daemon keepalived.rgw.default.compute-2.rsugdm on compute-2
Mar  1 04:45:05 np0005634532 ceph-mon[75825]: from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd pool set", "pool": ".nfs", "var": "pgp_num_actual", "val": "32"}]: dispatch
Mar  1 04:45:05 np0005634532 ceph-mon[75825]: from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]: dispatch
Mar  1 04:45:05 np0005634532 ceph-mon[75825]: from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]: dispatch
Mar  1 04:45:05 np0005634532 ceph-mon[75825]: from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]: dispatch
Mar  1 04:45:05 np0005634532 ceph-mon[75825]: from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Mar  1 04:45:05 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-09:45:05.310047) [db/compaction/compaction_job.cc:1663] [default] [JOB 4] Compacted 3@0 files to L6 => 12431895 bytes
Mar  1 04:45:05 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-09:45:05.311616) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 204.4 rd, 204.1 wr, level 6, files in(3, 0) out(1 +0 blob) MB in(11.9, 0.0 +0.0 blob) out(11.9 +0.0 blob), read-write-amplify(2.0) write-amplify(1.0) OK, records in: 3366, records dropped: 294 output_compression: NoCompression
Mar  1 04:45:05 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-09:45:05.311637) EVENT_LOG_v1 {"time_micros": 1772358305311627, "job": 4, "event": "compaction_finished", "compaction_time_micros": 60915, "compaction_time_cpu_micros": 30225, "output_level": 6, "num_output_files": 1, "total_output_size": 12431895, "num_input_records": 3366, "num_output_records": 3072, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Mar  1 04:45:05 np0005634532 ceph-mon[75825]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000019.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Mar  1 04:45:05 np0005634532 ceph-mon[75825]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772358305313236, "job": 4, "event": "table_file_deletion", "file_number": 19}
Mar  1 04:45:05 np0005634532 ceph-mon[75825]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000013.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Mar  1 04:45:05 np0005634532 ceph-mon[75825]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772358305313284, "job": 4, "event": "table_file_deletion", "file_number": 13}
Mar  1 04:45:05 np0005634532 ceph-mon[75825]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000008.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Mar  1 04:45:05 np0005634532 ceph-mon[75825]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772358305313316, "job": 4, "event": "table_file_deletion", "file_number": 8}
Mar  1 04:45:05 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-09:45:05.248664) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
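rocksdb's EVENT_LOG_v1 payloads are single-line JSON objects embedded in the mon log, so flush and compaction activity like the job 3/job 4 sequence above can be summarized mechanically. A minimal sketch, assuming the excerpt has been saved to a hypothetical `mon.log`:

```python
import json
import re

# Sketch: extract rocksdb EVENT_LOG_v1 records (flush_started,
# table_file_creation, compaction_finished, ...) from a ceph-mon log.
# Assumption: the journal excerpt was saved to mon.log (hypothetical path).
EVENT_RE = re.compile(r"EVENT_LOG_v1 (\{.*\})")

with open("mon.log") as f:
    for line in f:
        m = EVENT_RE.search(line)
        if m:
            ev = json.loads(m.group(1))
            print(ev.get("job"), ev.get("event"),
                  ev.get("total_output_size") or ev.get("file_size") or "")
```

For this excerpt that yields the whole story in a few lines: a ~12 MB level-0 flush (job 3), then a manual compaction of three L0 files into a single L6 table (job 4), followed by deletion of the input files.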
Mar  1 04:45:05 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' cmd='[{"prefix": "osd pool set", "pool": ".nfs", "var": "pgp_num_actual", "val": "32"}]': finished
Mar  1 04:45:05 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Mar  1 04:45:05 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Mar  1 04:45:05 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Mar  1 04:45:05 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Mar  1 04:45:05 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e56 e56: 3 total, 3 up, 3 in
Mar  1 04:45:05 np0005634532 ceph-mon[75825]: log_channel(cluster) log [DBG] : osdmap e56: 3 total, 3 up, 3 in
Mar  1 04:45:05 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 56 pg[12.1c( empty local-lis/les=0/0 n=0 ec=54/46 lis/c=54/54 les/c/f=55/55/0 sis=56) [0] r=0 lpr=56 pi=[54,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Mar  1 04:45:05 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 56 pg[10.18( empty local-lis/les=0/0 n=0 ec=52/37 lis/c=52/52 les/c/f=53/53/0 sis=56) [0] r=0 lpr=56 pi=[52,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Mar  1 04:45:05 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 56 pg[10.5( empty local-lis/les=0/0 n=0 ec=52/37 lis/c=52/52 les/c/f=53/53/0 sis=56) [0] r=0 lpr=56 pi=[52,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Mar  1 04:45:05 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 56 pg[10.2( empty local-lis/les=0/0 n=0 ec=52/37 lis/c=52/52 les/c/f=53/53/0 sis=56) [0] r=0 lpr=56 pi=[52,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Mar  1 04:45:05 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 56 pg[12.8( empty local-lis/les=0/0 n=0 ec=54/46 lis/c=54/54 les/c/f=55/55/0 sis=56) [0] r=0 lpr=56 pi=[54,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Mar  1 04:45:05 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 56 pg[12.a( empty local-lis/les=0/0 n=0 ec=54/46 lis/c=54/54 les/c/f=55/55/0 sis=56) [0] r=0 lpr=56 pi=[54,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Mar  1 04:45:05 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 56 pg[12.e( empty local-lis/les=0/0 n=0 ec=54/46 lis/c=54/54 les/c/f=55/55/0 sis=56) [0] r=0 lpr=56 pi=[54,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Mar  1 04:45:05 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 56 pg[10.8( empty local-lis/les=0/0 n=0 ec=52/37 lis/c=52/52 les/c/f=53/53/0 sis=56) [0] r=0 lpr=56 pi=[52,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Mar  1 04:45:05 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 56 pg[12.c( empty local-lis/les=0/0 n=0 ec=54/46 lis/c=54/54 les/c/f=55/55/0 sis=56) [0] r=0 lpr=56 pi=[54,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Mar  1 04:45:05 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 56 pg[12.b( empty local-lis/les=0/0 n=0 ec=54/46 lis/c=54/54 les/c/f=55/55/0 sis=56) [0] r=0 lpr=56 pi=[54,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Mar  1 04:45:05 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 56 pg[12.6( empty local-lis/les=0/0 n=0 ec=54/46 lis/c=54/54 les/c/f=55/55/0 sis=56) [0] r=0 lpr=56 pi=[54,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Mar  1 04:45:05 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 56 pg[10.13( empty local-lis/les=0/0 n=0 ec=52/37 lis/c=52/52 les/c/f=53/53/0 sis=56) [0] r=0 lpr=56 pi=[52,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Mar  1 04:45:05 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 56 pg[10.15( empty local-lis/les=0/0 n=0 ec=52/37 lis/c=52/52 les/c/f=53/53/0 sis=56) [0] r=0 lpr=56 pi=[52,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Mar  1 04:45:05 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 56 pg[10.14( empty local-lis/les=0/0 n=0 ec=52/37 lis/c=52/52 les/c/f=53/53/0 sis=56) [0] r=0 lpr=56 pi=[52,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Mar  1 04:45:05 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 56 pg[12.12( empty local-lis/les=0/0 n=0 ec=54/46 lis/c=54/54 les/c/f=55/55/0 sis=56) [0] r=0 lpr=56 pi=[54,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Mar  1 04:45:05 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 56 pg[12.10( empty local-lis/les=0/0 n=0 ec=54/46 lis/c=54/54 les/c/f=55/55/0 sis=56) [0] r=0 lpr=56 pi=[54,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Mar  1 04:45:05 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 56 pg[8.14( v 34'9 (0'0,34'9] local-lis/les=50/53 n=0 ec=50/33 lis/c=50/50 les/c/f=53/53/0 sis=56 pruub=8.627954483s) [1] r=-1 lpr=56 pi=[50,56)/1 crt=34'9 lcod 0'0 mlcod 0'0 active pruub 173.731384277s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Mar  1 04:45:05 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 56 pg[11.17( empty local-lis/les=54/55 n=0 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=56 pruub=10.826284409s) [2] r=-1 lpr=56 pi=[54,56)/1 crt=0'0 mlcod 0'0 active pruub 175.929733276s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Mar  1 04:45:05 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 56 pg[8.14( v 34'9 (0'0,34'9] local-lis/les=50/53 n=0 ec=50/33 lis/c=50/50 les/c/f=53/53/0 sis=56 pruub=8.627899170s) [1] r=-1 lpr=56 pi=[50,56)/1 crt=34'9 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 173.731384277s@ mbc={}] state<Start>: transitioning to Stray
Mar  1 04:45:05 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 56 pg[11.17( empty local-lis/les=54/55 n=0 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=56 pruub=10.826040268s) [2] r=-1 lpr=56 pi=[54,56)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 175.929733276s@ mbc={}] state<Start>: transitioning to Stray
Mar  1 04:45:05 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 56 pg[10.19( empty local-lis/les=0/0 n=0 ec=52/37 lis/c=52/52 les/c/f=53/53/0 sis=56) [0] r=0 lpr=56 pi=[52,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Mar  1 04:45:05 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 56 pg[8.15( v 34'9 (0'0,34'9] local-lis/les=50/53 n=0 ec=50/33 lis/c=50/50 les/c/f=53/53/0 sis=56 pruub=8.630398750s) [2] r=-1 lpr=56 pi=[50,56)/1 crt=34'9 lcod 0'0 mlcod 0'0 active pruub 173.734573364s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Mar  1 04:45:05 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 56 pg[11.16( empty local-lis/les=54/55 n=0 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=56 pruub=10.825385094s) [2] r=-1 lpr=56 pi=[54,56)/1 crt=0'0 mlcod 0'0 active pruub 175.929733276s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Mar  1 04:45:05 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 56 pg[8.16( v 34'9 (0'0,34'9] local-lis/les=50/53 n=0 ec=50/33 lis/c=50/50 les/c/f=53/53/0 sis=56 pruub=8.630233765s) [2] r=-1 lpr=56 pi=[50,56)/1 crt=34'9 lcod 0'0 mlcod 0'0 active pruub 173.734588623s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Mar  1 04:45:05 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 56 pg[8.15( v 34'9 (0'0,34'9] local-lis/les=50/53 n=0 ec=50/33 lis/c=50/50 les/c/f=53/53/0 sis=56 pruub=8.630367279s) [2] r=-1 lpr=56 pi=[50,56)/1 crt=34'9 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 173.734573364s@ mbc={}] state<Start>: transitioning to Stray
Mar  1 04:45:05 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 56 pg[11.14( empty local-lis/les=54/55 n=0 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=56 pruub=10.825472832s) [1] r=-1 lpr=56 pi=[54,56)/1 crt=0'0 mlcod 0'0 active pruub 175.929885864s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Mar  1 04:45:05 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 56 pg[8.16( v 34'9 (0'0,34'9] local-lis/les=50/53 n=0 ec=50/33 lis/c=50/50 les/c/f=53/53/0 sis=56 pruub=8.630202293s) [2] r=-1 lpr=56 pi=[50,56)/1 crt=34'9 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 173.734588623s@ mbc={}] state<Start>: transitioning to Stray
Mar  1 04:45:05 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 56 pg[11.14( empty local-lis/les=54/55 n=0 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=56 pruub=10.825452805s) [1] r=-1 lpr=56 pi=[54,56)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 175.929885864s@ mbc={}] state<Start>: transitioning to Stray
Mar  1 04:45:05 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 56 pg[11.16( empty local-lis/les=54/55 n=0 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=56 pruub=10.825316429s) [2] r=-1 lpr=56 pi=[54,56)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 175.929733276s@ mbc={}] state<Start>: transitioning to Stray
Mar  1 04:45:05 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 56 pg[8.17( v 34'9 (0'0,34'9] local-lis/les=50/53 n=0 ec=50/33 lis/c=50/50 les/c/f=53/53/0 sis=56 pruub=8.629981041s) [1] r=-1 lpr=56 pi=[50,56)/1 crt=34'9 lcod 0'0 mlcod 0'0 active pruub 173.734588623s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Mar  1 04:45:05 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 56 pg[8.17( v 34'9 (0'0,34'9] local-lis/les=50/53 n=0 ec=50/33 lis/c=50/50 les/c/f=53/53/0 sis=56 pruub=8.629957199s) [1] r=-1 lpr=56 pi=[50,56)/1 crt=34'9 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 173.734588623s@ mbc={}] state<Start>: transitioning to Stray
Mar  1 04:45:05 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 56 pg[12.19( empty local-lis/les=0/0 n=0 ec=54/46 lis/c=54/54 les/c/f=55/55/0 sis=56) [0] r=0 lpr=56 pi=[54,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Mar  1 04:45:05 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 56 pg[10.1b( empty local-lis/les=0/0 n=0 ec=52/37 lis/c=52/52 les/c/f=53/53/0 sis=56) [0] r=0 lpr=56 pi=[52,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Mar  1 04:45:05 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 56 pg[8.10( v 34'9 (0'0,34'9] local-lis/les=50/53 n=0 ec=50/33 lis/c=50/50 les/c/f=53/53/0 sis=56 pruub=8.629146576s) [1] r=-1 lpr=56 pi=[50,56)/1 crt=34'9 lcod 0'0 mlcod 0'0 active pruub 173.734634399s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Mar  1 04:45:05 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 56 pg[11.12( empty local-lis/les=54/55 n=0 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=56 pruub=10.827742577s) [1] r=-1 lpr=56 pi=[54,56)/1 crt=0'0 mlcod 0'0 active pruub 175.933258057s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Mar  1 04:45:05 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 56 pg[11.13( empty local-lis/les=54/55 n=0 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=56 pruub=10.824402809s) [2] r=-1 lpr=56 pi=[54,56)/1 crt=0'0 mlcod 0'0 active pruub 175.929916382s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Mar  1 04:45:05 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 56 pg[11.12( empty local-lis/les=54/55 n=0 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=56 pruub=10.827703476s) [1] r=-1 lpr=56 pi=[54,56)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 175.933258057s@ mbc={}] state<Start>: transitioning to Stray
Mar  1 04:45:05 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 56 pg[11.13( empty local-lis/les=54/55 n=0 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=56 pruub=10.824343681s) [2] r=-1 lpr=56 pi=[54,56)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 175.929916382s@ mbc={}] state<Start>: transitioning to Stray
Mar  1 04:45:05 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 56 pg[8.10( v 34'9 (0'0,34'9] local-lis/les=50/53 n=0 ec=50/33 lis/c=50/50 les/c/f=53/53/0 sis=56 pruub=8.629089355s) [1] r=-1 lpr=56 pi=[50,56)/1 crt=34'9 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 173.734634399s@ mbc={}] state<Start>: transitioning to Stray
Mar  1 04:45:05 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 56 pg[8.3( v 34'9 (0'0,34'9] local-lis/les=50/53 n=1 ec=50/33 lis/c=50/50 les/c/f=53/53/0 sis=56 pruub=8.629370689s) [2] r=-1 lpr=56 pi=[50,56)/1 crt=34'9 lcod 0'0 mlcod 0'0 active pruub 173.735183716s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Mar  1 04:45:05 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 56 pg[8.2( v 34'9 (0'0,34'9] local-lis/les=50/53 n=1 ec=50/33 lis/c=50/50 les/c/f=53/53/0 sis=56 pruub=8.629287720s) [2] r=-1 lpr=56 pi=[50,56)/1 crt=34'9 lcod 0'0 mlcod 0'0 active pruub 173.735122681s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Mar  1 04:45:05 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 56 pg[8.3( v 34'9 (0'0,34'9] local-lis/les=50/53 n=1 ec=50/33 lis/c=50/50 les/c/f=53/53/0 sis=56 pruub=8.629353523s) [2] r=-1 lpr=56 pi=[50,56)/1 crt=34'9 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 173.735183716s@ mbc={}] state<Start>: transitioning to Stray
Mar  1 04:45:05 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 56 pg[11.1( empty local-lis/les=54/55 n=0 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=56 pruub=10.827111244s) [1] r=-1 lpr=56 pi=[54,56)/1 crt=0'0 mlcod 0'0 active pruub 175.933029175s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Mar  1 04:45:05 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 56 pg[8.2( v 34'9 (0'0,34'9] local-lis/les=50/53 n=1 ec=50/33 lis/c=50/50 les/c/f=53/53/0 sis=56 pruub=8.629074097s) [2] r=-1 lpr=56 pi=[50,56)/1 crt=34'9 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 173.735122681s@ mbc={}] state<Start>: transitioning to Stray
Mar  1 04:45:05 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 56 pg[11.1( empty local-lis/les=54/55 n=0 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=56 pruub=10.827010155s) [1] r=-1 lpr=56 pi=[54,56)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 175.933029175s@ mbc={}] state<Start>: transitioning to Stray
Mar  1 04:45:05 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 56 pg[8.8( v 34'9 (0'0,34'9] local-lis/les=50/53 n=0 ec=50/33 lis/c=50/50 les/c/f=53/53/0 sis=56 pruub=8.629046440s) [1] r=-1 lpr=56 pi=[50,56)/1 crt=34'9 lcod 0'0 mlcod 0'0 active pruub 173.735305786s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Mar  1 04:45:05 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 56 pg[8.f( v 34'9 (0'0,34'9] local-lis/les=50/53 n=0 ec=50/33 lis/c=50/50 les/c/f=53/53/0 sis=56 pruub=8.628907204s) [2] r=-1 lpr=56 pi=[50,56)/1 crt=34'9 lcod 0'0 mlcod 0'0 active pruub 173.735198975s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Mar  1 04:45:05 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 56 pg[11.a( empty local-lis/les=54/55 n=0 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=56 pruub=10.826688766s) [2] r=-1 lpr=56 pi=[54,56)/1 crt=0'0 mlcod 0'0 active pruub 175.933029175s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Mar  1 04:45:05 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 56 pg[8.8( v 34'9 (0'0,34'9] local-lis/les=50/53 n=0 ec=50/33 lis/c=50/50 les/c/f=53/53/0 sis=56 pruub=8.629026413s) [1] r=-1 lpr=56 pi=[50,56)/1 crt=34'9 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 173.735305786s@ mbc={}] state<Start>: transitioning to Stray
Mar  1 04:45:05 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 56 pg[11.a( empty local-lis/les=54/55 n=0 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=56 pruub=10.826664925s) [2] r=-1 lpr=56 pi=[54,56)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 175.933029175s@ mbc={}] state<Start>: transitioning to Stray
Mar  1 04:45:05 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 56 pg[8.f( v 34'9 (0'0,34'9] local-lis/les=50/53 n=0 ec=50/33 lis/c=50/50 les/c/f=53/53/0 sis=56 pruub=8.628858566s) [2] r=-1 lpr=56 pi=[50,56)/1 crt=34'9 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 173.735198975s@ mbc={}] state<Start>: transitioning to Stray
Mar  1 04:45:05 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 56 pg[8.9( v 34'9 (0'0,34'9] local-lis/les=50/53 n=0 ec=50/33 lis/c=50/50 les/c/f=53/53/0 sis=56 pruub=8.628715515s) [2] r=-1 lpr=56 pi=[50,56)/1 crt=34'9 lcod 0'0 mlcod 0'0 active pruub 173.735244751s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Mar  1 04:45:05 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 56 pg[8.9( v 34'9 (0'0,34'9] local-lis/les=50/53 n=0 ec=50/33 lis/c=50/50 les/c/f=53/53/0 sis=56 pruub=8.628692627s) [2] r=-1 lpr=56 pi=[50,56)/1 crt=34'9 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 173.735244751s@ mbc={}] state<Start>: transitioning to Stray
Mar  1 04:45:05 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 56 pg[8.11( v 34'9 (0'0,34'9] local-lis/les=50/53 n=0 ec=50/33 lis/c=50/50 les/c/f=53/53/0 sis=56 pruub=8.628725052s) [2] r=-1 lpr=56 pi=[50,56)/1 crt=34'9 lcod 0'0 mlcod 0'0 active pruub 173.735076904s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Mar  1 04:45:05 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 56 pg[8.a( v 34'9 (0'0,34'9] local-lis/les=50/53 n=0 ec=50/33 lis/c=50/50 les/c/f=53/53/0 sis=56 pruub=8.628608704s) [2] r=-1 lpr=56 pi=[50,56)/1 crt=34'9 lcod 0'0 mlcod 0'0 active pruub 173.735351562s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Mar  1 04:45:05 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 56 pg[8.a( v 34'9 (0'0,34'9] local-lis/les=50/53 n=0 ec=50/33 lis/c=50/50 les/c/f=53/53/0 sis=56 pruub=8.628589630s) [2] r=-1 lpr=56 pi=[50,56)/1 crt=34'9 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 173.735351562s@ mbc={}] state<Start>: transitioning to Stray
Mar  1 04:45:05 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 56 pg[8.11( v 34'9 (0'0,34'9] local-lis/les=50/53 n=0 ec=50/33 lis/c=50/50 les/c/f=53/53/0 sis=56 pruub=8.628336906s) [2] r=-1 lpr=56 pi=[50,56)/1 crt=34'9 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 173.735076904s@ mbc={}] state<Start>: transitioning to Stray
Mar  1 04:45:05 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 56 pg[11.e( empty local-lis/les=54/55 n=0 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=56 pruub=10.826231956s) [2] r=-1 lpr=56 pi=[54,56)/1 crt=0'0 mlcod 0'0 active pruub 175.933303833s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Mar  1 04:45:05 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 56 pg[8.d( v 34'9 (0'0,34'9] local-lis/les=50/53 n=0 ec=50/33 lis/c=50/50 les/c/f=53/53/0 sis=56 pruub=8.628209114s) [2] r=-1 lpr=56 pi=[50,56)/1 crt=34'9 lcod 0'0 mlcod 0'0 active pruub 173.735366821s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Mar  1 04:45:05 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 56 pg[8.d( v 34'9 (0'0,34'9] local-lis/les=50/53 n=0 ec=50/33 lis/c=50/50 les/c/f=53/53/0 sis=56 pruub=8.628185272s) [2] r=-1 lpr=56 pi=[50,56)/1 crt=34'9 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 173.735366821s@ mbc={}] state<Start>: transitioning to Stray
Mar  1 04:45:05 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 56 pg[11.e( empty local-lis/les=54/55 n=0 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=56 pruub=10.826198578s) [2] r=-1 lpr=56 pi=[54,56)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 175.933303833s@ mbc={}] state<Start>: transitioning to Stray
Mar  1 04:45:05 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 56 pg[11.f( empty local-lis/les=54/55 n=0 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=56 pruub=10.825983047s) [1] r=-1 lpr=56 pi=[54,56)/1 crt=0'0 mlcod 0'0 active pruub 175.933303833s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Mar  1 04:45:05 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 56 pg[11.f( empty local-lis/les=54/55 n=0 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=56 pruub=10.825955391s) [1] r=-1 lpr=56 pi=[54,56)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 175.933303833s@ mbc={}] state<Start>: transitioning to Stray
Mar  1 04:45:05 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 56 pg[8.b( v 34'9 (0'0,34'9] local-lis/les=50/53 n=0 ec=50/33 lis/c=50/50 les/c/f=53/53/0 sis=56 pruub=8.627774239s) [2] r=-1 lpr=56 pi=[50,56)/1 crt=34'9 lcod 0'0 mlcod 0'0 active pruub 173.735427856s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Mar  1 04:45:05 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 56 pg[8.c( v 34'9 (0'0,34'9] local-lis/les=50/53 n=0 ec=50/33 lis/c=50/50 les/c/f=53/53/0 sis=56 pruub=8.627844810s) [2] r=-1 lpr=56 pi=[50,56)/1 crt=34'9 lcod 0'0 mlcod 0'0 active pruub 173.735366821s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Mar  1 04:45:05 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 56 pg[8.b( v 34'9 (0'0,34'9] local-lis/les=50/53 n=0 ec=50/33 lis/c=50/50 les/c/f=53/53/0 sis=56 pruub=8.627735138s) [2] r=-1 lpr=56 pi=[50,56)/1 crt=34'9 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 173.735427856s@ mbc={}] state<Start>: transitioning to Stray
Mar  1 04:45:05 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 56 pg[11.8( empty local-lis/les=54/55 n=0 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=56 pruub=10.825737953s) [2] r=-1 lpr=56 pi=[54,56)/1 crt=0'0 mlcod 0'0 active pruub 175.933456421s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Mar  1 04:45:05 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 56 pg[11.8( empty local-lis/les=54/55 n=0 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=56 pruub=10.825712204s) [2] r=-1 lpr=56 pi=[54,56)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 175.933456421s@ mbc={}] state<Start>: transitioning to Stray
Mar  1 04:45:05 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 56 pg[8.c( v 34'9 (0'0,34'9] local-lis/les=50/53 n=0 ec=50/33 lis/c=50/50 les/c/f=53/53/0 sis=56 pruub=8.627661705s) [2] r=-1 lpr=56 pi=[50,56)/1 crt=34'9 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 173.735366821s@ mbc={}] state<Start>: transitioning to Stray
Mar  1 04:45:05 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 56 pg[11.4( empty local-lis/les=54/55 n=0 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=56 pruub=10.825245857s) [1] r=-1 lpr=56 pi=[54,56)/1 crt=0'0 mlcod 0'0 active pruub 175.933364868s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Mar  1 04:45:05 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 56 pg[11.4( empty local-lis/les=54/55 n=0 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=56 pruub=10.825209618s) [1] r=-1 lpr=56 pi=[54,56)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 175.933364868s@ mbc={}] state<Start>: transitioning to Stray
Mar  1 04:45:05 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 56 pg[11.3( empty local-lis/les=54/55 n=0 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=56 pruub=10.825156212s) [2] r=-1 lpr=56 pi=[54,56)/1 crt=0'0 mlcod 0'0 active pruub 175.933319092s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Mar  1 04:45:05 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 56 pg[11.3( empty local-lis/les=54/55 n=0 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=56 pruub=10.825122833s) [2] r=-1 lpr=56 pi=[54,56)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 175.933319092s@ mbc={}] state<Start>: transitioning to Stray
Mar  1 04:45:05 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 56 pg[8.6( v 34'9 (0'0,34'9] local-lis/les=50/53 n=1 ec=50/33 lis/c=50/50 les/c/f=53/53/0 sis=56 pruub=8.631031036s) [2] r=-1 lpr=56 pi=[50,56)/1 crt=34'9 lcod 0'0 mlcod 0'0 active pruub 173.739562988s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Mar  1 04:45:05 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 56 pg[11.7( empty local-lis/les=54/55 n=0 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=56 pruub=10.824856758s) [1] r=-1 lpr=56 pi=[54,56)/1 crt=0'0 mlcod 0'0 active pruub 175.933380127s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Mar  1 04:45:05 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 56 pg[11.7( empty local-lis/les=54/55 n=0 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=56 pruub=10.824819565s) [1] r=-1 lpr=56 pi=[54,56)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 175.933380127s@ mbc={}] state<Start>: transitioning to Stray
Mar  1 04:45:05 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 56 pg[8.6( v 34'9 (0'0,34'9] local-lis/les=50/53 n=1 ec=50/33 lis/c=50/50 les/c/f=53/53/0 sis=56 pruub=8.631002426s) [2] r=-1 lpr=56 pi=[50,56)/1 crt=34'9 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 173.739562988s@ mbc={}] state<Start>: transitioning to Stray
Mar  1 04:45:05 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 56 pg[8.5( v 34'9 (0'0,34'9] local-lis/les=50/53 n=1 ec=50/33 lis/c=50/50 les/c/f=53/53/0 sis=56 pruub=8.626929283s) [2] r=-1 lpr=56 pi=[50,56)/1 crt=34'9 lcod 0'0 mlcod 0'0 active pruub 173.735549927s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Mar  1 04:45:05 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 56 pg[11.5( empty local-lis/les=54/55 n=0 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=56 pruub=10.824688911s) [1] r=-1 lpr=56 pi=[54,56)/1 crt=0'0 mlcod 0'0 active pruub 175.933364868s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Mar  1 04:45:05 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 56 pg[8.5( v 34'9 (0'0,34'9] local-lis/les=50/53 n=1 ec=50/33 lis/c=50/50 les/c/f=53/53/0 sis=56 pruub=8.626879692s) [2] r=-1 lpr=56 pi=[50,56)/1 crt=34'9 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 173.735549927s@ mbc={}] state<Start>: transitioning to Stray
Mar  1 04:45:05 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 56 pg[8.1b( v 34'9 (0'0,34'9] local-lis/les=50/53 n=0 ec=50/33 lis/c=50/50 les/c/f=53/53/0 sis=56 pruub=8.626826286s) [1] r=-1 lpr=56 pi=[50,56)/1 crt=34'9 lcod 0'0 mlcod 0'0 active pruub 173.735641479s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Mar  1 04:45:05 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 56 pg[8.1b( v 34'9 (0'0,34'9] local-lis/les=50/53 n=0 ec=50/33 lis/c=50/50 les/c/f=53/53/0 sis=56 pruub=8.626770020s) [1] r=-1 lpr=56 pi=[50,56)/1 crt=34'9 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 173.735641479s@ mbc={}] state<Start>: transitioning to Stray
Mar  1 04:45:05 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 56 pg[11.19( empty local-lis/les=54/55 n=0 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=56 pruub=10.824412346s) [2] r=-1 lpr=56 pi=[54,56)/1 crt=0'0 mlcod 0'0 active pruub 175.933425903s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Mar  1 04:45:05 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 56 pg[11.19( empty local-lis/les=54/55 n=0 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=56 pruub=10.824386597s) [2] r=-1 lpr=56 pi=[54,56)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 175.933425903s@ mbc={}] state<Start>: transitioning to Stray
Mar  1 04:45:05 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 56 pg[11.5( empty local-lis/les=54/55 n=0 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=56 pruub=10.824495316s) [1] r=-1 lpr=56 pi=[54,56)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 175.933364868s@ mbc={}] state<Start>: transitioning to Stray
Mar  1 04:45:05 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 56 pg[8.19( v 34'9 (0'0,34'9] local-lis/les=50/53 n=0 ec=50/33 lis/c=50/50 les/c/f=53/53/0 sis=56 pruub=8.626531601s) [1] r=-1 lpr=56 pi=[50,56)/1 crt=34'9 lcod 0'0 mlcod 0'0 active pruub 173.735733032s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Mar  1 04:45:05 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 56 pg[8.19( v 34'9 (0'0,34'9] local-lis/les=50/53 n=0 ec=50/33 lis/c=50/50 les/c/f=53/53/0 sis=56 pruub=8.626516342s) [1] r=-1 lpr=56 pi=[50,56)/1 crt=34'9 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 173.735733032s@ mbc={}] state<Start>: transitioning to Stray
Mar  1 04:45:05 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 56 pg[8.4( v 34'9 (0'0,34'9] local-lis/les=50/53 n=1 ec=50/33 lis/c=50/50 les/c/f=53/53/0 sis=56 pruub=8.626433372s) [1] r=-1 lpr=56 pi=[50,56)/1 crt=34'9 lcod 0'0 mlcod 0'0 active pruub 173.735565186s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Mar  1 04:45:05 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 56 pg[11.1a( empty local-lis/les=54/55 n=0 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=56 pruub=10.824025154s) [1] r=-1 lpr=56 pi=[54,56)/1 crt=0'0 mlcod 0'0 active pruub 175.933471680s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Mar  1 04:45:05 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 56 pg[11.1a( empty local-lis/les=54/55 n=0 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=56 pruub=10.823973656s) [1] r=-1 lpr=56 pi=[54,56)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 175.933471680s@ mbc={}] state<Start>: transitioning to Stray
Mar  1 04:45:05 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 56 pg[11.1c( empty local-lis/les=54/55 n=0 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=56 pruub=10.823846817s) [1] r=-1 lpr=56 pi=[54,56)/1 crt=0'0 mlcod 0'0 active pruub 175.933471680s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Mar  1 04:45:05 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 56 pg[11.1c( empty local-lis/les=54/55 n=0 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=56 pruub=10.823813438s) [1] r=-1 lpr=56 pi=[54,56)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 175.933471680s@ mbc={}] state<Start>: transitioning to Stray
Mar  1 04:45:05 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 56 pg[11.1b( empty local-lis/les=54/55 n=0 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=56 pruub=10.823754311s) [1] r=-1 lpr=56 pi=[54,56)/1 crt=0'0 mlcod 0'0 active pruub 175.933425903s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Mar  1 04:45:05 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 56 pg[11.1b( empty local-lis/les=54/55 n=0 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=56 pruub=10.823726654s) [1] r=-1 lpr=56 pi=[54,56)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 175.933425903s@ mbc={}] state<Start>: transitioning to Stray
Mar  1 04:45:05 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 56 pg[11.1d( empty local-lis/les=54/55 n=0 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=56 pruub=10.823702812s) [1] r=-1 lpr=56 pi=[54,56)/1 crt=0'0 mlcod 0'0 active pruub 175.933532715s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Mar  1 04:45:05 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 56 pg[8.1f( v 34'9 (0'0,34'9] local-lis/les=50/53 n=0 ec=50/33 lis/c=50/50 les/c/f=53/53/0 sis=56 pruub=8.629687309s) [2] r=-1 lpr=56 pi=[50,56)/1 crt=34'9 lcod 0'0 mlcod 0'0 active pruub 173.739517212s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Mar  1 04:45:05 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 56 pg[11.1d( empty local-lis/les=54/55 n=0 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=56 pruub=10.823679924s) [1] r=-1 lpr=56 pi=[54,56)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 175.933532715s@ mbc={}] state<Start>: transitioning to Stray
Mar  1 04:45:05 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 56 pg[8.1f( v 34'9 (0'0,34'9] local-lis/les=50/53 n=0 ec=50/33 lis/c=50/50 les/c/f=53/53/0 sis=56 pruub=8.629639626s) [2] r=-1 lpr=56 pi=[50,56)/1 crt=34'9 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 173.739517212s@ mbc={}] state<Start>: transitioning to Stray
Mar  1 04:45:05 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 56 pg[11.1e( empty local-lis/les=54/55 n=0 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=56 pruub=10.823373795s) [1] r=-1 lpr=56 pi=[54,56)/1 crt=0'0 mlcod 0'0 active pruub 175.933471680s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Mar  1 04:45:05 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 56 pg[8.4( v 34'9 (0'0,34'9] local-lis/les=50/53 n=1 ec=50/33 lis/c=50/50 les/c/f=53/53/0 sis=56 pruub=8.625717163s) [1] r=-1 lpr=56 pi=[50,56)/1 crt=34'9 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 173.735565186s@ mbc={}] state<Start>: transitioning to Stray
Mar  1 04:45:05 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 56 pg[11.1e( empty local-lis/les=54/55 n=0 ec=54/39 lis/c=54/54 les/c/f=55/55/0 sis=56 pruub=10.823348999s) [1] r=-1 lpr=56 pi=[54,56)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 175.933471680s@ mbc={}] state<Start>: transitioning to Stray
Mar  1 04:45:05 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 56 pg[8.1c( v 34'9 (0'0,34'9] local-lis/les=50/53 n=0 ec=50/33 lis/c=50/50 les/c/f=53/53/0 sis=56 pruub=8.629853249s) [2] r=-1 lpr=56 pi=[50,56)/1 crt=34'9 lcod 0'0 mlcod 0'0 active pruub 173.740112305s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Mar  1 04:45:05 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 56 pg[8.1c( v 34'9 (0'0,34'9] local-lis/les=50/53 n=0 ec=50/33 lis/c=50/50 les/c/f=53/53/0 sis=56 pruub=8.629831314s) [2] r=-1 lpr=56 pi=[50,56)/1 crt=34'9 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 173.740112305s@ mbc={}] state<Start>: transitioning to Stray
Mar  1 04:45:05 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 56 pg[8.18( v 34'9 (0'0,34'9] local-lis/les=50/53 n=0 ec=50/33 lis/c=50/50 les/c/f=53/53/0 sis=56 pruub=8.629120827s) [1] r=-1 lpr=56 pi=[50,56)/1 crt=34'9 lcod 0'0 mlcod 0'0 active pruub 173.739486694s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Mar  1 04:45:05 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 56 pg[8.12( v 34'9 (0'0,34'9] local-lis/les=50/53 n=0 ec=50/33 lis/c=50/50 les/c/f=53/53/0 sis=56 pruub=8.629973412s) [1] r=-1 lpr=56 pi=[50,56)/1 crt=34'9 lcod 0'0 mlcod 0'0 active pruub 173.740478516s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Mar  1 04:45:05 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 56 pg[8.12( v 34'9 (0'0,34'9] local-lis/les=50/53 n=0 ec=50/33 lis/c=50/50 les/c/f=53/53/0 sis=56 pruub=8.629956245s) [1] r=-1 lpr=56 pi=[50,56)/1 crt=34'9 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 173.740478516s@ mbc={}] state<Start>: transitioning to Stray
Mar  1 04:45:05 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 56 pg[8.18( v 34'9 (0'0,34'9] local-lis/les=50/53 n=0 ec=50/33 lis/c=50/50 les/c/f=53/53/0 sis=56 pruub=8.629043579s) [1] r=-1 lpr=56 pi=[50,56)/1 crt=34'9 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 173.739486694s@ mbc={}] state<Start>: transitioning to Stray
Mar  1 04:45:05 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Mar  1 04:45:05 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' 
Mar  1 04:45:05 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Mar  1 04:45:05 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' 
Mar  1 04:45:05 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.rgw.default}] v 0)
Mar  1 04:45:05 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' 
Mar  1 04:45:05 np0005634532 ceph-mgr[76134]: [progress INFO root] complete: finished ev 664d5842-c7f8-4a7a-aadf-0426bc25da5d (Updating ingress.rgw.default deployment (+4 -> 4))
Mar  1 04:45:05 np0005634532 ceph-mgr[76134]: [progress INFO root] Completed event 664d5842-c7f8-4a7a-aadf-0426bc25da5d (Updating ingress.rgw.default deployment (+4 -> 4)) in 7 seconds
Mar  1 04:45:05 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.ingress.rgw.default}] v 0)
Mar  1 04:45:05 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' 
Mar  1 04:45:05 np0005634532 ceph-mgr[76134]: [progress INFO root] update: starting ev 8662c686-7791-4551-82a5-947d8e12317d (Updating prometheus deployment (+1 -> 1))
Mar  1 04:45:05 np0005634532 ceph-osd[84309]: log_channel(cluster) log [DBG] : 11.15 scrub starts
Mar  1 04:45:05 np0005634532 ceph-osd[84309]: log_channel(cluster) log [DBG] : 11.15 scrub ok
Mar  1 04:45:05 np0005634532 ceph-mgr[76134]: [cephadm INFO cephadm.serve] Deploying daemon prometheus.compute-0 on compute-0
Mar  1 04:45:05 np0005634532 ceph-mgr[76134]: log_channel(cephadm) log [INF] : Deploying daemon prometheus.compute-0 on compute-0
Mar  1 04:45:05 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:45:05 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:45:05 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:45:05.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:45:06 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:45:06 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:45:06 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:45:06.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:45:06 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e56 do_prune osdmap full prune enabled
Mar  1 04:45:06 np0005634532 ceph-mon[75825]: from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' cmd='[{"prefix": "osd pool set", "pool": ".nfs", "var": "pgp_num_actual", "val": "32"}]': finished
Mar  1 04:45:06 np0005634532 ceph-mon[75825]: from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Mar  1 04:45:06 np0005634532 ceph-mon[75825]: from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Mar  1 04:45:06 np0005634532 ceph-mon[75825]: from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Mar  1 04:45:06 np0005634532 ceph-mon[75825]: from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Mar  1 04:45:06 np0005634532 ceph-mon[75825]: from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' 
Mar  1 04:45:06 np0005634532 ceph-mon[75825]: from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' 
Mar  1 04:45:06 np0005634532 ceph-mon[75825]: from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' 
Mar  1 04:45:06 np0005634532 ceph-mon[75825]: from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' 
Mar  1 04:45:06 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v52: 353 pgs: 353 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Mar  1 04:45:06 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"} v 0)
Mar  1 04:45:06 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]: dispatch
Mar  1 04:45:06 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e57 e57: 3 total, 3 up, 3 in
Mar  1 04:45:06 np0005634532 ceph-mon[75825]: log_channel(cluster) log [DBG] : osdmap e57: 3 total, 3 up, 3 in
Mar  1 04:45:06 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 57 pg[12.10( v 55'65 lc 48'46 (0'0,55'65] local-lis/les=56/57 n=0 ec=54/46 lis/c=54/54 les/c/f=55/55/0 sis=56) [0] r=0 lpr=56 pi=[54,56)/1 crt=55'65 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Mar  1 04:45:06 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 57 pg[12.12( v 48'63 (0'0,48'63] local-lis/les=56/57 n=0 ec=54/46 lis/c=54/54 les/c/f=55/55/0 sis=56) [0] r=0 lpr=56 pi=[54,56)/1 crt=48'63 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Mar  1 04:45:06 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 57 pg[10.15( v 55'57 lc 55'56 (0'0,55'57] local-lis/les=56/57 n=0 ec=52/37 lis/c=52/52 les/c/f=53/53/0 sis=56) [0] r=0 lpr=56 pi=[52,56)/1 crt=55'57 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Mar  1 04:45:06 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 57 pg[10.14( v 55'57 lc 55'56 (0'0,55'57] local-lis/les=56/57 n=0 ec=52/37 lis/c=52/52 les/c/f=53/53/0 sis=56) [0] r=0 lpr=56 pi=[52,56)/1 crt=55'57 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Mar  1 04:45:06 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 57 pg[10.13( v 38'48 (0'0,38'48] local-lis/les=56/57 n=0 ec=52/37 lis/c=52/52 les/c/f=53/53/0 sis=56) [0] r=0 lpr=56 pi=[52,56)/1 crt=38'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Mar  1 04:45:06 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 57 pg[12.6( v 48'63 lc 48'44 (0'0,48'63] local-lis/les=56/57 n=0 ec=54/46 lis/c=54/54 les/c/f=55/55/0 sis=56) [0] r=0 lpr=56 pi=[54,56)/1 crt=48'63 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Mar  1 04:45:06 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 57 pg[12.b( v 48'63 (0'0,48'63] local-lis/les=56/57 n=0 ec=54/46 lis/c=54/54 les/c/f=55/55/0 sis=56) [0] r=0 lpr=56 pi=[54,56)/1 crt=48'63 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Mar  1 04:45:06 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 57 pg[12.e( v 48'63 (0'0,48'63] local-lis/les=56/57 n=0 ec=54/46 lis/c=54/54 les/c/f=55/55/0 sis=56) [0] r=0 lpr=56 pi=[54,56)/1 crt=48'63 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Mar  1 04:45:06 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 57 pg[10.8( v 38'48 (0'0,38'48] local-lis/les=56/57 n=1 ec=52/37 lis/c=52/52 les/c/f=53/53/0 sis=56) [0] r=0 lpr=56 pi=[52,56)/1 crt=38'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Mar  1 04:45:06 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 57 pg[12.8( v 48'63 (0'0,48'63] local-lis/les=56/57 n=0 ec=54/46 lis/c=54/54 les/c/f=55/55/0 sis=56) [0] r=0 lpr=56 pi=[54,56)/1 crt=48'63 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Mar  1 04:45:06 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 57 pg[12.a( v 48'63 lc 0'0 (0'0,48'63] local-lis/les=56/57 n=0 ec=54/46 lis/c=54/54 les/c/f=55/55/0 sis=56) [0] r=0 lpr=56 pi=[54,56)/1 crt=48'63 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Mar  1 04:45:06 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 57 pg[10.5( v 38'48 (0'0,38'48] local-lis/les=56/57 n=1 ec=52/37 lis/c=52/52 les/c/f=53/53/0 sis=56) [0] r=0 lpr=56 pi=[52,56)/1 crt=38'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Mar  1 04:45:06 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 57 pg[10.2( v 38'48 (0'0,38'48] local-lis/les=56/57 n=1 ec=52/37 lis/c=52/52 les/c/f=53/53/0 sis=56) [0] r=0 lpr=56 pi=[52,56)/1 crt=38'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Mar  1 04:45:06 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 57 pg[10.19( v 38'48 (0'0,38'48] local-lis/les=56/57 n=0 ec=52/37 lis/c=52/52 les/c/f=53/53/0 sis=56) [0] r=0 lpr=56 pi=[52,56)/1 crt=38'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Mar  1 04:45:06 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 57 pg[10.1b( v 38'48 (0'0,38'48] local-lis/les=56/57 n=0 ec=52/37 lis/c=52/52 les/c/f=53/53/0 sis=56) [0] r=0 lpr=56 pi=[52,56)/1 crt=38'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Mar  1 04:45:06 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 57 pg[12.19( v 48'63 (0'0,48'63] local-lis/les=56/57 n=0 ec=54/46 lis/c=54/54 les/c/f=55/55/0 sis=56) [0] r=0 lpr=56 pi=[54,56)/1 crt=48'63 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Mar  1 04:45:06 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 57 pg[10.18( v 38'48 (0'0,38'48] local-lis/les=56/57 n=0 ec=52/37 lis/c=52/52 les/c/f=53/53/0 sis=56) [0] r=0 lpr=56 pi=[52,56)/1 crt=38'48 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Mar  1 04:45:06 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 57 pg[12.1c( v 48'63 (0'0,48'63] local-lis/les=56/57 n=0 ec=54/46 lis/c=54/54 les/c/f=55/55/0 sis=56) [0] r=0 lpr=56 pi=[54,56)/1 crt=48'63 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Mar  1 04:45:06 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 57 pg[12.c( v 48'63 (0'0,48'63] local-lis/les=56/57 n=0 ec=54/46 lis/c=54/54 les/c/f=55/55/0 sis=56) [0] r=0 lpr=56 pi=[54,56)/1 crt=48'63 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Mar  1 04:45:06 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[98708]: 01/03/2026 09:45:06 : epoch 69a40a75 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f451c002520 fd 47 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:45:06 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[98708]: 01/03/2026 09:45:06 : epoch 69a40a75 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f451c002520 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:45:07 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[98708]: 01/03/2026 09:45:07 : epoch 69a40a75 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f450c0035d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:45:07 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-keepalived-rgw-default-compute-0-kwkquz[101453]: Sun Mar  1 09:45:07 2026: (VI_0) Entering MASTER STATE
Mar  1 04:45:07 np0005634532 ceph-mon[75825]: Deploying daemon prometheus.compute-0 on compute-0
Mar  1 04:45:07 np0005634532 ceph-mon[75825]: from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]: dispatch
Mar  1 04:45:07 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e57 do_prune osdmap full prune enabled
Mar  1 04:45:07 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]': finished
Mar  1 04:45:07 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e58 e58: 3 total, 3 up, 3 in
Mar  1 04:45:07 np0005634532 ceph-mon[75825]: log_channel(cluster) log [DBG] : osdmap e58: 3 total, 3 up, 3 in
Mar  1 04:45:07 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-haproxy-nfs-cephfs-compute-0-wdbjdw[99156]: [WARNING] 059/094507 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Mar  1 04:45:07 np0005634532 ceph-osd[84309]: log_channel(cluster) log [DBG] : 10.15 scrub starts
Mar  1 04:45:07 np0005634532 ceph-osd[84309]: log_channel(cluster) log [DBG] : 10.15 scrub ok
Mar  1 04:45:07 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:45:07 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:45:07 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:45:07.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:45:08 np0005634532 ceph-mgr[76134]: [progress INFO root] Writing back 23 completed events
Mar  1 04:45:08 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0)
Mar  1 04:45:08 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' 
Mar  1 04:45:08 np0005634532 ceph-mgr[76134]: [progress INFO root] Completed event 27abc68a-3b68-4975-8a82-e016d5ebdff6 (Global Recovery Event) in 10 seconds
Mar  1 04:45:08 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:45:08 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 04:45:08 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:45:08.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 04:45:08 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v55: 353 pgs: 353 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 0 objects/s recovering
Mar  1 04:45:08 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"} v 0)
Mar  1 04:45:08 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]: dispatch
Mar  1 04:45:08 np0005634532 ceph-mon[75825]: from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]': finished
Mar  1 04:45:08 np0005634532 ceph-mon[75825]: from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' 
Mar  1 04:45:08 np0005634532 ceph-mon[75825]: from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]: dispatch
Mar  1 04:45:08 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[98708]: 01/03/2026 09:45:08 : epoch 69a40a75 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4504002f00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:45:08 np0005634532 ceph-osd[84309]: log_channel(cluster) log [DBG] : 10.14 deep-scrub starts
Mar  1 04:45:08 np0005634532 ceph-osd[84309]: log_channel(cluster) log [DBG] : 10.14 deep-scrub ok
Mar  1 04:45:08 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[98708]: 01/03/2026 09:45:08 : epoch 69a40a75 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4500003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:45:09 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e58 do_prune osdmap full prune enabled
Mar  1 04:45:09 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]': finished
Mar  1 04:45:09 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e59 e59: 3 total, 3 up, 3 in
Mar  1 04:45:09 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[98708]: 01/03/2026 09:45:09 : epoch 69a40a75 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f451c002520 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:45:09 np0005634532 ceph-mon[75825]: log_channel(cluster) log [DBG] : osdmap e59: 3 total, 3 up, 3 in
Mar  1 04:45:09 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 59 pg[9.17( v 42'1010 (0'0,42'1010] local-lis/les=52/53 n=5 ec=52/35 lis/c=52/52 les/c/f=53/53/0 sis=59 pruub=12.907884598s) [2] r=-1 lpr=59 pi=[52,59)/1 crt=42'1010 lcod 0'0 mlcod 0'0 active pruub 181.735549927s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Mar  1 04:45:09 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 59 pg[9.17( v 42'1010 (0'0,42'1010] local-lis/les=52/53 n=5 ec=52/35 lis/c=52/52 les/c/f=53/53/0 sis=59 pruub=12.907839775s) [2] r=-1 lpr=59 pi=[52,59)/1 crt=42'1010 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 181.735549927s@ mbc={}] state<Start>: transitioning to Stray
Mar  1 04:45:09 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 59 pg[9.3( v 42'1010 (0'0,42'1010] local-lis/les=52/53 n=6 ec=52/35 lis/c=52/52 les/c/f=53/53/0 sis=59 pruub=12.907202721s) [2] r=-1 lpr=59 pi=[52,59)/1 crt=42'1010 lcod 0'0 mlcod 0'0 active pruub 181.735351562s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Mar  1 04:45:09 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 59 pg[9.3( v 42'1010 (0'0,42'1010] local-lis/les=52/53 n=6 ec=52/35 lis/c=52/52 les/c/f=53/53/0 sis=59 pruub=12.907176018s) [2] r=-1 lpr=59 pi=[52,59)/1 crt=42'1010 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 181.735351562s@ mbc={}] state<Start>: transitioning to Stray
Mar  1 04:45:09 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 59 pg[9.f( v 42'1010 (0'0,42'1010] local-lis/les=52/53 n=6 ec=52/35 lis/c=52/52 les/c/f=53/53/0 sis=59 pruub=12.907209396s) [2] r=-1 lpr=59 pi=[52,59)/1 crt=42'1010 lcod 0'0 mlcod 0'0 active pruub 181.735687256s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Mar  1 04:45:09 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 59 pg[9.f( v 42'1010 (0'0,42'1010] local-lis/les=52/53 n=6 ec=52/35 lis/c=52/52 les/c/f=53/53/0 sis=59 pruub=12.907191277s) [2] r=-1 lpr=59 pi=[52,59)/1 crt=42'1010 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 181.735687256s@ mbc={}] state<Start>: transitioning to Stray
Mar  1 04:45:09 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 59 pg[9.b( v 42'1010 (0'0,42'1010] local-lis/les=52/53 n=6 ec=52/35 lis/c=52/52 les/c/f=53/53/0 sis=59 pruub=12.906846046s) [2] r=-1 lpr=59 pi=[52,59)/1 crt=42'1010 lcod 0'0 mlcod 0'0 active pruub 181.735534668s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Mar  1 04:45:09 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 59 pg[9.b( v 42'1010 (0'0,42'1010] local-lis/les=52/53 n=6 ec=52/35 lis/c=52/52 les/c/f=53/53/0 sis=59 pruub=12.906771660s) [2] r=-1 lpr=59 pi=[52,59)/1 crt=42'1010 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 181.735534668s@ mbc={}] state<Start>: transitioning to Stray
Mar  1 04:45:09 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 59 pg[9.7( v 42'1010 (0'0,42'1010] local-lis/les=52/53 n=6 ec=52/35 lis/c=52/52 les/c/f=53/53/0 sis=59 pruub=12.906535149s) [2] r=-1 lpr=59 pi=[52,59)/1 crt=42'1010 lcod 0'0 mlcod 0'0 active pruub 181.735763550s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Mar  1 04:45:09 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 59 pg[9.7( v 42'1010 (0'0,42'1010] local-lis/les=52/53 n=6 ec=52/35 lis/c=52/52 les/c/f=53/53/0 sis=59 pruub=12.906455994s) [2] r=-1 lpr=59 pi=[52,59)/1 crt=42'1010 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 181.735763550s@ mbc={}] state<Start>: transitioning to Stray
Mar  1 04:45:09 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 59 pg[9.1b( v 42'1010 (0'0,42'1010] local-lis/les=52/53 n=5 ec=52/35 lis/c=52/52 les/c/f=53/53/0 sis=59 pruub=12.910299301s) [2] r=-1 lpr=59 pi=[52,59)/1 crt=42'1010 lcod 0'0 mlcod 0'0 active pruub 181.739624023s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Mar  1 04:45:09 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 59 pg[9.1b( v 42'1010 (0'0,42'1010] local-lis/les=52/53 n=5 ec=52/35 lis/c=52/52 les/c/f=53/53/0 sis=59 pruub=12.910172462s) [2] r=-1 lpr=59 pi=[52,59)/1 crt=42'1010 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 181.739624023s@ mbc={}] state<Start>: transitioning to Stray
Mar  1 04:45:09 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 59 pg[9.13( v 42'1010 (0'0,42'1010] local-lis/les=52/53 n=5 ec=52/35 lis/c=52/52 les/c/f=53/53/0 sis=59 pruub=12.911145210s) [2] r=-1 lpr=59 pi=[52,59)/1 crt=42'1010 lcod 0'0 mlcod 0'0 active pruub 181.740875244s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Mar  1 04:45:09 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 59 pg[9.13( v 42'1010 (0'0,42'1010] local-lis/les=52/53 n=5 ec=52/35 lis/c=52/52 les/c/f=53/53/0 sis=59 pruub=12.911129951s) [2] r=-1 lpr=59 pi=[52,59)/1 crt=42'1010 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 181.740875244s@ mbc={}] state<Start>: transitioning to Stray
Mar  1 04:45:09 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 59 pg[9.1f( v 42'1010 (0'0,42'1010] local-lis/les=52/53 n=5 ec=52/35 lis/c=52/52 les/c/f=53/53/0 sis=59 pruub=12.911171913s) [2] r=-1 lpr=59 pi=[52,59)/1 crt=42'1010 lcod 0'0 mlcod 0'0 active pruub 181.740844727s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Mar  1 04:45:09 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 59 pg[9.1f( v 42'1010 (0'0,42'1010] local-lis/les=52/53 n=5 ec=52/35 lis/c=52/52 les/c/f=53/53/0 sis=59 pruub=12.910976410s) [2] r=-1 lpr=59 pi=[52,59)/1 crt=42'1010 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 181.740844727s@ mbc={}] state<Start>: transitioning to Stray
Mar  1 04:45:09 np0005634532 podman[101560]: 2026-03-01 09:45:09.106464376 +0000 UTC m=+2.913391902 volume create a2cd96b4c401b7379812e5835c9866ea1eb7dd1d28c74bbce22f9b61329707e0
Mar  1 04:45:09 np0005634532 podman[101560]: 2026-03-01 09:45:09.113545204 +0000 UTC m=+2.920472730 container create b212f37eead754fe9ee6971f09f123e62054719def95e77c34ccab825ead403b (image=quay.io/prometheus/prometheus:v2.51.0, name=determined_vaughan, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Mar  1 04:45:09 np0005634532 podman[101560]: 2026-03-01 09:45:09.093794987 +0000 UTC m=+2.900722533 image pull 1d3b7f56885b6dd623f1785be963aa9c195f86bc256ea454e8d02a7980b79c53 quay.io/prometheus/prometheus:v2.51.0
Mar  1 04:45:09 np0005634532 systemd[1]: Started libpod-conmon-b212f37eead754fe9ee6971f09f123e62054719def95e77c34ccab825ead403b.scope.
Mar  1 04:45:09 np0005634532 systemd[1]: Started libcrun container.
Mar  1 04:45:09 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0f568e9b0161a74af79091c5314a3f9df0186ca5dc8683eb2293d0cde16cd071/merged/prometheus supports timestamps until 2038 (0x7fffffff)
Mar  1 04:45:09 np0005634532 podman[101560]: 2026-03-01 09:45:09.193438395 +0000 UTC m=+3.000365981 container init b212f37eead754fe9ee6971f09f123e62054719def95e77c34ccab825ead403b (image=quay.io/prometheus/prometheus:v2.51.0, name=determined_vaughan, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Mar  1 04:45:09 np0005634532 podman[101560]: 2026-03-01 09:45:09.202471253 +0000 UTC m=+3.009398819 container start b212f37eead754fe9ee6971f09f123e62054719def95e77c34ccab825ead403b (image=quay.io/prometheus/prometheus:v2.51.0, name=determined_vaughan, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Mar  1 04:45:09 np0005634532 determined_vaughan[101813]: 65534 65534
Mar  1 04:45:09 np0005634532 systemd[1]: libpod-b212f37eead754fe9ee6971f09f123e62054719def95e77c34ccab825ead403b.scope: Deactivated successfully.
Mar  1 04:45:09 np0005634532 podman[101560]: 2026-03-01 09:45:09.20592318 +0000 UTC m=+3.012850736 container attach b212f37eead754fe9ee6971f09f123e62054719def95e77c34ccab825ead403b (image=quay.io/prometheus/prometheus:v2.51.0, name=determined_vaughan, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Mar  1 04:45:09 np0005634532 podman[101560]: 2026-03-01 09:45:09.206420682 +0000 UTC m=+3.013348198 container died b212f37eead754fe9ee6971f09f123e62054719def95e77c34ccab825ead403b (image=quay.io/prometheus/prometheus:v2.51.0, name=determined_vaughan, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Mar  1 04:45:09 np0005634532 systemd[1]: var-lib-containers-storage-overlay-0f568e9b0161a74af79091c5314a3f9df0186ca5dc8683eb2293d0cde16cd071-merged.mount: Deactivated successfully.
Mar  1 04:45:09 np0005634532 podman[101560]: 2026-03-01 09:45:09.234586841 +0000 UTC m=+3.041514367 container remove b212f37eead754fe9ee6971f09f123e62054719def95e77c34ccab825ead403b (image=quay.io/prometheus/prometheus:v2.51.0, name=determined_vaughan, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Mar  1 04:45:09 np0005634532 podman[101560]: 2026-03-01 09:45:09.238647513 +0000 UTC m=+3.045575039 volume remove a2cd96b4c401b7379812e5835c9866ea1eb7dd1d28c74bbce22f9b61329707e0
Mar  1 04:45:09 np0005634532 systemd[1]: libpod-conmon-b212f37eead754fe9ee6971f09f123e62054719def95e77c34ccab825ead403b.scope: Deactivated successfully.
Mar  1 04:45:09 np0005634532 podman[101830]: 2026-03-01 09:45:09.311316873 +0000 UTC m=+0.038338777 volume create 64cd7281c0a87436f04c6ea544bfc758cf7f67e8b4e53279fc7e0f701f2fdf57
Mar  1 04:45:09 np0005634532 podman[101830]: 2026-03-01 09:45:09.317782635 +0000 UTC m=+0.044804539 container create 4f9c029f08a523924e097fc5a92bef491dd17117c16192eb29aff92fae042ed0 (image=quay.io/prometheus/prometheus:v2.51.0, name=brave_archimedes, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Mar  1 04:45:09 np0005634532 systemd[1]: Started libpod-conmon-4f9c029f08a523924e097fc5a92bef491dd17117c16192eb29aff92fae042ed0.scope.
Mar  1 04:45:09 np0005634532 podman[101830]: 2026-03-01 09:45:09.293610977 +0000 UTC m=+0.020632901 image pull 1d3b7f56885b6dd623f1785be963aa9c195f86bc256ea454e8d02a7980b79c53 quay.io/prometheus/prometheus:v2.51.0
Mar  1 04:45:09 np0005634532 systemd[1]: Started libcrun container.
Mar  1 04:45:09 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/46f2526ea35e4fce8361a8200a98f5b7ae68eb5d6bc62ff697e339552b5a1e65/merged/prometheus supports timestamps until 2038 (0x7fffffff)
Mar  1 04:45:09 np0005634532 podman[101830]: 2026-03-01 09:45:09.410506189 +0000 UTC m=+0.137528133 container init 4f9c029f08a523924e097fc5a92bef491dd17117c16192eb29aff92fae042ed0 (image=quay.io/prometheus/prometheus:v2.51.0, name=brave_archimedes, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Mar  1 04:45:09 np0005634532 podman[101830]: 2026-03-01 09:45:09.415194737 +0000 UTC m=+0.142216651 container start 4f9c029f08a523924e097fc5a92bef491dd17117c16192eb29aff92fae042ed0 (image=quay.io/prometheus/prometheus:v2.51.0, name=brave_archimedes, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Mar  1 04:45:09 np0005634532 brave_archimedes[101848]: 65534 65534
Mar  1 04:45:09 np0005634532 systemd[1]: libpod-4f9c029f08a523924e097fc5a92bef491dd17117c16192eb29aff92fae042ed0.scope: Deactivated successfully.
Mar  1 04:45:09 np0005634532 podman[101830]: 2026-03-01 09:45:09.4188947 +0000 UTC m=+0.145916614 container attach 4f9c029f08a523924e097fc5a92bef491dd17117c16192eb29aff92fae042ed0 (image=quay.io/prometheus/prometheus:v2.51.0, name=brave_archimedes, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Mar  1 04:45:09 np0005634532 podman[101830]: 2026-03-01 09:45:09.419305591 +0000 UTC m=+0.146327505 container died 4f9c029f08a523924e097fc5a92bef491dd17117c16192eb29aff92fae042ed0 (image=quay.io/prometheus/prometheus:v2.51.0, name=brave_archimedes, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Mar  1 04:45:09 np0005634532 systemd[1]: var-lib-containers-storage-overlay-46f2526ea35e4fce8361a8200a98f5b7ae68eb5d6bc62ff697e339552b5a1e65-merged.mount: Deactivated successfully.
Mar  1 04:45:09 np0005634532 podman[101830]: 2026-03-01 09:45:09.470194291 +0000 UTC m=+0.197216235 container remove 4f9c029f08a523924e097fc5a92bef491dd17117c16192eb29aff92fae042ed0 (image=quay.io/prometheus/prometheus:v2.51.0, name=brave_archimedes, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Mar  1 04:45:09 np0005634532 podman[101830]: 2026-03-01 09:45:09.475534216 +0000 UTC m=+0.202556180 volume remove 64cd7281c0a87436f04c6ea544bfc758cf7f67e8b4e53279fc7e0f701f2fdf57
Mar  1 04:45:09 np0005634532 ceph-mon[75825]: from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]': finished
Mar  1 04:45:09 np0005634532 systemd[1]: libpod-conmon-4f9c029f08a523924e097fc5a92bef491dd17117c16192eb29aff92fae042ed0.scope: Deactivated successfully.
Mar  1 04:45:09 np0005634532 ceph-osd[84309]: log_channel(cluster) log [DBG] : 12.6 scrub starts
Mar  1 04:45:09 np0005634532 systemd[1]: Reloading.
Mar  1 04:45:09 np0005634532 ceph-osd[84309]: log_channel(cluster) log [DBG] : 12.6 scrub ok
Mar  1 04:45:09 np0005634532 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Mar  1 04:45:09 np0005634532 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Mar  1 04:45:09 np0005634532 systemd[1]: Reloading.
Mar  1 04:45:09 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:45:09 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 04:45:09 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:45:09.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 04:45:09 np0005634532 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Mar  1 04:45:09 np0005634532 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Mar  1 04:45:10 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e59 do_prune osdmap full prune enabled
Mar  1 04:45:10 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e60 e60: 3 total, 3 up, 3 in
Mar  1 04:45:10 np0005634532 systemd[1]: Starting Ceph prometheus.compute-0 for 437b1e74-f995-5d64-af1d-257ce01d77ab...
Mar  1 04:45:10 np0005634532 ceph-mon[75825]: log_channel(cluster) log [DBG] : osdmap e60: 3 total, 3 up, 3 in
Mar  1 04:45:10 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 60 pg[9.1f( v 42'1010 (0'0,42'1010] local-lis/les=52/53 n=5 ec=52/35 lis/c=52/52 les/c/f=53/53/0 sis=60) [2]/[0] r=0 lpr=60 pi=[52,60)/1 crt=42'1010 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Mar  1 04:45:10 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 60 pg[9.13( v 42'1010 (0'0,42'1010] local-lis/les=52/53 n=5 ec=52/35 lis/c=52/52 les/c/f=53/53/0 sis=60) [2]/[0] r=0 lpr=60 pi=[52,60)/1 crt=42'1010 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Mar  1 04:45:10 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 60 pg[9.13( v 42'1010 (0'0,42'1010] local-lis/les=52/53 n=5 ec=52/35 lis/c=52/52 les/c/f=53/53/0 sis=60) [2]/[0] r=0 lpr=60 pi=[52,60)/1 crt=42'1010 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Mar  1 04:45:10 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 60 pg[9.1b( v 42'1010 (0'0,42'1010] local-lis/les=52/53 n=5 ec=52/35 lis/c=52/52 les/c/f=53/53/0 sis=60) [2]/[0] r=0 lpr=60 pi=[52,60)/1 crt=42'1010 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Mar  1 04:45:10 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 60 pg[9.1f( v 42'1010 (0'0,42'1010] local-lis/les=52/53 n=5 ec=52/35 lis/c=52/52 les/c/f=53/53/0 sis=60) [2]/[0] r=0 lpr=60 pi=[52,60)/1 crt=42'1010 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Mar  1 04:45:10 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 60 pg[9.1b( v 42'1010 (0'0,42'1010] local-lis/les=52/53 n=5 ec=52/35 lis/c=52/52 les/c/f=53/53/0 sis=60) [2]/[0] r=0 lpr=60 pi=[52,60)/1 crt=42'1010 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Mar  1 04:45:10 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 60 pg[9.f( v 42'1010 (0'0,42'1010] local-lis/les=52/53 n=6 ec=52/35 lis/c=52/52 les/c/f=53/53/0 sis=60) [2]/[0] r=0 lpr=60 pi=[52,60)/1 crt=42'1010 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Mar  1 04:45:10 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 60 pg[9.f( v 42'1010 (0'0,42'1010] local-lis/les=52/53 n=6 ec=52/35 lis/c=52/52 les/c/f=53/53/0 sis=60) [2]/[0] r=0 lpr=60 pi=[52,60)/1 crt=42'1010 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Mar  1 04:45:10 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 60 pg[9.7( v 42'1010 (0'0,42'1010] local-lis/les=52/53 n=6 ec=52/35 lis/c=52/52 les/c/f=53/53/0 sis=60) [2]/[0] r=0 lpr=60 pi=[52,60)/1 crt=42'1010 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Mar  1 04:45:10 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 60 pg[9.7( v 42'1010 (0'0,42'1010] local-lis/les=52/53 n=6 ec=52/35 lis/c=52/52 les/c/f=53/53/0 sis=60) [2]/[0] r=0 lpr=60 pi=[52,60)/1 crt=42'1010 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Mar  1 04:45:10 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 60 pg[9.3( v 42'1010 (0'0,42'1010] local-lis/les=52/53 n=6 ec=52/35 lis/c=52/52 les/c/f=53/53/0 sis=60) [2]/[0] r=0 lpr=60 pi=[52,60)/1 crt=42'1010 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Mar  1 04:45:10 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 60 pg[9.b( v 42'1010 (0'0,42'1010] local-lis/les=52/53 n=6 ec=52/35 lis/c=52/52 les/c/f=53/53/0 sis=60) [2]/[0] r=0 lpr=60 pi=[52,60)/1 crt=42'1010 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Mar  1 04:45:10 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 60 pg[9.3( v 42'1010 (0'0,42'1010] local-lis/les=52/53 n=6 ec=52/35 lis/c=52/52 les/c/f=53/53/0 sis=60) [2]/[0] r=0 lpr=60 pi=[52,60)/1 crt=42'1010 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Mar  1 04:45:10 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 60 pg[9.b( v 42'1010 (0'0,42'1010] local-lis/les=52/53 n=6 ec=52/35 lis/c=52/52 les/c/f=53/53/0 sis=60) [2]/[0] r=0 lpr=60 pi=[52,60)/1 crt=42'1010 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Mar  1 04:45:10 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 60 pg[9.17( v 42'1010 (0'0,42'1010] local-lis/les=52/53 n=5 ec=52/35 lis/c=52/52 les/c/f=53/53/0 sis=60) [2]/[0] r=0 lpr=60 pi=[52,60)/1 crt=42'1010 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Mar  1 04:45:10 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 60 pg[9.17( v 42'1010 (0'0,42'1010] local-lis/les=52/53 n=5 ec=52/35 lis/c=52/52 les/c/f=53/53/0 sis=60) [2]/[0] r=0 lpr=60 pi=[52,60)/1 crt=42'1010 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Mar  1 04:45:10 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e60 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Mar  1 04:45:10 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:45:10 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 04:45:10 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:45:10.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 04:45:10 np0005634532 podman[102005]: 2026-03-01 09:45:10.348057047 +0000 UTC m=+0.059308644 container create 225e705b62c2d5344d04ef96534eb44bff62165a23fc4cc7bc75fa8452e91ac1 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Mar  1 04:45:10 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v58: 353 pgs: 353 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 298 B/s, 0 keys/s, 3 objects/s recovering
Mar  1 04:45:10 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"} v 0)
Mar  1 04:45:10 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]: dispatch
Mar  1 04:45:10 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26dc9d6c54491aabc23da577e82a77933c8addd31db82a5cae6a610491640c2f/merged/prometheus supports timestamps until 2038 (0x7fffffff)
Mar  1 04:45:10 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26dc9d6c54491aabc23da577e82a77933c8addd31db82a5cae6a610491640c2f/merged/etc/prometheus supports timestamps until 2038 (0x7fffffff)
Mar  1 04:45:10 np0005634532 podman[102005]: 2026-03-01 09:45:10.322801271 +0000 UTC m=+0.034052928 image pull 1d3b7f56885b6dd623f1785be963aa9c195f86bc256ea454e8d02a7980b79c53 quay.io/prometheus/prometheus:v2.51.0
Mar  1 04:45:10 np0005634532 podman[102005]: 2026-03-01 09:45:10.417691989 +0000 UTC m=+0.128943636 container init 225e705b62c2d5344d04ef96534eb44bff62165a23fc4cc7bc75fa8452e91ac1 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Mar  1 04:45:10 np0005634532 podman[102005]: 2026-03-01 09:45:10.423379863 +0000 UTC m=+0.134631470 container start 225e705b62c2d5344d04ef96534eb44bff62165a23fc4cc7bc75fa8452e91ac1 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Mar  1 04:45:10 np0005634532 bash[102005]: 225e705b62c2d5344d04ef96534eb44bff62165a23fc4cc7bc75fa8452e91ac1
Mar  1 04:45:10 np0005634532 systemd[1]: Started Ceph prometheus.compute-0 for 437b1e74-f995-5d64-af1d-257ce01d77ab.
Mar  1 04:45:10 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-prometheus-compute-0[102021]: ts=2026-03-01T09:45:10.473Z caller=main.go:617 level=info msg="Starting Prometheus Server" mode=server version="(version=2.51.0, branch=HEAD, revision=c05c15512acb675e3f6cd662a6727854e93fc024)"
Mar  1 04:45:10 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-prometheus-compute-0[102021]: ts=2026-03-01T09:45:10.474Z caller=main.go:622 level=info build_context="(go=go1.22.1, platform=linux/amd64, user=root@b5723e458358, date=20240319-10:54:45, tags=netgo,builtinassets,stringlabels)"
Mar  1 04:45:10 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-prometheus-compute-0[102021]: ts=2026-03-01T09:45:10.474Z caller=main.go:623 level=info host_details="(Linux 5.14.0-686.el9.x86_64 #1 SMP PREEMPT_DYNAMIC Thu Feb 19 10:49:27 UTC 2026 x86_64 compute-0 (none))"
Mar  1 04:45:10 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-prometheus-compute-0[102021]: ts=2026-03-01T09:45:10.474Z caller=main.go:624 level=info fd_limits="(soft=1048576, hard=1048576)"
Mar  1 04:45:10 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-prometheus-compute-0[102021]: ts=2026-03-01T09:45:10.474Z caller=main.go:625 level=info vm_limits="(soft=unlimited, hard=unlimited)"
Mar  1 04:45:10 np0005634532 ceph-osd[84309]: log_channel(cluster) log [DBG] : 12.a scrub starts
Mar  1 04:45:10 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-prometheus-compute-0[102021]: ts=2026-03-01T09:45:10.480Z caller=web.go:568 level=info component=web msg="Start listening for connections" address=192.168.122.100:9095
Mar  1 04:45:10 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-prometheus-compute-0[102021]: ts=2026-03-01T09:45:10.481Z caller=main.go:1129 level=info msg="Starting TSDB ..."
Mar  1 04:45:10 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-prometheus-compute-0[102021]: ts=2026-03-01T09:45:10.482Z caller=tls_config.go:313 level=info component=web msg="Listening on" address=192.168.122.100:9095
Mar  1 04:45:10 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-prometheus-compute-0[102021]: ts=2026-03-01T09:45:10.482Z caller=tls_config.go:316 level=info component=web msg="TLS is disabled." http2=false address=192.168.122.100:9095
Mar  1 04:45:10 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[98708]: 01/03/2026 09:45:10 : epoch 69a40a75 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f450c0035d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:45:10 np0005634532 ceph-mon[75825]: from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]: dispatch
Mar  1 04:45:10 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-prometheus-compute-0[102021]: ts=2026-03-01T09:45:10.489Z caller=head.go:616 level=info component=tsdb msg="Replaying on-disk memory mappable chunks if any"
Mar  1 04:45:10 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-prometheus-compute-0[102021]: ts=2026-03-01T09:45:10.489Z caller=head.go:698 level=info component=tsdb msg="On-disk memory mappable chunks replay completed" duration=4.49µs
Mar  1 04:45:10 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-prometheus-compute-0[102021]: ts=2026-03-01T09:45:10.489Z caller=head.go:706 level=info component=tsdb msg="Replaying WAL, this may take a while"
Mar  1 04:45:10 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-prometheus-compute-0[102021]: ts=2026-03-01T09:45:10.490Z caller=head.go:778 level=info component=tsdb msg="WAL segment loaded" segment=0 maxSegment=0
Mar  1 04:45:10 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-prometheus-compute-0[102021]: ts=2026-03-01T09:45:10.490Z caller=head.go:815 level=info component=tsdb msg="WAL replay completed" checkpoint_replay_duration=39.841µs wal_replay_duration=392.42µs wbl_replay_duration=210ns total_replay_duration=523.173µs
Mar  1 04:45:10 np0005634532 ceph-osd[84309]: log_channel(cluster) log [DBG] : 12.a scrub ok
Mar  1 04:45:10 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-prometheus-compute-0[102021]: ts=2026-03-01T09:45:10.493Z caller=main.go:1150 level=info fs_type=XFS_SUPER_MAGIC
Mar  1 04:45:10 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-prometheus-compute-0[102021]: ts=2026-03-01T09:45:10.493Z caller=main.go:1153 level=info msg="TSDB started"
Mar  1 04:45:10 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-prometheus-compute-0[102021]: ts=2026-03-01T09:45:10.493Z caller=main.go:1335 level=info msg="Loading configuration file" filename=/etc/prometheus/prometheus.yml
Mar  1 04:45:10 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Mar  1 04:45:10 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' 
Mar  1 04:45:10 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Mar  1 04:45:10 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-prometheus-compute-0[102021]: ts=2026-03-01T09:45:10.538Z caller=main.go:1372 level=info msg="Completed loading of configuration file" filename=/etc/prometheus/prometheus.yml totalDuration=45.846164ms db_storage=1.57µs remote_storage=1.97µs web_handler=700ns query_engine=1.77µs scrape=6.535085ms scrape_sd=277.377µs notify=66.342µs notify_sd=24.33µs rules=38.12485ms tracing=12.28µs
Mar  1 04:45:10 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-prometheus-compute-0[102021]: ts=2026-03-01T09:45:10.539Z caller=main.go:1114 level=info msg="Server is ready to receive web requests."
Mar  1 04:45:10 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-prometheus-compute-0[102021]: ts=2026-03-01T09:45:10.539Z caller=manager.go:163 level=info component="rule manager" msg="Starting rule manager..."
Mar  1 04:45:10 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' 
Mar  1 04:45:10 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.prometheus}] v 0)
Mar  1 04:45:10 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' 
Mar  1 04:45:10 np0005634532 ceph-mgr[76134]: [progress INFO root] complete: finished ev 8662c686-7791-4551-82a5-947d8e12317d (Updating prometheus deployment (+1 -> 1))
Mar  1 04:45:10 np0005634532 ceph-mgr[76134]: [progress INFO root] Completed event 8662c686-7791-4551-82a5-947d8e12317d (Updating prometheus deployment (+1 -> 1)) in 5 seconds
Mar  1 04:45:10 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module enable", "module": "prometheus"} v 0)
Mar  1 04:45:10 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "mgr module enable", "module": "prometheus"}]: dispatch
Mar  1 04:45:10 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[98708]: 01/03/2026 09:45:10 : epoch 69a40a75 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4504002f00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:45:11 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[98708]: 01/03/2026 09:45:11 : epoch 69a40a75 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4500003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:45:11 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e60 do_prune osdmap full prune enabled
Mar  1 04:45:11 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]': finished
Mar  1 04:45:11 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e61 e61: 3 total, 3 up, 3 in
Mar  1 04:45:11 np0005634532 ceph-mon[75825]: log_channel(cluster) log [DBG] : osdmap e61: 3 total, 3 up, 3 in
Mar  1 04:45:11 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 61 pg[9.17( v 42'1010 (0'0,42'1010] local-lis/les=60/61 n=5 ec=52/35 lis/c=52/52 les/c/f=53/53/0 sis=60) [2]/[0] async=[2] r=0 lpr=60 pi=[52,60)/1 crt=42'1010 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Mar  1 04:45:11 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 61 pg[9.f( v 42'1010 (0'0,42'1010] local-lis/les=60/61 n=6 ec=52/35 lis/c=52/52 les/c/f=53/53/0 sis=60) [2]/[0] async=[2] r=0 lpr=60 pi=[52,60)/1 crt=42'1010 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Mar  1 04:45:11 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 61 pg[9.1b( v 42'1010 (0'0,42'1010] local-lis/les=60/61 n=5 ec=52/35 lis/c=52/52 les/c/f=53/53/0 sis=60) [2]/[0] async=[2] r=0 lpr=60 pi=[52,60)/1 crt=42'1010 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Mar  1 04:45:11 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 61 pg[9.1f( v 42'1010 (0'0,42'1010] local-lis/les=60/61 n=5 ec=52/35 lis/c=52/52 les/c/f=53/53/0 sis=60) [2]/[0] async=[2] r=0 lpr=60 pi=[52,60)/1 crt=42'1010 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Mar  1 04:45:11 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 61 pg[9.13( v 42'1010 (0'0,42'1010] local-lis/les=60/61 n=5 ec=52/35 lis/c=52/52 les/c/f=53/53/0 sis=60) [2]/[0] async=[2] r=0 lpr=60 pi=[52,60)/1 crt=42'1010 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Mar  1 04:45:11 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 61 pg[9.7( v 42'1010 (0'0,42'1010] local-lis/les=60/61 n=6 ec=52/35 lis/c=52/52 les/c/f=53/53/0 sis=60) [2]/[0] async=[2] r=0 lpr=60 pi=[52,60)/1 crt=42'1010 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Mar  1 04:45:11 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 61 pg[9.b( v 42'1010 (0'0,42'1010] local-lis/les=60/61 n=6 ec=52/35 lis/c=52/52 les/c/f=53/53/0 sis=60) [2]/[0] async=[2] r=0 lpr=60 pi=[52,60)/1 crt=42'1010 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Mar  1 04:45:11 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 61 pg[9.3( v 42'1010 (0'0,42'1010] local-lis/les=60/61 n=6 ec=52/35 lis/c=52/52 les/c/f=53/53/0 sis=60) [2]/[0] async=[2] r=0 lpr=60 pi=[52,60)/1 crt=42'1010 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Mar  1 04:45:11 np0005634532 ceph-mon[75825]: from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' 
Mar  1 04:45:11 np0005634532 ceph-mon[75825]: from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' 
Mar  1 04:45:11 np0005634532 ceph-mon[75825]: from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' 
Mar  1 04:45:11 np0005634532 ceph-mon[75825]: from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "mgr module enable", "module": "prometheus"}]: dispatch
Mar  1 04:45:11 np0005634532 ceph-mon[75825]: from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]': finished
Mar  1 04:45:11 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' cmd='[{"prefix": "mgr module enable", "module": "prometheus"}]': finished
Mar  1 04:45:11 np0005634532 ceph-mgr[76134]: mgr handle_mgr_map respawning because set of enabled modules changed!
Mar  1 04:45:11 np0005634532 ceph-mgr[76134]: mgr respawn  e: '/usr/bin/ceph-mgr'
Mar  1 04:45:11 np0005634532 ceph-mgr[76134]: mgr respawn  0: '/usr/bin/ceph-mgr'
Mar  1 04:45:11 np0005634532 ceph-mgr[76134]: mgr respawn  1: '-n'
Mar  1 04:45:11 np0005634532 ceph-mgr[76134]: mgr respawn  2: 'mgr.compute-0.ebwufc'
Mar  1 04:45:11 np0005634532 ceph-mgr[76134]: mgr respawn  3: '-f'
Mar  1 04:45:11 np0005634532 ceph-mgr[76134]: mgr respawn  4: '--setuser'
Mar  1 04:45:11 np0005634532 ceph-mgr[76134]: mgr respawn  5: 'ceph'
Mar  1 04:45:11 np0005634532 ceph-mgr[76134]: mgr respawn  6: '--setgroup'
Mar  1 04:45:11 np0005634532 ceph-mgr[76134]: mgr respawn  7: 'ceph'
Mar  1 04:45:11 np0005634532 ceph-mgr[76134]: mgr respawn  8: '--default-log-to-file=false'
Mar  1 04:45:11 np0005634532 ceph-mgr[76134]: mgr respawn  9: '--default-log-to-journald=true'
Mar  1 04:45:11 np0005634532 ceph-mgr[76134]: mgr respawn  10: '--default-log-to-stderr=false'
Mar  1 04:45:11 np0005634532 ceph-mgr[76134]: mgr respawn respawning with exe /usr/bin/ceph-mgr
Mar  1 04:45:11 np0005634532 ceph-mgr[76134]: mgr respawn  exe_path /proc/self/exe
Mar  1 04:45:11 np0005634532 ceph-mon[75825]: log_channel(cluster) log [DBG] : mgrmap e29: compute-0.ebwufc(active, since 79s), standbys: compute-1.uyojxx, compute-2.dikzlj
Mar  1 04:45:11 np0005634532 systemd[1]: session-35.scope: Deactivated successfully.
Mar  1 04:45:11 np0005634532 systemd[1]: session-35.scope: Consumed 44.354s CPU time.
Mar  1 04:45:11 np0005634532 systemd-logind[832]: Session 35 logged out. Waiting for processes to exit.
Mar  1 04:45:11 np0005634532 systemd-logind[832]: Removed session 35.
Mar  1 04:45:11 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: ignoring --setuser ceph since I am not root
Mar  1 04:45:11 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: ignoring --setgroup ceph since I am not root
Mar  1 04:45:11 np0005634532 ceph-mgr[76134]: ceph version 19.2.3 (c92aebb279828e9c3c1f5d24613efca272649e62) squid (stable), process ceph-mgr, pid 2
Mar  1 04:45:11 np0005634532 ceph-mgr[76134]: pidfile_write: ignore empty --pid-file
Mar  1 04:45:11 np0005634532 ceph-mgr[76134]: mgr[py] Loading python module 'alerts'
Mar  1 04:45:11 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: 2026-03-01T09:45:11.787+0000 7fe177e53140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Mar  1 04:45:11 np0005634532 ceph-mgr[76134]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Mar  1 04:45:11 np0005634532 ceph-mgr[76134]: mgr[py] Loading python module 'balancer'
Mar  1 04:45:11 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:45:11 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:45:11 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:45:11.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:45:11 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: 2026-03-01T09:45:11.900+0000 7fe177e53140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Mar  1 04:45:11 np0005634532 ceph-mgr[76134]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Mar  1 04:45:11 np0005634532 ceph-mgr[76134]: mgr[py] Loading python module 'cephadm'
Mar  1 04:45:12 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e61 do_prune osdmap full prune enabled
Mar  1 04:45:12 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e62 e62: 3 total, 3 up, 3 in
Mar  1 04:45:12 np0005634532 ceph-mon[75825]: log_channel(cluster) log [DBG] : osdmap e62: 3 total, 3 up, 3 in
Mar  1 04:45:12 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 62 pg[9.17( v 42'1010 (0'0,42'1010] local-lis/les=60/61 n=5 ec=52/35 lis/c=60/52 les/c/f=61/53/0 sis=62 pruub=15.004619598s) [2] async=[2] r=-1 lpr=62 pi=[52,62)/1 crt=42'1010 lcod 0'0 mlcod 0'0 active pruub 186.894287109s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Mar  1 04:45:12 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 62 pg[9.3( v 42'1010 (0'0,42'1010] local-lis/les=60/61 n=6 ec=52/35 lis/c=60/52 les/c/f=61/53/0 sis=62 pruub=15.010976791s) [2] async=[2] r=-1 lpr=62 pi=[52,62)/1 crt=42'1010 lcod 0'0 mlcod 0'0 active pruub 186.900695801s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Mar  1 04:45:12 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 62 pg[9.17( v 42'1010 (0'0,42'1010] local-lis/les=60/61 n=5 ec=52/35 lis/c=60/52 les/c/f=61/53/0 sis=62 pruub=15.004553795s) [2] r=-1 lpr=62 pi=[52,62)/1 crt=42'1010 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 186.894287109s@ mbc={}] state<Start>: transitioning to Stray
Mar  1 04:45:12 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 62 pg[9.3( v 42'1010 (0'0,42'1010] local-lis/les=60/61 n=6 ec=52/35 lis/c=60/52 les/c/f=61/53/0 sis=62 pruub=15.010920525s) [2] r=-1 lpr=62 pi=[52,62)/1 crt=42'1010 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 186.900695801s@ mbc={}] state<Start>: transitioning to Stray
Mar  1 04:45:12 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 62 pg[9.f( v 42'1010 (0'0,42'1010] local-lis/les=60/61 n=6 ec=52/35 lis/c=60/52 les/c/f=61/53/0 sis=62 pruub=15.010147095s) [2] async=[2] r=-1 lpr=62 pi=[52,62)/1 crt=42'1010 lcod 0'0 mlcod 0'0 active pruub 186.900329590s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Mar  1 04:45:12 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 62 pg[9.b( v 42'1010 (0'0,42'1010] local-lis/les=60/61 n=6 ec=52/35 lis/c=60/52 les/c/f=61/53/0 sis=62 pruub=15.010069847s) [2] async=[2] r=-1 lpr=62 pi=[52,62)/1 crt=42'1010 lcod 0'0 mlcod 0'0 active pruub 186.900329590s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Mar  1 04:45:12 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 62 pg[9.f( v 42'1010 (0'0,42'1010] local-lis/les=60/61 n=6 ec=52/35 lis/c=60/52 les/c/f=61/53/0 sis=62 pruub=15.010060310s) [2] r=-1 lpr=62 pi=[52,62)/1 crt=42'1010 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 186.900329590s@ mbc={}] state<Start>: transitioning to Stray
Mar  1 04:45:12 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 62 pg[9.b( v 42'1010 (0'0,42'1010] local-lis/les=60/61 n=6 ec=52/35 lis/c=60/52 les/c/f=61/53/0 sis=62 pruub=15.009912491s) [2] r=-1 lpr=62 pi=[52,62)/1 crt=42'1010 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 186.900329590s@ mbc={}] state<Start>: transitioning to Stray
Mar  1 04:45:12 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 62 pg[9.7( v 42'1010 (0'0,42'1010] local-lis/les=60/61 n=6 ec=52/35 lis/c=60/52 les/c/f=61/53/0 sis=62 pruub=15.009942055s) [2] async=[2] r=-1 lpr=62 pi=[52,62)/1 crt=42'1010 lcod 0'0 mlcod 0'0 active pruub 186.900634766s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Mar  1 04:45:12 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 62 pg[9.7( v 42'1010 (0'0,42'1010] local-lis/les=60/61 n=6 ec=52/35 lis/c=60/52 les/c/f=61/53/0 sis=62 pruub=15.009891510s) [2] r=-1 lpr=62 pi=[52,62)/1 crt=42'1010 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 186.900634766s@ mbc={}] state<Start>: transitioning to Stray
Mar  1 04:45:12 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 62 pg[9.1b( v 42'1010 (0'0,42'1010] local-lis/les=60/61 n=5 ec=52/35 lis/c=60/52 les/c/f=61/53/0 sis=62 pruub=15.009490967s) [2] async=[2] r=-1 lpr=62 pi=[52,62)/1 crt=42'1010 lcod 0'0 mlcod 0'0 active pruub 186.900329590s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Mar  1 04:45:12 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 62 pg[9.1b( v 42'1010 (0'0,42'1010] local-lis/les=60/61 n=5 ec=52/35 lis/c=60/52 les/c/f=61/53/0 sis=62 pruub=15.009462357s) [2] r=-1 lpr=62 pi=[52,62)/1 crt=42'1010 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 186.900329590s@ mbc={}] state<Start>: transitioning to Stray
Mar  1 04:45:12 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 62 pg[9.1f( v 42'1010 (0'0,42'1010] local-lis/les=60/61 n=5 ec=52/35 lis/c=60/52 les/c/f=61/53/0 sis=62 pruub=15.009216309s) [2] async=[2] r=-1 lpr=62 pi=[52,62)/1 crt=42'1010 lcod 0'0 mlcod 0'0 active pruub 186.900390625s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Mar  1 04:45:12 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 62 pg[9.1f( v 42'1010 (0'0,42'1010] local-lis/les=60/61 n=5 ec=52/35 lis/c=60/52 les/c/f=61/53/0 sis=62 pruub=15.009127617s) [2] r=-1 lpr=62 pi=[52,62)/1 crt=42'1010 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 186.900390625s@ mbc={}] state<Start>: transitioning to Stray
Mar  1 04:45:12 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 62 pg[9.13( v 42'1010 (0'0,42'1010] local-lis/les=60/61 n=5 ec=52/35 lis/c=60/52 les/c/f=61/53/0 sis=62 pruub=15.009146690s) [2] async=[2] r=-1 lpr=62 pi=[52,62)/1 crt=42'1010 lcod 0'0 mlcod 0'0 active pruub 186.900512695s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Mar  1 04:45:12 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 62 pg[9.13( v 42'1010 (0'0,42'1010] local-lis/les=60/61 n=5 ec=52/35 lis/c=60/52 les/c/f=61/53/0 sis=62 pruub=15.009111404s) [2] r=-1 lpr=62 pi=[52,62)/1 crt=42'1010 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 186.900512695s@ mbc={}] state<Start>: transitioning to Stray
Mar  1 04:45:12 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:45:12 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:45:12 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:45:12.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:45:12 np0005634532 ceph-osd[84309]: log_channel(cluster) log [DBG] : 11.0 scrub starts
Mar  1 04:45:12 np0005634532 ceph-osd[84309]: log_channel(cluster) log [DBG] : 11.0 scrub ok
Mar  1 04:45:12 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[98708]: 01/03/2026 09:45:12 : epoch 69a40a75 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f451c002520 fd 14 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:45:12 np0005634532 ceph-mon[75825]: from='mgr.14463 192.168.122.100:0/2592074844' entity='mgr.compute-0.ebwufc' cmd='[{"prefix": "mgr module enable", "module": "prometheus"}]': finished
Mar  1 04:45:12 np0005634532 ceph-mgr[76134]: mgr[py] Loading python module 'crash'
Mar  1 04:45:12 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: 2026-03-01T09:45:12.679+0000 7fe177e53140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Mar  1 04:45:12 np0005634532 ceph-mgr[76134]: mgr[py] Module crash has missing NOTIFY_TYPES member
Mar  1 04:45:12 np0005634532 ceph-mgr[76134]: mgr[py] Loading python module 'dashboard'
Mar  1 04:45:12 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[98708]: 01/03/2026 09:45:12 : epoch 69a40a75 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f450c0035d0 fd 14 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:45:13 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[98708]: 01/03/2026 09:45:13 : epoch 69a40a75 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f450c0035d0 fd 14 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:45:13 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e62 do_prune osdmap full prune enabled
Mar  1 04:45:13 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e63 e63: 3 total, 3 up, 3 in
Mar  1 04:45:13 np0005634532 ceph-mon[75825]: log_channel(cluster) log [DBG] : osdmap e63: 3 total, 3 up, 3 in
Mar  1 04:45:13 np0005634532 ceph-mgr[76134]: mgr[py] Loading python module 'devicehealth'
Mar  1 04:45:13 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: 2026-03-01T09:45:13.305+0000 7fe177e53140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Mar  1 04:45:13 np0005634532 ceph-mgr[76134]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Mar  1 04:45:13 np0005634532 ceph-mgr[76134]: mgr[py] Loading python module 'diskprediction_local'
Mar  1 04:45:13 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Mar  1 04:45:13 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Mar  1 04:45:13 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]:  from numpy import show_config as show_numpy_config
Mar  1 04:45:13 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: 2026-03-01T09:45:13.448+0000 7fe177e53140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Mar  1 04:45:13 np0005634532 ceph-mgr[76134]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Mar  1 04:45:13 np0005634532 ceph-mgr[76134]: mgr[py] Loading python module 'influx'
Mar  1 04:45:13 np0005634532 ceph-osd[84309]: log_channel(cluster) log [DBG] : 11.c scrub starts
Mar  1 04:45:13 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: 2026-03-01T09:45:13.525+0000 7fe177e53140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Mar  1 04:45:13 np0005634532 ceph-mgr[76134]: mgr[py] Module influx has missing NOTIFY_TYPES member
Mar  1 04:45:13 np0005634532 ceph-mgr[76134]: mgr[py] Loading python module 'insights'
Mar  1 04:45:13 np0005634532 ceph-mgr[76134]: mgr[py] Loading python module 'iostat'
Mar  1 04:45:13 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: 2026-03-01T09:45:13.672+0000 7fe177e53140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Mar  1 04:45:13 np0005634532 ceph-mgr[76134]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Mar  1 04:45:13 np0005634532 ceph-mgr[76134]: mgr[py] Loading python module 'k8sevents'
Mar  1 04:45:13 np0005634532 ceph-osd[84309]: log_channel(cluster) log [DBG] : 11.c scrub ok
Mar  1 04:45:13 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:45:13 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:45:13 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:45:13.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:45:14 np0005634532 ceph-mgr[76134]: mgr[py] Loading python module 'localpool'
Mar  1 04:45:14 np0005634532 ceph-mgr[76134]: mgr[py] Loading python module 'mds_autoscaler'
Mar  1 04:45:14 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:45:14 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:45:14 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:45:14.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:45:14 np0005634532 ceph-mgr[76134]: mgr[py] Loading python module 'mirroring'
Mar  1 04:45:14 np0005634532 ceph-mgr[76134]: mgr[py] Loading python module 'nfs'
Mar  1 04:45:14 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[98708]: 01/03/2026 09:45:14 : epoch 69a40a75 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4500003c10 fd 14 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:45:14 np0005634532 ceph-osd[84309]: log_channel(cluster) log [DBG] : 11.b scrub starts
Mar  1 04:45:14 np0005634532 ceph-osd[84309]: log_channel(cluster) log [DBG] : 11.b scrub ok
Mar  1 04:45:14 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: 2026-03-01T09:45:14.611+0000 7fe177e53140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Mar  1 04:45:14 np0005634532 ceph-mgr[76134]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Mar  1 04:45:14 np0005634532 ceph-mgr[76134]: mgr[py] Loading python module 'orchestrator'
Mar  1 04:45:14 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: 2026-03-01T09:45:14.797+0000 7fe177e53140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Mar  1 04:45:14 np0005634532 ceph-mgr[76134]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Mar  1 04:45:14 np0005634532 ceph-mgr[76134]: mgr[py] Loading python module 'osd_perf_query'
Mar  1 04:45:14 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[98708]: 01/03/2026 09:45:14 : epoch 69a40a75 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f451c002520 fd 14 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:45:14 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: 2026-03-01T09:45:14.861+0000 7fe177e53140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Mar  1 04:45:14 np0005634532 ceph-mgr[76134]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Mar  1 04:45:14 np0005634532 ceph-mgr[76134]: mgr[py] Loading python module 'osd_support'
Mar  1 04:45:14 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: 2026-03-01T09:45:14.917+0000 7fe177e53140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Mar  1 04:45:14 np0005634532 ceph-mgr[76134]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Mar  1 04:45:14 np0005634532 ceph-mgr[76134]: mgr[py] Loading python module 'pg_autoscaler'
Mar  1 04:45:14 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: 2026-03-01T09:45:14.985+0000 7fe177e53140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Mar  1 04:45:14 np0005634532 ceph-mgr[76134]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Mar  1 04:45:14 np0005634532 ceph-mgr[76134]: mgr[py] Loading python module 'progress'
Mar  1 04:45:15 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: 2026-03-01T09:45:15.046+0000 7fe177e53140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Mar  1 04:45:15 np0005634532 ceph-mgr[76134]: mgr[py] Module progress has missing NOTIFY_TYPES member
Mar  1 04:45:15 np0005634532 ceph-mgr[76134]: mgr[py] Loading python module 'prometheus'
Mar  1 04:45:15 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[98708]: 01/03/2026 09:45:15 : epoch 69a40a75 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f450c0035d0 fd 14 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:45:15 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e63 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Mar  1 04:45:15 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: 2026-03-01T09:45:15.338+0000 7fe177e53140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Mar  1 04:45:15 np0005634532 ceph-mgr[76134]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Mar  1 04:45:15 np0005634532 ceph-mgr[76134]: mgr[py] Loading python module 'rbd_support'
Mar  1 04:45:15 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: 2026-03-01T09:45:15.425+0000 7fe177e53140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Mar  1 04:45:15 np0005634532 ceph-mgr[76134]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Mar  1 04:45:15 np0005634532 ceph-mgr[76134]: mgr[py] Loading python module 'restful'
Mar  1 04:45:15 np0005634532 ceph-osd[84309]: log_channel(cluster) log [DBG] : 11.9 scrub starts
Mar  1 04:45:15 np0005634532 ceph-osd[84309]: log_channel(cluster) log [DBG] : 11.9 scrub ok
Mar  1 04:45:15 np0005634532 ceph-mgr[76134]: mgr[py] Loading python module 'rgw'
Mar  1 04:45:15 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: 2026-03-01T09:45:15.815+0000 7fe177e53140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Mar  1 04:45:15 np0005634532 ceph-mgr[76134]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Mar  1 04:45:15 np0005634532 ceph-mgr[76134]: mgr[py] Loading python module 'rook'
Mar  1 04:45:15 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:45:15 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:45:15 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:45:15.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:45:16 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:45:16 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 04:45:16 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:45:16.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 04:45:16 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: 2026-03-01T09:45:16.326+0000 7fe177e53140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Mar  1 04:45:16 np0005634532 ceph-mgr[76134]: mgr[py] Module rook has missing NOTIFY_TYPES member
Mar  1 04:45:16 np0005634532 ceph-mgr[76134]: mgr[py] Loading python module 'selftest'
Mar  1 04:45:16 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: 2026-03-01T09:45:16.390+0000 7fe177e53140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Mar  1 04:45:16 np0005634532 ceph-mgr[76134]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Mar  1 04:45:16 np0005634532 ceph-mgr[76134]: mgr[py] Loading python module 'snap_schedule'
Mar  1 04:45:16 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: 2026-03-01T09:45:16.465+0000 7fe177e53140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Mar  1 04:45:16 np0005634532 ceph-mgr[76134]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Mar  1 04:45:16 np0005634532 ceph-mgr[76134]: mgr[py] Loading python module 'stats'
Mar  1 04:45:16 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[98708]: 01/03/2026 09:45:16 : epoch 69a40a75 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f450c0035d0 fd 14 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:45:16 np0005634532 ceph-osd[84309]: log_channel(cluster) log [DBG] : 11.d scrub starts
Mar  1 04:45:16 np0005634532 ceph-osd[84309]: log_channel(cluster) log [DBG] : 11.d scrub ok
Mar  1 04:45:16 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[98708]: 01/03/2026 09:45:16 : epoch 69a40a75 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Mar  1 04:45:16 np0005634532 ceph-mgr[76134]: mgr[py] Loading python module 'status'
Mar  1 04:45:16 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: 2026-03-01T09:45:16.599+0000 7fe177e53140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Mar  1 04:45:16 np0005634532 ceph-mgr[76134]: mgr[py] Module status has missing NOTIFY_TYPES member
Mar  1 04:45:16 np0005634532 ceph-mgr[76134]: mgr[py] Loading python module 'telegraf'
Mar  1 04:45:16 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: 2026-03-01T09:45:16.660+0000 7fe177e53140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Mar  1 04:45:16 np0005634532 ceph-mgr[76134]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Mar  1 04:45:16 np0005634532 ceph-mgr[76134]: mgr[py] Loading python module 'telemetry'
Mar  1 04:45:16 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: 2026-03-01T09:45:16.828+0000 7fe177e53140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Mar  1 04:45:16 np0005634532 ceph-mgr[76134]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Mar  1 04:45:16 np0005634532 ceph-mgr[76134]: mgr[py] Loading python module 'test_orchestrator'
Mar  1 04:45:16 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[98708]: 01/03/2026 09:45:16 : epoch 69a40a75 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4500003c10 fd 26 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:45:17 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: 2026-03-01T09:45:17.037+0000 7fe177e53140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Mar  1 04:45:17 np0005634532 ceph-mgr[76134]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Mar  1 04:45:17 np0005634532 ceph-mgr[76134]: mgr[py] Loading python module 'volumes'
Mar  1 04:45:17 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[98708]: 01/03/2026 09:45:17 : epoch 69a40a75 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f451c002520 fd 26 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:45:17 np0005634532 ceph-mon[75825]: log_channel(cluster) log [DBG] : Standby manager daemon compute-1.uyojxx restarted
Mar  1 04:45:17 np0005634532 ceph-mon[75825]: log_channel(cluster) log [DBG] : Standby manager daemon compute-1.uyojxx started
Mar  1 04:45:17 np0005634532 ceph-mon[75825]: log_channel(cluster) log [DBG] : Standby manager daemon compute-2.dikzlj restarted
Mar  1 04:45:17 np0005634532 ceph-mon[75825]: log_channel(cluster) log [DBG] : Standby manager daemon compute-2.dikzlj started
Mar  1 04:45:17 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: 2026-03-01T09:45:17.283+0000 7fe177e53140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Mar  1 04:45:17 np0005634532 ceph-mgr[76134]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Mar  1 04:45:17 np0005634532 ceph-mgr[76134]: mgr[py] Loading python module 'zabbix'
Mar  1 04:45:17 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: 2026-03-01T09:45:17.347+0000 7fe177e53140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Mar  1 04:45:17 np0005634532 ceph-mgr[76134]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Mar  1 04:45:17 np0005634532 ceph-mon[75825]: log_channel(cluster) log [INF] : Active manager daemon compute-0.ebwufc restarted
Mar  1 04:45:17 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e63 do_prune osdmap full prune enabled
Mar  1 04:45:17 np0005634532 ceph-mon[75825]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.ebwufc
Mar  1 04:45:17 np0005634532 ceph-mgr[76134]: ms_deliver_dispatch: unhandled message 0x563635e85860 mon_map magic: 0 from mon.0 v2:192.168.122.100:3300/0
Mar  1 04:45:17 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e64 e64: 3 total, 3 up, 3 in
Mar  1 04:45:17 np0005634532 ceph-mgr[76134]: mgr handle_mgr_map Activating!
Mar  1 04:45:17 np0005634532 ceph-mgr[76134]: mgr handle_mgr_map I am now activating
Mar  1 04:45:17 np0005634532 ceph-mon[75825]: log_channel(cluster) log [DBG] : osdmap e64: 3 total, 3 up, 3 in
Mar  1 04:45:17 np0005634532 ceph-mon[75825]: log_channel(cluster) log [DBG] : mgrmap e30: compute-0.ebwufc(active, starting, since 0.0397256s), standbys: compute-2.dikzlj, compute-1.uyojxx
Mar  1 04:45:17 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0)
Mar  1 04:45:17 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Mar  1 04:45:17 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-1"} v 0)
Mar  1 04:45:17 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "mon metadata", "id": "compute-1"}]: dispatch
Mar  1 04:45:17 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata", "id": "compute-2"} v 0)
Mar  1 04:45:17 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "mon metadata", "id": "compute-2"}]: dispatch
Mar  1 04:45:17 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-0.qvzeqa"} v 0)
Mar  1 04:45:17 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-0.qvzeqa"}]: dispatch
Mar  1 04:45:17 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).mds e9 all = 0
Mar  1 04:45:17 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-1.okjbfn"} v 0)
Mar  1 04:45:17 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-1.okjbfn"}]: dispatch
Mar  1 04:45:17 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).mds e9 all = 0
Mar  1 04:45:17 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-2.gumopp"} v 0)
Mar  1 04:45:17 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-2.gumopp"}]: dispatch
Mar  1 04:45:17 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).mds e9 all = 0
Mar  1 04:45:17 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.ebwufc", "id": "compute-0.ebwufc"} v 0)
Mar  1 04:45:17 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "mgr metadata", "who": "compute-0.ebwufc", "id": "compute-0.ebwufc"}]: dispatch
Mar  1 04:45:17 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-2.dikzlj", "id": "compute-2.dikzlj"} v 0)
Mar  1 04:45:17 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "mgr metadata", "who": "compute-2.dikzlj", "id": "compute-2.dikzlj"}]: dispatch
Mar  1 04:45:17 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-1.uyojxx", "id": "compute-1.uyojxx"} v 0)
Mar  1 04:45:17 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "mgr metadata", "who": "compute-1.uyojxx", "id": "compute-1.uyojxx"}]: dispatch
Mar  1 04:45:17 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0)
Mar  1 04:45:17 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Mar  1 04:45:17 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0)
Mar  1 04:45:17 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Mar  1 04:45:17 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0)
Mar  1 04:45:17 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Mar  1 04:45:17 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds metadata"} v 0)
Mar  1 04:45:17 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "mds metadata"}]: dispatch
Mar  1 04:45:17 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).mds e9 all = 1
Mar  1 04:45:17 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata"} v 0)
Mar  1 04:45:17 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd metadata"}]: dispatch
Mar  1 04:45:17 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon metadata"} v 0)
Mar  1 04:45:17 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "mon metadata"}]: dispatch
Mar  1 04:45:17 np0005634532 ceph-mgr[76134]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Mar  1 04:45:17 np0005634532 ceph-mgr[76134]: mgr load Constructed class from module: balancer
Mar  1 04:45:17 np0005634532 ceph-mon[75825]: log_channel(cluster) log [INF] : Manager daemon compute-0.ebwufc is now available
Mar  1 04:45:17 np0005634532 ceph-mgr[76134]: [balancer INFO root] Starting
Mar  1 04:45:17 np0005634532 ceph-mgr[76134]: [cephadm DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Mar  1 04:45:17 np0005634532 ceph-mgr[76134]: [balancer INFO root] Optimize plan auto_2026-03-01_09:45:17
Mar  1 04:45:17 np0005634532 ceph-mgr[76134]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Mar  1 04:45:17 np0005634532 ceph-mgr[76134]: [balancer INFO root] Some PGs (1.000000) are unknown; try again later
Mar  1 04:45:17 np0005634532 ceph-mgr[76134]: mgr load Constructed class from module: cephadm
Mar  1 04:45:17 np0005634532 ceph-mgr[76134]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Mar  1 04:45:17 np0005634532 ceph-mgr[76134]: mgr load Constructed class from module: crash
Mar  1 04:45:17 np0005634532 ceph-mgr[76134]: [dashboard DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Mar  1 04:45:17 np0005634532 ceph-mgr[76134]: mgr load Constructed class from module: dashboard
Mar  1 04:45:17 np0005634532 ceph-mgr[76134]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Mar  1 04:45:17 np0005634532 ceph-mgr[76134]: mgr load Constructed class from module: devicehealth
Mar  1 04:45:17 np0005634532 ceph-mgr[76134]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Mar  1 04:45:17 np0005634532 ceph-mgr[76134]: mgr load Constructed class from module: iostat
Mar  1 04:45:17 np0005634532 ceph-mgr[76134]: [devicehealth INFO root] Starting
Mar  1 04:45:17 np0005634532 ceph-mgr[76134]: [dashboard INFO access_control] Loading user roles DB version=2
Mar  1 04:45:17 np0005634532 ceph-mgr[76134]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Mar  1 04:45:17 np0005634532 ceph-mgr[76134]: mgr load Constructed class from module: nfs
Mar  1 04:45:17 np0005634532 ceph-mgr[76134]: [dashboard INFO sso] Loading SSO DB version=1
Mar  1 04:45:17 np0005634532 ceph-mgr[76134]: [dashboard INFO root] server: ssl=no host=192.168.122.100 port=8443
Mar  1 04:45:17 np0005634532 ceph-mgr[76134]: [dashboard INFO root] Configured CherryPy, starting engine...
Mar  1 04:45:17 np0005634532 ceph-mgr[76134]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Mar  1 04:45:17 np0005634532 ceph-mgr[76134]: mgr load Constructed class from module: orchestrator
Mar  1 04:45:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Mar  1 04:45:17 np0005634532 ceph-mgr[76134]: mgr load Constructed class from module: pg_autoscaler
Mar  1 04:45:17 np0005634532 ceph-mgr[76134]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Mar  1 04:45:17 np0005634532 ceph-mgr[76134]: mgr load Constructed class from module: progress
Mar  1 04:45:17 np0005634532 ceph-mgr[76134]: [progress INFO root] Loading...
Mar  1 04:45:17 np0005634532 ceph-mgr[76134]: [progress INFO root] Loaded [<progress.module.GhostEvent object at 0x7fe0fd205820>, <progress.module.GhostEvent object at 0x7fe0fd205850>, <progress.module.GhostEvent object at 0x7fe0fd205880>, <progress.module.GhostEvent object at 0x7fe0fd2058b0>, <progress.module.GhostEvent object at 0x7fe0fd2058e0>, <progress.module.GhostEvent object at 0x7fe0fd205910>, <progress.module.GhostEvent object at 0x7fe0fd205940>, <progress.module.GhostEvent object at 0x7fe0fd205970>, <progress.module.GhostEvent object at 0x7fe0fd2059a0>, <progress.module.GhostEvent object at 0x7fe0fd2059d0>, <progress.module.GhostEvent object at 0x7fe0fd205a00>, <progress.module.GhostEvent object at 0x7fe0fd205a30>, <progress.module.GhostEvent object at 0x7fe0fd205a60>, <progress.module.GhostEvent object at 0x7fe0fd205a90>, <progress.module.GhostEvent object at 0x7fe0fd205ac0>, <progress.module.GhostEvent object at 0x7fe0fd205af0>, <progress.module.GhostEvent object at 0x7fe0fd205b20>, <progress.module.GhostEvent object at 0x7fe0fd205b50>, <progress.module.GhostEvent object at 0x7fe0fd205b80>, <progress.module.GhostEvent object at 0x7fe0fd205bb0>, <progress.module.GhostEvent object at 0x7fe0fd205be0>, <progress.module.GhostEvent object at 0x7fe0fd205c10>, <progress.module.GhostEvent object at 0x7fe0fd205c40>] historic events
Mar  1 04:45:17 np0005634532 ceph-mgr[76134]: [prometheus DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Mar  1 04:45:17 np0005634532 ceph-mgr[76134]: [progress INFO root] Loaded OSDMap, ready.
Mar  1 04:45:17 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/prometheus/health_history}] v 0)
Mar  1 04:45:17 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 04:45:17 np0005634532 ceph-mgr[76134]: mgr load Constructed class from module: prometheus
Mar  1 04:45:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] _maybe_adjust
Mar  1 04:45:17 np0005634532 ceph-mgr[76134]: [prometheus INFO root] server_addr: :: server_port: 9283
Mar  1 04:45:17 np0005634532 ceph-mgr[76134]: [prometheus INFO root] Cache enabled
Mar  1 04:45:17 np0005634532 ceph-mgr[76134]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Mar  1 04:45:17 np0005634532 ceph-mgr[76134]: [prometheus INFO root] starting metric collection thread
Mar  1 04:45:17 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Mar  1 04:45:17 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Mar  1 04:45:17 np0005634532 ceph-osd[84309]: log_channel(cluster) log [DBG] : 8.e scrub starts
Mar  1 04:45:17 np0005634532 ceph-mgr[76134]: [prometheus INFO root] Starting engine...
Mar  1 04:45:17 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: [01/Mar/2026:09:45:17] ENGINE Bus STARTING
Mar  1 04:45:17 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] recovery thread starting
Mar  1 04:45:17 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] starting setup
Mar  1 04:45:17 np0005634532 ceph-mgr[76134]: [prometheus INFO cherrypy.error] [01/Mar/2026:09:45:17] ENGINE Bus STARTING
Mar  1 04:45:17 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: CherryPy Checker:
Mar  1 04:45:17 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: The Application mounted at '' has an empty config.
Mar  1 04:45:17 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: 
Mar  1 04:45:17 np0005634532 ceph-mgr[76134]: mgr load Constructed class from module: rbd_support
Mar  1 04:45:17 np0005634532 ceph-mgr[76134]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Mar  1 04:45:17 np0005634532 ceph-mgr[76134]: mgr load Constructed class from module: restful
Mar  1 04:45:17 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.ebwufc/mirror_snapshot_schedule"} v 0)
Mar  1 04:45:17 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.ebwufc/mirror_snapshot_schedule"}]: dispatch
Mar  1 04:45:17 np0005634532 ceph-osd[84309]: log_channel(cluster) log [DBG] : 8.e scrub ok
Mar  1 04:45:17 np0005634532 ceph-mgr[76134]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Mar  1 04:45:17 np0005634532 ceph-mgr[76134]: mgr load Constructed class from module: status
Mar  1 04:45:17 np0005634532 ceph-mgr[76134]: [restful INFO root] server_addr: :: server_port: 8003
Mar  1 04:45:17 np0005634532 ceph-mgr[76134]: [restful WARNING root] server not running: no certificate configured
Mar  1 04:45:17 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Mar  1 04:45:17 np0005634532 ceph-mgr[76134]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Mar  1 04:45:17 np0005634532 ceph-mgr[76134]: mgr load Constructed class from module: telemetry
Mar  1 04:45:17 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: vms, start_after=
Mar  1 04:45:17 np0005634532 ceph-mgr[76134]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Mar  1 04:45:17 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: volumes, start_after=
Mar  1 04:45:17 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: backups, start_after=
Mar  1 04:45:17 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: images, start_after=
Mar  1 04:45:17 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Mar  1 04:45:17 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] PerfHandler: starting
Mar  1 04:45:17 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_task_task: vms, start_after=
Mar  1 04:45:17 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_task_task: volumes, start_after=
Mar  1 04:45:17 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_task_task: backups, start_after=
Mar  1 04:45:17 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_task_task: images, start_after=
Mar  1 04:45:17 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.ebwufc/trash_purge_schedule"} v 0)
Mar  1 04:45:17 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.ebwufc/trash_purge_schedule"}]: dispatch
Mar  1 04:45:17 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] TaskHandler: starting
Mar  1 04:45:17 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Mar  1 04:45:17 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: vms, start_after=
Mar  1 04:45:17 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: volumes, start_after=
Mar  1 04:45:17 np0005634532 ceph-mon[75825]: Active manager daemon compute-0.ebwufc restarted
Mar  1 04:45:17 np0005634532 ceph-mon[75825]: Activating manager daemon compute-0.ebwufc
Mar  1 04:45:17 np0005634532 ceph-mon[75825]: Manager daemon compute-0.ebwufc is now available
Mar  1 04:45:17 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 04:45:17 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.ebwufc/mirror_snapshot_schedule"}]: dispatch
Mar  1 04:45:17 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: backups, start_after=
Mar  1 04:45:17 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: images, start_after=
Mar  1 04:45:17 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Mar  1 04:45:17 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] setup complete
Mar  1 04:45:17 np0005634532 ceph-mgr[76134]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Mar  1 04:45:17 np0005634532 ceph-mgr[76134]: [volumes INFO volumes.fs.async_job] queuing job for volume 'cephfs'
Mar  1 04:45:17 np0005634532 ceph-mgr[76134]: mgr load Constructed class from module: volumes
Mar  1 04:45:17 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: [01/Mar/2026:09:45:17] ENGINE Serving on http://:::9283
Mar  1 04:45:17 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: 2026-03-01T09:45:17.656+0000 7fe0dbd96640 -1 client.0 error registering admin socket command: (17) File exists
Mar  1 04:45:17 np0005634532 ceph-mgr[76134]: client.0 error registering admin socket command: (17) File exists
Mar  1 04:45:17 np0005634532 ceph-mgr[76134]: [prometheus INFO cherrypy.error] [01/Mar/2026:09:45:17] ENGINE Serving on http://:::9283
Mar  1 04:45:17 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: [01/Mar/2026:09:45:17] ENGINE Bus STARTED
Mar  1 04:45:17 np0005634532 ceph-mgr[76134]: [prometheus INFO cherrypy.error] [01/Mar/2026:09:45:17] ENGINE Bus STARTED
Mar  1 04:45:17 np0005634532 ceph-mgr[76134]: [prometheus INFO root] Engine started.
Mar  1 04:45:17 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: 2026-03-01T09:45:17.660+0000 7fe0df59d640 -1 client.0 error registering admin socket command: (17) File exists
Mar  1 04:45:17 np0005634532 ceph-mgr[76134]: client.0 error registering admin socket command: (17) File exists
Mar  1 04:45:17 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: 2026-03-01T09:45:17.660+0000 7fe0df59d640 -1 client.0 error registering admin socket command: (17) File exists
Mar  1 04:45:17 np0005634532 ceph-mgr[76134]: client.0 error registering admin socket command: (17) File exists
Mar  1 04:45:17 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: 2026-03-01T09:45:17.660+0000 7fe0df59d640 -1 client.0 error registering admin socket command: (17) File exists
Mar  1 04:45:17 np0005634532 ceph-mgr[76134]: client.0 error registering admin socket command: (17) File exists
Mar  1 04:45:17 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: 2026-03-01T09:45:17.660+0000 7fe0df59d640 -1 client.0 error registering admin socket command: (17) File exists
Mar  1 04:45:17 np0005634532 ceph-mgr[76134]: client.0 error registering admin socket command: (17) File exists
Mar  1 04:45:17 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: 2026-03-01T09:45:17.660+0000 7fe0df59d640 -1 client.0 error registering admin socket command: (17) File exists
Mar  1 04:45:17 np0005634532 ceph-mgr[76134]: client.0 error registering admin socket command: (17) File exists
Mar  1 04:45:17 np0005634532 systemd-logind[832]: New session 37 of user ceph-admin.
Mar  1 04:45:17 np0005634532 systemd[1]: Started Session 37 of User ceph-admin.
Mar  1 04:45:17 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:45:17 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:45:17 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:45:17.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:45:17 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFS -> /api/cephfs
Mar  1 04:45:17 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFsUi -> /ui-api/cephfs
Mar  1 04:45:17 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSubvolume -> /api/cephfs/subvolume
Mar  1 04:45:17 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSubvolumeGroups -> /api/cephfs/subvolume/group
Mar  1 04:45:17 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSubvolumeSnapshots -> /api/cephfs/subvolume/snapshot
Mar  1 04:45:17 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFsSnapshotClone -> /api/cephfs/subvolume/snapshot/clone
Mar  1 04:45:17 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CephFSSnapshotSchedule -> /api/cephfs/snapshot/schedule
Mar  1 04:45:17 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: IscsiUi -> /ui-api/iscsi
Mar  1 04:45:17 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Iscsi -> /api/iscsi
Mar  1 04:45:17 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: IscsiTarget -> /api/iscsi/target
Mar  1 04:45:17 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NFSGaneshaCluster -> /api/nfs-ganesha/cluster
Mar  1 04:45:17 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NFSGaneshaExports -> /api/nfs-ganesha/export
Mar  1 04:45:17 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NFSGaneshaUi -> /ui-api/nfs-ganesha
Mar  1 04:45:17 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Orchestrator -> /ui-api/orchestrator
Mar  1 04:45:17 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Service -> /api/service
Mar  1 04:45:17 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroring -> /api/block/mirroring
Mar  1 04:45:17 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringSummary -> /api/block/mirroring/summary
Mar  1 04:45:17 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringPoolMode -> /api/block/mirroring/pool
Mar  1 04:45:17 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringPoolBootstrap -> /api/block/mirroring/pool/{pool_name}/bootstrap
Mar  1 04:45:17 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringPoolPeer -> /api/block/mirroring/pool/{pool_name}/peer
Mar  1 04:45:17 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirroringStatus -> /ui-api/block/mirroring
Mar  1 04:45:17 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Pool -> /api/pool
Mar  1 04:45:17 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PoolUi -> /ui-api/pool
Mar  1 04:45:17 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RBDPool -> /api/pool
Mar  1 04:45:17 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Rbd -> /api/block/image
Mar  1 04:45:17 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdStatus -> /ui-api/block/rbd
Mar  1 04:45:17 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdSnapshot -> /api/block/image/{image_spec}/snap
Mar  1 04:45:17 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdTrash -> /api/block/image/trash
Mar  1 04:45:17 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdNamespace -> /api/block/pool/{pool_name}/namespace
Mar  1 04:45:17 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Rgw -> /ui-api/rgw
Mar  1 04:45:17 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwMultisiteStatus -> /ui-api/rgw/multisite
Mar  1 04:45:17 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwMultisiteController -> /api/rgw/multisite
Mar  1 04:45:17 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwDaemon -> /api/rgw/daemon
Mar  1 04:45:17 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwSite -> /api/rgw/site
Mar  1 04:45:17 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwBucket -> /api/rgw/bucket
Mar  1 04:45:17 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwBucketUi -> /ui-api/rgw/bucket
Mar  1 04:45:17 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwUser -> /api/rgw/user
Mar  1 04:45:17 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: rgwroles_CRUDClass -> /api/rgw/roles
Mar  1 04:45:17 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: rgwroles_CRUDClassMetadata -> /ui-api/rgw/roles
Mar  1 04:45:17 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwRealm -> /api/rgw/realm
Mar  1 04:45:17 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwZonegroup -> /api/rgw/zonegroup
Mar  1 04:45:17 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwZone -> /api/rgw/zone
Mar  1 04:45:17 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Auth -> /api/auth
Mar  1 04:45:17 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: clusteruser_CRUDClass -> /api/cluster/user
Mar  1 04:45:17 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: clusteruser_CRUDClassMetadata -> /ui-api/cluster/user
Mar  1 04:45:17 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Cluster -> /api/cluster
Mar  1 04:45:17 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ClusterUpgrade -> /api/cluster/upgrade
Mar  1 04:45:17 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ClusterConfiguration -> /api/cluster_conf
Mar  1 04:45:17 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CrushRule -> /api/crush_rule
Mar  1 04:45:17 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: CrushRuleUi -> /ui-api/crush_rule
Mar  1 04:45:17 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Daemon -> /api/daemon
Mar  1 04:45:17 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Docs -> /docs
Mar  1 04:45:17 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ErasureCodeProfile -> /api/erasure_code_profile
Mar  1 04:45:17 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: ErasureCodeProfileUi -> /ui-api/erasure_code_profile
Mar  1 04:45:17 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeedbackController -> /api/feedback
Mar  1 04:45:17 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeedbackApiController -> /api/feedback/api_key
Mar  1 04:45:17 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeedbackUiController -> /ui-api/feedback/api_key
Mar  1 04:45:17 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FrontendLogging -> /ui-api/logging
Mar  1 04:45:17 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Grafana -> /api/grafana
Mar  1 04:45:17 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Host -> /api/host
Mar  1 04:45:17 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: HostUi -> /ui-api/host
Mar  1 04:45:17 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Health -> /api/health
Mar  1 04:45:17 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: HomeController -> /
Mar  1 04:45:17 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: LangsController -> /ui-api/langs
Mar  1 04:45:17 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: LoginController -> /ui-api/login
Mar  1 04:45:17 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Logs -> /api/logs
Mar  1 04:45:17 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MgrModules -> /api/mgr/module
Mar  1 04:45:17 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Monitor -> /api/monitor
Mar  1 04:45:17 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFGateway -> /api/nvmeof/gateway
Mar  1 04:45:17 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFSpdk -> /api/nvmeof/spdk
Mar  1 04:45:17 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFSubsystem -> /api/nvmeof/subsystem
Mar  1 04:45:17 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFListener -> /api/nvmeof/subsystem/{nqn}/listener
Mar  1 04:45:17 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFNamespace -> /api/nvmeof/subsystem/{nqn}/namespace
Mar  1 04:45:17 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFHost -> /api/nvmeof/subsystem/{nqn}/host
Mar  1 04:45:17 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFConnection -> /api/nvmeof/subsystem/{nqn}/connection
Mar  1 04:45:17 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: NVMeoFTcpUI -> /ui-api/nvmeof
Mar  1 04:45:17 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Osd -> /api/osd
Mar  1 04:45:17 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: OsdUi -> /ui-api/osd
Mar  1 04:45:17 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: OsdFlagsController -> /api/osd/flags
Mar  1 04:45:17 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MdsPerfCounter -> /api/perf_counters/mds
Mar  1 04:45:17 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MonPerfCounter -> /api/perf_counters/mon
Mar  1 04:45:17 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: OsdPerfCounter -> /api/perf_counters/osd
Mar  1 04:45:17 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RgwPerfCounter -> /api/perf_counters/rgw
Mar  1 04:45:17 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: RbdMirrorPerfCounter -> /api/perf_counters/rbd-mirror
Mar  1 04:45:17 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MgrPerfCounter -> /api/perf_counters/mgr
Mar  1 04:45:17 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: TcmuRunnerPerfCounter -> /api/perf_counters/tcmu-runner
Mar  1 04:45:17 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PerfCounters -> /api/perf_counters
Mar  1 04:45:17 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PrometheusReceiver -> /api/prometheus_receiver
Mar  1 04:45:17 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Prometheus -> /api/prometheus
Mar  1 04:45:17 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PrometheusNotifications -> /api/prometheus/notifications
Mar  1 04:45:17 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: PrometheusSettings -> /ui-api/prometheus
Mar  1 04:45:17 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Role -> /api/role
Mar  1 04:45:17 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Scope -> /ui-api/scope
Mar  1 04:45:17 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Saml2 -> /auth/saml2
Mar  1 04:45:17 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Settings -> /api/settings
Mar  1 04:45:17 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: StandardSettings -> /ui-api/standard_settings
Mar  1 04:45:17 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Summary -> /api/summary
Mar  1 04:45:17 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Task -> /api/task
Mar  1 04:45:17 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: Telemetry -> /api/telemetry
Mar  1 04:45:17 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: User -> /api/user
Mar  1 04:45:17 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: UserPasswordPolicy -> /api/user
Mar  1 04:45:17 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: UserChangePassword -> /api/user/{username}
Mar  1 04:45:17 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: FeatureTogglesEndpoint -> /api/feature_toggles
Mar  1 04:45:17 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.controllers._base_controller] Initializing controller: MessageOfTheDay -> /ui-api/motd
Mar  1 04:45:18 np0005634532 ceph-mgr[76134]: [dashboard INFO dashboard.module] Engine started.
Mar  1 04:45:18 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:45:18 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:45:18 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:45:18.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:45:18 np0005634532 ceph-mon[75825]: log_channel(cluster) log [DBG] : mgrmap e31: compute-0.ebwufc(active, since 1.05277s), standbys: compute-2.dikzlj, compute-1.uyojxx
Mar  1 04:45:18 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v3: 353 pgs: 353 active+clean; 457 KiB data, 107 MiB used, 60 GiB / 60 GiB avail
Mar  1 04:45:18 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[98708]: 01/03/2026 09:45:18 : epoch 69a40a75 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f450c0035d0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:45:18 np0005634532 ceph-osd[84309]: log_channel(cluster) log [DBG] : 11.2 scrub starts
Mar  1 04:45:18 np0005634532 ceph-osd[84309]: log_channel(cluster) log [DBG] : 11.2 scrub ok
Mar  1 04:45:18 np0005634532 podman[102376]: 2026-03-01 09:45:18.622945457 +0000 UTC m=+0.066101335 container exec 6664049ace048b4adddae1365cde7c16773d892ae197c1350f58a5e5b1183392 (image=quay.io/ceph/ceph:v19, name=ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mon-compute-0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Mar  1 04:45:18 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.ebwufc/trash_purge_schedule"}]: dispatch
Mar  1 04:45:18 np0005634532 ceph-mgr[76134]: [cephadm INFO cherrypy.error] [01/Mar/2026:09:45:18] ENGINE Bus STARTING
Mar  1 04:45:18 np0005634532 ceph-mgr[76134]: log_channel(cephadm) log [INF] : [01/Mar/2026:09:45:18] ENGINE Bus STARTING
Mar  1 04:45:18 np0005634532 podman[102376]: 2026-03-01 09:45:18.743359628 +0000 UTC m=+0.186515506 container exec_died 6664049ace048b4adddae1365cde7c16773d892ae197c1350f58a5e5b1183392 (image=quay.io/ceph/ceph:v19, name=ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Mar  1 04:45:18 np0005634532 ceph-mgr[76134]: [cephadm INFO cherrypy.error] [01/Mar/2026:09:45:18] ENGINE Serving on http://192.168.122.100:8765
Mar  1 04:45:18 np0005634532 ceph-mgr[76134]: log_channel(cephadm) log [INF] : [01/Mar/2026:09:45:18] ENGINE Serving on http://192.168.122.100:8765
Mar  1 04:45:18 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[98708]: 01/03/2026 09:45:18 : epoch 69a40a75 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4504002f00 fd 49 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:45:18 np0005634532 ceph-mgr[76134]: [cephadm INFO cherrypy.error] [01/Mar/2026:09:45:18] ENGINE Serving on https://192.168.122.100:7150
Mar  1 04:45:18 np0005634532 ceph-mgr[76134]: log_channel(cephadm) log [INF] : [01/Mar/2026:09:45:18] ENGINE Serving on https://192.168.122.100:7150
Mar  1 04:45:18 np0005634532 ceph-mgr[76134]: [cephadm INFO cherrypy.error] [01/Mar/2026:09:45:18] ENGINE Bus STARTED
Mar  1 04:45:18 np0005634532 ceph-mgr[76134]: log_channel(cephadm) log [INF] : [01/Mar/2026:09:45:18] ENGINE Bus STARTED
Mar  1 04:45:18 np0005634532 ceph-mgr[76134]: [cephadm INFO cherrypy.error] [01/Mar/2026:09:45:18] ENGINE Client ('192.168.122.100', 45046) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Mar  1 04:45:18 np0005634532 ceph-mgr[76134]: log_channel(cephadm) log [INF] : [01/Mar/2026:09:45:18] ENGINE Client ('192.168.122.100', 45046) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Mar  1 04:45:19 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[98708]: 01/03/2026 09:45:19 : epoch 69a40a75 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4500003c10 fd 49 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:45:19 np0005634532 podman[102539]: 2026-03-01 09:45:19.290201042 +0000 UTC m=+0.068420433 container exec e104fed6cb8ecd593791384f3650da41ba603514e3f5be77683b4a91426bfe16 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Mar  1 04:45:19 np0005634532 podman[102539]: 2026-03-01 09:45:19.303497737 +0000 UTC m=+0.081717088 container exec_died e104fed6cb8ecd593791384f3650da41ba603514e3f5be77683b4a91426bfe16 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Mar  1 04:45:19 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v4: 353 pgs: 353 active+clean; 457 KiB data, 107 MiB used, 60 GiB / 60 GiB avail
Mar  1 04:45:19 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"} v 0)
Mar  1 04:45:19 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]: dispatch
Mar  1 04:45:19 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[98708]: 01/03/2026 09:45:19 : epoch 69a40a75 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Mar  1 04:45:19 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[98708]: 01/03/2026 09:45:19 : epoch 69a40a75 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Mar  1 04:45:19 np0005634532 ceph-mgr[76134]: [devicehealth INFO root] Check health
Mar  1 04:45:19 np0005634532 ceph-osd[84309]: log_channel(cluster) log [DBG] : 8.1 deep-scrub starts
Mar  1 04:45:19 np0005634532 ceph-osd[84309]: log_channel(cluster) log [DBG] : 8.1 deep-scrub ok
Mar  1 04:45:19 np0005634532 podman[102625]: 2026-03-01 09:45:19.585989557 +0000 UTC m=+0.064147476 container exec ddbb100a053bd1c5872d5920a93f96a6167721638261082337a0485339967db0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Mar  1 04:45:19 np0005634532 podman[102625]: 2026-03-01 09:45:19.624407734 +0000 UTC m=+0.102565603 container exec_died ddbb100a053bd1c5872d5920a93f96a6167721638261082337a0485339967db0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/)
Mar  1 04:45:19 np0005634532 ceph-mon[75825]: log_channel(cluster) log [DBG] : mgrmap e32: compute-0.ebwufc(active, since 2s), standbys: compute-2.dikzlj, compute-1.uyojxx
Mar  1 04:45:19 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]: dispatch
Mar  1 04:45:19 np0005634532 podman[102689]: 2026-03-01 09:45:19.829109636 +0000 UTC m=+0.055982550 container exec ae199754250e4490488a372482082794890684d286b81ae5dbba7ee71da86544 (image=quay.io/ceph/haproxy:2.3, name=ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-haproxy-nfs-cephfs-compute-0-wdbjdw)
Mar  1 04:45:19 np0005634532 podman[102689]: 2026-03-01 09:45:19.842452522 +0000 UTC m=+0.069325416 container exec_died ae199754250e4490488a372482082794890684d286b81ae5dbba7ee71da86544 (image=quay.io/ceph/haproxy:2.3, name=ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-haproxy-nfs-cephfs-compute-0-wdbjdw)
Mar  1 04:45:19 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:45:19 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:45:19 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:45:19.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:45:20 np0005634532 podman[102755]: 2026-03-01 09:45:20.009328173 +0000 UTC m=+0.047554708 container exec 05be6442ca5291b879ba04bc347755b274cffc38684767a6255c033b9adb2149 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-keepalived-nfs-cephfs-compute-0-qbujzh, summary=Provides keepalived on RHEL 9 for Ceph., vcs-type=git, distribution-scope=public, io.openshift.expose-services=, name=keepalived, release=1793, architecture=x86_64, build-date=2023-02-22T09:23:20, io.openshift.tags=Ceph keepalived, vendor=Red Hat, Inc., com.redhat.component=keepalived-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.buildah.version=1.28.2, description=keepalived for Ceph, version=2.2.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Keepalived on RHEL 9, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9)
Mar  1 04:45:20 np0005634532 podman[102755]: 2026-03-01 09:45:20.021202872 +0000 UTC m=+0.059429377 container exec_died 05be6442ca5291b879ba04bc347755b274cffc38684767a6255c033b9adb2149 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-keepalived-nfs-cephfs-compute-0-qbujzh, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=keepalived, architecture=x86_64, vendor=Red Hat, Inc., build-date=2023-02-22T09:23:20, io.buildah.version=1.28.2, vcs-type=git, distribution-scope=public, com.redhat.component=keepalived-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, description=keepalived for Ceph, io.openshift.tags=Ceph keepalived, version=2.2.4, io.openshift.expose-services=, io.k8s.display-name=Keepalived on RHEL 9, release=1793, summary=Provides keepalived on RHEL 9 for Ceph.)
Mar  1 04:45:20 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Mar  1 04:45:20 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 04:45:20 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Mar  1 04:45:20 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 04:45:20 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e64 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Mar  1 04:45:20 np0005634532 podman[102819]: 2026-03-01 09:45:20.225037472 +0000 UTC m=+0.057890348 container exec 79aaca671f71fae62bc8768d70f996bd09d03a5082fcac359db10cb2ffb3e479 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Mar  1 04:45:20 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:45:20 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:45:20 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:45:20.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:45:20 np0005634532 podman[102819]: 2026-03-01 09:45:20.257282424 +0000 UTC m=+0.090135210 container exec_died 79aaca671f71fae62bc8768d70f996bd09d03a5082fcac359db10cb2ffb3e479 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Mar  1 04:45:20 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Mar  1 04:45:20 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 04:45:20 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Mar  1 04:45:20 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 04:45:20 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e64 do_prune osdmap full prune enabled
Mar  1 04:45:20 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]': finished
Mar  1 04:45:20 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e65 e65: 3 total, 3 up, 3 in
Mar  1 04:45:20 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 65 pg[9.15( v 42'1010 (0'0,42'1010] local-lis/les=52/53 n=4 ec=52/35 lis/c=52/52 les/c/f=53/53/0 sis=65 pruub=9.563079834s) [2] r=-1 lpr=65 pi=[52,65)/1 crt=42'1010 lcod 0'0 mlcod 0'0 active pruub 189.735015869s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Mar  1 04:45:20 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 65 pg[9.15( v 42'1010 (0'0,42'1010] local-lis/les=52/53 n=4 ec=52/35 lis/c=52/52 les/c/f=53/53/0 sis=65 pruub=9.562994957s) [2] r=-1 lpr=65 pi=[52,65)/1 crt=42'1010 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 189.735015869s@ mbc={}] state<Start>: transitioning to Stray
Mar  1 04:45:20 np0005634532 ceph-mon[75825]: log_channel(cluster) log [DBG] : osdmap e65: 3 total, 3 up, 3 in
Mar  1 04:45:20 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 65 pg[9.d( v 42'1010 (0'0,42'1010] local-lis/les=52/53 n=6 ec=52/35 lis/c=52/52 les/c/f=53/53/0 sis=65 pruub=9.562684059s) [2] r=-1 lpr=65 pi=[52,65)/1 crt=42'1010 lcod 0'0 mlcod 0'0 active pruub 189.735916138s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Mar  1 04:45:20 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 65 pg[9.d( v 42'1010 (0'0,42'1010] local-lis/les=52/53 n=6 ec=52/35 lis/c=52/52 les/c/f=53/53/0 sis=65 pruub=9.562626839s) [2] r=-1 lpr=65 pi=[52,65)/1 crt=42'1010 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 189.735916138s@ mbc={}] state<Start>: transitioning to Stray
Mar  1 04:45:20 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 65 pg[9.5( v 55'1013 (0'0,55'1013] local-lis/les=52/53 n=6 ec=52/35 lis/c=52/52 les/c/f=53/53/0 sis=65 pruub=9.562404633s) [2] r=-1 lpr=65 pi=[52,65)/1 crt=53'1011 lcod 55'1012 mlcod 55'1012 active pruub 189.736236572s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Mar  1 04:45:20 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 65 pg[9.5( v 55'1013 (0'0,55'1013] local-lis/les=52/53 n=6 ec=52/35 lis/c=52/52 les/c/f=53/53/0 sis=65 pruub=9.562209129s) [2] r=-1 lpr=65 pi=[52,65)/1 crt=53'1011 lcod 55'1012 mlcod 0'0 unknown NOTIFY pruub 189.736236572s@ mbc={}] state<Start>: transitioning to Stray
Mar  1 04:45:20 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 65 pg[9.1d( v 42'1010 (0'0,42'1010] local-lis/les=52/53 n=5 ec=52/35 lis/c=52/52 les/c/f=53/53/0 sis=65 pruub=9.566472054s) [2] r=-1 lpr=65 pi=[52,65)/1 crt=42'1010 lcod 0'0 mlcod 0'0 active pruub 189.740966797s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Mar  1 04:45:20 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 65 pg[9.1d( v 42'1010 (0'0,42'1010] local-lis/les=52/53 n=5 ec=52/35 lis/c=52/52 les/c/f=53/53/0 sis=65 pruub=9.566415787s) [2] r=-1 lpr=65 pi=[52,65)/1 crt=42'1010 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 189.740966797s@ mbc={}] state<Start>: transitioning to Stray
Mar  1 04:45:20 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[98708]: 01/03/2026 09:45:20 : epoch 69a40a75 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f451c002520 fd 49 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:45:20 np0005634532 podman[102895]: 2026-03-01 09:45:20.518251301 +0000 UTC m=+0.071862669 container exec b49a0763a78d98627ed91050fb560d2f12730abc25668f8d4e65a84ba776d2c6 (image=quay.io/ceph/grafana:10.4.0, name=ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Mar  1 04:45:20 np0005634532 ceph-osd[84309]: log_channel(cluster) log [DBG] : 8.7 scrub starts
Mar  1 04:45:20 np0005634532 ceph-osd[84309]: log_channel(cluster) log [DBG] : 8.7 scrub ok
Mar  1 04:45:20 np0005634532 podman[102895]: 2026-03-01 09:45:20.692894497 +0000 UTC m=+0.246505805 container exec_died b49a0763a78d98627ed91050fb560d2f12730abc25668f8d4e65a84ba776d2c6 (image=quay.io/ceph/grafana:10.4.0, name=ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Mar  1 04:45:20 np0005634532 ceph-mon[75825]: [01/Mar/2026:09:45:18] ENGINE Bus STARTING
Mar  1 04:45:20 np0005634532 ceph-mon[75825]: [01/Mar/2026:09:45:18] ENGINE Serving on http://192.168.122.100:8765
Mar  1 04:45:20 np0005634532 ceph-mon[75825]: [01/Mar/2026:09:45:18] ENGINE Serving on https://192.168.122.100:7150
Mar  1 04:45:20 np0005634532 ceph-mon[75825]: [01/Mar/2026:09:45:18] ENGINE Bus STARTED
Mar  1 04:45:20 np0005634532 ceph-mon[75825]: [01/Mar/2026:09:45:18] ENGINE Client ('192.168.122.100', 45046) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Mar  1 04:45:20 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 04:45:20 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 04:45:20 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 04:45:20 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 04:45:20 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]': finished
Mar  1 04:45:20 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[98708]: 01/03/2026 09:45:20 : epoch 69a40a75 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f450c0035d0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:45:21 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Mar  1 04:45:21 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 04:45:21 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Mar  1 04:45:21 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 04:45:21 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0)
Mar  1 04:45:21 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Mar  1 04:45:21 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[98708]: 01/03/2026 09:45:21 : epoch 69a40a75 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4504004000 fd 49 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:45:21 np0005634532 podman[103009]: 2026-03-01 09:45:21.11333128 +0000 UTC m=+0.060205547 container exec 225e705b62c2d5344d04ef96534eb44bff62165a23fc4cc7bc75fa8452e91ac1 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Mar  1 04:45:21 np0005634532 podman[103009]: 2026-03-01 09:45:21.155746787 +0000 UTC m=+0.102621094 container exec_died 225e705b62c2d5344d04ef96534eb44bff62165a23fc4cc7bc75fa8452e91ac1 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Mar  1 04:45:21 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Mar  1 04:45:21 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 04:45:21 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Mar  1 04:45:21 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 04:45:21 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v6: 353 pgs: 353 active+clean; 457 KiB data, 107 MiB used, 60 GiB / 60 GiB avail
Mar  1 04:45:21 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"} v 0)
Mar  1 04:45:21 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]: dispatch
Mar  1 04:45:21 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e65 do_prune osdmap full prune enabled
Mar  1 04:45:21 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Mar  1 04:45:21 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]': finished
Mar  1 04:45:21 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e66 e66: 3 total, 3 up, 3 in
Mar  1 04:45:21 np0005634532 ceph-mon[75825]: log_channel(cluster) log [DBG] : osdmap e66: 3 total, 3 up, 3 in
Mar  1 04:45:21 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 66 pg[9.15( v 42'1010 (0'0,42'1010] local-lis/les=52/53 n=4 ec=52/35 lis/c=52/52 les/c/f=53/53/0 sis=66) [2]/[0] r=0 lpr=66 pi=[52,66)/1 crt=42'1010 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Mar  1 04:45:21 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 66 pg[9.15( v 42'1010 (0'0,42'1010] local-lis/les=52/53 n=4 ec=52/35 lis/c=52/52 les/c/f=53/53/0 sis=66) [2]/[0] r=0 lpr=66 pi=[52,66)/1 crt=42'1010 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Mar  1 04:45:21 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 66 pg[9.16( v 42'1010 (0'0,42'1010] local-lis/les=52/53 n=5 ec=52/35 lis/c=52/52 les/c/f=53/53/0 sis=66 pruub=8.481793404s) [1] r=-1 lpr=66 pi=[52,66)/1 crt=42'1010 lcod 0'0 mlcod 0'0 active pruub 189.735397339s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Mar  1 04:45:21 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 66 pg[9.16( v 42'1010 (0'0,42'1010] local-lis/les=52/53 n=5 ec=52/35 lis/c=52/52 les/c/f=53/53/0 sis=66 pruub=8.481751442s) [1] r=-1 lpr=66 pi=[52,66)/1 crt=42'1010 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 189.735397339s@ mbc={}] state<Start>: transitioning to Stray
Mar  1 04:45:21 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 66 pg[9.e( v 42'1010 (0'0,42'1010] local-lis/les=52/53 n=6 ec=52/35 lis/c=52/52 les/c/f=53/53/0 sis=66 pruub=8.481777191s) [1] r=-1 lpr=66 pi=[52,66)/1 crt=42'1010 lcod 0'0 mlcod 0'0 active pruub 189.735595703s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Mar  1 04:45:21 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 66 pg[9.e( v 42'1010 (0'0,42'1010] local-lis/les=52/53 n=6 ec=52/35 lis/c=52/52 les/c/f=53/53/0 sis=66 pruub=8.481553078s) [1] r=-1 lpr=66 pi=[52,66)/1 crt=42'1010 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 189.735595703s@ mbc={}] state<Start>: transitioning to Stray
Mar  1 04:45:21 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 66 pg[9.d( v 42'1010 (0'0,42'1010] local-lis/les=52/53 n=6 ec=52/35 lis/c=52/52 les/c/f=53/53/0 sis=66) [2]/[0] r=0 lpr=66 pi=[52,66)/1 crt=42'1010 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Mar  1 04:45:21 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 66 pg[9.d( v 42'1010 (0'0,42'1010] local-lis/les=52/53 n=6 ec=52/35 lis/c=52/52 les/c/f=53/53/0 sis=66) [2]/[0] r=0 lpr=66 pi=[52,66)/1 crt=42'1010 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Mar  1 04:45:21 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 66 pg[9.5( v 55'1013 (0'0,55'1013] local-lis/les=52/53 n=6 ec=52/35 lis/c=52/52 les/c/f=53/53/0 sis=66) [2]/[0] r=0 lpr=66 pi=[52,66)/1 crt=53'1011 lcod 55'1012 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Mar  1 04:45:21 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 66 pg[9.6( v 42'1010 (0'0,42'1010] local-lis/les=52/53 n=6 ec=52/35 lis/c=52/52 les/c/f=53/53/0 sis=66 pruub=8.481234550s) [1] r=-1 lpr=66 pi=[52,66)/1 crt=42'1010 lcod 0'0 mlcod 0'0 active pruub 189.736007690s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Mar  1 04:45:21 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 66 pg[9.5( v 55'1013 (0'0,55'1013] local-lis/les=52/53 n=6 ec=52/35 lis/c=52/52 les/c/f=53/53/0 sis=66) [2]/[0] r=0 lpr=66 pi=[52,66)/1 crt=53'1011 lcod 55'1012 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Mar  1 04:45:21 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 66 pg[9.6( v 42'1010 (0'0,42'1010] local-lis/les=52/53 n=6 ec=52/35 lis/c=52/52 les/c/f=53/53/0 sis=66 pruub=8.481213570s) [1] r=-1 lpr=66 pi=[52,66)/1 crt=42'1010 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 189.736007690s@ mbc={}] state<Start>: transitioning to Stray
Mar  1 04:45:21 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 66 pg[9.1e( v 42'1010 (0'0,42'1010] local-lis/les=52/53 n=5 ec=52/35 lis/c=52/52 les/c/f=53/53/0 sis=66 pruub=8.484675407s) [1] r=-1 lpr=66 pi=[52,66)/1 crt=42'1010 lcod 0'0 mlcod 0'0 active pruub 189.739852905s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Mar  1 04:45:21 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 66 pg[9.1e( v 42'1010 (0'0,42'1010] local-lis/les=52/53 n=5 ec=52/35 lis/c=52/52 les/c/f=53/53/0 sis=66 pruub=8.484655380s) [1] r=-1 lpr=66 pi=[52,66)/1 crt=42'1010 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 189.739852905s@ mbc={}] state<Start>: transitioning to Stray
Mar  1 04:45:21 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 66 pg[9.1d( v 42'1010 (0'0,42'1010] local-lis/les=52/53 n=5 ec=52/35 lis/c=52/52 les/c/f=53/53/0 sis=66) [2]/[0] r=0 lpr=66 pi=[52,66)/1 crt=42'1010 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Mar  1 04:45:21 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 66 pg[9.1d( v 42'1010 (0'0,42'1010] local-lis/les=52/53 n=5 ec=52/35 lis/c=52/52 les/c/f=53/53/0 sis=66) [2]/[0] r=0 lpr=66 pi=[52,66)/1 crt=42'1010 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Mar  1 04:45:21 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 04:45:21 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Mar  1 04:45:21 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 04:45:21 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0)
Mar  1 04:45:21 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Mar  1 04:45:21 np0005634532 ceph-osd[84309]: log_channel(cluster) log [DBG] : 8.0 scrub starts
Mar  1 04:45:21 np0005634532 ceph-osd[84309]: log_channel(cluster) log [DBG] : 8.0 scrub ok
Mar  1 04:45:21 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 04:45:21 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 04:45:21 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Mar  1 04:45:21 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 04:45:21 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 04:45:21 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]: dispatch
Mar  1 04:45:21 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]': finished
Mar  1 04:45:21 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 04:45:21 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 04:45:21 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Mar  1 04:45:21 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:45:21 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:45:21 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:45:21.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:45:22 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:45:22 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:45:22 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:45:22.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:45:22 np0005634532 ceph-mon[75825]: log_channel(cluster) log [DBG] : mgrmap e33: compute-0.ebwufc(active, since 4s), standbys: compute-2.dikzlj, compute-1.uyojxx
Mar  1 04:45:22 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Mar  1 04:45:22 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 04:45:22 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Mar  1 04:45:22 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 04:45:22 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Mar  1 04:45:22 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Mar  1 04:45:22 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Mar  1 04:45:22 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Mar  1 04:45:22 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Mar  1 04:45:22 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Mar  1 04:45:22 np0005634532 ceph-mon[75825]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #21. Immutable memtables: 0.
Mar  1 04:45:22 np0005634532 ceph-mgr[76134]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.conf
Mar  1 04:45:22 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-09:45:22.366024) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Mar  1 04:45:22 np0005634532 ceph-mon[75825]: rocksdb: [db/flush_job.cc:856] [default] [JOB 5] Flushing memtable with next log file: 21
Mar  1 04:45:22 np0005634532 ceph-mon[75825]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772358322366082, "job": 5, "event": "flush_started", "num_memtables": 1, "num_entries": 901, "num_deletes": 251, "total_data_size": 2823597, "memory_usage": 2927872, "flush_reason": "Manual Compaction"}
Mar  1 04:45:22 np0005634532 ceph-mon[75825]: rocksdb: [db/flush_job.cc:885] [default] [JOB 5] Level-0 flush table #22: started
Mar  1 04:45:22 np0005634532 ceph-mgr[76134]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.conf
Mar  1 04:45:22 np0005634532 ceph-mgr[76134]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.conf
Mar  1 04:45:22 np0005634532 ceph-mgr[76134]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.conf
Mar  1 04:45:22 np0005634532 ceph-mgr[76134]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.conf
Mar  1 04:45:22 np0005634532 ceph-mgr[76134]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.conf
Mar  1 04:45:22 np0005634532 ceph-mon[75825]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772358322383279, "cf_name": "default", "job": 5, "event": "table_file_creation", "file_number": 22, "file_size": 2742340, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 7115, "largest_seqno": 8015, "table_properties": {"data_size": 2737465, "index_size": 2267, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1541, "raw_key_size": 13044, "raw_average_key_size": 21, "raw_value_size": 2726692, "raw_average_value_size": 4469, "num_data_blocks": 99, "num_entries": 610, "num_filter_entries": 610, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772358305, "oldest_key_time": 1772358305, "file_creation_time": 1772358322, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d85a3bc5-3dc5-432f-9fab-fa926ce32d3d", "db_session_id": "FJWJGIYC2V5ZEQGX709M", "orig_file_number": 22, "seqno_to_time_mapping": "N/A"}}
Mar  1 04:45:22 np0005634532 ceph-mon[75825]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 5] Flush lasted 17310 microseconds, and 8337 cpu microseconds.
Mar  1 04:45:22 np0005634532 ceph-mon[75825]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Mar  1 04:45:22 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-09:45:22.383336) [db/flush_job.cc:967] [default] [JOB 5] Level-0 flush table #22: 2742340 bytes OK
Mar  1 04:45:22 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-09:45:22.383363) [db/memtable_list.cc:519] [default] Level-0 commit table #22 started
Mar  1 04:45:22 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-09:45:22.384502) [db/memtable_list.cc:722] [default] Level-0 commit table #22: memtable #1 done
Mar  1 04:45:22 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-09:45:22.384521) EVENT_LOG_v1 {"time_micros": 1772358322384514, "job": 5, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Mar  1 04:45:22 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-09:45:22.384542) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Mar  1 04:45:22 np0005634532 ceph-mon[75825]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 5] Try to delete WAL files size 2818659, prev total WAL file size 2818659, number of live WAL files 2.
Mar  1 04:45:22 np0005634532 ceph-mon[75825]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000018.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Mar  1 04:45:22 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-09:45:22.385310) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730030' seq:72057594037927935, type:22 .. '7061786F7300323532' seq:0, type:0; will stop at (end)
Mar  1 04:45:22 np0005634532 ceph-mon[75825]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 6] Compacting 1@0 + 1@6 files to L6, score -1.00
Mar  1 04:45:22 np0005634532 ceph-mon[75825]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 5 Base level 0, inputs: [22(2678KB)], [20(11MB)]
Mar  1 04:45:22 np0005634532 ceph-mon[75825]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772358322385345, "job": 6, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [22], "files_L6": [20], "score": -1, "input_data_size": 15174235, "oldest_snapshot_seqno": -1}
Mar  1 04:45:22 np0005634532 ceph-mon[75825]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 6] Generated table #23: 3150 keys, 13832706 bytes, temperature: kUnknown
Mar  1 04:45:22 np0005634532 ceph-mon[75825]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772358322442076, "cf_name": "default", "job": 6, "event": "table_file_creation", "file_number": 23, "file_size": 13832706, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13807582, "index_size": 16119, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 7941, "raw_key_size": 81537, "raw_average_key_size": 25, "raw_value_size": 13745435, "raw_average_value_size": 4363, "num_data_blocks": 704, "num_entries": 3150, "num_filter_entries": 3150, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772358052, "oldest_key_time": 0, "file_creation_time": 1772358322, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d85a3bc5-3dc5-432f-9fab-fa926ce32d3d", "db_session_id": "FJWJGIYC2V5ZEQGX709M", "orig_file_number": 23, "seqno_to_time_mapping": "N/A"}}
Mar  1 04:45:22 np0005634532 ceph-mon[75825]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Mar  1 04:45:22 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-09:45:22.442406) [db/compaction/compaction_job.cc:1663] [default] [JOB 6] Compacted 1@0 + 1@6 files to L6 => 13832706 bytes
Mar  1 04:45:22 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-09:45:22.444008) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 267.0 rd, 243.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.6, 11.9 +0.0 blob) out(13.2 +0.0 blob), read-write-amplify(10.6) write-amplify(5.0) OK, records in: 3682, records dropped: 532 output_compression: NoCompression
Mar  1 04:45:22 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-09:45:22.444029) EVENT_LOG_v1 {"time_micros": 1772358322444018, "job": 6, "event": "compaction_finished", "compaction_time_micros": 56828, "compaction_time_cpu_micros": 26842, "output_level": 6, "num_output_files": 1, "total_output_size": 13832706, "num_input_records": 3682, "num_output_records": 3150, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Mar  1 04:45:22 np0005634532 ceph-mon[75825]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000022.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Mar  1 04:45:22 np0005634532 ceph-mon[75825]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772358322444583, "job": 6, "event": "table_file_deletion", "file_number": 22}
Mar  1 04:45:22 np0005634532 ceph-mon[75825]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000020.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Mar  1 04:45:22 np0005634532 ceph-mon[75825]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772358322445810, "job": 6, "event": "table_file_deletion", "file_number": 20}
Mar  1 04:45:22 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-09:45:22.385213) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Mar  1 04:45:22 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-09:45:22.445835) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Mar  1 04:45:22 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-09:45:22.445840) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Mar  1 04:45:22 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-09:45:22.445842) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Mar  1 04:45:22 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-09:45:22.445844) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Mar  1 04:45:22 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-09:45:22.445845) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Mar  1 04:45:22 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e66 do_prune osdmap full prune enabled
Mar  1 04:45:22 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[98708]: 01/03/2026 09:45:22 : epoch 69a40a75 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4500003c10 fd 49 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:45:22 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e67 e67: 3 total, 3 up, 3 in
Mar  1 04:45:22 np0005634532 ceph-mon[75825]: log_channel(cluster) log [DBG] : osdmap e67: 3 total, 3 up, 3 in
Mar  1 04:45:22 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 67 pg[9.1e( v 42'1010 (0'0,42'1010] local-lis/les=52/53 n=5 ec=52/35 lis/c=52/52 les/c/f=53/53/0 sis=67) [1]/[0] r=0 lpr=67 pi=[52,67)/1 crt=42'1010 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Mar  1 04:45:22 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 67 pg[9.1e( v 42'1010 (0'0,42'1010] local-lis/les=52/53 n=5 ec=52/35 lis/c=52/52 les/c/f=53/53/0 sis=67) [1]/[0] r=0 lpr=67 pi=[52,67)/1 crt=42'1010 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Mar  1 04:45:22 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 67 pg[9.6( v 42'1010 (0'0,42'1010] local-lis/les=52/53 n=6 ec=52/35 lis/c=52/52 les/c/f=53/53/0 sis=67) [1]/[0] r=0 lpr=67 pi=[52,67)/1 crt=42'1010 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Mar  1 04:45:22 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 67 pg[9.6( v 42'1010 (0'0,42'1010] local-lis/les=52/53 n=6 ec=52/35 lis/c=52/52 les/c/f=53/53/0 sis=67) [1]/[0] r=0 lpr=67 pi=[52,67)/1 crt=42'1010 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Mar  1 04:45:22 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 67 pg[9.16( v 42'1010 (0'0,42'1010] local-lis/les=52/53 n=5 ec=52/35 lis/c=52/52 les/c/f=53/53/0 sis=67) [1]/[0] r=0 lpr=67 pi=[52,67)/1 crt=42'1010 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Mar  1 04:45:22 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 67 pg[9.16( v 42'1010 (0'0,42'1010] local-lis/les=52/53 n=5 ec=52/35 lis/c=52/52 les/c/f=53/53/0 sis=67) [1]/[0] r=0 lpr=67 pi=[52,67)/1 crt=42'1010 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Mar  1 04:45:22 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 67 pg[9.e( v 42'1010 (0'0,42'1010] local-lis/les=52/53 n=6 ec=52/35 lis/c=52/52 les/c/f=53/53/0 sis=67) [1]/[0] r=0 lpr=67 pi=[52,67)/1 crt=42'1010 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Mar  1 04:45:22 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 67 pg[9.e( v 42'1010 (0'0,42'1010] local-lis/les=52/53 n=6 ec=52/35 lis/c=52/52 les/c/f=53/53/0 sis=67) [1]/[0] r=0 lpr=67 pi=[52,67)/1 crt=42'1010 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Mar  1 04:45:22 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 67 pg[9.1d( v 42'1010 (0'0,42'1010] local-lis/les=66/67 n=5 ec=52/35 lis/c=52/52 les/c/f=53/53/0 sis=66) [2]/[0] async=[2] r=0 lpr=66 pi=[52,66)/1 crt=42'1010 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Mar  1 04:45:22 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 67 pg[9.5( v 55'1013 (0'0,55'1013] local-lis/les=66/67 n=6 ec=52/35 lis/c=52/52 les/c/f=53/53/0 sis=66) [2]/[0] async=[2] r=0 lpr=66 pi=[52,66)/1 crt=55'1013 lcod 55'1012 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Mar  1 04:45:22 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 67 pg[9.15( v 42'1010 (0'0,42'1010] local-lis/les=66/67 n=4 ec=52/35 lis/c=52/52 les/c/f=53/53/0 sis=66) [2]/[0] async=[2] r=0 lpr=66 pi=[52,66)/1 crt=42'1010 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Mar  1 04:45:22 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 67 pg[9.d( v 42'1010 (0'0,42'1010] local-lis/les=66/67 n=6 ec=52/35 lis/c=52/52 les/c/f=53/53/0 sis=66) [2]/[0] async=[2] r=0 lpr=66 pi=[52,66)/1 crt=42'1010 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Mar  1 04:45:22 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[98708]: 01/03/2026 09:45:22 : epoch 69a40a75 : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Mar  1 04:45:22 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 04:45:22 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 04:45:22 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Mar  1 04:45:22 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Mar  1 04:45:22 np0005634532 ceph-mon[75825]: Updating compute-0:/etc/ceph/ceph.conf
Mar  1 04:45:22 np0005634532 ceph-mon[75825]: Updating compute-1:/etc/ceph/ceph.conf
Mar  1 04:45:22 np0005634532 ceph-mon[75825]: Updating compute-2:/etc/ceph/ceph.conf
Mar  1 04:45:22 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[98708]: 01/03/2026 09:45:22 : epoch 69a40a75 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4500003c10 fd 49 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:45:22 np0005634532 ceph-mgr[76134]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/437b1e74-f995-5d64-af1d-257ce01d77ab/config/ceph.conf
Mar  1 04:45:22 np0005634532 ceph-mgr[76134]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/437b1e74-f995-5d64-af1d-257ce01d77ab/config/ceph.conf
Mar  1 04:45:22 np0005634532 ceph-mgr[76134]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/437b1e74-f995-5d64-af1d-257ce01d77ab/config/ceph.conf
Mar  1 04:45:22 np0005634532 ceph-mgr[76134]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/437b1e74-f995-5d64-af1d-257ce01d77ab/config/ceph.conf
Mar  1 04:45:23 np0005634532 ceph-mgr[76134]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/437b1e74-f995-5d64-af1d-257ce01d77ab/config/ceph.conf
Mar  1 04:45:23 np0005634532 ceph-mgr[76134]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/437b1e74-f995-5d64-af1d-257ce01d77ab/config/ceph.conf
Mar  1 04:45:23 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[98708]: 01/03/2026 09:45:23 : epoch 69a40a75 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f450c0035d0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:45:23 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v9: 353 pgs: 353 active+clean; 457 KiB data, 107 MiB used, 60 GiB / 60 GiB avail
Mar  1 04:45:23 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"} v 0)
Mar  1 04:45:23 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]: dispatch
Mar  1 04:45:23 np0005634532 ceph-mgr[76134]: [cephadm INFO cephadm.serve] Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Mar  1 04:45:23 np0005634532 ceph-mgr[76134]: log_channel(cephadm) log [INF] : Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Mar  1 04:45:23 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e67 do_prune osdmap full prune enabled
Mar  1 04:45:23 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]': finished
Mar  1 04:45:23 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e68 e68: 3 total, 3 up, 3 in
Mar  1 04:45:23 np0005634532 ceph-mon[75825]: log_channel(cluster) log [DBG] : osdmap e68: 3 total, 3 up, 3 in
Mar  1 04:45:23 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 68 pg[9.15( v 42'1010 (0'0,42'1010] local-lis/les=66/67 n=4 ec=52/35 lis/c=66/52 les/c/f=67/53/0 sis=68 pruub=15.005213737s) [2] async=[2] r=-1 lpr=68 pi=[52,68)/1 crt=42'1010 lcod 0'0 mlcod 0'0 active pruub 198.282302856s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Mar  1 04:45:23 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 68 pg[9.15( v 42'1010 (0'0,42'1010] local-lis/les=66/67 n=4 ec=52/35 lis/c=66/52 les/c/f=67/53/0 sis=68 pruub=15.005070686s) [2] r=-1 lpr=68 pi=[52,68)/1 crt=42'1010 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 198.282302856s@ mbc={}] state<Start>: transitioning to Stray
Mar  1 04:45:23 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 68 pg[9.d( v 42'1010 (0'0,42'1010] local-lis/les=66/67 n=6 ec=52/35 lis/c=66/52 les/c/f=67/53/0 sis=68 pruub=15.004530907s) [2] async=[2] r=-1 lpr=68 pi=[52,68)/1 crt=42'1010 lcod 0'0 mlcod 0'0 active pruub 198.282424927s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Mar  1 04:45:23 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 68 pg[9.d( v 42'1010 (0'0,42'1010] local-lis/les=66/67 n=6 ec=52/35 lis/c=66/52 les/c/f=67/53/0 sis=68 pruub=15.004466057s) [2] r=-1 lpr=68 pi=[52,68)/1 crt=42'1010 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 198.282424927s@ mbc={}] state<Start>: transitioning to Stray
Mar  1 04:45:23 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 68 pg[9.5( v 67'1017 (0'0,67'1017] local-lis/les=66/67 n=6 ec=52/35 lis/c=66/52 les/c/f=67/53/0 sis=68 pruub=15.003807068s) [2] async=[2] r=-1 lpr=68 pi=[52,68)/1 crt=55'1013 lcod 67'1016 mlcod 67'1016 active pruub 198.282241821s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Mar  1 04:45:23 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 68 pg[9.5( v 67'1017 (0'0,67'1017] local-lis/les=66/67 n=6 ec=52/35 lis/c=66/52 les/c/f=67/53/0 sis=68 pruub=15.003687859s) [2] r=-1 lpr=68 pi=[52,68)/1 crt=55'1013 lcod 67'1016 mlcod 0'0 unknown NOTIFY pruub 198.282241821s@ mbc={}] state<Start>: transitioning to Stray
Mar  1 04:45:23 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 68 pg[9.1d( v 42'1010 (0'0,42'1010] local-lis/les=66/67 n=5 ec=52/35 lis/c=66/52 les/c/f=67/53/0 sis=68 pruub=14.997771263s) [2] async=[2] r=-1 lpr=68 pi=[52,68)/1 crt=42'1010 lcod 0'0 mlcod 0'0 active pruub 198.276535034s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Mar  1 04:45:23 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 68 pg[9.1d( v 42'1010 (0'0,42'1010] local-lis/les=66/67 n=5 ec=52/35 lis/c=66/52 les/c/f=67/53/0 sis=68 pruub=14.997735977s) [2] r=-1 lpr=68 pi=[52,68)/1 crt=42'1010 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 198.276535034s@ mbc={}] state<Start>: transitioning to Stray
Mar  1 04:45:23 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 68 pg[9.1e( v 42'1010 (0'0,42'1010] local-lis/les=67/68 n=5 ec=52/35 lis/c=52/52 les/c/f=53/53/0 sis=67) [1]/[0] async=[1] r=0 lpr=67 pi=[52,67)/1 crt=42'1010 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Mar  1 04:45:23 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 68 pg[9.16( v 42'1010 (0'0,42'1010] local-lis/les=67/68 n=5 ec=52/35 lis/c=52/52 les/c/f=53/53/0 sis=67) [1]/[0] async=[1] r=0 lpr=67 pi=[52,67)/1 crt=42'1010 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Mar  1 04:45:23 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 68 pg[9.6( v 42'1010 (0'0,42'1010] local-lis/les=67/68 n=6 ec=52/35 lis/c=52/52 les/c/f=53/53/0 sis=67) [1]/[0] async=[1] r=0 lpr=67 pi=[52,67)/1 crt=42'1010 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Mar  1 04:45:23 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 68 pg[9.e( v 42'1010 (0'0,42'1010] local-lis/les=67/68 n=6 ec=52/35 lis/c=52/52 les/c/f=53/53/0 sis=67) [1]/[0] async=[1] r=0 lpr=67 pi=[52,67)/1 crt=42'1010 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Mar  1 04:45:23 np0005634532 ceph-mgr[76134]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Mar  1 04:45:23 np0005634532 ceph-mgr[76134]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Mar  1 04:45:23 np0005634532 ceph-mgr[76134]: [cephadm INFO cephadm.serve] Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Mar  1 04:45:23 np0005634532 ceph-mgr[76134]: log_channel(cephadm) log [INF] : Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Mar  1 04:45:23 np0005634532 systemd-logind[832]: New session 38 of user zuul.
Mar  1 04:45:23 np0005634532 ceph-mon[75825]: Updating compute-2:/var/lib/ceph/437b1e74-f995-5d64-af1d-257ce01d77ab/config/ceph.conf
Mar  1 04:45:23 np0005634532 ceph-mon[75825]: Updating compute-0:/var/lib/ceph/437b1e74-f995-5d64-af1d-257ce01d77ab/config/ceph.conf
Mar  1 04:45:23 np0005634532 ceph-mon[75825]: Updating compute-1:/var/lib/ceph/437b1e74-f995-5d64-af1d-257ce01d77ab/config/ceph.conf
Mar  1 04:45:23 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]: dispatch
Mar  1 04:45:23 np0005634532 ceph-mon[75825]: Updating compute-2:/etc/ceph/ceph.client.admin.keyring
Mar  1 04:45:23 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]': finished
Mar  1 04:45:23 np0005634532 ceph-mon[75825]: Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Mar  1 04:45:23 np0005634532 ceph-mon[75825]: Updating compute-1:/etc/ceph/ceph.client.admin.keyring
Mar  1 04:45:23 np0005634532 systemd[1]: Started Session 38 of User zuul.
Mar  1 04:45:23 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:45:23 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 04:45:23 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:45:23.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 04:45:24 np0005634532 ceph-mgr[76134]: [cephadm INFO cephadm.serve] Updating compute-2:/var/lib/ceph/437b1e74-f995-5d64-af1d-257ce01d77ab/config/ceph.client.admin.keyring
Mar  1 04:45:24 np0005634532 ceph-mgr[76134]: log_channel(cephadm) log [INF] : Updating compute-2:/var/lib/ceph/437b1e74-f995-5d64-af1d-257ce01d77ab/config/ceph.client.admin.keyring
Mar  1 04:45:24 np0005634532 ceph-mgr[76134]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/437b1e74-f995-5d64-af1d-257ce01d77ab/config/ceph.client.admin.keyring
Mar  1 04:45:24 np0005634532 ceph-mgr[76134]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/437b1e74-f995-5d64-af1d-257ce01d77ab/config/ceph.client.admin.keyring
Mar  1 04:45:24 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:45:24 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 04:45:24 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:45:24.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 04:45:24 np0005634532 ceph-mgr[76134]: [cephadm INFO cephadm.serve] Updating compute-1:/var/lib/ceph/437b1e74-f995-5d64-af1d-257ce01d77ab/config/ceph.client.admin.keyring
Mar  1 04:45:24 np0005634532 ceph-mgr[76134]: log_channel(cephadm) log [INF] : Updating compute-1:/var/lib/ceph/437b1e74-f995-5d64-af1d-257ce01d77ab/config/ceph.client.admin.keyring
Mar  1 04:45:24 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[98708]: 01/03/2026 09:45:24 : epoch 69a40a75 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4504004000 fd 49 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:45:24 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Mar  1 04:45:24 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e68 do_prune osdmap full prune enabled
Mar  1 04:45:24 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 04:45:24 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Mar  1 04:45:24 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e69 e69: 3 total, 3 up, 3 in
Mar  1 04:45:24 np0005634532 ceph-mon[75825]: log_channel(cluster) log [DBG] : osdmap e69: 3 total, 3 up, 3 in
Mar  1 04:45:24 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 69 pg[9.6( v 42'1010 (0'0,42'1010] local-lis/les=67/68 n=6 ec=52/35 lis/c=67/52 les/c/f=68/53/0 sis=69 pruub=14.989885330s) [1] async=[1] r=-1 lpr=69 pi=[52,69)/1 crt=42'1010 lcod 0'0 mlcod 0'0 active pruub 199.286163330s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Mar  1 04:45:24 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 69 pg[9.1e( v 42'1010 (0'0,42'1010] local-lis/les=67/68 n=5 ec=52/35 lis/c=67/52 les/c/f=68/53/0 sis=69 pruub=14.985989571s) [1] async=[1] r=-1 lpr=69 pi=[52,69)/1 crt=42'1010 lcod 0'0 mlcod 0'0 active pruub 199.282287598s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Mar  1 04:45:24 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 69 pg[9.e( v 42'1010 (0'0,42'1010] local-lis/les=67/68 n=6 ec=52/35 lis/c=67/52 les/c/f=68/53/0 sis=69 pruub=14.989858627s) [1] async=[1] r=-1 lpr=69 pi=[52,69)/1 crt=42'1010 lcod 0'0 mlcod 0'0 active pruub 199.286178589s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Mar  1 04:45:24 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 69 pg[9.6( v 42'1010 (0'0,42'1010] local-lis/les=67/68 n=6 ec=52/35 lis/c=67/52 les/c/f=68/53/0 sis=69 pruub=14.989744186s) [1] r=-1 lpr=69 pi=[52,69)/1 crt=42'1010 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 199.286163330s@ mbc={}] state<Start>: transitioning to Stray
Mar  1 04:45:24 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 69 pg[9.e( v 42'1010 (0'0,42'1010] local-lis/les=67/68 n=6 ec=52/35 lis/c=67/52 les/c/f=68/53/0 sis=69 pruub=14.989732742s) [1] r=-1 lpr=69 pi=[52,69)/1 crt=42'1010 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 199.286178589s@ mbc={}] state<Start>: transitioning to Stray
Mar  1 04:45:24 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 69 pg[9.16( v 42'1010 (0'0,42'1010] local-lis/les=67/68 n=5 ec=52/35 lis/c=67/52 les/c/f=68/53/0 sis=69 pruub=14.989312172s) [1] async=[1] r=-1 lpr=69 pi=[52,69)/1 crt=42'1010 lcod 0'0 mlcod 0'0 active pruub 199.286148071s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Mar  1 04:45:24 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 69 pg[9.1e( v 42'1010 (0'0,42'1010] local-lis/les=67/68 n=5 ec=52/35 lis/c=67/52 les/c/f=68/53/0 sis=69 pruub=14.985864639s) [1] r=-1 lpr=69 pi=[52,69)/1 crt=42'1010 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 199.282287598s@ mbc={}] state<Start>: transitioning to Stray
Mar  1 04:45:24 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 69 pg[9.16( v 42'1010 (0'0,42'1010] local-lis/les=67/68 n=5 ec=52/35 lis/c=67/52 les/c/f=68/53/0 sis=69 pruub=14.989121437s) [1] r=-1 lpr=69 pi=[52,69)/1 crt=42'1010 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 199.286148071s@ mbc={}] state<Start>: transitioning to Stray
Mar  1 04:45:24 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 04:45:24 np0005634532 python3.9[104129]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Mar  1 04:45:24 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Mar  1 04:45:24 np0005634532 ceph-mon[75825]: Updating compute-2:/var/lib/ceph/437b1e74-f995-5d64-af1d-257ce01d77ab/config/ceph.client.admin.keyring
Mar  1 04:45:24 np0005634532 ceph-mon[75825]: Updating compute-0:/var/lib/ceph/437b1e74-f995-5d64-af1d-257ce01d77ab/config/ceph.client.admin.keyring
Mar  1 04:45:24 np0005634532 ceph-mon[75825]: Updating compute-1:/var/lib/ceph/437b1e74-f995-5d64-af1d-257ce01d77ab/config/ceph.client.admin.keyring
Mar  1 04:45:24 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 04:45:24 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 04:45:24 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 04:45:24 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Mar  1 04:45:24 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 04:45:24 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[98708]: 01/03/2026 09:45:24 : epoch 69a40a75 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4500003c10 fd 49 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:45:24 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Mar  1 04:45:24 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 04:45:24 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Mar  1 04:45:24 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 04:45:24 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Mar  1 04:45:25 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 04:45:25 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Mar  1 04:45:25 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 04:45:25 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Mar  1 04:45:25 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Mar  1 04:45:25 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Mar  1 04:45:25 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Mar  1 04:45:25 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Mar  1 04:45:25 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Mar  1 04:45:25 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[98708]: 01/03/2026 09:45:25 : epoch 69a40a75 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4528001110 fd 49 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:45:25 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e69 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Mar  1 04:45:25 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v12: 353 pgs: 4 remapped+peering, 4 active+remapped, 345 active+clean; 456 KiB data, 107 MiB used, 60 GiB / 60 GiB avail; 44 KiB/s rd, 0 B/s wr, 17 op/s; 148 B/s, 6 objects/s recovering
Mar  1 04:45:25 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e69 do_prune osdmap full prune enabled
Mar  1 04:45:25 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e70 e70: 3 total, 3 up, 3 in
Mar  1 04:45:25 np0005634532 ceph-mon[75825]: log_channel(cluster) log [DBG] : osdmap e70: 3 total, 3 up, 3 in
Mar  1 04:45:25 np0005634532 podman[104407]: 2026-03-01 09:45:25.606109714 +0000 UTC m=+0.042134812 container create 83f81043ececf5da85a6269b3acbdb37d777576019b10bb127612be70e74a8d9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_jepsen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Mar  1 04:45:25 np0005634532 systemd[1]: Started libpod-conmon-83f81043ececf5da85a6269b3acbdb37d777576019b10bb127612be70e74a8d9.scope.
Mar  1 04:45:25 np0005634532 systemd[1]: Started libcrun container.
Mar  1 04:45:25 np0005634532 podman[104407]: 2026-03-01 09:45:25.585230958 +0000 UTC m=+0.021255996 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Mar  1 04:45:25 np0005634532 podman[104407]: 2026-03-01 09:45:25.681325567 +0000 UTC m=+0.117350605 container init 83f81043ececf5da85a6269b3acbdb37d777576019b10bb127612be70e74a8d9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_jepsen, CEPH_REF=squid, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True)
Mar  1 04:45:25 np0005634532 podman[104407]: 2026-03-01 09:45:25.687743489 +0000 UTC m=+0.123768517 container start 83f81043ececf5da85a6269b3acbdb37d777576019b10bb127612be70e74a8d9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_jepsen, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Mar  1 04:45:25 np0005634532 podman[104407]: 2026-03-01 09:45:25.691553574 +0000 UTC m=+0.127578572 container attach 83f81043ececf5da85a6269b3acbdb37d777576019b10bb127612be70e74a8d9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_jepsen, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Mar  1 04:45:25 np0005634532 naughty_jepsen[104447]: 167 167
Mar  1 04:45:25 np0005634532 systemd[1]: libpod-83f81043ececf5da85a6269b3acbdb37d777576019b10bb127612be70e74a8d9.scope: Deactivated successfully.
Mar  1 04:45:25 np0005634532 conmon[104447]: conmon 83f81043ececf5da85a6 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-83f81043ececf5da85a6269b3acbdb37d777576019b10bb127612be70e74a8d9.scope/container/memory.events
Mar  1 04:45:25 np0005634532 podman[104407]: 2026-03-01 09:45:25.694704404 +0000 UTC m=+0.130729432 container died 83f81043ececf5da85a6269b3acbdb37d777576019b10bb127612be70e74a8d9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_jepsen, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Mar  1 04:45:25 np0005634532 systemd[1]: var-lib-containers-storage-overlay-c4c808df353c7c8cc1fee3dd580518bcca71cabd6ea1ff8a7ee5e91b1733f512-merged.mount: Deactivated successfully.
Mar  1 04:45:25 np0005634532 podman[104407]: 2026-03-01 09:45:25.743092382 +0000 UTC m=+0.179117410 container remove 83f81043ececf5da85a6269b3acbdb37d777576019b10bb127612be70e74a8d9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_jepsen, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Mar  1 04:45:25 np0005634532 systemd[1]: libpod-conmon-83f81043ececf5da85a6269b3acbdb37d777576019b10bb127612be70e74a8d9.scope: Deactivated successfully.
Mar  1 04:45:25 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 04:45:25 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 04:45:25 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 04:45:25 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 04:45:25 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 04:45:25 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 04:45:25 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Mar  1 04:45:25 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:45:25 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:45:25 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:45:25.862 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
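The anonymous "HEAD / HTTP/1.0" requests recurring every couple of seconds from 192.168.122.100 and .102 look like health probes against radosgw rather than client traffic: they carry no credentials, always hit /, and return 200 with near-zero latency. A probe of that shape can be reproduced with the standard library; the port is a placeholder, since the beast frontend's listening port is not recorded in these lines:

    import http.client

    # Port 8080 is an assumption -- the RGW port is not shown in this log.
    conn = http.client.HTTPConnection("np0005634532", 8080, timeout=5)
    conn.request("HEAD", "/")
    print(conn.getresponse().status)  # a healthy RGW answers 200
    conn.close()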
Mar  1 04:45:25 np0005634532 podman[104523]: 2026-03-01 09:45:25.89080244 +0000 UTC m=+0.053381725 container create ebe716f59d4860b5f68bde1b4df5778cc30f3c583e1dec5c97a66c25849c8a73 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_solomon, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Mar  1 04:45:25 np0005634532 systemd[1]: Started libpod-conmon-ebe716f59d4860b5f68bde1b4df5778cc30f3c583e1dec5c97a66c25849c8a73.scope.
Mar  1 04:45:25 np0005634532 podman[104523]: 2026-03-01 09:45:25.864764554 +0000 UTC m=+0.027343889 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Mar  1 04:45:25 np0005634532 systemd[1]: Started libcrun container.
Mar  1 04:45:25 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23cd31845f808162e6416d4f1422ebdd0f509d423cd5e44b6e926fccabdaf685/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Mar  1 04:45:25 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23cd31845f808162e6416d4f1422ebdd0f509d423cd5e44b6e926fccabdaf685/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Mar  1 04:45:25 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23cd31845f808162e6416d4f1422ebdd0f509d423cd5e44b6e926fccabdaf685/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Mar  1 04:45:25 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23cd31845f808162e6416d4f1422ebdd0f509d423cd5e44b6e926fccabdaf685/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Mar  1 04:45:25 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23cd31845f808162e6416d4f1422ebdd0f509d423cd5e44b6e926fccabdaf685/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
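The xfs warnings above are the kernel noting that these mounts use the classic 32-bit inode timestamp format (typically because the xfs bigtime feature is not enabled), which tops out at 0x7fffffff seconds after the Unix epoch. That limit works out as follows:

    from datetime import datetime, timezone

    # 0x7fffffff seconds after the epoch -- the cap quoted by the kernel.
    limit = datetime.fromtimestamp(0x7FFFFFFF, tz=timezone.utc)
    print(limit.isoformat())  # 2038-01-19T03:14:07+00:00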
Mar  1 04:45:26 np0005634532 podman[104523]: 2026-03-01 09:45:26.008153483 +0000 UTC m=+0.170732818 container init ebe716f59d4860b5f68bde1b4df5778cc30f3c583e1dec5c97a66c25849c8a73 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_solomon, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Mar  1 04:45:26 np0005634532 podman[104523]: 2026-03-01 09:45:26.025823248 +0000 UTC m=+0.188402523 container start ebe716f59d4860b5f68bde1b4df5778cc30f3c583e1dec5c97a66c25849c8a73 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_solomon, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Mar  1 04:45:26 np0005634532 podman[104523]: 2026-03-01 09:45:26.031429989 +0000 UTC m=+0.194009264 container attach ebe716f59d4860b5f68bde1b4df5778cc30f3c583e1dec5c97a66c25849c8a73 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_solomon, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Mar  1 04:45:26 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:45:26 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000026s ======
Mar  1 04:45:26 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:45:26.259 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Mar  1 04:45:26 np0005634532 objective_solomon[104539]: --> passed data devices: 0 physical, 1 LVM
Mar  1 04:45:26 np0005634532 objective_solomon[104539]: --> All data devices are unavailable
Mar  1 04:45:26 np0005634532 python3.9[104621]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail#012pushd /var/tmp#012curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz#012pushd repo-setup-main#012python3 -m venv ./venv#012PBR_VERSION=0.0.0 ./venv/bin/pip install ./#012./venv/bin/repo-setup current-podified -b antelope#012popd#012rm -rf repo-setup-main#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
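In the ansible entry above, #012 is the octal escape syslog substitutes for embedded newlines, so the _raw_params payload is really a multi-line shell script. Decoded, it reads:

    set -euxo pipefail
    pushd /var/tmp
    curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz
    pushd repo-setup-main
    python3 -m venv ./venv
    PBR_VERSION=0.0.0 ./venv/bin/pip install ./
    ./venv/bin/repo-setup current-podified -b antelope
    popd
    rm -rf repo-setup-main

i.e. it fetches the repo-setup tool, installs it into a throwaway venv, enables the current-podified Antelope repositories, and cleans up after itself.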
Mar  1 04:45:26 np0005634532 systemd[1]: libpod-ebe716f59d4860b5f68bde1b4df5778cc30f3c583e1dec5c97a66c25849c8a73.scope: Deactivated successfully.
Mar  1 04:45:26 np0005634532 podman[104523]: 2026-03-01 09:45:26.413608839 +0000 UTC m=+0.576188104 container died ebe716f59d4860b5f68bde1b4df5778cc30f3c583e1dec5c97a66c25849c8a73 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_solomon, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Mar  1 04:45:26 np0005634532 systemd[1]: var-lib-containers-storage-overlay-23cd31845f808162e6416d4f1422ebdd0f509d423cd5e44b6e926fccabdaf685-merged.mount: Deactivated successfully.
Mar  1 04:45:26 np0005634532 podman[104523]: 2026-03-01 09:45:26.45657847 +0000 UTC m=+0.619157725 container remove ebe716f59d4860b5f68bde1b4df5778cc30f3c583e1dec5c97a66c25849c8a73 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_solomon, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Mar  1 04:45:26 np0005634532 systemd[1]: libpod-conmon-ebe716f59d4860b5f68bde1b4df5778cc30f3c583e1dec5c97a66c25849c8a73.scope: Deactivated successfully.
Mar  1 04:45:26 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[98708]: 01/03/2026 09:45:26 : epoch 69a40a75 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f450c0042e0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:45:26 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[98708]: 01/03/2026 09:45:26 : epoch 69a40a75 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4504004000 fd 49 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:45:26 np0005634532 podman[104746]: 2026-03-01 09:45:26.968124906 +0000 UTC m=+0.044541122 container create f326d9218a9d7bb4b45c74c8af00de48d6faa413ec5e72b82c3df556cbeadb56 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_torvalds, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Mar  1 04:45:27 np0005634532 systemd[1]: Started libpod-conmon-f326d9218a9d7bb4b45c74c8af00de48d6faa413ec5e72b82c3df556cbeadb56.scope.
Mar  1 04:45:27 np0005634532 systemd[1]: Started libcrun container.
Mar  1 04:45:27 np0005634532 podman[104746]: 2026-03-01 09:45:26.945049045 +0000 UTC m=+0.021465251 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Mar  1 04:45:27 np0005634532 podman[104746]: 2026-03-01 09:45:27.047446793 +0000 UTC m=+0.123862999 container init f326d9218a9d7bb4b45c74c8af00de48d6faa413ec5e72b82c3df556cbeadb56 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_torvalds, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Mar  1 04:45:27 np0005634532 podman[104746]: 2026-03-01 09:45:27.05529228 +0000 UTC m=+0.131708456 container start f326d9218a9d7bb4b45c74c8af00de48d6faa413ec5e72b82c3df556cbeadb56 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_torvalds, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Mar  1 04:45:27 np0005634532 podman[104746]: 2026-03-01 09:45:27.059452965 +0000 UTC m=+0.135869141 container attach f326d9218a9d7bb4b45c74c8af00de48d6faa413ec5e72b82c3df556cbeadb56 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_torvalds, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Mar  1 04:45:27 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: ::ffff:192.168.122.100 - - [01/Mar/2026:09:45:27] "GET /metrics HTTP/1.1" 200 46584 "" "Prometheus/2.51.0"
Mar  1 04:45:27 np0005634532 amazing_torvalds[104763]: 167 167
Mar  1 04:45:27 np0005634532 ceph-mgr[76134]: [prometheus INFO cherrypy.access.140604152982112] ::ffff:192.168.122.100 - - [01/Mar/2026:09:45:27] "GET /metrics HTTP/1.1" 200 46584 "" "Prometheus/2.51.0"
Mar  1 04:45:27 np0005634532 systemd[1]: libpod-f326d9218a9d7bb4b45c74c8af00de48d6faa413ec5e72b82c3df556cbeadb56.scope: Deactivated successfully.
Mar  1 04:45:27 np0005634532 podman[104746]: 2026-03-01 09:45:27.061737593 +0000 UTC m=+0.138153769 container died f326d9218a9d7bb4b45c74c8af00de48d6faa413ec5e72b82c3df556cbeadb56 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_torvalds, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Mar  1 04:45:27 np0005634532 systemd[1]: var-lib-containers-storage-overlay-95507f4f32e54ab4319a64092276c635e699d45381256159cf88e589a78d0401-merged.mount: Deactivated successfully.
Mar  1 04:45:27 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[98708]: 01/03/2026 09:45:27 : epoch 69a40a75 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4500003c10 fd 49 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:45:27 np0005634532 podman[104746]: 2026-03-01 09:45:27.102288523 +0000 UTC m=+0.178704699 container remove f326d9218a9d7bb4b45c74c8af00de48d6faa413ec5e72b82c3df556cbeadb56 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_torvalds, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Mar  1 04:45:27 np0005634532 systemd[1]: libpod-conmon-f326d9218a9d7bb4b45c74c8af00de48d6faa413ec5e72b82c3df556cbeadb56.scope: Deactivated successfully.
Mar  1 04:45:27 np0005634532 podman[104785]: 2026-03-01 09:45:27.250422772 +0000 UTC m=+0.038736176 container create b114c4c90fa933b0ec908cd09ade3661520450ad47538204965c0cbb7b8ee5d7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_bhaskara, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Mar  1 04:45:27 np0005634532 systemd[1]: Started libpod-conmon-b114c4c90fa933b0ec908cd09ade3661520450ad47538204965c0cbb7b8ee5d7.scope.
Mar  1 04:45:27 np0005634532 systemd[1]: Started libcrun container.
Mar  1 04:45:27 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b6d4234a650cb977968102e34fca0539335c5853c3f7e74ad969ad3d5fc11d49/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Mar  1 04:45:27 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b6d4234a650cb977968102e34fca0539335c5853c3f7e74ad969ad3d5fc11d49/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Mar  1 04:45:27 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b6d4234a650cb977968102e34fca0539335c5853c3f7e74ad969ad3d5fc11d49/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Mar  1 04:45:27 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b6d4234a650cb977968102e34fca0539335c5853c3f7e74ad969ad3d5fc11d49/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Mar  1 04:45:27 np0005634532 podman[104785]: 2026-03-01 09:45:27.328131578 +0000 UTC m=+0.116445012 container init b114c4c90fa933b0ec908cd09ade3661520450ad47538204965c0cbb7b8ee5d7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_bhaskara, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Mar  1 04:45:27 np0005634532 podman[104785]: 2026-03-01 09:45:27.232138031 +0000 UTC m=+0.020451485 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Mar  1 04:45:27 np0005634532 podman[104785]: 2026-03-01 09:45:27.3409178 +0000 UTC m=+0.129231214 container start b114c4c90fa933b0ec908cd09ade3661520450ad47538204965c0cbb7b8ee5d7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_bhaskara, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Mar  1 04:45:27 np0005634532 podman[104785]: 2026-03-01 09:45:27.345096605 +0000 UTC m=+0.133410109 container attach b114c4c90fa933b0ec908cd09ade3661520450ad47538204965c0cbb7b8ee5d7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_bhaskara, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Mar  1 04:45:27 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v14: 353 pgs: 4 remapped+peering, 4 active+remapped, 345 active+clean; 456 KiB data, 107 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 14 op/s; 121 B/s, 5 objects/s recovering
Mar  1 04:45:27 np0005634532 clever_bhaskara[104802]: {
Mar  1 04:45:27 np0005634532 clever_bhaskara[104802]:    "0": [
Mar  1 04:45:27 np0005634532 clever_bhaskara[104802]:        {
Mar  1 04:45:27 np0005634532 clever_bhaskara[104802]:            "devices": [
Mar  1 04:45:27 np0005634532 clever_bhaskara[104802]:                "/dev/loop3"
Mar  1 04:45:27 np0005634532 clever_bhaskara[104802]:            ],
Mar  1 04:45:27 np0005634532 clever_bhaskara[104802]:            "lv_name": "ceph_lv0",
Mar  1 04:45:27 np0005634532 clever_bhaskara[104802]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Mar  1 04:45:27 np0005634532 clever_bhaskara[104802]:            "lv_size": "21470642176",
Mar  1 04:45:27 np0005634532 clever_bhaskara[104802]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=0RN1y3-WDwf-vmRn-3Uec-9cdU-WGcX-8Z6LEG,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=437b1e74-f995-5d64-af1d-257ce01d77ab,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=e5da778e-73b7-4ea1-8a91-750fe3f6aa68,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Mar  1 04:45:27 np0005634532 clever_bhaskara[104802]:            "lv_uuid": "0RN1y3-WDwf-vmRn-3Uec-9cdU-WGcX-8Z6LEG",
Mar  1 04:45:27 np0005634532 clever_bhaskara[104802]:            "name": "ceph_lv0",
Mar  1 04:45:27 np0005634532 clever_bhaskara[104802]:            "path": "/dev/ceph_vg0/ceph_lv0",
Mar  1 04:45:27 np0005634532 clever_bhaskara[104802]:            "tags": {
Mar  1 04:45:27 np0005634532 clever_bhaskara[104802]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Mar  1 04:45:27 np0005634532 clever_bhaskara[104802]:                "ceph.block_uuid": "0RN1y3-WDwf-vmRn-3Uec-9cdU-WGcX-8Z6LEG",
Mar  1 04:45:27 np0005634532 clever_bhaskara[104802]:                "ceph.cephx_lockbox_secret": "",
Mar  1 04:45:27 np0005634532 clever_bhaskara[104802]:                "ceph.cluster_fsid": "437b1e74-f995-5d64-af1d-257ce01d77ab",
Mar  1 04:45:27 np0005634532 clever_bhaskara[104802]:                "ceph.cluster_name": "ceph",
Mar  1 04:45:27 np0005634532 clever_bhaskara[104802]:                "ceph.crush_device_class": "",
Mar  1 04:45:27 np0005634532 clever_bhaskara[104802]:                "ceph.encrypted": "0",
Mar  1 04:45:27 np0005634532 clever_bhaskara[104802]:                "ceph.osd_fsid": "e5da778e-73b7-4ea1-8a91-750fe3f6aa68",
Mar  1 04:45:27 np0005634532 clever_bhaskara[104802]:                "ceph.osd_id": "0",
Mar  1 04:45:27 np0005634532 clever_bhaskara[104802]:                "ceph.osdspec_affinity": "default_drive_group",
Mar  1 04:45:27 np0005634532 clever_bhaskara[104802]:                "ceph.type": "block",
Mar  1 04:45:27 np0005634532 clever_bhaskara[104802]:                "ceph.vdo": "0",
Mar  1 04:45:27 np0005634532 clever_bhaskara[104802]:                "ceph.with_tpm": "0"
Mar  1 04:45:27 np0005634532 clever_bhaskara[104802]:            },
Mar  1 04:45:27 np0005634532 clever_bhaskara[104802]:            "type": "block",
Mar  1 04:45:27 np0005634532 clever_bhaskara[104802]:            "vg_name": "ceph_vg0"
Mar  1 04:45:27 np0005634532 clever_bhaskara[104802]:        }
Mar  1 04:45:27 np0005634532 clever_bhaskara[104802]:    ]
Mar  1 04:45:27 np0005634532 clever_bhaskara[104802]: }
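The JSON that clever_bhaskara prints above has the shape of ceph-volume lvm list --format json output (these short-lived, randomly named podman containers are how cephadm runs its helper commands): a map from OSD id to the LVs backing it, with the ceph.* LV tags given both raw and parsed. A small sketch for folding such a report into an OSD-to-device map; the function and variable names here are ours:

    import json

    def osd_devices(report: str) -> dict:
        """Map each OSD id in a ceph-volume style report to its backing LV."""
        out = {}
        for osd_id, lvs in json.loads(report).items():
            for lv in lvs:
                out[osd_id] = {
                    "lv_path": lv["lv_path"],
                    "devices": lv["devices"],
                    "osd_fsid": lv["tags"].get("ceph.osd_fsid"),
                }
        return out

For the report above this gives {'0': {'lv_path': '/dev/ceph_vg0/ceph_lv0', 'devices': ['/dev/loop3'], 'osd_fsid': 'e5da778e-73b7-4ea1-8a91-750fe3f6aa68'}}.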
Mar  1 04:45:27 np0005634532 ceph-osd[84309]: log_channel(cluster) log [DBG] : 11.6 scrub starts
Mar  1 04:45:27 np0005634532 ceph-osd[84309]: log_channel(cluster) log [DBG] : 11.6 scrub ok
Mar  1 04:45:27 np0005634532 systemd[1]: libpod-b114c4c90fa933b0ec908cd09ade3661520450ad47538204965c0cbb7b8ee5d7.scope: Deactivated successfully.
Mar  1 04:45:27 np0005634532 podman[104785]: 2026-03-01 09:45:27.622102416 +0000 UTC m=+0.410415840 container died b114c4c90fa933b0ec908cd09ade3661520450ad47538204965c0cbb7b8ee5d7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_bhaskara, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Mar  1 04:45:27 np0005634532 systemd[1]: var-lib-containers-storage-overlay-b6d4234a650cb977968102e34fca0539335c5853c3f7e74ad969ad3d5fc11d49-merged.mount: Deactivated successfully.
Mar  1 04:45:27 np0005634532 podman[104785]: 2026-03-01 09:45:27.660211405 +0000 UTC m=+0.448524839 container remove b114c4c90fa933b0ec908cd09ade3661520450ad47538204965c0cbb7b8ee5d7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=clever_bhaskara, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Mar  1 04:45:27 np0005634532 systemd[1]: libpod-conmon-b114c4c90fa933b0ec908cd09ade3661520450ad47538204965c0cbb7b8ee5d7.scope: Deactivated successfully.
Mar  1 04:45:27 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:45:27 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:45:27 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:45:27.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:45:28 np0005634532 podman[104916]: 2026-03-01 09:45:28.225094924 +0000 UTC m=+0.054273597 container create 87d5f5153301eacc6dd1046dd937c24113aa48b55fb2efd3c8b40b80c1ad471d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_shockley, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Mar  1 04:45:28 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:45:28 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:45:28 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:45:28.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:45:28 np0005634532 systemd[1]: Started libpod-conmon-87d5f5153301eacc6dd1046dd937c24113aa48b55fb2efd3c8b40b80c1ad471d.scope.
Mar  1 04:45:28 np0005634532 systemd[1]: Started libcrun container.
Mar  1 04:45:28 np0005634532 podman[104916]: 2026-03-01 09:45:28.201699845 +0000 UTC m=+0.030878448 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Mar  1 04:45:28 np0005634532 podman[104916]: 2026-03-01 09:45:28.308863582 +0000 UTC m=+0.138042165 container init 87d5f5153301eacc6dd1046dd937c24113aa48b55fb2efd3c8b40b80c1ad471d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_shockley, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Mar  1 04:45:28 np0005634532 podman[104916]: 2026-03-01 09:45:28.315565741 +0000 UTC m=+0.144744294 container start 87d5f5153301eacc6dd1046dd937c24113aa48b55fb2efd3c8b40b80c1ad471d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_shockley, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Mar  1 04:45:28 np0005634532 podman[104916]: 2026-03-01 09:45:28.318821923 +0000 UTC m=+0.148000476 container attach 87d5f5153301eacc6dd1046dd937c24113aa48b55fb2efd3c8b40b80c1ad471d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_shockley, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Mar  1 04:45:28 np0005634532 boring_shockley[104933]: 167 167
Mar  1 04:45:28 np0005634532 systemd[1]: libpod-87d5f5153301eacc6dd1046dd937c24113aa48b55fb2efd3c8b40b80c1ad471d.scope: Deactivated successfully.
Mar  1 04:45:28 np0005634532 podman[104916]: 2026-03-01 09:45:28.320560876 +0000 UTC m=+0.149739469 container died 87d5f5153301eacc6dd1046dd937c24113aa48b55fb2efd3c8b40b80c1ad471d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_shockley, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Mar  1 04:45:28 np0005634532 systemd[1]: var-lib-containers-storage-overlay-a3b65dd4c6f3bd7f00ac706a7f908891b72bfeebce25bec1e74b8d6b949d3bb8-merged.mount: Deactivated successfully.
Mar  1 04:45:28 np0005634532 podman[104916]: 2026-03-01 09:45:28.366148774 +0000 UTC m=+0.195327327 container remove 87d5f5153301eacc6dd1046dd937c24113aa48b55fb2efd3c8b40b80c1ad471d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_shockley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325)
Mar  1 04:45:28 np0005634532 systemd[1]: libpod-conmon-87d5f5153301eacc6dd1046dd937c24113aa48b55fb2efd3c8b40b80c1ad471d.scope: Deactivated successfully.
Mar  1 04:45:28 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[98708]: 01/03/2026 09:45:28 : epoch 69a40a75 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4528001110 fd 49 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:45:28 np0005634532 podman[104957]: 2026-03-01 09:45:28.567197544 +0000 UTC m=+0.062275248 container create 26768e8750f67d33fa1f680b65eda23aec0793a6f8d5b73a5d2186f557d4201b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_morse, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Mar  1 04:45:28 np0005634532 ceph-osd[84309]: log_channel(cluster) log [DBG] : 11.18 scrub starts
Mar  1 04:45:28 np0005634532 ceph-osd[84309]: log_channel(cluster) log [DBG] : 11.18 scrub ok
Mar  1 04:45:28 np0005634532 podman[104957]: 2026-03-01 09:45:28.541268682 +0000 UTC m=+0.036346396 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Mar  1 04:45:28 np0005634532 systemd[1]: Started libpod-conmon-26768e8750f67d33fa1f680b65eda23aec0793a6f8d5b73a5d2186f557d4201b.scope.
Mar  1 04:45:28 np0005634532 systemd[1]: Started libcrun container.
Mar  1 04:45:28 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29f8737acf21ab942d77fcb007f19fc92e9a8c7caf2a22a2d79aa36495cfe611/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Mar  1 04:45:28 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29f8737acf21ab942d77fcb007f19fc92e9a8c7caf2a22a2d79aa36495cfe611/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Mar  1 04:45:28 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29f8737acf21ab942d77fcb007f19fc92e9a8c7caf2a22a2d79aa36495cfe611/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Mar  1 04:45:28 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/29f8737acf21ab942d77fcb007f19fc92e9a8c7caf2a22a2d79aa36495cfe611/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Mar  1 04:45:28 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[98708]: 01/03/2026 09:45:28 : epoch 69a40a75 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f450c0042e0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:45:28 np0005634532 podman[104957]: 2026-03-01 09:45:28.861898312 +0000 UTC m=+0.356975986 container init 26768e8750f67d33fa1f680b65eda23aec0793a6f8d5b73a5d2186f557d4201b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_morse, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True)
Mar  1 04:45:28 np0005634532 podman[104957]: 2026-03-01 09:45:28.86894819 +0000 UTC m=+0.364025904 container start 26768e8750f67d33fa1f680b65eda23aec0793a6f8d5b73a5d2186f557d4201b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_morse, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Mar  1 04:45:28 np0005634532 podman[104957]: 2026-03-01 09:45:28.879379702 +0000 UTC m=+0.374457406 container attach 26768e8750f67d33fa1f680b65eda23aec0793a6f8d5b73a5d2186f557d4201b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_morse, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Mar  1 04:45:29 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[98708]: 01/03/2026 09:45:29 : epoch 69a40a75 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4504004000 fd 49 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:45:29 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v15: 353 pgs: 353 active+clean; 456 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 80 B/s, 4 objects/s recovering
Mar  1 04:45:29 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"} v 0)
Mar  1 04:45:29 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]: dispatch
Mar  1 04:45:29 np0005634532 lvm[105048]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Mar  1 04:45:29 np0005634532 lvm[105048]: VG ceph_vg0 finished
Mar  1 04:45:29 np0005634532 epic_morse[104973]: {}
Mar  1 04:45:29 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-haproxy-nfs-cephfs-compute-0-wdbjdw[99156]: [WARNING] 059/094529 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Mar  1 04:45:29 np0005634532 systemd[1]: libpod-26768e8750f67d33fa1f680b65eda23aec0793a6f8d5b73a5d2186f557d4201b.scope: Deactivated successfully.
Mar  1 04:45:29 np0005634532 podman[104957]: 2026-03-01 09:45:29.566819365 +0000 UTC m=+1.061897029 container died 26768e8750f67d33fa1f680b65eda23aec0793a6f8d5b73a5d2186f557d4201b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_morse, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Mar  1 04:45:29 np0005634532 ceph-osd[84309]: log_channel(cluster) log [DBG] : 8.1a scrub starts
Mar  1 04:45:29 np0005634532 ceph-osd[84309]: log_channel(cluster) log [DBG] : 8.1a scrub ok
Mar  1 04:45:29 np0005634532 systemd[1]: var-lib-containers-storage-overlay-29f8737acf21ab942d77fcb007f19fc92e9a8c7caf2a22a2d79aa36495cfe611-merged.mount: Deactivated successfully.
Mar  1 04:45:29 np0005634532 podman[104957]: 2026-03-01 09:45:29.612565467 +0000 UTC m=+1.107643131 container remove 26768e8750f67d33fa1f680b65eda23aec0793a6f8d5b73a5d2186f557d4201b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_morse, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Mar  1 04:45:29 np0005634532 systemd[1]: libpod-conmon-26768e8750f67d33fa1f680b65eda23aec0793a6f8d5b73a5d2186f557d4201b.scope: Deactivated successfully.
Mar  1 04:45:29 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Mar  1 04:45:29 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 04:45:29 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Mar  1 04:45:29 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 04:45:29 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.prometheus}] v 0)
Mar  1 04:45:29 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 04:45:29 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e70 do_prune osdmap full prune enabled
Mar  1 04:45:29 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:45:29 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:45:29 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:45:29.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:45:29 np0005634532 ceph-mgr[76134]: [cephadm INFO cephadm.serve] Reconfiguring mon.compute-0 (monmap changed)...
Mar  1 04:45:29 np0005634532 ceph-mgr[76134]: log_channel(cephadm) log [INF] : Reconfiguring mon.compute-0 (monmap changed)...
Mar  1 04:45:29 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0)
Mar  1 04:45:29 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Mar  1 04:45:29 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0)
Mar  1 04:45:29 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Mar  1 04:45:29 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Mar  1 04:45:29 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Mar  1 04:45:29 np0005634532 ceph-mgr[76134]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.compute-0 on compute-0
Mar  1 04:45:29 np0005634532 ceph-mgr[76134]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.compute-0 on compute-0
Mar  1 04:45:29 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]': finished
Mar  1 04:45:29 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e71 e71: 3 total, 3 up, 3 in
Mar  1 04:45:30 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:45:30 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:45:30 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:45:30.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:45:30 np0005634532 podman[105188]: 2026-03-01 09:45:30.306336579 +0000 UTC m=+0.026407566 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Mar  1 04:45:30 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[98708]: 01/03/2026 09:45:30 : epoch 69a40a75 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4500003c10 fd 49 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:45:30 np0005634532 ceph-mon[75825]: log_channel(cluster) log [DBG] : osdmap e71: 3 total, 3 up, 3 in
Mar  1 04:45:30 np0005634532 ceph-osd[84309]: log_channel(cluster) log [DBG] : 8.1e scrub starts
Mar  1 04:45:30 np0005634532 ceph-osd[84309]: log_channel(cluster) log [DBG] : 8.1e scrub ok
Mar  1 04:45:30 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e71 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Mar  1 04:45:30 np0005634532 podman[105188]: 2026-03-01 09:45:30.84273826 +0000 UTC m=+0.562809197 container create 8e6f6625c80194c6b9ce7434127610a5e88190e13cb3719a8040057cc17ed2ac (image=quay.io/ceph/ceph:v19, name=jovial_vaughan, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Mar  1 04:45:30 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[98708]: 01/03/2026 09:45:30 : epoch 69a40a75 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f45280022a0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:45:30 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]: dispatch
Mar  1 04:45:30 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 04:45:30 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 04:45:30 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 04:45:30 np0005634532 systemd[1]: Started libpod-conmon-8e6f6625c80194c6b9ce7434127610a5e88190e13cb3719a8040057cc17ed2ac.scope.
Mar  1 04:45:30 np0005634532 systemd[1]: Started libcrun container.
Mar  1 04:45:30 np0005634532 podman[105188]: 2026-03-01 09:45:30.930317755 +0000 UTC m=+0.650388732 container init 8e6f6625c80194c6b9ce7434127610a5e88190e13cb3719a8040057cc17ed2ac (image=quay.io/ceph/ceph:v19, name=jovial_vaughan, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Mar  1 04:45:30 np0005634532 podman[105188]: 2026-03-01 09:45:30.938944682 +0000 UTC m=+0.659015589 container start 8e6f6625c80194c6b9ce7434127610a5e88190e13cb3719a8040057cc17ed2ac (image=quay.io/ceph/ceph:v19, name=jovial_vaughan, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Mar  1 04:45:30 np0005634532 jovial_vaughan[105204]: 167 167
Mar  1 04:45:30 np0005634532 systemd[1]: libpod-8e6f6625c80194c6b9ce7434127610a5e88190e13cb3719a8040057cc17ed2ac.scope: Deactivated successfully.
Mar  1 04:45:30 np0005634532 conmon[105204]: conmon 8e6f6625c80194c6b9ce <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-8e6f6625c80194c6b9ce7434127610a5e88190e13cb3719a8040057cc17ed2ac.scope/container/memory.events
Mar  1 04:45:30 np0005634532 podman[105188]: 2026-03-01 09:45:30.943498867 +0000 UTC m=+0.663569824 container attach 8e6f6625c80194c6b9ce7434127610a5e88190e13cb3719a8040057cc17ed2ac (image=quay.io/ceph/ceph:v19, name=jovial_vaughan, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Mar  1 04:45:30 np0005634532 podman[105188]: 2026-03-01 09:45:30.944658206 +0000 UTC m=+0.664729123 container died 8e6f6625c80194c6b9ce7434127610a5e88190e13cb3719a8040057cc17ed2ac (image=quay.io/ceph/ceph:v19, name=jovial_vaughan, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Mar  1 04:45:30 np0005634532 systemd[1]: var-lib-containers-storage-overlay-d2d162c002d29855ede817e67fb2330e61f1cc5ad78bb5ce3c9e0fa916d1eee1-merged.mount: Deactivated successfully.
Mar  1 04:45:30 np0005634532 podman[105188]: 2026-03-01 09:45:30.994258224 +0000 UTC m=+0.714329161 container remove 8e6f6625c80194c6b9ce7434127610a5e88190e13cb3719a8040057cc17ed2ac (image=quay.io/ceph/ceph:v19, name=jovial_vaughan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Mar  1 04:45:31 np0005634532 systemd[1]: libpod-conmon-8e6f6625c80194c6b9ce7434127610a5e88190e13cb3719a8040057cc17ed2ac.scope: Deactivated successfully.
Mar  1 04:45:31 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Mar  1 04:45:31 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 04:45:31 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Mar  1 04:45:31 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 04:45:31 np0005634532 ceph-mgr[76134]: [cephadm INFO cephadm.serve] Reconfiguring mgr.compute-0.ebwufc (monmap changed)...
Mar  1 04:45:31 np0005634532 ceph-mgr[76134]: log_channel(cephadm) log [INF] : Reconfiguring mgr.compute-0.ebwufc (monmap changed)...
Mar  1 04:45:31 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-0.ebwufc", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Mar  1 04:45:31 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.ebwufc", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Mar  1 04:45:31 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services"} v 0)
Mar  1 04:45:31 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "mgr services"}]: dispatch
Mar  1 04:45:31 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Mar  1 04:45:31 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Mar  1 04:45:31 np0005634532 ceph-mgr[76134]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.compute-0.ebwufc on compute-0
Mar  1 04:45:31 np0005634532 ceph-mgr[76134]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.compute-0.ebwufc on compute-0
Mar  1 04:45:31 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[98708]: 01/03/2026 09:45:31 : epoch 69a40a75 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f450c0042e0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:45:31 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v17: 353 pgs: 353 active+clean; 456 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 0 objects/s recovering
Mar  1 04:45:31 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"} v 0)
Mar  1 04:45:31 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]: dispatch
Mar  1 04:45:31 np0005634532 podman[105290]: 2026-03-01 09:45:31.511832751 +0000 UTC m=+0.039659049 container create a5f62e6999a42f8f44e7ece20a6714e0140ba4181492126e0ccdef9b40a6e52a (image=quay.io/ceph/ceph:v19, name=ecstatic_elion, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Mar  1 04:45:31 np0005634532 systemd[1]: Started libpod-conmon-a5f62e6999a42f8f44e7ece20a6714e0140ba4181492126e0ccdef9b40a6e52a.scope.
Mar  1 04:45:31 np0005634532 systemd[1]: Started libcrun container.
Mar  1 04:45:31 np0005634532 ceph-osd[84309]: log_channel(cluster) log [DBG] : 8.1d scrub starts
Mar  1 04:45:31 np0005634532 podman[105290]: 2026-03-01 09:45:31.579207247 +0000 UTC m=+0.107033615 container init a5f62e6999a42f8f44e7ece20a6714e0140ba4181492126e0ccdef9b40a6e52a (image=quay.io/ceph/ceph:v19, name=ecstatic_elion, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Mar  1 04:45:31 np0005634532 podman[105290]: 2026-03-01 09:45:31.585615138 +0000 UTC m=+0.113441466 container start a5f62e6999a42f8f44e7ece20a6714e0140ba4181492126e0ccdef9b40a6e52a (image=quay.io/ceph/ceph:v19, name=ecstatic_elion, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1)
Mar  1 04:45:31 np0005634532 ceph-osd[84309]: log_channel(cluster) log [DBG] : 8.1d scrub ok
Mar  1 04:45:31 np0005634532 ecstatic_elion[105307]: 167 167
Mar  1 04:45:31 np0005634532 systemd[1]: libpod-a5f62e6999a42f8f44e7ece20a6714e0140ba4181492126e0ccdef9b40a6e52a.scope: Deactivated successfully.
Mar  1 04:45:31 np0005634532 podman[105290]: 2026-03-01 09:45:31.495731176 +0000 UTC m=+0.023557494 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph:v19
Mar  1 04:45:31 np0005634532 podman[105290]: 2026-03-01 09:45:31.591856275 +0000 UTC m=+0.119682603 container attach a5f62e6999a42f8f44e7ece20a6714e0140ba4181492126e0ccdef9b40a6e52a (image=quay.io/ceph/ceph:v19, name=ecstatic_elion, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Mar  1 04:45:31 np0005634532 podman[105290]: 2026-03-01 09:45:31.592324517 +0000 UTC m=+0.120150845 container died a5f62e6999a42f8f44e7ece20a6714e0140ba4181492126e0ccdef9b40a6e52a (image=quay.io/ceph/ceph:v19, name=ecstatic_elion, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Mar  1 04:45:31 np0005634532 systemd[1]: var-lib-containers-storage-overlay-42e341ad0446100112e7de2e4798afec3f003f081a8d168f62abfec2dfe533f7-merged.mount: Deactivated successfully.
Mar  1 04:45:31 np0005634532 podman[105290]: 2026-03-01 09:45:31.636633292 +0000 UTC m=+0.164459590 container remove a5f62e6999a42f8f44e7ece20a6714e0140ba4181492126e0ccdef9b40a6e52a (image=quay.io/ceph/ceph:v19, name=ecstatic_elion, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Mar  1 04:45:31 np0005634532 systemd[1]: libpod-conmon-a5f62e6999a42f8f44e7ece20a6714e0140ba4181492126e0ccdef9b40a6e52a.scope: Deactivated successfully.
Mar  1 04:45:31 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Mar  1 04:45:31 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 04:45:31 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Mar  1 04:45:31 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 04:45:31 np0005634532 ceph-mgr[76134]: [cephadm INFO cephadm.serve] Reconfiguring crash.compute-0 (monmap changed)...
Mar  1 04:45:31 np0005634532 ceph-mgr[76134]: log_channel(cephadm) log [INF] : Reconfiguring crash.compute-0 (monmap changed)...
Mar  1 04:45:31 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0)
Mar  1 04:45:31 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Mar  1 04:45:31 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Mar  1 04:45:31 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Mar  1 04:45:31 np0005634532 ceph-mgr[76134]: [cephadm INFO cephadm.serve] Reconfiguring daemon crash.compute-0 on compute-0
Mar  1 04:45:31 np0005634532 ceph-mgr[76134]: log_channel(cephadm) log [INF] : Reconfiguring daemon crash.compute-0 on compute-0
Mar  1 04:45:31 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:45:31 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:45:31 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:45:31.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:45:31 np0005634532 ceph-mon[75825]: Reconfiguring mon.compute-0 (monmap changed)...
Mar  1 04:45:31 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Mar  1 04:45:31 np0005634532 ceph-mon[75825]: Reconfiguring daemon mon.compute-0 on compute-0
Mar  1 04:45:31 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]': finished
Mar  1 04:45:31 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 04:45:31 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 04:45:31 np0005634532 ceph-mon[75825]: Reconfiguring mgr.compute-0.ebwufc (monmap changed)...
Mar  1 04:45:31 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.ebwufc", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Mar  1 04:45:31 np0005634532 ceph-mon[75825]: Reconfiguring daemon mgr.compute-0.ebwufc on compute-0
Mar  1 04:45:31 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]: dispatch
Mar  1 04:45:31 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 04:45:31 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 04:45:31 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Mar  1 04:45:32 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e71 do_prune osdmap full prune enabled
Mar  1 04:45:32 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 71 pg[9.8( v 42'1010 (0'0,42'1010] local-lis/les=52/53 n=6 ec=52/35 lis/c=52/52 les/c/f=53/53/0 sis=71 pruub=13.880314827s) [2] r=-1 lpr=71 pi=[52,71)/1 crt=42'1010 lcod 0'0 mlcod 0'0 active pruub 205.735931396s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Mar  1 04:45:32 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 71 pg[9.8( v 42'1010 (0'0,42'1010] local-lis/les=52/53 n=6 ec=52/35 lis/c=52/52 les/c/f=53/53/0 sis=71 pruub=13.880275726s) [2] r=-1 lpr=71 pi=[52,71)/1 crt=42'1010 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 205.735931396s@ mbc={}] state<Start>: transitioning to Stray
Mar  1 04:45:32 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 71 pg[9.18( v 42'1010 (0'0,42'1010] local-lis/les=52/53 n=5 ec=52/35 lis/c=52/52 les/c/f=53/53/0 sis=71 pruub=13.883832932s) [2] r=-1 lpr=71 pi=[52,71)/1 crt=42'1010 lcod 0'0 mlcod 0'0 active pruub 205.740112305s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Mar  1 04:45:32 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 71 pg[9.18( v 42'1010 (0'0,42'1010] local-lis/les=52/53 n=5 ec=52/35 lis/c=52/52 les/c/f=53/53/0 sis=71 pruub=13.883814812s) [2] r=-1 lpr=71 pi=[52,71)/1 crt=42'1010 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 205.740112305s@ mbc={}] state<Start>: transitioning to Stray
Mar  1 04:45:32 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]': finished
Mar  1 04:45:32 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e72 e72: 3 total, 3 up, 3 in
Mar  1 04:45:32 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 72 pg[9.9( v 42'1010 (0'0,42'1010] local-lis/les=52/53 n=6 ec=52/35 lis/c=52/52 les/c/f=53/53/0 sis=72 pruub=13.862320900s) [2] r=-1 lpr=72 pi=[52,72)/1 crt=42'1010 lcod 0'0 mlcod 0'0 active pruub 205.735900879s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Mar  1 04:45:32 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 72 pg[9.9( v 42'1010 (0'0,42'1010] local-lis/les=52/53 n=6 ec=52/35 lis/c=52/52 les/c/f=53/53/0 sis=72 pruub=13.862299919s) [2] r=-1 lpr=72 pi=[52,72)/1 crt=42'1010 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 205.735900879s@ mbc={}] state<Start>: transitioning to Stray
Mar  1 04:45:32 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 72 pg[9.19( v 42'1010 (0'0,42'1010] local-lis/les=52/53 n=5 ec=52/35 lis/c=52/52 les/c/f=53/53/0 sis=72 pruub=13.865839005s) [2] r=-1 lpr=72 pi=[52,72)/1 crt=42'1010 lcod 0'0 mlcod 0'0 active pruub 205.740158081s@ mbc={}] PeeringState::start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Mar  1 04:45:32 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 72 pg[9.19( v 42'1010 (0'0,42'1010] local-lis/les=52/53 n=5 ec=52/35 lis/c=52/52 les/c/f=53/53/0 sis=72 pruub=13.865772247s) [2] r=-1 lpr=72 pi=[52,72)/1 crt=42'1010 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 205.740158081s@ mbc={}] state<Start>: transitioning to Stray
Mar  1 04:45:32 np0005634532 ceph-mon[75825]: log_channel(cluster) log [DBG] : osdmap e72: 3 total, 3 up, 3 in
Mar  1 04:45:32 np0005634532 podman[105395]: 2026-03-01 09:45:32.210720592 +0000 UTC m=+0.057750594 container create 7b097be20cfd023d40aa697870bad186544b6cec32043ca84a0829e1489b5076 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_noether, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Mar  1 04:45:32 np0005634532 systemd[1]: Started libpod-conmon-7b097be20cfd023d40aa697870bad186544b6cec32043ca84a0829e1489b5076.scope.
Mar  1 04:45:32 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:45:32 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:45:32 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:45:32.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:45:32 np0005634532 podman[105395]: 2026-03-01 09:45:32.183451456 +0000 UTC m=+0.030481498 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Mar  1 04:45:32 np0005634532 systemd[1]: Started libcrun container.
Mar  1 04:45:32 np0005634532 podman[105395]: 2026-03-01 09:45:32.296743137 +0000 UTC m=+0.143773239 container init 7b097be20cfd023d40aa697870bad186544b6cec32043ca84a0829e1489b5076 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_noether, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Mar  1 04:45:32 np0005634532 podman[105395]: 2026-03-01 09:45:32.301430585 +0000 UTC m=+0.148460587 container start 7b097be20cfd023d40aa697870bad186544b6cec32043ca84a0829e1489b5076 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_noether, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Mar  1 04:45:32 np0005634532 peaceful_noether[105411]: 167 167
Mar  1 04:45:32 np0005634532 systemd[1]: libpod-7b097be20cfd023d40aa697870bad186544b6cec32043ca84a0829e1489b5076.scope: Deactivated successfully.
Mar  1 04:45:32 np0005634532 podman[105395]: 2026-03-01 09:45:32.305145129 +0000 UTC m=+0.152175181 container attach 7b097be20cfd023d40aa697870bad186544b6cec32043ca84a0829e1489b5076 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_noether, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid)
Mar  1 04:45:32 np0005634532 podman[105395]: 2026-03-01 09:45:32.306116403 +0000 UTC m=+0.153146435 container died 7b097be20cfd023d40aa697870bad186544b6cec32043ca84a0829e1489b5076 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_noether, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Mar  1 04:45:32 np0005634532 systemd[1]: var-lib-containers-storage-overlay-8f865fb75f1534f462e20e3324abf8c2d53c5e303e67c5c1fe4768b7b29b7656-merged.mount: Deactivated successfully.
Mar  1 04:45:32 np0005634532 podman[105395]: 2026-03-01 09:45:32.349610858 +0000 UTC m=+0.196640880 container remove 7b097be20cfd023d40aa697870bad186544b6cec32043ca84a0829e1489b5076 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_noether, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Mar  1 04:45:32 np0005634532 systemd[1]: libpod-conmon-7b097be20cfd023d40aa697870bad186544b6cec32043ca84a0829e1489b5076.scope: Deactivated successfully.
Mar  1 04:45:32 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Mar  1 04:45:32 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 04:45:32 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Mar  1 04:45:32 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 04:45:32 np0005634532 ceph-mgr[76134]: [cephadm INFO cephadm.serve] Reconfiguring osd.0 (monmap changed)...
Mar  1 04:45:32 np0005634532 ceph-mgr[76134]: log_channel(cephadm) log [INF] : Reconfiguring osd.0 (monmap changed)...
Mar  1 04:45:32 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "osd.0"} v 0)
Mar  1 04:45:32 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Mar  1 04:45:32 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Mar  1 04:45:32 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Mar  1 04:45:32 np0005634532 ceph-mgr[76134]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.0 on compute-0
Mar  1 04:45:32 np0005634532 ceph-mgr[76134]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.0 on compute-0
Mar  1 04:45:32 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Mar  1 04:45:32 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Mar  1 04:45:32 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[98708]: 01/03/2026 09:45:32 : epoch 69a40a75 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4504004000 fd 49 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:45:32 np0005634532 ceph-osd[84309]: log_channel(cluster) log [DBG] : 11.1f scrub starts
Mar  1 04:45:32 np0005634532 ceph-osd[84309]: log_channel(cluster) log [DBG] : 11.1f scrub ok
Mar  1 04:45:32 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[98708]: 01/03/2026 09:45:32 : epoch 69a40a75 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4500003c10 fd 49 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:45:32 np0005634532 ceph-mon[75825]: Reconfiguring crash.compute-0 (monmap changed)...
Mar  1 04:45:32 np0005634532 ceph-mon[75825]: Reconfiguring daemon crash.compute-0 on compute-0
Mar  1 04:45:32 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]': finished
Mar  1 04:45:32 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 04:45:32 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 04:45:32 np0005634532 ceph-mon[75825]: Reconfiguring osd.0 (monmap changed)...
Mar  1 04:45:32 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Mar  1 04:45:32 np0005634532 ceph-mon[75825]: Reconfiguring daemon osd.0 on compute-0
Mar  1 04:45:32 np0005634532 podman[105496]: 2026-03-01 09:45:32.924480648 +0000 UTC m=+0.046766498 container create 027a1d6391caf82ec6c3dd871635bffc89ab9beee2ba2bc5ceda6efb9e44dcbd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_ellis, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Mar  1 04:45:32 np0005634532 systemd[1]: Started libpod-conmon-027a1d6391caf82ec6c3dd871635bffc89ab9beee2ba2bc5ceda6efb9e44dcbd.scope.
Mar  1 04:45:32 np0005634532 systemd[1]: Started libcrun container.
Mar  1 04:45:33 np0005634532 podman[105496]: 2026-03-01 09:45:32.906179117 +0000 UTC m=+0.028464947 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Mar  1 04:45:33 np0005634532 podman[105496]: 2026-03-01 09:45:33.005256121 +0000 UTC m=+0.127542021 container init 027a1d6391caf82ec6c3dd871635bffc89ab9beee2ba2bc5ceda6efb9e44dcbd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_ellis, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Mar  1 04:45:33 np0005634532 podman[105496]: 2026-03-01 09:45:33.011931179 +0000 UTC m=+0.134216989 container start 027a1d6391caf82ec6c3dd871635bffc89ab9beee2ba2bc5ceda6efb9e44dcbd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_ellis, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Mar  1 04:45:33 np0005634532 podman[105496]: 2026-03-01 09:45:33.015053728 +0000 UTC m=+0.137339578 container attach 027a1d6391caf82ec6c3dd871635bffc89ab9beee2ba2bc5ceda6efb9e44dcbd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_ellis, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Mar  1 04:45:33 np0005634532 pedantic_ellis[105513]: 167 167
Mar  1 04:45:33 np0005634532 systemd[1]: libpod-027a1d6391caf82ec6c3dd871635bffc89ab9beee2ba2bc5ceda6efb9e44dcbd.scope: Deactivated successfully.
Mar  1 04:45:33 np0005634532 podman[105496]: 2026-03-01 09:45:33.019185322 +0000 UTC m=+0.141471182 container died 027a1d6391caf82ec6c3dd871635bffc89ab9beee2ba2bc5ceda6efb9e44dcbd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_ellis, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Mar  1 04:45:33 np0005634532 systemd[1]: var-lib-containers-storage-overlay-ad496ecb8a1fb707effe0212e849d0b1f44e9dbe07020588e085ead29a242a95-merged.mount: Deactivated successfully.
Mar  1 04:45:33 np0005634532 podman[105496]: 2026-03-01 09:45:33.06522865 +0000 UTC m=+0.187514500 container remove 027a1d6391caf82ec6c3dd871635bffc89ab9beee2ba2bc5ceda6efb9e44dcbd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pedantic_ellis, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1)
Mar  1 04:45:33 np0005634532 systemd[1]: libpod-conmon-027a1d6391caf82ec6c3dd871635bffc89ab9beee2ba2bc5ceda6efb9e44dcbd.scope: Deactivated successfully.
Mar  1 04:45:33 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[98708]: 01/03/2026 09:45:33 : epoch 69a40a75 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f45280022a0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:45:33 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e72 do_prune osdmap full prune enabled
Mar  1 04:45:33 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e73 e73: 3 total, 3 up, 3 in
Mar  1 04:45:33 np0005634532 ceph-mon[75825]: log_channel(cluster) log [DBG] : osdmap e73: 3 total, 3 up, 3 in
Mar  1 04:45:33 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 73 pg[9.19( v 42'1010 (0'0,42'1010] local-lis/les=52/53 n=5 ec=52/35 lis/c=52/52 les/c/f=53/53/0 sis=73) [2]/[0] r=0 lpr=73 pi=[52,73)/1 crt=42'1010 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Mar  1 04:45:33 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 73 pg[9.19( v 42'1010 (0'0,42'1010] local-lis/les=52/53 n=5 ec=52/35 lis/c=52/52 les/c/f=53/53/0 sis=73) [2]/[0] r=0 lpr=73 pi=[52,73)/1 crt=42'1010 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Mar  1 04:45:33 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 73 pg[9.18( v 42'1010 (0'0,42'1010] local-lis/les=52/53 n=5 ec=52/35 lis/c=52/52 les/c/f=53/53/0 sis=73) [2]/[0] r=0 lpr=73 pi=[52,73)/1 crt=42'1010 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Mar  1 04:45:33 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 73 pg[9.8( v 42'1010 (0'0,42'1010] local-lis/les=52/53 n=6 ec=52/35 lis/c=52/52 les/c/f=53/53/0 sis=73) [2]/[0] r=0 lpr=73 pi=[52,73)/1 crt=42'1010 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Mar  1 04:45:33 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 73 pg[9.18( v 42'1010 (0'0,42'1010] local-lis/les=52/53 n=5 ec=52/35 lis/c=52/52 les/c/f=53/53/0 sis=73) [2]/[0] r=0 lpr=73 pi=[52,73)/1 crt=42'1010 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Mar  1 04:45:33 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 73 pg[9.8( v 42'1010 (0'0,42'1010] local-lis/les=52/53 n=6 ec=52/35 lis/c=52/52 les/c/f=53/53/0 sis=73) [2]/[0] r=0 lpr=73 pi=[52,73)/1 crt=42'1010 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Mar  1 04:45:33 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 73 pg[9.9( v 42'1010 (0'0,42'1010] local-lis/les=52/53 n=6 ec=52/35 lis/c=52/52 les/c/f=53/53/0 sis=73) [2]/[0] r=0 lpr=73 pi=[52,73)/1 crt=42'1010 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Mar  1 04:45:33 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 73 pg[9.9( v 42'1010 (0'0,42'1010] local-lis/les=52/53 n=6 ec=52/35 lis/c=52/52 les/c/f=53/53/0 sis=73) [2]/[0] r=0 lpr=73 pi=[52,73)/1 crt=42'1010 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Mar  1 04:45:33 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Mar  1 04:45:33 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 04:45:33 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Mar  1 04:45:33 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 04:45:33 np0005634532 ceph-mgr[76134]: [cephadm INFO cephadm.serve] Reconfiguring node-exporter.compute-0 (unknown last config time)...
Mar  1 04:45:33 np0005634532 ceph-mgr[76134]: log_channel(cephadm) log [INF] : Reconfiguring node-exporter.compute-0 (unknown last config time)...
Mar  1 04:45:33 np0005634532 ceph-mgr[76134]: [cephadm INFO cephadm.serve] Reconfiguring daemon node-exporter.compute-0 on compute-0
Mar  1 04:45:33 np0005634532 ceph-mgr[76134]: log_channel(cephadm) log [INF] : Reconfiguring daemon node-exporter.compute-0 on compute-0
Mar  1 04:45:33 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v20: 353 pgs: 353 active+clean; 456 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 0 objects/s recovering
Mar  1 04:45:33 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"} v 0)
Mar  1 04:45:33 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]: dispatch
Mar  1 04:45:33 np0005634532 ceph-osd[84309]: log_channel(cluster) log [DBG] : 11.10 scrub starts
Mar  1 04:45:33 np0005634532 ceph-osd[84309]: log_channel(cluster) log [DBG] : 11.10 scrub ok
Mar  1 04:45:33 np0005634532 systemd[1]: Stopping Ceph node-exporter.compute-0 for 437b1e74-f995-5d64-af1d-257ce01d77ab...
Mar  1 04:45:33 np0005634532 podman[105646]: 2026-03-01 09:45:33.859494872 +0000 UTC m=+0.044994513 container died e104fed6cb8ecd593791384f3650da41ba603514e3f5be77683b4a91426bfe16 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Mar  1 04:45:33 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:45:33 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:45:33 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:45:33.873 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:45:33 np0005634532 systemd[1]: var-lib-containers-storage-overlay-8dbadd39b909e929e5c214bb459e113b3e61a8db0d76ae2659f3b8514c81fa26-merged.mount: Deactivated successfully.
Mar  1 04:45:33 np0005634532 podman[105646]: 2026-03-01 09:45:33.89635451 +0000 UTC m=+0.081854131 container remove e104fed6cb8ecd593791384f3650da41ba603514e3f5be77683b4a91426bfe16 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Mar  1 04:45:33 np0005634532 bash[105646]: ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-node-exporter-compute-0
Mar  1 04:45:33 np0005634532 systemd[1]: ceph-437b1e74-f995-5d64-af1d-257ce01d77ab@node-exporter.compute-0.service: Main process exited, code=exited, status=143/n/a
Mar  1 04:45:33 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 04:45:33 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 04:45:33 np0005634532 ceph-mon[75825]: Reconfiguring node-exporter.compute-0 (unknown last config time)...
Mar  1 04:45:33 np0005634532 ceph-mon[75825]: Reconfiguring daemon node-exporter.compute-0 on compute-0
Mar  1 04:45:33 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]: dispatch
Mar  1 04:45:34 np0005634532 systemd[1]: ceph-437b1e74-f995-5d64-af1d-257ce01d77ab@node-exporter.compute-0.service: Failed with result 'exit-code'.
Mar  1 04:45:34 np0005634532 systemd[1]: Stopped Ceph node-exporter.compute-0 for 437b1e74-f995-5d64-af1d-257ce01d77ab.
Mar  1 04:45:34 np0005634532 systemd[1]: ceph-437b1e74-f995-5d64-af1d-257ce01d77ab@node-exporter.compute-0.service: Consumed 2.010s CPU time.
Mar  1 04:45:34 np0005634532 systemd[1]: Starting Ceph node-exporter.compute-0 for 437b1e74-f995-5d64-af1d-257ce01d77ab...
Mar  1 04:45:34 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e73 do_prune osdmap full prune enabled
Mar  1 04:45:34 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]': finished
Mar  1 04:45:34 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e74 e74: 3 total, 3 up, 3 in
Mar  1 04:45:34 np0005634532 ceph-mon[75825]: log_channel(cluster) log [DBG] : osdmap e74: 3 total, 3 up, 3 in
Mar  1 04:45:34 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 74 pg[9.a( v 42'1010 (0'0,42'1010] local-lis/les=52/53 n=6 ec=52/35 lis/c=52/52 les/c/f=53/53/0 sis=74 pruub=11.825826645s) [1] r=-1 lpr=74 pi=[52,74)/1 crt=42'1010 lcod 0'0 mlcod 0'0 active pruub 205.736145020s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Mar  1 04:45:34 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 74 pg[9.a( v 42'1010 (0'0,42'1010] local-lis/les=52/53 n=6 ec=52/35 lis/c=52/52 les/c/f=53/53/0 sis=74 pruub=11.825787544s) [1] r=-1 lpr=74 pi=[52,74)/1 crt=42'1010 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 205.736145020s@ mbc={}] state<Start>: transitioning to Stray
Mar  1 04:45:34 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 74 pg[9.1a( v 42'1010 (0'0,42'1010] local-lis/les=52/53 n=5 ec=52/35 lis/c=52/52 les/c/f=53/53/0 sis=74 pruub=11.825597763s) [1] r=-1 lpr=74 pi=[52,74)/1 crt=42'1010 lcod 0'0 mlcod 0'0 active pruub 205.736541748s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Mar  1 04:45:34 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 74 pg[9.1a( v 42'1010 (0'0,42'1010] local-lis/les=52/53 n=5 ec=52/35 lis/c=52/52 les/c/f=53/53/0 sis=74 pruub=11.825538635s) [1] r=-1 lpr=74 pi=[52,74)/1 crt=42'1010 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 205.736541748s@ mbc={}] state<Start>: transitioning to Stray
Mar  1 04:45:34 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 74 pg[9.9( v 42'1010 (0'0,42'1010] local-lis/les=73/74 n=6 ec=52/35 lis/c=52/52 les/c/f=53/53/0 sis=73) [2]/[0] async=[2] r=0 lpr=73 pi=[52,73)/1 crt=42'1010 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Mar  1 04:45:34 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 74 pg[9.18( v 42'1010 (0'0,42'1010] local-lis/les=73/74 n=5 ec=52/35 lis/c=52/52 les/c/f=53/53/0 sis=73) [2]/[0] async=[2] r=0 lpr=73 pi=[52,73)/1 crt=42'1010 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Mar  1 04:45:34 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 74 pg[9.8( v 42'1010 (0'0,42'1010] local-lis/les=73/74 n=6 ec=52/35 lis/c=52/52 les/c/f=53/53/0 sis=73) [2]/[0] async=[2] r=0 lpr=73 pi=[52,73)/1 crt=42'1010 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Mar  1 04:45:34 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 74 pg[9.19( v 42'1010 (0'0,42'1010] local-lis/les=73/74 n=5 ec=52/35 lis/c=52/52 les/c/f=53/53/0 sis=73) [2]/[0] async=[2] r=0 lpr=73 pi=[52,73)/1 crt=42'1010 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Mar  1 04:45:34 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:45:34 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:45:34 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:45:34.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:45:34 np0005634532 systemd[1]: session-38.scope: Deactivated successfully.
Mar  1 04:45:34 np0005634532 systemd[1]: session-38.scope: Consumed 8.081s CPU time.
Mar  1 04:45:34 np0005634532 podman[105769]: 2026-03-01 09:45:34.238668666 +0000 UTC m=+0.022534148 image pull 72c9c208898624938c9e4183d6686ea4a5fd3f912bc29bc3f00147924c521a3e quay.io/prometheus/node-exporter:v1.7.0
Mar  1 04:45:34 np0005634532 systemd-logind[832]: Session 38 logged out. Waiting for processes to exit.
Mar  1 04:45:34 np0005634532 systemd-logind[832]: Removed session 38.
Mar  1 04:45:34 np0005634532 podman[105769]: 2026-03-01 09:45:34.343463494 +0000 UTC m=+0.127328906 container create 1fcbd8a251307aedf47cfda71fbf2d9d43c45db96f7135811488f98478336155 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Mar  1 04:45:34 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/18e7e4a453dfa60a450006eeace16998091b8866482483ab037c791c48f1e8e2/merged/etc/node-exporter supports timestamps until 2038 (0x7fffffff)
Mar  1 04:45:34 np0005634532 podman[105769]: 2026-03-01 09:45:34.413537168 +0000 UTC m=+0.197402640 container init 1fcbd8a251307aedf47cfda71fbf2d9d43c45db96f7135811488f98478336155 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Mar  1 04:45:34 np0005634532 podman[105769]: 2026-03-01 09:45:34.419465267 +0000 UTC m=+0.203330679 container start 1fcbd8a251307aedf47cfda71fbf2d9d43c45db96f7135811488f98478336155 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Mar  1 04:45:34 np0005634532 bash[105769]: 1fcbd8a251307aedf47cfda71fbf2d9d43c45db96f7135811488f98478336155
Mar  1 04:45:34 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-node-exporter-compute-0[105784]: ts=2026-03-01T09:45:34.428Z caller=node_exporter.go:192 level=info msg="Starting node_exporter" version="(version=1.7.0, branch=HEAD, revision=7333465abf9efba81876303bb57e6fadb946041b)"
Mar  1 04:45:34 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-node-exporter-compute-0[105784]: ts=2026-03-01T09:45:34.428Z caller=node_exporter.go:193 level=info msg="Build context" build_context="(go=go1.21.4, platform=linux/amd64, user=root@35918982f6d8, date=20231112-23:53:35, tags=netgo osusergo static_build)"
Mar  1 04:45:34 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-node-exporter-compute-0[105784]: ts=2026-03-01T09:45:34.429Z caller=diskstats_common.go:111 level=info collector=diskstats msg="Parsed flag --collector.diskstats.device-exclude" flag=^(ram|loop|fd|(h|s|v|xv)d[a-z]|nvme\d+n\d+p)\d+$
Mar  1 04:45:34 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-node-exporter-compute-0[105784]: ts=2026-03-01T09:45:34.430Z caller=diskstats_linux.go:265 level=error collector=diskstats msg="Failed to open directory, disabling udev device properties" path=/run/udev/data
Mar  1 04:45:34 np0005634532 systemd[1]: Started Ceph node-exporter.compute-0 for 437b1e74-f995-5d64-af1d-257ce01d77ab.
Mar  1 04:45:34 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-node-exporter-compute-0[105784]: ts=2026-03-01T09:45:34.431Z caller=filesystem_common.go:111 level=info collector=filesystem msg="Parsed flag --collector.filesystem.mount-points-exclude" flag=^/(dev|proc|run/credentials/.+|sys|var/lib/docker/.+|var/lib/containers/storage/.+)($|/)
Mar  1 04:45:34 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-node-exporter-compute-0[105784]: ts=2026-03-01T09:45:34.431Z caller=filesystem_common.go:113 level=info collector=filesystem msg="Parsed flag --collector.filesystem.fs-types-exclude" flag=^(autofs|binfmt_misc|bpf|cgroup2?|configfs|debugfs|devpts|devtmpfs|fusectl|hugetlbfs|iso9660|mqueue|nsfs|overlay|proc|procfs|pstore|rpc_pipefs|securityfs|selinuxfs|squashfs|sysfs|tracefs)$
Mar  1 04:45:34 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-node-exporter-compute-0[105784]: ts=2026-03-01T09:45:34.432Z caller=node_exporter.go:110 level=info msg="Enabled collectors"
Mar  1 04:45:34 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-node-exporter-compute-0[105784]: ts=2026-03-01T09:45:34.432Z caller=node_exporter.go:117 level=info collector=arp
Mar  1 04:45:34 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-node-exporter-compute-0[105784]: ts=2026-03-01T09:45:34.433Z caller=node_exporter.go:117 level=info collector=bcache
Mar  1 04:45:34 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-node-exporter-compute-0[105784]: ts=2026-03-01T09:45:34.433Z caller=node_exporter.go:117 level=info collector=bonding
Mar  1 04:45:34 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-node-exporter-compute-0[105784]: ts=2026-03-01T09:45:34.433Z caller=node_exporter.go:117 level=info collector=btrfs
Mar  1 04:45:34 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-node-exporter-compute-0[105784]: ts=2026-03-01T09:45:34.433Z caller=node_exporter.go:117 level=info collector=conntrack
Mar  1 04:45:34 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-node-exporter-compute-0[105784]: ts=2026-03-01T09:45:34.433Z caller=node_exporter.go:117 level=info collector=cpu
Mar  1 04:45:34 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-node-exporter-compute-0[105784]: ts=2026-03-01T09:45:34.433Z caller=node_exporter.go:117 level=info collector=cpufreq
Mar  1 04:45:34 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-node-exporter-compute-0[105784]: ts=2026-03-01T09:45:34.433Z caller=node_exporter.go:117 level=info collector=diskstats
Mar  1 04:45:34 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-node-exporter-compute-0[105784]: ts=2026-03-01T09:45:34.433Z caller=node_exporter.go:117 level=info collector=dmi
Mar  1 04:45:34 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-node-exporter-compute-0[105784]: ts=2026-03-01T09:45:34.433Z caller=node_exporter.go:117 level=info collector=edac
Mar  1 04:45:34 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-node-exporter-compute-0[105784]: ts=2026-03-01T09:45:34.433Z caller=node_exporter.go:117 level=info collector=entropy
Mar  1 04:45:34 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-node-exporter-compute-0[105784]: ts=2026-03-01T09:45:34.433Z caller=node_exporter.go:117 level=info collector=fibrechannel
Mar  1 04:45:34 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-node-exporter-compute-0[105784]: ts=2026-03-01T09:45:34.433Z caller=node_exporter.go:117 level=info collector=filefd
Mar  1 04:45:34 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-node-exporter-compute-0[105784]: ts=2026-03-01T09:45:34.434Z caller=node_exporter.go:117 level=info collector=filesystem
Mar  1 04:45:34 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-node-exporter-compute-0[105784]: ts=2026-03-01T09:45:34.434Z caller=node_exporter.go:117 level=info collector=hwmon
Mar  1 04:45:34 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-node-exporter-compute-0[105784]: ts=2026-03-01T09:45:34.434Z caller=node_exporter.go:117 level=info collector=infiniband
Mar  1 04:45:34 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-node-exporter-compute-0[105784]: ts=2026-03-01T09:45:34.434Z caller=node_exporter.go:117 level=info collector=ipvs
Mar  1 04:45:34 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-node-exporter-compute-0[105784]: ts=2026-03-01T09:45:34.434Z caller=node_exporter.go:117 level=info collector=loadavg
Mar  1 04:45:34 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-node-exporter-compute-0[105784]: ts=2026-03-01T09:45:34.434Z caller=node_exporter.go:117 level=info collector=mdadm
Mar  1 04:45:34 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-node-exporter-compute-0[105784]: ts=2026-03-01T09:45:34.434Z caller=node_exporter.go:117 level=info collector=meminfo
Mar  1 04:45:34 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-node-exporter-compute-0[105784]: ts=2026-03-01T09:45:34.435Z caller=node_exporter.go:117 level=info collector=netclass
Mar  1 04:45:34 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-node-exporter-compute-0[105784]: ts=2026-03-01T09:45:34.435Z caller=node_exporter.go:117 level=info collector=netdev
Mar  1 04:45:34 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-node-exporter-compute-0[105784]: ts=2026-03-01T09:45:34.435Z caller=node_exporter.go:117 level=info collector=netstat
Mar  1 04:45:34 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-node-exporter-compute-0[105784]: ts=2026-03-01T09:45:34.435Z caller=node_exporter.go:117 level=info collector=nfs
Mar  1 04:45:34 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-node-exporter-compute-0[105784]: ts=2026-03-01T09:45:34.435Z caller=node_exporter.go:117 level=info collector=nfsd
Mar  1 04:45:34 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-node-exporter-compute-0[105784]: ts=2026-03-01T09:45:34.435Z caller=node_exporter.go:117 level=info collector=nvme
Mar  1 04:45:34 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-node-exporter-compute-0[105784]: ts=2026-03-01T09:45:34.435Z caller=node_exporter.go:117 level=info collector=os
Mar  1 04:45:34 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-node-exporter-compute-0[105784]: ts=2026-03-01T09:45:34.435Z caller=node_exporter.go:117 level=info collector=powersupplyclass
Mar  1 04:45:34 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-node-exporter-compute-0[105784]: ts=2026-03-01T09:45:34.435Z caller=node_exporter.go:117 level=info collector=pressure
Mar  1 04:45:34 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-node-exporter-compute-0[105784]: ts=2026-03-01T09:45:34.435Z caller=node_exporter.go:117 level=info collector=rapl
Mar  1 04:45:34 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-node-exporter-compute-0[105784]: ts=2026-03-01T09:45:34.435Z caller=node_exporter.go:117 level=info collector=schedstat
Mar  1 04:45:34 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-node-exporter-compute-0[105784]: ts=2026-03-01T09:45:34.435Z caller=node_exporter.go:117 level=info collector=selinux
Mar  1 04:45:34 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-node-exporter-compute-0[105784]: ts=2026-03-01T09:45:34.435Z caller=node_exporter.go:117 level=info collector=sockstat
Mar  1 04:45:34 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-node-exporter-compute-0[105784]: ts=2026-03-01T09:45:34.435Z caller=node_exporter.go:117 level=info collector=softnet
Mar  1 04:45:34 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-node-exporter-compute-0[105784]: ts=2026-03-01T09:45:34.435Z caller=node_exporter.go:117 level=info collector=stat
Mar  1 04:45:34 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-node-exporter-compute-0[105784]: ts=2026-03-01T09:45:34.435Z caller=node_exporter.go:117 level=info collector=tapestats
Mar  1 04:45:34 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-node-exporter-compute-0[105784]: ts=2026-03-01T09:45:34.435Z caller=node_exporter.go:117 level=info collector=textfile
Mar  1 04:45:34 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-node-exporter-compute-0[105784]: ts=2026-03-01T09:45:34.435Z caller=node_exporter.go:117 level=info collector=thermal_zone
Mar  1 04:45:34 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-node-exporter-compute-0[105784]: ts=2026-03-01T09:45:34.435Z caller=node_exporter.go:117 level=info collector=time
Mar  1 04:45:34 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-node-exporter-compute-0[105784]: ts=2026-03-01T09:45:34.435Z caller=node_exporter.go:117 level=info collector=udp_queues
Mar  1 04:45:34 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-node-exporter-compute-0[105784]: ts=2026-03-01T09:45:34.435Z caller=node_exporter.go:117 level=info collector=uname
Mar  1 04:45:34 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-node-exporter-compute-0[105784]: ts=2026-03-01T09:45:34.435Z caller=node_exporter.go:117 level=info collector=vmstat
Mar  1 04:45:34 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-node-exporter-compute-0[105784]: ts=2026-03-01T09:45:34.435Z caller=node_exporter.go:117 level=info collector=xfs
Mar  1 04:45:34 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-node-exporter-compute-0[105784]: ts=2026-03-01T09:45:34.435Z caller=node_exporter.go:117 level=info collector=zfs
Mar  1 04:45:34 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-node-exporter-compute-0[105784]: ts=2026-03-01T09:45:34.436Z caller=tls_config.go:274 level=info msg="Listening on" address=[::]:9100
Mar  1 04:45:34 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-node-exporter-compute-0[105784]: ts=2026-03-01T09:45:34.436Z caller=tls_config.go:277 level=info msg="TLS is disabled." http2=false address=[::]:9100
Mar  1 04:45:34 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Mar  1 04:45:34 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 04:45:34 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Mar  1 04:45:34 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[98708]: 01/03/2026 09:45:34 : epoch 69a40a75 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f450c0042e0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:45:34 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 04:45:34 np0005634532 ceph-mgr[76134]: [cephadm INFO cephadm.serve] Reconfiguring alertmanager.compute-0 (dependencies changed)...
Mar  1 04:45:34 np0005634532 ceph-mgr[76134]: log_channel(cephadm) log [INF] : Reconfiguring alertmanager.compute-0 (dependencies changed)...
Mar  1 04:45:34 np0005634532 ceph-mgr[76134]: [cephadm INFO cephadm.serve] Reconfiguring daemon alertmanager.compute-0 on compute-0
Mar  1 04:45:34 np0005634532 ceph-mgr[76134]: log_channel(cephadm) log [INF] : Reconfiguring daemon alertmanager.compute-0 on compute-0
Mar  1 04:45:34 np0005634532 ceph-osd[84309]: log_channel(cluster) log [DBG] : 8.13 scrub starts
Mar  1 04:45:34 np0005634532 ceph-osd[84309]: log_channel(cluster) log [DBG] : 8.13 scrub ok
Mar  1 04:45:34 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[98708]: 01/03/2026 09:45:34 : epoch 69a40a75 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4504004000 fd 49 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:45:34 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]': finished
Mar  1 04:45:34 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 04:45:34 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 04:45:34 np0005634532 ceph-mon[75825]: Reconfiguring alertmanager.compute-0 (dependencies changed)...
Mar  1 04:45:34 np0005634532 ceph-mon[75825]: Reconfiguring daemon alertmanager.compute-0 on compute-0
Mar  1 04:45:35 np0005634532 podman[105860]: 2026-03-01 09:45:35.031482291 +0000 UTC m=+0.061938850 volume create b970e26c7b3ec12d3b51976f6d862c6455f18ab0772fc5b85959916cd98e15e5
Mar  1 04:45:35 np0005634532 podman[105860]: 2026-03-01 09:45:35.050274054 +0000 UTC m=+0.080730623 container create f748aea479bcc7cb892ac8051a2dcefecbfbf1f012678642b9fad59cdf894e3c (image=quay.io/prometheus/alertmanager:v0.25.0, name=gallant_golick, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Mar  1 04:45:35 np0005634532 podman[105860]: 2026-03-01 09:45:34.999535757 +0000 UTC m=+0.029992386 image pull c8568f914cd25b2062c44e9f79f9c18da6e3b85fe0c47a12a2191c61426c2b19 quay.io/prometheus/alertmanager:v0.25.0
Mar  1 04:45:35 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[98708]: 01/03/2026 09:45:35 : epoch 69a40a75 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4500003c10 fd 49 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:45:35 np0005634532 systemd[1]: Started libpod-conmon-f748aea479bcc7cb892ac8051a2dcefecbfbf1f012678642b9fad59cdf894e3c.scope.
Mar  1 04:45:35 np0005634532 systemd[1]: Started libcrun container.
Mar  1 04:45:35 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b569da692577cfa7ff2c1eb8f1ca50a3f0b5438339b4c7c92421ad590319839a/merged/alertmanager supports timestamps until 2038 (0x7fffffff)
Mar  1 04:45:35 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e74 do_prune osdmap full prune enabled
Mar  1 04:45:35 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e75 e75: 3 total, 3 up, 3 in
Mar  1 04:45:35 np0005634532 ceph-mon[75825]: log_channel(cluster) log [DBG] : osdmap e75: 3 total, 3 up, 3 in
Mar  1 04:45:35 np0005634532 podman[105860]: 2026-03-01 09:45:35.180588494 +0000 UTC m=+0.211045073 container init f748aea479bcc7cb892ac8051a2dcefecbfbf1f012678642b9fad59cdf894e3c (image=quay.io/prometheus/alertmanager:v0.25.0, name=gallant_golick, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Mar  1 04:45:35 np0005634532 podman[105860]: 2026-03-01 09:45:35.187447456 +0000 UTC m=+0.217903985 container start f748aea479bcc7cb892ac8051a2dcefecbfbf1f012678642b9fad59cdf894e3c (image=quay.io/prometheus/alertmanager:v0.25.0, name=gallant_golick, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Mar  1 04:45:35 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 75 pg[9.1a( v 42'1010 (0'0,42'1010] local-lis/les=52/53 n=5 ec=52/35 lis/c=52/52 les/c/f=53/53/0 sis=75) [1]/[0] r=0 lpr=75 pi=[52,75)/1 crt=42'1010 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Mar  1 04:45:35 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 75 pg[9.8( v 42'1010 (0'0,42'1010] local-lis/les=73/74 n=6 ec=52/35 lis/c=73/52 les/c/f=74/53/0 sis=75 pruub=14.979758263s) [2] async=[2] r=-1 lpr=75 pi=[52,75)/1 crt=42'1010 lcod 0'0 mlcod 0'0 active pruub 209.919738770s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Mar  1 04:45:35 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 75 pg[9.1a( v 42'1010 (0'0,42'1010] local-lis/les=52/53 n=5 ec=52/35 lis/c=52/52 les/c/f=53/53/0 sis=75) [1]/[0] r=0 lpr=75 pi=[52,75)/1 crt=42'1010 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Mar  1 04:45:35 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 75 pg[9.19( v 42'1010 (0'0,42'1010] local-lis/les=73/74 n=5 ec=52/35 lis/c=73/52 les/c/f=74/53/0 sis=75 pruub=14.979728699s) [2] async=[2] r=-1 lpr=75 pi=[52,75)/1 crt=42'1010 lcod 0'0 mlcod 0'0 active pruub 209.919754028s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Mar  1 04:45:35 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 75 pg[9.18( v 42'1010 (0'0,42'1010] local-lis/les=73/74 n=5 ec=52/35 lis/c=73/52 les/c/f=74/53/0 sis=75 pruub=14.979539871s) [2] async=[2] r=-1 lpr=75 pi=[52,75)/1 crt=42'1010 lcod 0'0 mlcod 0'0 active pruub 209.919738770s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Mar  1 04:45:35 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 75 pg[9.19( v 42'1010 (0'0,42'1010] local-lis/les=73/74 n=5 ec=52/35 lis/c=73/52 les/c/f=74/53/0 sis=75 pruub=14.979626656s) [2] r=-1 lpr=75 pi=[52,75)/1 crt=42'1010 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 209.919754028s@ mbc={}] state<Start>: transitioning to Stray
Mar  1 04:45:35 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 75 pg[9.a( v 42'1010 (0'0,42'1010] local-lis/les=52/53 n=6 ec=52/35 lis/c=52/52 les/c/f=53/53/0 sis=75) [1]/[0] r=0 lpr=75 pi=[52,75)/1 crt=42'1010 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Mar  1 04:45:35 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 75 pg[9.8( v 42'1010 (0'0,42'1010] local-lis/les=73/74 n=6 ec=52/35 lis/c=73/52 les/c/f=74/53/0 sis=75 pruub=14.979650497s) [2] r=-1 lpr=75 pi=[52,75)/1 crt=42'1010 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 209.919738770s@ mbc={}] state<Start>: transitioning to Stray
Mar  1 04:45:35 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 75 pg[9.a( v 42'1010 (0'0,42'1010] local-lis/les=52/53 n=6 ec=52/35 lis/c=52/52 les/c/f=53/53/0 sis=75) [1]/[0] r=0 lpr=75 pi=[52,75)/1 crt=42'1010 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Mar  1 04:45:35 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 75 pg[9.9( v 42'1010 (0'0,42'1010] local-lis/les=73/74 n=6 ec=52/35 lis/c=73/52 les/c/f=74/53/0 sis=75 pruub=14.977403641s) [2] async=[2] r=-1 lpr=75 pi=[52,75)/1 crt=42'1010 lcod 0'0 mlcod 0'0 active pruub 209.917922974s@ mbc={255={}}] PeeringState::start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Mar  1 04:45:35 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 75 pg[9.9( v 42'1010 (0'0,42'1010] local-lis/les=73/74 n=6 ec=52/35 lis/c=73/52 les/c/f=74/53/0 sis=75 pruub=14.977361679s) [2] r=-1 lpr=75 pi=[52,75)/1 crt=42'1010 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 209.917922974s@ mbc={}] state<Start>: transitioning to Stray
Mar  1 04:45:35 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 75 pg[9.18( v 42'1010 (0'0,42'1010] local-lis/les=73/74 n=5 ec=52/35 lis/c=73/52 les/c/f=74/53/0 sis=75 pruub=14.979479790s) [2] r=-1 lpr=75 pi=[52,75)/1 crt=42'1010 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 209.919738770s@ mbc={}] state<Start>: transitioning to Stray
Mar  1 04:45:35 np0005634532 gallant_golick[105877]: 65534 65534
Mar  1 04:45:35 np0005634532 systemd[1]: libpod-f748aea479bcc7cb892ac8051a2dcefecbfbf1f012678642b9fad59cdf894e3c.scope: Deactivated successfully.
Mar  1 04:45:35 np0005634532 podman[105860]: 2026-03-01 09:45:35.204833164 +0000 UTC m=+0.235289793 container attach f748aea479bcc7cb892ac8051a2dcefecbfbf1f012678642b9fad59cdf894e3c (image=quay.io/prometheus/alertmanager:v0.25.0, name=gallant_golick, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Mar  1 04:45:35 np0005634532 podman[105860]: 2026-03-01 09:45:35.205656055 +0000 UTC m=+0.236112584 container died f748aea479bcc7cb892ac8051a2dcefecbfbf1f012678642b9fad59cdf894e3c (image=quay.io/prometheus/alertmanager:v0.25.0, name=gallant_golick, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Mar  1 04:45:35 np0005634532 systemd[1]: var-lib-containers-storage-overlay-b569da692577cfa7ff2c1eb8f1ca50a3f0b5438339b4c7c92421ad590319839a-merged.mount: Deactivated successfully.
Mar  1 04:45:35 np0005634532 podman[105860]: 2026-03-01 09:45:35.304492433 +0000 UTC m=+0.334949002 container remove f748aea479bcc7cb892ac8051a2dcefecbfbf1f012678642b9fad59cdf894e3c (image=quay.io/prometheus/alertmanager:v0.25.0, name=gallant_golick, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Mar  1 04:45:35 np0005634532 podman[105860]: 2026-03-01 09:45:35.311683254 +0000 UTC m=+0.342139823 volume remove b970e26c7b3ec12d3b51976f6d862c6455f18ab0772fc5b85959916cd98e15e5
Mar  1 04:45:35 np0005634532 systemd[1]: libpod-conmon-f748aea479bcc7cb892ac8051a2dcefecbfbf1f012678642b9fad59cdf894e3c.scope: Deactivated successfully.
Mar  1 04:45:35 np0005634532 podman[105897]: 2026-03-01 09:45:35.393732509 +0000 UTC m=+0.058814702 volume create 1c5bafe5e25f39e8293c9863c12a79e12350b7fcd6ac1d586451395713564bd2
Mar  1 04:45:35 np0005634532 podman[105897]: 2026-03-01 09:45:35.406139931 +0000 UTC m=+0.071222134 container create c435cfe5a3c2d51350912880799f88c8d3c2db503282fcd7a264434622dacd5e (image=quay.io/prometheus/alertmanager:v0.25.0, name=objective_brahmagupta, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Mar  1 04:45:35 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v23: 353 pgs: 4 remapped+peering, 349 active+clean; 456 KiB data, 125 MiB used, 60 GiB / 60 GiB avail
Mar  1 04:45:35 np0005634532 systemd[1]: Started libpod-conmon-c435cfe5a3c2d51350912880799f88c8d3c2db503282fcd7a264434622dacd5e.scope.
Mar  1 04:45:35 np0005634532 podman[105897]: 2026-03-01 09:45:35.368472103 +0000 UTC m=+0.033554346 image pull c8568f914cd25b2062c44e9f79f9c18da6e3b85fe0c47a12a2191c61426c2b19 quay.io/prometheus/alertmanager:v0.25.0
Mar  1 04:45:35 np0005634532 systemd[1]: Started libcrun container.
Mar  1 04:45:35 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fccbeb1bb26de71e43370dbe3c8bd1ca82b0eb8ce4964f52c2d66d4a7d568a76/merged/alertmanager supports timestamps until 2038 (0x7fffffff)
Mar  1 04:45:35 np0005634532 podman[105897]: 2026-03-01 09:45:35.517175076 +0000 UTC m=+0.182257249 container init c435cfe5a3c2d51350912880799f88c8d3c2db503282fcd7a264434622dacd5e (image=quay.io/prometheus/alertmanager:v0.25.0, name=objective_brahmagupta, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Mar  1 04:45:35 np0005634532 podman[105897]: 2026-03-01 09:45:35.523502615 +0000 UTC m=+0.188584778 container start c435cfe5a3c2d51350912880799f88c8d3c2db503282fcd7a264434622dacd5e (image=quay.io/prometheus/alertmanager:v0.25.0, name=objective_brahmagupta, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Mar  1 04:45:35 np0005634532 objective_brahmagupta[105914]: 65534 65534
Mar  1 04:45:35 np0005634532 systemd[1]: libpod-c435cfe5a3c2d51350912880799f88c8d3c2db503282fcd7a264434622dacd5e.scope: Deactivated successfully.
Mar  1 04:45:35 np0005634532 podman[105897]: 2026-03-01 09:45:35.529458125 +0000 UTC m=+0.194540328 container attach c435cfe5a3c2d51350912880799f88c8d3c2db503282fcd7a264434622dacd5e (image=quay.io/prometheus/alertmanager:v0.25.0, name=objective_brahmagupta, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Mar  1 04:45:35 np0005634532 podman[105897]: 2026-03-01 09:45:35.529835064 +0000 UTC m=+0.194917267 container died c435cfe5a3c2d51350912880799f88c8d3c2db503282fcd7a264434622dacd5e (image=quay.io/prometheus/alertmanager:v0.25.0, name=objective_brahmagupta, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Mar  1 04:45:35 np0005634532 systemd[1]: var-lib-containers-storage-overlay-fccbeb1bb26de71e43370dbe3c8bd1ca82b0eb8ce4964f52c2d66d4a7d568a76-merged.mount: Deactivated successfully.
Mar  1 04:45:35 np0005634532 ceph-osd[84309]: log_channel(cluster) log [DBG] : 11.11 scrub starts
Mar  1 04:45:35 np0005634532 ceph-osd[84309]: log_channel(cluster) log [DBG] : 11.11 scrub ok
Mar  1 04:45:35 np0005634532 podman[105897]: 2026-03-01 09:45:35.594522273 +0000 UTC m=+0.259604476 container remove c435cfe5a3c2d51350912880799f88c8d3c2db503282fcd7a264434622dacd5e (image=quay.io/prometheus/alertmanager:v0.25.0, name=objective_brahmagupta, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Mar  1 04:45:35 np0005634532 podman[105897]: 2026-03-01 09:45:35.603921429 +0000 UTC m=+0.269003622 volume remove 1c5bafe5e25f39e8293c9863c12a79e12350b7fcd6ac1d586451395713564bd2
Mar  1 04:45:35 np0005634532 systemd[1]: libpod-conmon-c435cfe5a3c2d51350912880799f88c8d3c2db503282fcd7a264434622dacd5e.scope: Deactivated successfully.
Mar  1 04:45:35 np0005634532 systemd[1]: Stopping Ceph alertmanager.compute-0 for 437b1e74-f995-5d64-af1d-257ce01d77ab...
Mar  1 04:45:35 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e75 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Mar  1 04:45:35 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[99981]: ts=2026-03-01T09:45:35.847Z caller=main.go:583 level=info msg="Received SIGTERM, exiting gracefully..."
Mar  1 04:45:35 np0005634532 podman[105963]: 2026-03-01 09:45:35.858012875 +0000 UTC m=+0.045646030 container died 79aaca671f71fae62bc8768d70f996bd09d03a5082fcac359db10cb2ffb3e479 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Mar  1 04:45:35 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:45:35 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 04:45:35 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:45:35.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 04:45:35 np0005634532 systemd[1]: var-lib-containers-storage-overlay-86f4aa5d32b62ff6304caebfb4474293eea7bd9a80057d2bb3c785a664cdfaa6-merged.mount: Deactivated successfully.
Mar  1 04:45:35 np0005634532 podman[105963]: 2026-03-01 09:45:35.922548539 +0000 UTC m=+0.110181704 container remove 79aaca671f71fae62bc8768d70f996bd09d03a5082fcac359db10cb2ffb3e479 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Mar  1 04:45:35 np0005634532 podman[105963]: 2026-03-01 09:45:35.931295159 +0000 UTC m=+0.118928324 volume remove 7cdffed30ca1aee034e76c0481a56cbd47d16ba7182d3c6adb2e13bd6ca648d7
Mar  1 04:45:35 np0005634532 bash[105963]: ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0
Mar  1 04:45:36 np0005634532 systemd[1]: ceph-437b1e74-f995-5d64-af1d-257ce01d77ab@alertmanager.compute-0.service: Deactivated successfully.
Mar  1 04:45:36 np0005634532 systemd[1]: Stopped Ceph alertmanager.compute-0 for 437b1e74-f995-5d64-af1d-257ce01d77ab.
Mar  1 04:45:36 np0005634532 systemd[1]: ceph-437b1e74-f995-5d64-af1d-257ce01d77ab@alertmanager.compute-0.service: Consumed 1.019s CPU time.
Mar  1 04:45:36 np0005634532 systemd[1]: Starting Ceph alertmanager.compute-0 for 437b1e74-f995-5d64-af1d-257ce01d77ab...
Mar  1 04:45:36 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e75 do_prune osdmap full prune enabled
Mar  1 04:45:36 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e76 e76: 3 total, 3 up, 3 in
Mar  1 04:45:36 np0005634532 ceph-mon[75825]: log_channel(cluster) log [DBG] : osdmap e76: 3 total, 3 up, 3 in
Mar  1 04:45:36 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 76 pg[9.a( v 42'1010 (0'0,42'1010] local-lis/les=75/76 n=6 ec=52/35 lis/c=52/52 les/c/f=53/53/0 sis=75) [1]/[0] async=[1] r=0 lpr=75 pi=[52,75)/1 crt=42'1010 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=9}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Mar  1 04:45:36 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 76 pg[9.1a( v 42'1010 (0'0,42'1010] local-lis/les=75/76 n=5 ec=52/35 lis/c=52/52 les/c/f=53/53/0 sis=75) [1]/[0] async=[1] r=0 lpr=75 pi=[52,75)/1 crt=42'1010 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Mar  1 04:45:36 np0005634532 podman[106067]: 2026-03-01 09:45:36.211896852 +0000 UTC m=+0.041559847 volume create ca9549a21c5e030db0d8f55bb3157b3eb7e5e051cc85d8c81ed089821267d910
Mar  1 04:45:36 np0005634532 podman[106067]: 2026-03-01 09:45:36.222239483 +0000 UTC m=+0.051902488 container create 42a0f9f88cfb3d053b26a487d34447f1453d64f75d2848aac14e4c543e6921a8 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Mar  1 04:45:36 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e2f3a004dac31249fb4eb7d9a4fe134d4a9d75db9b7b3929066cdcac46d657d/merged/alertmanager supports timestamps until 2038 (0x7fffffff)
Mar  1 04:45:36 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e2f3a004dac31249fb4eb7d9a4fe134d4a9d75db9b7b3929066cdcac46d657d/merged/etc/alertmanager supports timestamps until 2038 (0x7fffffff)
Mar  1 04:45:36 np0005634532 podman[106067]: 2026-03-01 09:45:36.268138288 +0000 UTC m=+0.097801293 container init 42a0f9f88cfb3d053b26a487d34447f1453d64f75d2848aac14e4c543e6921a8 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Mar  1 04:45:36 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:45:36 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:45:36 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:45:36.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:45:36 np0005634532 podman[106067]: 2026-03-01 09:45:36.272316313 +0000 UTC m=+0.101979318 container start 42a0f9f88cfb3d053b26a487d34447f1453d64f75d2848aac14e4c543e6921a8 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Mar  1 04:45:36 np0005634532 bash[106067]: 42a0f9f88cfb3d053b26a487d34447f1453d64f75d2848aac14e4c543e6921a8
Mar  1 04:45:36 np0005634532 podman[106067]: 2026-03-01 09:45:36.196787322 +0000 UTC m=+0.026450357 image pull c8568f914cd25b2062c44e9f79f9c18da6e3b85fe0c47a12a2191c61426c2b19 quay.io/prometheus/alertmanager:v0.25.0
Mar  1 04:45:36 np0005634532 systemd[1]: Started Ceph alertmanager.compute-0 for 437b1e74-f995-5d64-af1d-257ce01d77ab.
Mar  1 04:45:36 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T09:45:36.305Z caller=main.go:240 level=info msg="Starting Alertmanager" version="(version=0.25.0, branch=HEAD, revision=258fab7cdd551f2cf251ed0348f0ad7289aee789)"
Mar  1 04:45:36 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T09:45:36.306Z caller=main.go:241 level=info build_context="(go=go1.19.4, user=root@abe866dd5717, date=20221222-14:51:36)"
Mar  1 04:45:36 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T09:45:36.315Z caller=cluster.go:185 level=info component=cluster msg="setting advertise address explicitly" addr=192.168.122.100 port=9094
Mar  1 04:45:36 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T09:45:36.317Z caller=cluster.go:681 level=info component=cluster msg="Waiting for gossip to settle..." interval=2s
Mar  1 04:45:36 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Mar  1 04:45:36 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 04:45:36 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Mar  1 04:45:36 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 04:45:36 np0005634532 ceph-mgr[76134]: [cephadm INFO cephadm.serve] Reconfiguring grafana.compute-0 (dependencies changed)...
Mar  1 04:45:36 np0005634532 ceph-mgr[76134]: log_channel(cephadm) log [INF] : Reconfiguring grafana.compute-0 (dependencies changed)...
Mar  1 04:45:36 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T09:45:36.369Z caller=coordinator.go:113 level=info component=configuration msg="Loading configuration file" file=/etc/alertmanager/alertmanager.yml
Mar  1 04:45:36 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T09:45:36.370Z caller=coordinator.go:126 level=info component=configuration msg="Completed loading of configuration file" file=/etc/alertmanager/alertmanager.yml
Mar  1 04:45:36 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T09:45:36.375Z caller=tls_config.go:232 level=info msg="Listening on" address=192.168.122.100:9093
Mar  1 04:45:36 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T09:45:36.375Z caller=tls_config.go:235 level=info msg="TLS is disabled." http2=false address=192.168.122.100:9093
Mar  1 04:45:36 np0005634532 ceph-mgr[76134]: [cephadm INFO cephadm.serve] Reconfiguring daemon grafana.compute-0 on compute-0
Mar  1 04:45:36 np0005634532 ceph-mgr[76134]: log_channel(cephadm) log [INF] : Reconfiguring daemon grafana.compute-0 on compute-0
Mar  1 04:45:36 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[98708]: 01/03/2026 09:45:36 : epoch 69a40a75 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f45280022a0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:45:36 np0005634532 ceph-osd[84309]: log_channel(cluster) log [DBG] : 10.13 deep-scrub starts
Mar  1 04:45:36 np0005634532 ceph-osd[84309]: log_channel(cluster) log [DBG] : 10.13 deep-scrub ok
Mar  1 04:45:36 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[98708]: 01/03/2026 09:45:36 : epoch 69a40a75 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f450c0042e0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:45:36 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 04:45:36 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 04:45:36 np0005634532 ceph-mon[75825]: Reconfiguring grafana.compute-0 (dependencies changed)...
Mar  1 04:45:36 np0005634532 ceph-mon[75825]: Reconfiguring daemon grafana.compute-0 on compute-0
Mar  1 04:45:36 np0005634532 podman[106171]: 2026-03-01 09:45:36.985232017 +0000 UTC m=+0.053914378 container create 67daae71bf9315cbb1915097a6f7c739116077381ac4efe3122e3726897c94fd (image=quay.io/ceph/grafana:10.4.0, name=frosty_cerf, maintainer=Grafana Labs <hello@grafana.com>)
Mar  1 04:45:37 np0005634532 systemd[1]: Started libpod-conmon-67daae71bf9315cbb1915097a6f7c739116077381ac4efe3122e3726897c94fd.scope.
Mar  1 04:45:37 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: ::ffff:192.168.122.100 - - [01/Mar/2026:09:45:37] "GET /metrics HTTP/1.1" 200 48283 "" "Prometheus/2.51.0"
Mar  1 04:45:37 np0005634532 ceph-mgr[76134]: [prometheus INFO cherrypy.access.140604152982112] ::ffff:192.168.122.100 - - [01/Mar/2026:09:45:37] "GET /metrics HTTP/1.1" 200 48283 "" "Prometheus/2.51.0"
Mar  1 04:45:37 np0005634532 podman[106171]: 2026-03-01 09:45:36.966904986 +0000 UTC m=+0.035587397 image pull c8b91775d855b99270fc5d22f3c6737e8cca01ef4c25c8b0362295e0746fa39b quay.io/ceph/grafana:10.4.0
Mar  1 04:45:37 np0005634532 systemd[1]: Started libcrun container.
Mar  1 04:45:37 np0005634532 podman[106171]: 2026-03-01 09:45:37.07988582 +0000 UTC m=+0.148568271 container init 67daae71bf9315cbb1915097a6f7c739116077381ac4efe3122e3726897c94fd (image=quay.io/ceph/grafana:10.4.0, name=frosty_cerf, maintainer=Grafana Labs <hello@grafana.com>)
Mar  1 04:45:37 np0005634532 podman[106171]: 2026-03-01 09:45:37.086515377 +0000 UTC m=+0.155197778 container start 67daae71bf9315cbb1915097a6f7c739116077381ac4efe3122e3726897c94fd (image=quay.io/ceph/grafana:10.4.0, name=frosty_cerf, maintainer=Grafana Labs <hello@grafana.com>)
Mar  1 04:45:37 np0005634532 podman[106171]: 2026-03-01 09:45:37.090566649 +0000 UTC m=+0.159249050 container attach 67daae71bf9315cbb1915097a6f7c739116077381ac4efe3122e3726897c94fd (image=quay.io/ceph/grafana:10.4.0, name=frosty_cerf, maintainer=Grafana Labs <hello@grafana.com>)
Mar  1 04:45:37 np0005634532 frosty_cerf[106187]: 472 0
Mar  1 04:45:37 np0005634532 systemd[1]: libpod-67daae71bf9315cbb1915097a6f7c739116077381ac4efe3122e3726897c94fd.scope: Deactivated successfully.
Mar  1 04:45:37 np0005634532 conmon[106187]: conmon 67daae71bf9315cbb191 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-67daae71bf9315cbb1915097a6f7c739116077381ac4efe3122e3726897c94fd.scope/container/memory.events
Mar  1 04:45:37 np0005634532 podman[106171]: 2026-03-01 09:45:37.092554019 +0000 UTC m=+0.161236410 container died 67daae71bf9315cbb1915097a6f7c739116077381ac4efe3122e3726897c94fd (image=quay.io/ceph/grafana:10.4.0, name=frosty_cerf, maintainer=Grafana Labs <hello@grafana.com>)
Mar  1 04:45:37 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[98708]: 01/03/2026 09:45:37 : epoch 69a40a75 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4504004000 fd 49 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:45:37 np0005634532 systemd[1]: var-lib-containers-storage-overlay-18bb6eeb932e2c54d0cfbfd8f3416058a0231879bb742b6aacc28a7f4bdeb209-merged.mount: Deactivated successfully.
Mar  1 04:45:37 np0005634532 podman[106171]: 2026-03-01 09:45:37.142448535 +0000 UTC m=+0.211130896 container remove 67daae71bf9315cbb1915097a6f7c739116077381ac4efe3122e3726897c94fd (image=quay.io/ceph/grafana:10.4.0, name=frosty_cerf, maintainer=Grafana Labs <hello@grafana.com>)
Mar  1 04:45:37 np0005634532 systemd[1]: libpod-conmon-67daae71bf9315cbb1915097a6f7c739116077381ac4efe3122e3726897c94fd.scope: Deactivated successfully.
Mar  1 04:45:37 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e76 do_prune osdmap full prune enabled
Mar  1 04:45:37 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e77 e77: 3 total, 3 up, 3 in
Mar  1 04:45:37 np0005634532 podman[106206]: 2026-03-01 09:45:37.201397828 +0000 UTC m=+0.043520786 container create c15b9f117ca76145349d4dc9d6edf2b536bfd15d169abb63034cdd7483c2abc7 (image=quay.io/ceph/grafana:10.4.0, name=exciting_pike, maintainer=Grafana Labs <hello@grafana.com>)
Mar  1 04:45:37 np0005634532 ceph-mon[75825]: log_channel(cluster) log [DBG] : osdmap e77: 3 total, 3 up, 3 in
Mar  1 04:45:37 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 77 pg[9.a( v 42'1010 (0'0,42'1010] local-lis/les=75/76 n=6 ec=52/35 lis/c=75/52 les/c/f=76/53/0 sis=77 pruub=14.978804588s) [1] async=[1] r=-1 lpr=77 pi=[52,77)/1 crt=42'1010 lcod 0'0 mlcod 0'0 active pruub 211.941772461s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Mar  1 04:45:37 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 77 pg[9.a( v 42'1010 (0'0,42'1010] local-lis/les=75/76 n=6 ec=52/35 lis/c=75/52 les/c/f=76/53/0 sis=77 pruub=14.978724480s) [1] r=-1 lpr=77 pi=[52,77)/1 crt=42'1010 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 211.941772461s@ mbc={}] state<Start>: transitioning to Stray
Mar  1 04:45:37 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 77 pg[9.1a( v 42'1010 (0'0,42'1010] local-lis/les=75/76 n=5 ec=52/35 lis/c=75/52 les/c/f=76/53/0 sis=77 pruub=14.983719826s) [1] async=[1] r=-1 lpr=77 pi=[52,77)/1 crt=42'1010 lcod 0'0 mlcod 0'0 active pruub 211.946899414s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Mar  1 04:45:37 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 77 pg[9.1a( v 42'1010 (0'0,42'1010] local-lis/les=75/76 n=5 ec=52/35 lis/c=75/52 les/c/f=76/53/0 sis=77 pruub=14.983686447s) [1] r=-1 lpr=77 pi=[52,77)/1 crt=42'1010 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 211.946899414s@ mbc={}] state<Start>: transitioning to Stray
Mar  1 04:45:37 np0005634532 systemd[1]: Started libpod-conmon-c15b9f117ca76145349d4dc9d6edf2b536bfd15d169abb63034cdd7483c2abc7.scope.
Mar  1 04:45:37 np0005634532 systemd[1]: Started libcrun container.
Mar  1 04:45:37 np0005634532 podman[106206]: 2026-03-01 09:45:37.262900856 +0000 UTC m=+0.105023824 container init c15b9f117ca76145349d4dc9d6edf2b536bfd15d169abb63034cdd7483c2abc7 (image=quay.io/ceph/grafana:10.4.0, name=exciting_pike, maintainer=Grafana Labs <hello@grafana.com>)
Mar  1 04:45:37 np0005634532 podman[106206]: 2026-03-01 09:45:37.271118433 +0000 UTC m=+0.113241441 container start c15b9f117ca76145349d4dc9d6edf2b536bfd15d169abb63034cdd7483c2abc7 (image=quay.io/ceph/grafana:10.4.0, name=exciting_pike, maintainer=Grafana Labs <hello@grafana.com>)
Mar  1 04:45:37 np0005634532 podman[106206]: 2026-03-01 09:45:37.175292521 +0000 UTC m=+0.017415519 image pull c8b91775d855b99270fc5d22f3c6737e8cca01ef4c25c8b0362295e0746fa39b quay.io/ceph/grafana:10.4.0
Mar  1 04:45:37 np0005634532 exciting_pike[106222]: 472 0
Mar  1 04:45:37 np0005634532 systemd[1]: libpod-c15b9f117ca76145349d4dc9d6edf2b536bfd15d169abb63034cdd7483c2abc7.scope: Deactivated successfully.
Mar  1 04:45:37 np0005634532 podman[106206]: 2026-03-01 09:45:37.276110489 +0000 UTC m=+0.118233457 container attach c15b9f117ca76145349d4dc9d6edf2b536bfd15d169abb63034cdd7483c2abc7 (image=quay.io/ceph/grafana:10.4.0, name=exciting_pike, maintainer=Grafana Labs <hello@grafana.com>)
Mar  1 04:45:37 np0005634532 podman[106206]: 2026-03-01 09:45:37.276601151 +0000 UTC m=+0.118724159 container died c15b9f117ca76145349d4dc9d6edf2b536bfd15d169abb63034cdd7483c2abc7 (image=quay.io/ceph/grafana:10.4.0, name=exciting_pike, maintainer=Grafana Labs <hello@grafana.com>)
Mar  1 04:45:37 np0005634532 systemd[1]: var-lib-containers-storage-overlay-67a2ef801652c5e80b3deeb69f3b272a3b3d647c068842d281786aae6b2a811d-merged.mount: Deactivated successfully.
Mar  1 04:45:37 np0005634532 podman[106206]: 2026-03-01 09:45:37.315798868 +0000 UTC m=+0.157921876 container remove c15b9f117ca76145349d4dc9d6edf2b536bfd15d169abb63034cdd7483c2abc7 (image=quay.io/ceph/grafana:10.4.0, name=exciting_pike, maintainer=Grafana Labs <hello@grafana.com>)
Mar  1 04:45:37 np0005634532 systemd[1]: libpod-conmon-c15b9f117ca76145349d4dc9d6edf2b536bfd15d169abb63034cdd7483c2abc7.scope: Deactivated successfully.
Mar  1 04:45:37 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v26: 353 pgs: 4 remapped+peering, 349 active+clean; 456 KiB data, 125 MiB used, 60 GiB / 60 GiB avail
Mar  1 04:45:37 np0005634532 systemd[1]: Stopping Ceph grafana.compute-0 for 437b1e74-f995-5d64-af1d-257ce01d77ab...
Mar  1 04:45:37 np0005634532 ceph-osd[84309]: log_channel(cluster) log [DBG] : 12.12 scrub starts
Mar  1 04:45:37 np0005634532 ceph-osd[84309]: log_channel(cluster) log [DBG] : 12.12 scrub ok
Mar  1 04:45:37 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=server t=2026-03-01T09:45:37.61029603Z level=info msg="Shutdown started" reason="System signal: terminated"
Mar  1 04:45:37 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=ticker t=2026-03-01T09:45:37.610860635Z level=info msg=stopped last_tick=2026-03-01T09:45:30Z
Mar  1 04:45:37 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=tracing t=2026-03-01T09:45:37.61108103Z level=info msg="Closing tracing"
Mar  1 04:45:37 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=grafana-apiserver t=2026-03-01T09:45:37.611721576Z level=info msg="StorageObjectCountTracker pruner is exiting"
Mar  1 04:45:37 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[100651]: logger=sqlstore.transactions t=2026-03-01T09:45:37.623104263Z level=info msg="Database locked, sleeping then retrying" error="database is locked" retry=0 code="database is locked"
Mar  1 04:45:37 np0005634532 podman[106272]: 2026-03-01 09:45:37.64047077 +0000 UTC m=+0.076630420 container died b49a0763a78d98627ed91050fb560d2f12730abc25668f8d4e65a84ba776d2c6 (image=quay.io/ceph/grafana:10.4.0, name=ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Mar  1 04:45:37 np0005634532 systemd[1]: var-lib-containers-storage-overlay-99ceb8d7cf309e93945076692841b1546b08a254d35d1ff0a9291efa622af710-merged.mount: Deactivated successfully.
Mar  1 04:45:37 np0005634532 podman[106272]: 2026-03-01 09:45:37.688196611 +0000 UTC m=+0.124356281 container remove b49a0763a78d98627ed91050fb560d2f12730abc25668f8d4e65a84ba776d2c6 (image=quay.io/ceph/grafana:10.4.0, name=ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Mar  1 04:45:37 np0005634532 bash[106272]: ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0
Mar  1 04:45:37 np0005634532 systemd[1]: ceph-437b1e74-f995-5d64-af1d-257ce01d77ab@grafana.compute-0.service: Deactivated successfully.
Mar  1 04:45:37 np0005634532 systemd[1]: Stopped Ceph grafana.compute-0 for 437b1e74-f995-5d64-af1d-257ce01d77ab.
Mar  1 04:45:37 np0005634532 systemd[1]: ceph-437b1e74-f995-5d64-af1d-257ce01d77ab@grafana.compute-0.service: Consumed 4.362s CPU time.
Mar  1 04:45:37 np0005634532 systemd[1]: Starting Ceph grafana.compute-0 for 437b1e74-f995-5d64-af1d-257ce01d77ab...
Mar  1 04:45:37 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:45:37 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000026s ======
Mar  1 04:45:37 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:45:37.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Mar  1 04:45:38 np0005634532 podman[106378]: 2026-03-01 09:45:38.080492585 +0000 UTC m=+0.057255732 container create 40811b6273ff1144683076a6db09789183012c587046fdcbb700953f5b5ef2ca (image=quay.io/ceph/grafana:10.4.0, name=ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Mar  1 04:45:38 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f95cb3f88bc0331efa8045586fd0d1711ba9c756c3b3e99828e6a34dd67ba8bf/merged/etc/grafana/grafana.ini supports timestamps until 2038 (0x7fffffff)
Mar  1 04:45:38 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f95cb3f88bc0331efa8045586fd0d1711ba9c756c3b3e99828e6a34dd67ba8bf/merged/etc/grafana/certs supports timestamps until 2038 (0x7fffffff)
Mar  1 04:45:38 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f95cb3f88bc0331efa8045586fd0d1711ba9c756c3b3e99828e6a34dd67ba8bf/merged/etc/grafana/provisioning/datasources supports timestamps until 2038 (0x7fffffff)
Mar  1 04:45:38 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f95cb3f88bc0331efa8045586fd0d1711ba9c756c3b3e99828e6a34dd67ba8bf/merged/etc/grafana/provisioning/dashboards supports timestamps until 2038 (0x7fffffff)
Mar  1 04:45:38 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f95cb3f88bc0331efa8045586fd0d1711ba9c756c3b3e99828e6a34dd67ba8bf/merged/var/lib/grafana/grafana.db supports timestamps until 2038 (0x7fffffff)
Mar  1 04:45:38 np0005634532 podman[106378]: 2026-03-01 09:45:38.147543733 +0000 UTC m=+0.124306940 container init 40811b6273ff1144683076a6db09789183012c587046fdcbb700953f5b5ef2ca (image=quay.io/ceph/grafana:10.4.0, name=ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Mar  1 04:45:38 np0005634532 podman[106378]: 2026-03-01 09:45:38.05804038 +0000 UTC m=+0.034803557 image pull c8b91775d855b99270fc5d22f3c6737e8cca01ef4c25c8b0362295e0746fa39b quay.io/ceph/grafana:10.4.0
Mar  1 04:45:38 np0005634532 podman[106378]: 2026-03-01 09:45:38.153577125 +0000 UTC m=+0.130340282 container start 40811b6273ff1144683076a6db09789183012c587046fdcbb700953f5b5ef2ca (image=quay.io/ceph/grafana:10.4.0, name=ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Mar  1 04:45:38 np0005634532 bash[106378]: 40811b6273ff1144683076a6db09789183012c587046fdcbb700953f5b5ef2ca
Mar  1 04:45:38 np0005634532 systemd[1]: Started Ceph grafana.compute-0 for 437b1e74-f995-5d64-af1d-257ce01d77ab.
Mar  1 04:45:38 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e77 do_prune osdmap full prune enabled
Mar  1 04:45:38 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Mar  1 04:45:38 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e78 e78: 3 total, 3 up, 3 in
Mar  1 04:45:38 np0005634532 ceph-mon[75825]: log_channel(cluster) log [DBG] : osdmap e78: 3 total, 3 up, 3 in
Mar  1 04:45:38 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 04:45:38 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Mar  1 04:45:38 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 04:45:38 np0005634532 ceph-mgr[76134]: [cephadm INFO cephadm.serve] Reconfiguring crash.compute-1 (monmap changed)...
Mar  1 04:45:38 np0005634532 ceph-mgr[76134]: log_channel(cephadm) log [INF] : Reconfiguring crash.compute-1 (monmap changed)...
Mar  1 04:45:38 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0)
Mar  1 04:45:38 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Mar  1 04:45:38 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Mar  1 04:45:38 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Mar  1 04:45:38 np0005634532 ceph-mgr[76134]: [cephadm INFO cephadm.serve] Reconfiguring daemon crash.compute-1 on compute-1
Mar  1 04:45:38 np0005634532 ceph-mgr[76134]: log_channel(cephadm) log [INF] : Reconfiguring daemon crash.compute-1 on compute-1
Mar  1 04:45:38 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:45:38 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:45:38 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:45:38.272 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:45:38 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T09:45:38.318Z caller=cluster.go:706 level=info component=cluster msg="gossip not settled" polls=0 before=0 now=1 elapsed=2.000859701s
Mar  1 04:45:38 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[106394]: logger=settings t=2026-03-01T09:45:38.372507845Z level=info msg="Starting Grafana" version=10.4.0 commit=03f502a94d17f7dc4e6c34acdf8428aedd986e4c branch=HEAD compiled=2026-03-01T09:45:38Z
Mar  1 04:45:38 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[106394]: logger=settings t=2026-03-01T09:45:38.37273026Z level=info msg="Config loaded from" file=/usr/share/grafana/conf/defaults.ini
Mar  1 04:45:38 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[106394]: logger=settings t=2026-03-01T09:45:38.37273675Z level=info msg="Config loaded from" file=/etc/grafana/grafana.ini
Mar  1 04:45:38 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[106394]: logger=settings t=2026-03-01T09:45:38.37274075Z level=info msg="Config overridden from command line" arg="default.paths.data=/var/lib/grafana"
Mar  1 04:45:38 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[106394]: logger=settings t=2026-03-01T09:45:38.372744491Z level=info msg="Config overridden from command line" arg="default.paths.logs=/var/log/grafana"
Mar  1 04:45:38 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[106394]: logger=settings t=2026-03-01T09:45:38.372747661Z level=info msg="Config overridden from command line" arg="default.paths.plugins=/var/lib/grafana/plugins"
Mar  1 04:45:38 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[106394]: logger=settings t=2026-03-01T09:45:38.372752781Z level=info msg="Config overridden from command line" arg="default.paths.provisioning=/etc/grafana/provisioning"
Mar  1 04:45:38 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[106394]: logger=settings t=2026-03-01T09:45:38.372755781Z level=info msg="Config overridden from command line" arg="default.log.mode=console"
Mar  1 04:45:38 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[106394]: logger=settings t=2026-03-01T09:45:38.372759201Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_DATA=/var/lib/grafana"
Mar  1 04:45:38 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[106394]: logger=settings t=2026-03-01T09:45:38.372762301Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_LOGS=/var/log/grafana"
Mar  1 04:45:38 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[106394]: logger=settings t=2026-03-01T09:45:38.372765251Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PLUGINS=/var/lib/grafana/plugins"
Mar  1 04:45:38 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[106394]: logger=settings t=2026-03-01T09:45:38.372768541Z level=info msg="Config overridden from Environment variable" var="GF_PATHS_PROVISIONING=/etc/grafana/provisioning"
Mar  1 04:45:38 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[106394]: logger=settings t=2026-03-01T09:45:38.372772551Z level=info msg=Target target=[all]
Mar  1 04:45:38 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[106394]: logger=settings t=2026-03-01T09:45:38.372778021Z level=info msg="Path Home" path=/usr/share/grafana
Mar  1 04:45:38 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[106394]: logger=settings t=2026-03-01T09:45:38.372781642Z level=info msg="Path Data" path=/var/lib/grafana
Mar  1 04:45:38 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[106394]: logger=settings t=2026-03-01T09:45:38.372784662Z level=info msg="Path Logs" path=/var/log/grafana
Mar  1 04:45:38 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[106394]: logger=settings t=2026-03-01T09:45:38.372787752Z level=info msg="Path Plugins" path=/var/lib/grafana/plugins
Mar  1 04:45:38 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[106394]: logger=settings t=2026-03-01T09:45:38.372791012Z level=info msg="Path Provisioning" path=/etc/grafana/provisioning
Mar  1 04:45:38 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[106394]: logger=settings t=2026-03-01T09:45:38.372793922Z level=info msg="App mode production"
Mar  1 04:45:38 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[106394]: logger=sqlstore t=2026-03-01T09:45:38.373065679Z level=info msg="Connecting to DB" dbtype=sqlite3
Mar  1 04:45:38 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[106394]: logger=sqlstore t=2026-03-01T09:45:38.373082009Z level=warn msg="SQLite database file has broader permissions than it should" path=/var/lib/grafana/grafana.db mode=-rw-r--r-- expected=-rw-r-----
Mar  1 04:45:38 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[106394]: logger=migrator t=2026-03-01T09:45:38.373595682Z level=info msg="Starting DB migrations"
Mar  1 04:45:38 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[106394]: logger=migrator t=2026-03-01T09:45:38.388441026Z level=info msg="migrations completed" performed=0 skipped=547 duration=571.185µs
Mar  1 04:45:38 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[106394]: logger=sqlstore t=2026-03-01T09:45:38.389332538Z level=info msg="Created default organization"
Mar  1 04:45:38 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[106394]: logger=secrets t=2026-03-01T09:45:38.38980934Z level=info msg="Envelope encryption state" enabled=true currentprovider=secretKey.v1
Mar  1 04:45:38 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[106394]: logger=plugin.store t=2026-03-01T09:45:38.405948806Z level=info msg="Loading plugins..."
Mar  1 04:45:38 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[106394]: logger=local.finder t=2026-03-01T09:45:38.451807931Z level=warn msg="Skipping finding plugins as directory does not exist" path=/usr/share/grafana/plugins-bundled
Mar  1 04:45:38 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[106394]: logger=plugin.store t=2026-03-01T09:45:38.451841121Z level=info msg="Plugins loaded" count=55 duration=45.892835ms
Mar  1 04:45:38 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[106394]: logger=query_data t=2026-03-01T09:45:38.453969175Z level=info msg="Query Service initialization"
Mar  1 04:45:38 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[106394]: logger=live.push_http t=2026-03-01T09:45:38.456863938Z level=info msg="Live Push Gateway initialization"
Mar  1 04:45:38 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[106394]: logger=ngalert.migration t=2026-03-01T09:45:38.461975177Z level=info msg=Starting
Mar  1 04:45:38 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[106394]: logger=ngalert.state.manager t=2026-03-01T09:45:38.48397003Z level=info msg="Running in alternative execution of Error/NoData mode"
Mar  1 04:45:38 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[106394]: logger=infra.usagestats.collector t=2026-03-01T09:45:38.487626412Z level=info msg="registering usage stat providers" usageStatsProvidersLen=2
Mar  1 04:45:38 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[106394]: logger=provisioning.datasources t=2026-03-01T09:45:38.492578527Z level=info msg="inserting datasource from configuration" name=Dashboard1 uid=P43CA22E17D0F9596
Mar  1 04:45:38 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[98708]: 01/03/2026 09:45:38 : epoch 69a40a75 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4500003c10 fd 49 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:45:38 np0005634532 ceph-osd[84309]: log_channel(cluster) log [DBG] : 12.e deep-scrub starts
Mar  1 04:45:38 np0005634532 ceph-osd[84309]: log_channel(cluster) log [DBG] : 12.e deep-scrub ok
Mar  1 04:45:38 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[106394]: logger=provisioning.alerting t=2026-03-01T09:45:38.543704714Z level=info msg="starting to provision alerting"
Mar  1 04:45:38 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[106394]: logger=provisioning.alerting t=2026-03-01T09:45:38.543736185Z level=info msg="finished to provision alerting"
Mar  1 04:45:38 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[106394]: logger=grafanaStorageLogger t=2026-03-01T09:45:38.54393691Z level=info msg="Storage starting"
Mar  1 04:45:38 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[106394]: logger=http.server t=2026-03-01T09:45:38.548965636Z level=info msg="HTTP Server TLS settings" MinTLSVersion=TLS1.2 configuredciphers=TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_256_CBC_SHA
Mar  1 04:45:38 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[106394]: logger=http.server t=2026-03-01T09:45:38.549621483Z level=info msg="HTTP Server Listen" address=192.168.122.100:3000 protocol=https subUrl= socket=
Mar  1 04:45:38 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[106394]: logger=ngalert.state.manager t=2026-03-01T09:45:38.549748276Z level=info msg="Warming state cache for startup"
Mar  1 04:45:38 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[106394]: logger=ngalert.multiorg.alertmanager t=2026-03-01T09:45:38.558848265Z level=info msg="Starting MultiOrg Alertmanager"
Mar  1 04:45:38 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[106394]: logger=plugins.update.checker t=2026-03-01T09:45:38.615315866Z level=info msg="Update check succeeded" duration=70.71841ms
Mar  1 04:45:38 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[106394]: logger=grafana.update.checker t=2026-03-01T09:45:38.628400075Z level=info msg="Update check succeeded" duration=69.635402ms
Mar  1 04:45:38 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[106394]: logger=ngalert.state.manager t=2026-03-01T09:45:38.632812257Z level=info msg="State cache has been initialized" states=0 duration=83.06323ms
Mar  1 04:45:38 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[106394]: logger=ngalert.scheduler t=2026-03-01T09:45:38.632865138Z level=info msg="Starting scheduler" tickInterval=10s maxAttempts=1
Mar  1 04:45:38 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[106394]: logger=ticker t=2026-03-01T09:45:38.63293152Z level=info msg=starting first_tick=2026-03-01T09:45:40Z
Mar  1 04:45:38 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[106394]: logger=provisioning.dashboard t=2026-03-01T09:45:38.647983668Z level=info msg="starting to provision dashboards"
Mar  1 04:45:38 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[106394]: logger=provisioning.dashboard t=2026-03-01T09:45:38.672729861Z level=info msg="finished to provision dashboards"
Mar  1 04:45:38 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[98708]: 01/03/2026 09:45:38 : epoch 69a40a75 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4528003730 fd 49 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:45:38 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Mar  1 04:45:38 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 04:45:38 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Mar  1 04:45:38 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 04:45:38 np0005634532 ceph-mgr[76134]: [cephadm INFO cephadm.serve] Reconfiguring osd.1 (monmap changed)...
Mar  1 04:45:38 np0005634532 ceph-mgr[76134]: log_channel(cephadm) log [INF] : Reconfiguring osd.1 (monmap changed)...
Mar  1 04:45:38 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "osd.1"} v 0)
Mar  1 04:45:38 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Mar  1 04:45:38 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Mar  1 04:45:38 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Mar  1 04:45:38 np0005634532 ceph-mgr[76134]: [cephadm INFO cephadm.serve] Reconfiguring daemon osd.1 on compute-1
Mar  1 04:45:38 np0005634532 ceph-mgr[76134]: log_channel(cephadm) log [INF] : Reconfiguring daemon osd.1 on compute-1
Mar  1 04:45:38 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[106394]: logger=grafana-apiserver t=2026-03-01T09:45:38.977709638Z level=info msg="Adding GroupVersion playlist.grafana.app v0alpha1 to ResourceManager"
Mar  1 04:45:38 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[106394]: logger=grafana-apiserver t=2026-03-01T09:45:38.97817564Z level=info msg="Adding GroupVersion featuretoggle.grafana.app v0alpha1 to ResourceManager"
Mar  1 04:45:39 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 04:45:39 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 04:45:39 np0005634532 ceph-mon[75825]: Reconfiguring crash.compute-1 (monmap changed)...
Mar  1 04:45:39 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-1", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Mar  1 04:45:39 np0005634532 ceph-mon[75825]: Reconfiguring daemon crash.compute-1 on compute-1
Mar  1 04:45:39 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 04:45:39 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 04:45:39 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Mar  1 04:45:39 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[98708]: 01/03/2026 09:45:39 : epoch 69a40a75 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f450c0042e0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:45:39 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v28: 353 pgs: 353 active+clean; 456 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 181 B/s, 8 objects/s recovering
Mar  1 04:45:39 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"} v 0)
Mar  1 04:45:39 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]: dispatch
Mar  1 04:45:39 np0005634532 ceph-osd[84309]: log_channel(cluster) log [DBG] : 12.b scrub starts
Mar  1 04:45:39 np0005634532 ceph-osd[84309]: log_channel(cluster) log [DBG] : 12.b scrub ok
Mar  1 04:45:39 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Mar  1 04:45:39 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 04:45:39 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Mar  1 04:45:39 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 04:45:39 np0005634532 ceph-mgr[76134]: [cephadm INFO cephadm.serve] Reconfiguring mon.compute-1 (monmap changed)...
Mar  1 04:45:39 np0005634532 ceph-mgr[76134]: log_channel(cephadm) log [INF] : Reconfiguring mon.compute-1 (monmap changed)...
Mar  1 04:45:39 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0)
Mar  1 04:45:39 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Mar  1 04:45:39 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0)
Mar  1 04:45:39 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Mar  1 04:45:39 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Mar  1 04:45:39 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Mar  1 04:45:39 np0005634532 ceph-mgr[76134]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.compute-1 on compute-1
Mar  1 04:45:39 np0005634532 ceph-mgr[76134]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.compute-1 on compute-1
Mar  1 04:45:39 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:45:39 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:45:39 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:45:39.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:45:40 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e78 do_prune osdmap full prune enabled
Mar  1 04:45:40 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]': finished
Mar  1 04:45:40 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e79 e79: 3 total, 3 up, 3 in
Mar  1 04:45:40 np0005634532 ceph-mon[75825]: log_channel(cluster) log [DBG] : osdmap e79: 3 total, 3 up, 3 in
Mar  1 04:45:40 np0005634532 ceph-mon[75825]: Reconfiguring osd.1 (monmap changed)...
Mar  1 04:45:40 np0005634532 ceph-mon[75825]: Reconfiguring daemon osd.1 on compute-1
Mar  1 04:45:40 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]: dispatch
Mar  1 04:45:40 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 04:45:40 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 04:45:40 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Mar  1 04:45:40 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:45:40 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:45:40 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:45:40.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:45:40 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Mar  1 04:45:40 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 04:45:40 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Mar  1 04:45:40 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 04:45:40 np0005634532 ceph-mgr[76134]: [cephadm INFO cephadm.serve] Reconfiguring node-exporter.compute-1 (unknown last config time)...
Mar  1 04:45:40 np0005634532 ceph-mgr[76134]: log_channel(cephadm) log [INF] : Reconfiguring node-exporter.compute-1 (unknown last config time)...
Mar  1 04:45:40 np0005634532 ceph-mgr[76134]: [cephadm INFO cephadm.serve] Reconfiguring daemon node-exporter.compute-1 on compute-1
Mar  1 04:45:40 np0005634532 ceph-mgr[76134]: log_channel(cephadm) log [INF] : Reconfiguring daemon node-exporter.compute-1 on compute-1
Mar  1 04:45:40 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[98708]: 01/03/2026 09:45:40 : epoch 69a40a75 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4504004000 fd 49 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:45:40 np0005634532 ceph-osd[84309]: log_channel(cluster) log [DBG] : 10.8 scrub starts
Mar  1 04:45:40 np0005634532 ceph-osd[84309]: log_channel(cluster) log [DBG] : 10.8 scrub ok
Mar  1 04:45:40 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e79 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Mar  1 04:45:40 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[98708]: 01/03/2026 09:45:40 : epoch 69a40a75 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4500003c10 fd 49 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:45:41 np0005634532 ceph-mon[75825]: Reconfiguring mon.compute-1 (monmap changed)...
Mar  1 04:45:41 np0005634532 ceph-mon[75825]: Reconfiguring daemon mon.compute-1 on compute-1
Mar  1 04:45:41 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]': finished
Mar  1 04:45:41 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 04:45:41 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 04:45:41 np0005634532 ceph-mon[75825]: Reconfiguring node-exporter.compute-1 (unknown last config time)...
Mar  1 04:45:41 np0005634532 ceph-mon[75825]: Reconfiguring daemon node-exporter.compute-1 on compute-1
Mar  1 04:45:41 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[98708]: 01/03/2026 09:45:41 : epoch 69a40a75 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4528003730 fd 49 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:45:41 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v30: 353 pgs: 353 active+clean; 456 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 147 B/s, 6 objects/s recovering
Mar  1 04:45:41 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"} v 0)
Mar  1 04:45:41 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]: dispatch
Mar  1 04:45:41 np0005634532 ceph-osd[84309]: log_channel(cluster) log [DBG] : 12.8 scrub starts
Mar  1 04:45:41 np0005634532 ceph-osd[84309]: log_channel(cluster) log [DBG] : 12.8 scrub ok
Mar  1 04:45:41 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Mar  1 04:45:41 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 04:45:41 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Mar  1 04:45:41 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 04:45:41 np0005634532 ceph-mgr[76134]: [cephadm INFO cephadm.serve] Reconfiguring mon.compute-2 (monmap changed)...
Mar  1 04:45:41 np0005634532 ceph-mgr[76134]: log_channel(cephadm) log [INF] : Reconfiguring mon.compute-2 (monmap changed)...
Mar  1 04:45:41 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0)
Mar  1 04:45:41 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Mar  1 04:45:41 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0)
Mar  1 04:45:41 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Mar  1 04:45:41 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Mar  1 04:45:41 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Mar  1 04:45:41 np0005634532 ceph-mgr[76134]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.compute-2 on compute-2
Mar  1 04:45:41 np0005634532 ceph-mgr[76134]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.compute-2 on compute-2
Mar  1 04:45:41 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:45:41 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:45:41 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:45:41.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:45:42 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e79 do_prune osdmap full prune enabled
Mar  1 04:45:42 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]': finished
Mar  1 04:45:42 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e80 e80: 3 total, 3 up, 3 in
Mar  1 04:45:42 np0005634532 ceph-mon[75825]: log_channel(cluster) log [DBG] : osdmap e80: 3 total, 3 up, 3 in
Mar  1 04:45:42 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]: dispatch
Mar  1 04:45:42 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 04:45:42 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 04:45:42 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Mar  1 04:45:42 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:45:42 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:45:42 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:45:42.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:45:42 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[98708]: 01/03/2026 09:45:42 : epoch 69a40a75 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f450c0042e0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:45:42 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Mar  1 04:45:42 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 04:45:42 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Mar  1 04:45:42 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 04:45:42 np0005634532 ceph-mgr[76134]: [cephadm INFO cephadm.serve] Reconfiguring mgr.compute-2.dikzlj (monmap changed)...
Mar  1 04:45:42 np0005634532 ceph-mgr[76134]: log_channel(cephadm) log [INF] : Reconfiguring mgr.compute-2.dikzlj (monmap changed)...
Mar  1 04:45:42 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-2.dikzlj", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0)
Mar  1 04:45:42 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.dikzlj", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Mar  1 04:45:42 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services"} v 0)
Mar  1 04:45:42 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "mgr services"}]: dispatch
Mar  1 04:45:42 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Mar  1 04:45:42 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Mar  1 04:45:42 np0005634532 ceph-mgr[76134]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.compute-2.dikzlj on compute-2
Mar  1 04:45:42 np0005634532 ceph-mgr[76134]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.compute-2.dikzlj on compute-2
Mar  1 04:45:42 np0005634532 ceph-osd[84309]: log_channel(cluster) log [DBG] : 10.5 scrub starts
Mar  1 04:45:42 np0005634532 ceph-osd[84309]: log_channel(cluster) log [DBG] : 10.5 scrub ok
Mar  1 04:45:42 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[98708]: 01/03/2026 09:45:42 : epoch 69a40a75 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4504004000 fd 49 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:45:43 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[98708]: 01/03/2026 09:45:43 : epoch 69a40a75 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4500003c10 fd 49 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:45:43 np0005634532 ceph-mon[75825]: Reconfiguring mon.compute-2 (monmap changed)...
Mar  1 04:45:43 np0005634532 ceph-mon[75825]: Reconfiguring daemon mon.compute-2 on compute-2
Mar  1 04:45:43 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]': finished
Mar  1 04:45:43 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 04:45:43 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 04:45:43 np0005634532 ceph-mon[75825]: Reconfiguring mgr.compute-2.dikzlj (monmap changed)...
Mar  1 04:45:43 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-2.dikzlj", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Mar  1 04:45:43 np0005634532 ceph-mon[75825]: Reconfiguring daemon mgr.compute-2.dikzlj on compute-2
Mar  1 04:45:43 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Mar  1 04:45:43 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 04:45:43 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Mar  1 04:45:43 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 04:45:43 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "dashboard get-alertmanager-api-host"} v 0)
Mar  1 04:45:43 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch
Mar  1 04:45:43 np0005634532 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard get-alertmanager-api-host"}]: dispatch
Mar  1 04:45:43 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "dashboard get-grafana-api-url"} v 0)
Mar  1 04:45:43 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch
Mar  1 04:45:43 np0005634532 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard get-grafana-api-url"}]: dispatch
Mar  1 04:45:43 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "dashboard set-grafana-api-url", "value": "https://192.168.122.100:3000"} v 0)
Mar  1 04:45:43 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://192.168.122.100:3000"}]: dispatch
Mar  1 04:45:43 np0005634532 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='mon.0 -' entity='mon.' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://192.168.122.100:3000"}]: dispatch
Mar  1 04:45:43 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config set, name=mgr/dashboard/GRAFANA_API_URL}] v 0)
Mar  1 04:45:43 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 04:45:43 np0005634532 ceph-mgr[76134]: [prometheus INFO root] Restarting engine...
Mar  1 04:45:43 np0005634532 ceph-mgr[76134]: [prometheus INFO cherrypy.error] [01/Mar/2026:09:45:43] ENGINE Bus STOPPING
Mar  1 04:45:43 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: [01/Mar/2026:09:45:43] ENGINE Bus STOPPING
Mar  1 04:45:43 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v32: 353 pgs: 353 active+clean; 456 KiB data, 125 MiB used, 60 GiB / 60 GiB avail; 128 B/s, 5 objects/s recovering
Mar  1 04:45:43 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"} v 0)
Mar  1 04:45:43 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]: dispatch
Mar  1 04:45:43 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: [01/Mar/2026:09:45:43] ENGINE HTTP Server cherrypy._cpwsgi_server.CPWSGIServer(('::', 9283)) shut down
Mar  1 04:45:43 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: [01/Mar/2026:09:45:43] ENGINE Bus STOPPED
Mar  1 04:45:43 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: [01/Mar/2026:09:45:43] ENGINE Bus STARTING
Mar  1 04:45:43 np0005634532 ceph-mgr[76134]: [prometheus INFO cherrypy.error] [01/Mar/2026:09:45:43] ENGINE HTTP Server cherrypy._cpwsgi_server.CPWSGIServer(('::', 9283)) shut down
Mar  1 04:45:43 np0005634532 ceph-mgr[76134]: [prometheus INFO cherrypy.error] [01/Mar/2026:09:45:43] ENGINE Bus STOPPED
Mar  1 04:45:43 np0005634532 ceph-mgr[76134]: [prometheus INFO cherrypy.error] [01/Mar/2026:09:45:43] ENGINE Bus STARTING
Mar  1 04:45:43 np0005634532 ceph-osd[84309]: log_channel(cluster) log [DBG] : 10.2 deep-scrub starts
Mar  1 04:45:43 np0005634532 ceph-osd[84309]: log_channel(cluster) log [DBG] : 10.2 deep-scrub ok
Mar  1 04:45:43 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: [01/Mar/2026:09:45:43] ENGINE Serving on http://:::9283
Mar  1 04:45:43 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: [01/Mar/2026:09:45:43] ENGINE Bus STARTED
Mar  1 04:45:43 np0005634532 ceph-mgr[76134]: [prometheus INFO cherrypy.error] [01/Mar/2026:09:45:43] ENGINE Serving on http://:::9283
Mar  1 04:45:43 np0005634532 ceph-mgr[76134]: [prometheus INFO cherrypy.error] [01/Mar/2026:09:45:43] ENGINE Bus STARTED
Mar  1 04:45:43 np0005634532 ceph-mgr[76134]: [prometheus INFO root] Engine started.
Mar  1 04:45:43 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:45:43 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:45:43 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:45:43.891 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:45:44 np0005634532 podman[106561]: 2026-03-01 09:45:44.172192184 +0000 UTC m=+0.061470388 container exec 6664049ace048b4adddae1365cde7c16773d892ae197c1350f58a5e5b1183392 (image=quay.io/ceph/ceph:v19, name=ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mon-compute-0, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Mar  1 04:45:44 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:45:44 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:45:44 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:45:44.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:45:44 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 04:45:44 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 04:45:44 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "dashboard set-grafana-api-url", "value": "https://192.168.122.100:3000"}]: dispatch
Mar  1 04:45:44 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 04:45:44 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]: dispatch
Mar  1 04:45:44 np0005634532 podman[106561]: 2026-03-01 09:45:44.291318202 +0000 UTC m=+0.180596366 container exec_died 6664049ace048b4adddae1365cde7c16773d892ae197c1350f58a5e5b1183392 (image=quay.io/ceph/ceph:v19, name=ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mon-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Mar  1 04:45:44 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e80 do_prune osdmap full prune enabled
Mar  1 04:45:44 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]': finished
Mar  1 04:45:44 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e81 e81: 3 total, 3 up, 3 in
Mar  1 04:45:44 np0005634532 ceph-mon[75825]: log_channel(cluster) log [DBG] : osdmap e81: 3 total, 3 up, 3 in
Mar  1 04:45:44 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[98708]: 01/03/2026 09:45:44 : epoch 69a40a75 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4500003c10 fd 49 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:45:44 np0005634532 ceph-osd[84309]: log_channel(cluster) log [DBG] : 10.19 scrub starts
Mar  1 04:45:44 np0005634532 ceph-osd[84309]: log_channel(cluster) log [DBG] : 10.19 scrub ok
Mar  1 04:45:44 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[98708]: 01/03/2026 09:45:44 : epoch 69a40a75 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f450c0042e0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:45:44 np0005634532 podman[106700]: 2026-03-01 09:45:44.938788389 +0000 UTC m=+0.071145091 container exec 1fcbd8a251307aedf47cfda71fbf2d9d43c45db96f7135811488f98478336155 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Mar  1 04:45:45 np0005634532 podman[106725]: 2026-03-01 09:45:45.031229756 +0000 UTC m=+0.074357722 container exec_died 1fcbd8a251307aedf47cfda71fbf2d9d43c45db96f7135811488f98478336155 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Mar  1 04:45:45 np0005634532 podman[106700]: 2026-03-01 09:45:45.036303594 +0000 UTC m=+0.168660306 container exec_died 1fcbd8a251307aedf47cfda71fbf2d9d43c45db96f7135811488f98478336155 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Mar  1 04:45:45 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[98708]: 01/03/2026 09:45:45 : epoch 69a40a75 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4504004000 fd 49 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:45:45 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e81 do_prune osdmap full prune enabled
Mar  1 04:45:45 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]': finished
Mar  1 04:45:45 np0005634532 podman[106775]: 2026-03-01 09:45:45.402363817 +0000 UTC m=+0.195739167 container exec ddbb100a053bd1c5872d5920a93f96a6167721638261082337a0485339967db0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Mar  1 04:45:45 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v34: 353 pgs: 353 active+clean; 456 KiB data, 125 MiB used, 60 GiB / 60 GiB avail
Mar  1 04:45:45 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"} v 0)
Mar  1 04:45:45 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]: dispatch
Mar  1 04:45:45 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e82 e82: 3 total, 3 up, 3 in
Mar  1 04:45:45 np0005634532 ceph-mon[75825]: log_channel(cluster) log [DBG] : osdmap e82: 3 total, 3 up, 3 in
Mar  1 04:45:45 np0005634532 podman[106795]: 2026-03-01 09:45:45.501282466 +0000 UTC m=+0.075112851 container exec_died ddbb100a053bd1c5872d5920a93f96a6167721638261082337a0485339967db0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/)
Mar  1 04:45:45 np0005634532 podman[106775]: 2026-03-01 09:45:45.508050437 +0000 UTC m=+0.301425757 container exec_died ddbb100a053bd1c5872d5920a93f96a6167721638261082337a0485339967db0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Mar  1 04:45:45 np0005634532 ceph-osd[84309]: log_channel(cluster) log [DBG] : 10.18 scrub starts
Mar  1 04:45:45 np0005634532 ceph-osd[84309]: log_channel(cluster) log [DBG] : 10.18 scrub ok
Mar  1 04:45:45 np0005634532 podman[106840]: 2026-03-01 09:45:45.758239034 +0000 UTC m=+0.075681916 container exec ae199754250e4490488a372482082794890684d286b81ae5dbba7ee71da86544 (image=quay.io/ceph/haproxy:2.3, name=ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-haproxy-nfs-cephfs-compute-0-wdbjdw)
Mar  1 04:45:45 np0005634532 podman[106840]: 2026-03-01 09:45:45.771377995 +0000 UTC m=+0.088820817 container exec_died ae199754250e4490488a372482082794890684d286b81ae5dbba7ee71da86544 (image=quay.io/ceph/haproxy:2.3, name=ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-haproxy-nfs-cephfs-compute-0-wdbjdw)
Mar  1 04:45:45 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e82 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Mar  1 04:45:45 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Mar  1 04:45:45 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:45:45 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:45:45 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:45:45.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:45:45 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 04:45:45 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Mar  1 04:45:45 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 04:45:46 np0005634532 podman[106906]: 2026-03-01 09:45:46.046026298 +0000 UTC m=+0.072143337 container exec 05be6442ca5291b879ba04bc347755b274cffc38684767a6255c033b9adb2149 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-keepalived-nfs-cephfs-compute-0-qbujzh, build-date=2023-02-22T09:23:20, vcs-type=git, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, architecture=x86_64, io.openshift.tags=Ceph keepalived, com.redhat.component=keepalived-container, version=2.2.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1793, io.k8s.display-name=Keepalived on RHEL 9, summary=Provides keepalived on RHEL 9 for Ceph., distribution-scope=public, io.openshift.expose-services=, name=keepalived, io.buildah.version=1.28.2, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vendor=Red Hat, Inc., description=keepalived for Ceph)
Mar  1 04:45:46 np0005634532 podman[106906]: 2026-03-01 09:45:46.057311062 +0000 UTC m=+0.083428041 container exec_died 05be6442ca5291b879ba04bc347755b274cffc38684767a6255c033b9adb2149 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-keepalived-nfs-cephfs-compute-0-qbujzh, io.k8s.display-name=Keepalived on RHEL 9, summary=Provides keepalived on RHEL 9 for Ceph., architecture=x86_64, version=2.2.4, build-date=2023-02-22T09:23:20, description=keepalived for Ceph, release=1793, name=keepalived, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, com.redhat.component=keepalived-container, distribution-scope=public, io.openshift.expose-services=, vendor=Red Hat, Inc., io.openshift.tags=Ceph keepalived, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.28.2, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Mar  1 04:45:46 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:45:46 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:45:46 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:45:46.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:45:46 np0005634532 podman[106967]: 2026-03-01 09:45:46.298557274 +0000 UTC m=+0.064357431 container exec 42a0f9f88cfb3d053b26a487d34447f1453d64f75d2848aac14e4c543e6921a8 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Mar  1 04:45:46 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T09:45:46.319Z caller=cluster.go:698 level=info component=cluster msg="gossip settled; proceeding" elapsed=10.001729024s
Mar  1 04:45:46 np0005634532 podman[106967]: 2026-03-01 09:45:46.334484828 +0000 UTC m=+0.100284975 container exec_died 42a0f9f88cfb3d053b26a487d34447f1453d64f75d2848aac14e4c543e6921a8 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Mar  1 04:45:46 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]: dispatch
Mar  1 04:45:46 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 04:45:46 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 04:45:46 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e82 do_prune osdmap full prune enabled
Mar  1 04:45:46 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]': finished
Mar  1 04:45:46 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e83 e83: 3 total, 3 up, 3 in
Mar  1 04:45:46 np0005634532 ceph-mon[75825]: log_channel(cluster) log [DBG] : osdmap e83: 3 total, 3 up, 3 in
Mar  1 04:45:46 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[98708]: 01/03/2026 09:45:46 : epoch 69a40a75 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4528004440 fd 49 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:45:46 np0005634532 podman[107041]: 2026-03-01 09:45:46.590033303 +0000 UTC m=+0.069704843 container exec 40811b6273ff1144683076a6db09789183012c587046fdcbb700953f5b5ef2ca (image=quay.io/ceph/grafana:10.4.0, name=ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Mar  1 04:45:46 np0005634532 ceph-osd[84309]: log_channel(cluster) log [DBG] : 12.1c scrub starts
Mar  1 04:45:46 np0005634532 ceph-osd[84309]: log_channel(cluster) log [DBG] : 12.1c scrub ok
Mar  1 04:45:46 np0005634532 podman[107041]: 2026-03-01 09:45:46.788456494 +0000 UTC m=+0.268127994 container exec_died 40811b6273ff1144683076a6db09789183012c587046fdcbb700953f5b5ef2ca (image=quay.io/ceph/grafana:10.4.0, name=ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Mar  1 04:45:46 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[98708]: 01/03/2026 09:45:46 : epoch 69a40a75 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4500003c10 fd 49 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:45:47 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: ::ffff:192.168.122.100 - - [01/Mar/2026:09:45:47] "GET /metrics HTTP/1.1" 200 48283 "" "Prometheus/2.51.0"
Mar  1 04:45:47 np0005634532 ceph-mgr[76134]: [prometheus INFO cherrypy.access.140604152982112] ::ffff:192.168.122.100 - - [01/Mar/2026:09:45:47] "GET /metrics HTTP/1.1" 200 48283 "" "Prometheus/2.51.0"
Mar  1 04:45:47 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[98708]: 01/03/2026 09:45:47 : epoch 69a40a75 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f450c004300 fd 49 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:45:47 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v37: 353 pgs: 353 active+clean; 456 KiB data, 125 MiB used, 60 GiB / 60 GiB avail
Mar  1 04:45:47 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"} v 0)
Mar  1 04:45:47 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]: dispatch
Mar  1 04:45:47 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e83 do_prune osdmap full prune enabled
Mar  1 04:45:47 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]': finished
Mar  1 04:45:47 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]: dispatch
Mar  1 04:45:47 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]': finished
Mar  1 04:45:47 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e84 e84: 3 total, 3 up, 3 in
Mar  1 04:45:47 np0005634532 ceph-mon[75825]: log_channel(cluster) log [DBG] : osdmap e84: 3 total, 3 up, 3 in
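
Across epochs e82 to e84 the mgr is walking default.rgw.log's pgp_num_actual up one placement group at a time (15, then 16), and each accepted change commits a new osdmap epoch. A sketch of driving the same knob by hand, assuming the ceph CLI and an admin keyring on this host (the mgr itself dispatches these as mon commands over its mon session, not through the CLI):

    import json
    import subprocess

    # Set pgp_num_actual on a pool and read it back; the set command text
    # is taken from the audit log above.
    def set_pgp_num_actual(pool: str, value: int) -> None:
        subprocess.run(
            ["ceph", "osd", "pool", "set", pool, "pgp_num_actual", str(value)],
            check=True,
        )

    def get_pgp_num(pool: str) -> int:
        out = subprocess.check_output(
            ["ceph", "osd", "pool", "get", pool, "pgp_num", "--format", "json"]
        )
        return json.loads(out)["pgp_num"]

    set_pgp_num_actual("default.rgw.log", 16)
    print(get_pgp_num("default.rgw.log"))
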
Mar  1 04:45:47 np0005634532 podman[107152]: 2026-03-01 09:45:47.483827986 +0000 UTC m=+0.383293950 container exec 225e705b62c2d5344d04ef96534eb44bff62165a23fc4cc7bc75fa8452e91ac1 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Mar  1 04:45:47 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Mar  1 04:45:47 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Mar  1 04:45:47 np0005634532 podman[107152]: 2026-03-01 09:45:47.557902699 +0000 UTC m=+0.457368663 container exec_died 225e705b62c2d5344d04ef96534eb44bff62165a23fc4cc7bc75fa8452e91ac1 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Mar  1 04:45:47 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Mar  1 04:45:47 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Mar  1 04:45:47 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Mar  1 04:45:47 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Mar  1 04:45:47 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Mar  1 04:45:47 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Mar  1 04:45:47 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Mar  1 04:45:47 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 04:45:47 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Mar  1 04:45:47 np0005634532 ceph-osd[84309]: log_channel(cluster) log [DBG] : 12.c scrub starts
Mar  1 04:45:47 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 04:45:47 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Mar  1 04:45:47 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Mar  1 04:45:47 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Mar  1 04:45:47 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Mar  1 04:45:47 np0005634532 ceph-osd[84309]: log_channel(cluster) log [DBG] : 12.c scrub ok
Mar  1 04:45:47 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Mar  1 04:45:47 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 04:45:47 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Mar  1 04:45:47 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 04:45:47 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Mar  1 04:45:47 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Mar  1 04:45:47 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Mar  1 04:45:47 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Mar  1 04:45:47 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Mar  1 04:45:47 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Mar  1 04:45:47 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:45:47 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:45:47 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:45:47.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:45:48 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:45:48 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:45:48 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:45:48.281 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:45:48 np0005634532 podman[107289]: 2026-03-01 09:45:48.330307271 +0000 UTC m=+0.057995173 container create a9636e7110e13eb582f618877d8a6b669d176a30128185fe4ac2d0f2eaab7b23 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_vaughan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Mar  1 04:45:48 np0005634532 systemd[1]: Started libpod-conmon-a9636e7110e13eb582f618877d8a6b669d176a30128185fe4ac2d0f2eaab7b23.scope.
Mar  1 04:45:48 np0005634532 systemd[1]: Started libcrun container.
Mar  1 04:45:48 np0005634532 podman[107289]: 2026-03-01 09:45:48.301257264 +0000 UTC m=+0.028945226 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Mar  1 04:45:48 np0005634532 podman[107289]: 2026-03-01 09:45:48.407938766 +0000 UTC m=+0.135626718 container init a9636e7110e13eb582f618877d8a6b669d176a30128185fe4ac2d0f2eaab7b23 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_vaughan, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Mar  1 04:45:48 np0005634532 podman[107289]: 2026-03-01 09:45:48.416521453 +0000 UTC m=+0.144209365 container start a9636e7110e13eb582f618877d8a6b669d176a30128185fe4ac2d0f2eaab7b23 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_vaughan, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Mar  1 04:45:48 np0005634532 podman[107289]: 2026-03-01 09:45:48.421406745 +0000 UTC m=+0.149094647 container attach a9636e7110e13eb582f618877d8a6b669d176a30128185fe4ac2d0f2eaab7b23 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_vaughan, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Mar  1 04:45:48 np0005634532 funny_vaughan[107306]: 167 167
Mar  1 04:45:48 np0005634532 systemd[1]: libpod-a9636e7110e13eb582f618877d8a6b669d176a30128185fe4ac2d0f2eaab7b23.scope: Deactivated successfully.
Mar  1 04:45:48 np0005634532 podman[107289]: 2026-03-01 09:45:48.422874819 +0000 UTC m=+0.150562721 container died a9636e7110e13eb582f618877d8a6b669d176a30128185fe4ac2d0f2eaab7b23 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_vaughan, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Mar  1 04:45:48 np0005634532 systemd[1]: var-lib-containers-storage-overlay-4d5923500c7282f9552bd00d04172050f89bdb16653670dd57b1050f44846be3-merged.mount: Deactivated successfully.
Mar  1 04:45:48 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e84 do_prune osdmap full prune enabled
Mar  1 04:45:48 np0005634532 podman[107289]: 2026-03-01 09:45:48.474730411 +0000 UTC m=+0.202418313 container remove a9636e7110e13eb582f618877d8a6b669d176a30128185fe4ac2d0f2eaab7b23 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_vaughan, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Mar  1 04:45:48 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]': finished
Mar  1 04:45:48 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 04:45:48 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 04:45:48 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Mar  1 04:45:48 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 04:45:48 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 04:45:48 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Mar  1 04:45:48 np0005634532 systemd[1]: libpod-conmon-a9636e7110e13eb582f618877d8a6b669d176a30128185fe4ac2d0f2eaab7b23.scope: Deactivated successfully.
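
The create, init, start, attach, died, remove sequence from podman[107289], bracketed by the libpod-conmon scope start/stop from systemd, is one short-lived cephadm probe container: it printed "167 167" (the ceph uid and gid on these images) and exited within about 0.15 s. What exactly ran inside is not in the log; a plausible reconstruction of such a one-shot probe, using the image digest from the lines above:

    import subprocess

    # One-shot container in the style of the probes logged above. `--rm`
    # yields the same create/start/died/remove event sequence; the echo
    # is a guess at what printed "167 167", not the actual cephadm command.
    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec")

    out = subprocess.run(
        ["podman", "run", "--rm", IMAGE, "bash", "-c",
         'echo "$(id -u ceph) $(id -g ceph)"'],
        capture_output=True, text=True, check=True,
    )
    print(out.stdout.strip())  # expected: "167 167"
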
Mar  1 04:45:48 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e85 e85: 3 total, 3 up, 3 in
Mar  1 04:45:48 np0005634532 ceph-mon[75825]: log_channel(cluster) log [DBG] : osdmap e85: 3 total, 3 up, 3 in
Mar  1 04:45:48 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[98708]: 01/03/2026 09:45:48 : epoch 69a40a75 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f450c004300 fd 49 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:45:48 np0005634532 ceph-osd[84309]: log_channel(cluster) log [DBG] : 12.19 scrub starts
Mar  1 04:45:48 np0005634532 ceph-osd[84309]: log_channel(cluster) log [DBG] : 12.19 scrub ok
Mar  1 04:45:48 np0005634532 podman[107330]: 2026-03-01 09:45:48.688692639 +0000 UTC m=+0.065212950 container create f1e122e65bfd1b6f2833862073086dbaacfbf986106045e67c5a309f43a236cc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_northcutt, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Mar  1 04:45:48 np0005634532 systemd[1]: Started libpod-conmon-f1e122e65bfd1b6f2833862073086dbaacfbf986106045e67c5a309f43a236cc.scope.
Mar  1 04:45:48 np0005634532 podman[107330]: 2026-03-01 09:45:48.659647331 +0000 UTC m=+0.036167622 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Mar  1 04:45:48 np0005634532 systemd[1]: Started libcrun container.
Mar  1 04:45:48 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86fa208bc613a4b63eba56c1e0f23d4991a4149da0f1464f696802284e148113/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Mar  1 04:45:48 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86fa208bc613a4b63eba56c1e0f23d4991a4149da0f1464f696802284e148113/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Mar  1 04:45:48 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86fa208bc613a4b63eba56c1e0f23d4991a4149da0f1464f696802284e148113/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Mar  1 04:45:48 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86fa208bc613a4b63eba56c1e0f23d4991a4149da0f1464f696802284e148113/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Mar  1 04:45:48 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86fa208bc613a4b63eba56c1e0f23d4991a4149da0f1464f696802284e148113/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
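
These kernel warnings fire once per bind-mount path when an xfs filesystem formatted without the bigtime feature is remounted into a container: its on-disk inode timestamps are 32-bit, so nothing past 0x7fffffff seconds after the epoch can be stored. A quick check of where that limit lands:

    from datetime import datetime, timezone

    # 0x7fffffff is the largest signed 32-bit time_t, the limit named
    # in the kernel messages above.
    limit = 0x7FFFFFFF
    print(limit)                                           # 2147483647
    print(datetime.fromtimestamp(limit, tz=timezone.utc))  # 2038-01-19 03:14:07+00:00
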
Mar  1 04:45:48 np0005634532 podman[107330]: 2026-03-01 09:45:48.79359868 +0000 UTC m=+0.170118911 container init f1e122e65bfd1b6f2833862073086dbaacfbf986106045e67c5a309f43a236cc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_northcutt, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Mar  1 04:45:48 np0005634532 podman[107330]: 2026-03-01 09:45:48.800022138 +0000 UTC m=+0.176542329 container start f1e122e65bfd1b6f2833862073086dbaacfbf986106045e67c5a309f43a236cc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_northcutt, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Mar  1 04:45:48 np0005634532 podman[107330]: 2026-03-01 09:45:48.813464137 +0000 UTC m=+0.189984338 container attach f1e122e65bfd1b6f2833862073086dbaacfbf986106045e67c5a309f43a236cc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_northcutt, OSD_FLAVOR=default, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Mar  1 04:45:48 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[98708]: 01/03/2026 09:45:48 : epoch 69a40a75 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4528004440 fd 49 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:45:49 np0005634532 hardcore_northcutt[107347]: --> passed data devices: 0 physical, 1 LVM
Mar  1 04:45:49 np0005634532 hardcore_northcutt[107347]: --> All data devices are unavailable
Mar  1 04:45:49 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[98708]: 01/03/2026 09:45:49 : epoch 69a40a75 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4500003c10 fd 49 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:45:49 np0005634532 systemd[1]: libpod-f1e122e65bfd1b6f2833862073086dbaacfbf986106045e67c5a309f43a236cc.scope: Deactivated successfully.
Mar  1 04:45:49 np0005634532 podman[107330]: 2026-03-01 09:45:49.115724253 +0000 UTC m=+0.492244464 container died f1e122e65bfd1b6f2833862073086dbaacfbf986106045e67c5a309f43a236cc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_northcutt, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Mar  1 04:45:49 np0005634532 systemd[1]: var-lib-containers-storage-overlay-86fa208bc613a4b63eba56c1e0f23d4991a4149da0f1464f696802284e148113-merged.mount: Deactivated successfully.
Mar  1 04:45:49 np0005634532 podman[107330]: 2026-03-01 09:45:49.162708453 +0000 UTC m=+0.539228654 container remove f1e122e65bfd1b6f2833862073086dbaacfbf986106045e67c5a309f43a236cc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_northcutt, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Mar  1 04:45:49 np0005634532 systemd[1]: libpod-conmon-f1e122e65bfd1b6f2833862073086dbaacfbf986106045e67c5a309f43a236cc.scope: Deactivated successfully.
Mar  1 04:45:49 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v40: 353 pgs: 2 unknown, 2 peering, 349 active+clean; 457 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Mar  1 04:45:49 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e85 do_prune osdmap full prune enabled
Mar  1 04:45:49 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e86 e86: 3 total, 3 up, 3 in
Mar  1 04:45:49 np0005634532 ceph-mon[75825]: log_channel(cluster) log [DBG] : osdmap e86: 3 total, 3 up, 3 in
Mar  1 04:45:49 np0005634532 ceph-osd[84309]: log_channel(cluster) log [DBG] : 12.10 scrub starts
Mar  1 04:45:49 np0005634532 ceph-osd[84309]: log_channel(cluster) log [DBG] : 12.10 scrub ok
Mar  1 04:45:49 np0005634532 podman[107464]: 2026-03-01 09:45:49.771913125 +0000 UTC m=+0.059077739 container create 68c045fbb9861c0310ff461ad66d365564e8df915104f57b86263ef960c9268d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_shtern, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Mar  1 04:45:49 np0005634532 systemd[1]: Started libpod-conmon-68c045fbb9861c0310ff461ad66d365564e8df915104f57b86263ef960c9268d.scope.
Mar  1 04:45:49 np0005634532 podman[107464]: 2026-03-01 09:45:49.748850295 +0000 UTC m=+0.036014909 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Mar  1 04:45:49 np0005634532 systemd[1]: Started libcrun container.
Mar  1 04:45:49 np0005634532 podman[107464]: 2026-03-01 09:45:49.862874136 +0000 UTC m=+0.150038740 container init 68c045fbb9861c0310ff461ad66d365564e8df915104f57b86263ef960c9268d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_shtern, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Mar  1 04:45:49 np0005634532 podman[107464]: 2026-03-01 09:45:49.872261801 +0000 UTC m=+0.159426385 container start 68c045fbb9861c0310ff461ad66d365564e8df915104f57b86263ef960c9268d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_shtern, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Mar  1 04:45:49 np0005634532 podman[107464]: 2026-03-01 09:45:49.876436757 +0000 UTC m=+0.163601411 container attach 68c045fbb9861c0310ff461ad66d365564e8df915104f57b86263ef960c9268d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_shtern, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Mar  1 04:45:49 np0005634532 friendly_shtern[107481]: 167 167
Mar  1 04:45:49 np0005634532 systemd[1]: libpod-68c045fbb9861c0310ff461ad66d365564e8df915104f57b86263ef960c9268d.scope: Deactivated successfully.
Mar  1 04:45:49 np0005634532 podman[107464]: 2026-03-01 09:45:49.878827772 +0000 UTC m=+0.165992366 container died 68c045fbb9861c0310ff461ad66d365564e8df915104f57b86263ef960c9268d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_shtern, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Mar  1 04:45:49 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:45:49 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:45:49 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:45:49.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:45:49 np0005634532 systemd[1]: var-lib-containers-storage-overlay-28a61f7becfb7c68732cdf52c1bb4eff62395623961a7e8d00132deb817929df-merged.mount: Deactivated successfully.
Mar  1 04:45:49 np0005634532 podman[107464]: 2026-03-01 09:45:49.918530875 +0000 UTC m=+0.205695449 container remove 68c045fbb9861c0310ff461ad66d365564e8df915104f57b86263ef960c9268d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_shtern, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Mar  1 04:45:49 np0005634532 systemd[1]: libpod-conmon-68c045fbb9861c0310ff461ad66d365564e8df915104f57b86263ef960c9268d.scope: Deactivated successfully.
Mar  1 04:45:50 np0005634532 podman[107532]: 2026-03-01 09:45:50.066942046 +0000 UTC m=+0.049316975 container create 9e7f3735f575eb6ab2102502aa0ab651aeb3448dc3629ef0c5f3541be6a7791a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_montalcini, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=squid)
Mar  1 04:45:50 np0005634532 systemd[1]: Started libpod-conmon-9e7f3735f575eb6ab2102502aa0ab651aeb3448dc3629ef0c5f3541be6a7791a.scope.
Mar  1 04:45:50 np0005634532 podman[107532]: 2026-03-01 09:45:50.04273062 +0000 UTC m=+0.025105599 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Mar  1 04:45:50 np0005634532 systemd[1]: Started libcrun container.
Mar  1 04:45:50 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0baaff75448fcb84de64861e86e1c89884d4c84dd30c40117264c5f4c09737d0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Mar  1 04:45:50 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0baaff75448fcb84de64861e86e1c89884d4c84dd30c40117264c5f4c09737d0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Mar  1 04:45:50 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0baaff75448fcb84de64861e86e1c89884d4c84dd30c40117264c5f4c09737d0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Mar  1 04:45:50 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0baaff75448fcb84de64861e86e1c89884d4c84dd30c40117264c5f4c09737d0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Mar  1 04:45:50 np0005634532 podman[107532]: 2026-03-01 09:45:50.189351759 +0000 UTC m=+0.171726748 container init 9e7f3735f575eb6ab2102502aa0ab651aeb3448dc3629ef0c5f3541be6a7791a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_montalcini, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid)
Mar  1 04:45:50 np0005634532 podman[107532]: 2026-03-01 09:45:50.199688597 +0000 UTC m=+0.182063536 container start 9e7f3735f575eb6ab2102502aa0ab651aeb3448dc3629ef0c5f3541be6a7791a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_montalcini, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Mar  1 04:45:50 np0005634532 podman[107532]: 2026-03-01 09:45:50.205145922 +0000 UTC m=+0.187520912 container attach 9e7f3735f575eb6ab2102502aa0ab651aeb3448dc3629ef0c5f3541be6a7791a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_montalcini, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Mar  1 04:45:50 np0005634532 systemd-logind[832]: New session 39 of user zuul.
Mar  1 04:45:50 np0005634532 systemd[1]: Started Session 39 of User zuul.
Mar  1 04:45:50 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:45:50 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000023s ======
Mar  1 04:45:50 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:45:50.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Mar  1 04:45:50 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e86 do_prune osdmap full prune enabled
Mar  1 04:45:50 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[98708]: 01/03/2026 09:45:50 : epoch 69a40a75 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f450c004300 fd 49 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:45:50 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e87 e87: 3 total, 3 up, 3 in
Mar  1 04:45:50 np0005634532 amazing_montalcini[107551]: {
Mar  1 04:45:50 np0005634532 amazing_montalcini[107551]:    "0": [
Mar  1 04:45:50 np0005634532 amazing_montalcini[107551]:        {
Mar  1 04:45:50 np0005634532 amazing_montalcini[107551]:            "devices": [
Mar  1 04:45:50 np0005634532 amazing_montalcini[107551]:                "/dev/loop3"
Mar  1 04:45:50 np0005634532 amazing_montalcini[107551]:            ],
Mar  1 04:45:50 np0005634532 amazing_montalcini[107551]:            "lv_name": "ceph_lv0",
Mar  1 04:45:50 np0005634532 amazing_montalcini[107551]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Mar  1 04:45:50 np0005634532 amazing_montalcini[107551]:            "lv_size": "21470642176",
Mar  1 04:45:50 np0005634532 amazing_montalcini[107551]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=0RN1y3-WDwf-vmRn-3Uec-9cdU-WGcX-8Z6LEG,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=437b1e74-f995-5d64-af1d-257ce01d77ab,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=e5da778e-73b7-4ea1-8a91-750fe3f6aa68,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Mar  1 04:45:50 np0005634532 amazing_montalcini[107551]:            "lv_uuid": "0RN1y3-WDwf-vmRn-3Uec-9cdU-WGcX-8Z6LEG",
Mar  1 04:45:50 np0005634532 amazing_montalcini[107551]:            "name": "ceph_lv0",
Mar  1 04:45:50 np0005634532 amazing_montalcini[107551]:            "path": "/dev/ceph_vg0/ceph_lv0",
Mar  1 04:45:50 np0005634532 amazing_montalcini[107551]:            "tags": {
Mar  1 04:45:50 np0005634532 amazing_montalcini[107551]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Mar  1 04:45:50 np0005634532 amazing_montalcini[107551]:                "ceph.block_uuid": "0RN1y3-WDwf-vmRn-3Uec-9cdU-WGcX-8Z6LEG",
Mar  1 04:45:50 np0005634532 amazing_montalcini[107551]:                "ceph.cephx_lockbox_secret": "",
Mar  1 04:45:50 np0005634532 amazing_montalcini[107551]:                "ceph.cluster_fsid": "437b1e74-f995-5d64-af1d-257ce01d77ab",
Mar  1 04:45:50 np0005634532 amazing_montalcini[107551]:                "ceph.cluster_name": "ceph",
Mar  1 04:45:50 np0005634532 amazing_montalcini[107551]:                "ceph.crush_device_class": "",
Mar  1 04:45:50 np0005634532 amazing_montalcini[107551]:                "ceph.encrypted": "0",
Mar  1 04:45:50 np0005634532 amazing_montalcini[107551]:                "ceph.osd_fsid": "e5da778e-73b7-4ea1-8a91-750fe3f6aa68",
Mar  1 04:45:50 np0005634532 amazing_montalcini[107551]:                "ceph.osd_id": "0",
Mar  1 04:45:50 np0005634532 amazing_montalcini[107551]:                "ceph.osdspec_affinity": "default_drive_group",
Mar  1 04:45:50 np0005634532 amazing_montalcini[107551]:                "ceph.type": "block",
Mar  1 04:45:50 np0005634532 amazing_montalcini[107551]:                "ceph.vdo": "0",
Mar  1 04:45:50 np0005634532 amazing_montalcini[107551]:                "ceph.with_tpm": "0"
Mar  1 04:45:50 np0005634532 amazing_montalcini[107551]:            },
Mar  1 04:45:50 np0005634532 amazing_montalcini[107551]:            "type": "block",
Mar  1 04:45:50 np0005634532 amazing_montalcini[107551]:            "vg_name": "ceph_vg0"
Mar  1 04:45:50 np0005634532 amazing_montalcini[107551]:        }
Mar  1 04:45:50 np0005634532 amazing_montalcini[107551]:    ]
Mar  1 04:45:50 np0005634532 amazing_montalcini[107551]: }
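
The JSON that amazing_montalcini printed above is keyed by OSD id and lists each logical volume with its ceph.* LV tags; the shape is consistent with `ceph-volume lvm list --format json` output, which cephadm runs in these throwaway containers to refresh its device inventory. Reassembling and parsing it from the journal lines is straightforward (the prefix-stripping regex below is inferred from this journal's line format):

    import json
    import re

    # Strip the syslog prefix from each journal line, rejoin the body,
    # and parse it back into the structure ceph-volume emitted.
    PREFIX = re.compile(r'^.*amazing_montalcini\[\d+\]: ')

    def parse_ceph_volume_json(journal_lines):
        body = "\n".join(PREFIX.sub("", ln) for ln in journal_lines)
        return json.loads(body)

    # Given the lines above:
    # inventory = parse_ceph_volume_json(lines)
    # for osd_id, lvs in inventory.items():
    #     for lv in lvs:
    #         print(osd_id, lv["lv_path"], lv["tags"]["ceph.osd_fsid"])
    # -> 0 /dev/ceph_vg0/ceph_lv0 e5da778e-73b7-4ea1-8a91-750fe3f6aa68
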
Mar  1 04:45:50 np0005634532 ceph-mon[75825]: log_channel(cluster) log [DBG] : osdmap e87: 3 total, 3 up, 3 in
Mar  1 04:45:50 np0005634532 systemd[1]: libpod-9e7f3735f575eb6ab2102502aa0ab651aeb3448dc3629ef0c5f3541be6a7791a.scope: Deactivated successfully.
Mar  1 04:45:50 np0005634532 podman[107532]: 2026-03-01 09:45:50.590116961 +0000 UTC m=+0.572491860 container died 9e7f3735f575eb6ab2102502aa0ab651aeb3448dc3629ef0c5f3541be6a7791a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_montalcini, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Mar  1 04:45:50 np0005634532 systemd[1]: var-lib-containers-storage-overlay-0baaff75448fcb84de64861e86e1c89884d4c84dd30c40117264c5f4c09737d0-merged.mount: Deactivated successfully.
Mar  1 04:45:50 np0005634532 podman[107532]: 2026-03-01 09:45:50.634061911 +0000 UTC m=+0.616436820 container remove 9e7f3735f575eb6ab2102502aa0ab651aeb3448dc3629ef0c5f3541be6a7791a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_montalcini, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Mar  1 04:45:50 np0005634532 systemd[1]: libpod-conmon-9e7f3735f575eb6ab2102502aa0ab651aeb3448dc3629ef0c5f3541be6a7791a.scope: Deactivated successfully.
Mar  1 04:45:50 np0005634532 ceph-osd[84309]: log_channel(cluster) log [DBG] : 9.11 scrub starts
Mar  1 04:45:50 np0005634532 ceph-osd[84309]: log_channel(cluster) log [DBG] : 9.11 scrub ok
Mar  1 04:45:50 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e87 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
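
_set_new_cache_sizes is the monitor's periodic cache autotuning pass; the figures on that line are bytes. Converted for readability:

    # Byte figures from the _set_new_cache_sizes line above, in GiB.
    for name, b in {"cache_size": 1020054731,
                    "inc_alloc": 348127232,
                    "full_alloc": 348127232,
                    "kv_alloc": 322961408}.items():
        print(f"{name}: {b / 2**30:.2f} GiB")
    # cache_size: 0.95 GiB, inc_alloc/full_alloc: 0.32 GiB, kv_alloc: 0.30 GiB
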
Mar  1 04:45:50 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[98708]: 01/03/2026 09:45:50 : epoch 69a40a75 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4504004000 fd 49 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:45:50 np0005634532 python3.9[107773]: ansible-ansible.legacy.ping Invoked with data=pong
Mar  1 04:45:51 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[98708]: 01/03/2026 09:45:51 : epoch 69a40a75 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4528004440 fd 49 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:45:51 np0005634532 podman[107867]: 2026-03-01 09:45:51.199757273 +0000 UTC m=+0.051185767 container create 770e101627890cbb4cb5735c1eeabbfe4d04963abdb0285bd89c793953dff67c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_volhard, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Mar  1 04:45:51 np0005634532 systemd[1]: Started libpod-conmon-770e101627890cbb4cb5735c1eeabbfe4d04963abdb0285bd89c793953dff67c.scope.
Mar  1 04:45:51 np0005634532 podman[107867]: 2026-03-01 09:45:51.170183203 +0000 UTC m=+0.021611747 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Mar  1 04:45:51 np0005634532 systemd[1]: Started libcrun container.
Mar  1 04:45:51 np0005634532 podman[107867]: 2026-03-01 09:45:51.280587991 +0000 UTC m=+0.132016525 container init 770e101627890cbb4cb5735c1eeabbfe4d04963abdb0285bd89c793953dff67c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_volhard, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Mar  1 04:45:51 np0005634532 podman[107867]: 2026-03-01 09:45:51.289320202 +0000 UTC m=+0.140748686 container start 770e101627890cbb4cb5735c1eeabbfe4d04963abdb0285bd89c793953dff67c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_volhard, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Mar  1 04:45:51 np0005634532 podman[107867]: 2026-03-01 09:45:51.293039597 +0000 UTC m=+0.144468151 container attach 770e101627890cbb4cb5735c1eeabbfe4d04963abdb0285bd89c793953dff67c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_volhard, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Mar  1 04:45:51 np0005634532 laughing_volhard[107912]: 167 167
Mar  1 04:45:51 np0005634532 systemd[1]: libpod-770e101627890cbb4cb5735c1eeabbfe4d04963abdb0285bd89c793953dff67c.scope: Deactivated successfully.
Mar  1 04:45:51 np0005634532 conmon[107912]: conmon 770e101627890cbb4cb5 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-770e101627890cbb4cb5735c1eeabbfe4d04963abdb0285bd89c793953dff67c.scope/container/memory.events
Mar  1 04:45:51 np0005634532 podman[107867]: 2026-03-01 09:45:51.295039063 +0000 UTC m=+0.146467587 container died 770e101627890cbb4cb5735c1eeabbfe4d04963abdb0285bd89c793953dff67c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_volhard, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Mar  1 04:45:51 np0005634532 systemd[1]: var-lib-containers-storage-overlay-ceb960bc76914de5149be37c38df737086a1e316e65c81da593d919740a86238-merged.mount: Deactivated successfully.
Mar  1 04:45:51 np0005634532 podman[107867]: 2026-03-01 09:45:51.339958246 +0000 UTC m=+0.191386710 container remove 770e101627890cbb4cb5735c1eeabbfe4d04963abdb0285bd89c793953dff67c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_volhard, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Mar  1 04:45:51 np0005634532 systemd[1]: libpod-conmon-770e101627890cbb4cb5735c1eeabbfe4d04963abdb0285bd89c793953dff67c.scope: Deactivated successfully.
Mar  1 04:45:51 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v43: 353 pgs: 2 unknown, 2 peering, 349 active+clean; 457 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Mar  1 04:45:51 np0005634532 podman[107935]: 2026-03-01 09:45:51.498725195 +0000 UTC m=+0.055608589 container create ac938a6d305598d06f5c897d2db62205a42c8a123f1e4ae2396bd0ab67395f4f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_allen, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Mar  1 04:45:51 np0005634532 systemd[1]: Started libpod-conmon-ac938a6d305598d06f5c897d2db62205a42c8a123f1e4ae2396bd0ab67395f4f.scope.
Mar  1 04:45:51 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e87 do_prune osdmap full prune enabled
Mar  1 04:45:51 np0005634532 systemd[1]: Started libcrun container.
Mar  1 04:45:51 np0005634532 podman[107935]: 2026-03-01 09:45:51.473513645 +0000 UTC m=+0.030397089 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Mar  1 04:45:51 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11767cf8a1d25c73ff018fa3c9070a4c7085dd53837d4ee13ad4ff048b1bb1a3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Mar  1 04:45:51 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11767cf8a1d25c73ff018fa3c9070a4c7085dd53837d4ee13ad4ff048b1bb1a3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Mar  1 04:45:51 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11767cf8a1d25c73ff018fa3c9070a4c7085dd53837d4ee13ad4ff048b1bb1a3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Mar  1 04:45:51 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e88 e88: 3 total, 3 up, 3 in
Mar  1 04:45:51 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11767cf8a1d25c73ff018fa3c9070a4c7085dd53837d4ee13ad4ff048b1bb1a3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Mar  1 04:45:51 np0005634532 ceph-mon[75825]: log_channel(cluster) log [DBG] : osdmap e88: 3 total, 3 up, 3 in
Mar  1 04:45:51 np0005634532 podman[107935]: 2026-03-01 09:45:51.588527789 +0000 UTC m=+0.145411223 container init ac938a6d305598d06f5c897d2db62205a42c8a123f1e4ae2396bd0ab67395f4f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_allen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Mar  1 04:45:51 np0005634532 podman[107935]: 2026-03-01 09:45:51.597131106 +0000 UTC m=+0.154014490 container start ac938a6d305598d06f5c897d2db62205a42c8a123f1e4ae2396bd0ab67395f4f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_allen, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Mar  1 04:45:51 np0005634532 podman[107935]: 2026-03-01 09:45:51.60118245 +0000 UTC m=+0.158065884 container attach ac938a6d305598d06f5c897d2db62205a42c8a123f1e4ae2396bd0ab67395f4f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_allen, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Mar  1 04:45:51 np0005634532 ceph-osd[84309]: log_channel(cluster) log [DBG] : 9.10 scrub starts
Mar  1 04:45:51 np0005634532 ceph-osd[84309]: log_channel(cluster) log [DBG] : 9.10 scrub ok
Mar  1 04:45:51 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:45:51 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:45:51 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:45:51.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:45:52 np0005634532 python3.9[108055]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Mar  1 04:45:52 np0005634532 lvm[108131]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Mar  1 04:45:52 np0005634532 lvm[108131]: VG ceph_vg0 finished
Mar  1 04:45:52 np0005634532 sharp_allen[107955]: {}
Mar  1 04:45:52 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:45:52 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:45:52 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:45:52.288 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:45:52 np0005634532 systemd[1]: libpod-ac938a6d305598d06f5c897d2db62205a42c8a123f1e4ae2396bd0ab67395f4f.scope: Deactivated successfully.
Mar  1 04:45:52 np0005634532 systemd[1]: libpod-ac938a6d305598d06f5c897d2db62205a42c8a123f1e4ae2396bd0ab67395f4f.scope: Consumed 1.142s CPU time.
Mar  1 04:45:52 np0005634532 podman[107935]: 2026-03-01 09:45:52.326617643 +0000 UTC m=+0.883501037 container died ac938a6d305598d06f5c897d2db62205a42c8a123f1e4ae2396bd0ab67395f4f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_allen, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True, CEPH_REF=squid)
Mar  1 04:45:52 np0005634532 systemd[1]: var-lib-containers-storage-overlay-11767cf8a1d25c73ff018fa3c9070a4c7085dd53837d4ee13ad4ff048b1bb1a3-merged.mount: Deactivated successfully.
Mar  1 04:45:52 np0005634532 podman[107935]: 2026-03-01 09:45:52.376236994 +0000 UTC m=+0.933120398 container remove ac938a6d305598d06f5c897d2db62205a42c8a123f1e4ae2396bd0ab67395f4f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_allen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Mar  1 04:45:52 np0005634532 systemd[1]: libpod-conmon-ac938a6d305598d06f5c897d2db62205a42c8a123f1e4ae2396bd0ab67395f4f.scope: Deactivated successfully.
Mar  1 04:45:52 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Mar  1 04:45:52 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 04:45:52 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Mar  1 04:45:52 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 04:45:52 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[98708]: 01/03/2026 09:45:52 : epoch 69a40a75 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4500003c10 fd 49 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:45:52 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 04:45:52 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 04:45:52 np0005634532 ceph-osd[84309]: log_channel(cluster) log [DBG] : 9.2 scrub starts
Mar  1 04:45:52 np0005634532 ceph-osd[84309]: log_channel(cluster) log [DBG] : 9.2 scrub ok
Mar  1 04:45:52 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[98708]: 01/03/2026 09:45:52 : epoch 69a40a75 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f450c004300 fd 49 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:45:53 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[98708]: 01/03/2026 09:45:53 : epoch 69a40a75 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4504004000 fd 49 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:45:53 np0005634532 python3.9[108324]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Mar  1 04:45:53 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v45: 353 pgs: 2 unknown, 2 peering, 349 active+clean; 457 KiB data, 143 MiB used, 60 GiB / 60 GiB avail
Mar  1 04:45:53 np0005634532 ceph-osd[84309]: log_channel(cluster) log [DBG] : 9.c scrub starts
Mar  1 04:45:53 np0005634532 ceph-osd[84309]: log_channel(cluster) log [DBG] : 9.c scrub ok
Mar  1 04:45:53 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:45:53 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.002000046s ======
Mar  1 04:45:53 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:45:53.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000046s
Mar  1 04:45:54 np0005634532 python3.9[108480]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Mar  1 04:45:54 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:45:54 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:45:54 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:45:54.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:45:54 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[98708]: 01/03/2026 09:45:54 : epoch 69a40a75 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4528004440 fd 49 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:45:54 np0005634532 ceph-osd[84309]: log_channel(cluster) log [DBG] : 9.0 scrub starts
Mar  1 04:45:54 np0005634532 ceph-osd[84309]: log_channel(cluster) log [DBG] : 9.0 scrub ok
Mar  1 04:45:54 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[98708]: 01/03/2026 09:45:54 : epoch 69a40a75 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4500003c10 fd 49 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:45:55 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[98708]: 01/03/2026 09:45:55 : epoch 69a40a75 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f451c001080 fd 49 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:45:55 np0005634532 python3.9[108636]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Mar  1 04:45:55 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v46: 353 pgs: 353 active+clean; 457 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Mar  1 04:45:55 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"} v 0)
Mar  1 04:45:55 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]: dispatch
Mar  1 04:45:55 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e88 do_prune osdmap full prune enabled
Mar  1 04:45:55 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]': finished
Mar  1 04:45:55 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e89 e89: 3 total, 3 up, 3 in
Mar  1 04:45:55 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 89 pg[9.10( v 42'1010 (0'0,42'1010] local-lis/les=52/53 n=2 ec=52/35 lis/c=52/52 les/c/f=53/53/0 sis=89 pruub=14.356719017s) [1] r=-1 lpr=89 pi=[52,89)/1 crt=42'1010 lcod 0'0 mlcod 0'0 active pruub 229.736679077s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Mar  1 04:45:55 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 89 pg[9.10( v 42'1010 (0'0,42'1010] local-lis/les=52/53 n=2 ec=52/35 lis/c=52/52 les/c/f=53/53/0 sis=89 pruub=14.356573105s) [1] r=-1 lpr=89 pi=[52,89)/1 crt=42'1010 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 229.736679077s@ mbc={}] state<Start>: transitioning to Stray
Mar  1 04:45:55 np0005634532 ceph-mon[75825]: log_channel(cluster) log [DBG] : osdmap e89: 3 total, 3 up, 3 in
Mar  1 04:45:55 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]: dispatch
Mar  1 04:45:55 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e89 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Mar  1 04:45:55 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e89 do_prune osdmap full prune enabled
Mar  1 04:45:55 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e90 e90: 3 total, 3 up, 3 in
Mar  1 04:45:55 np0005634532 ceph-osd[84309]: log_channel(cluster) log [DBG] : 9.1 scrub starts
Mar  1 04:45:55 np0005634532 ceph-mon[75825]: log_channel(cluster) log [DBG] : osdmap e90: 3 total, 3 up, 3 in
Mar  1 04:45:55 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 90 pg[9.10( v 42'1010 (0'0,42'1010] local-lis/les=52/53 n=2 ec=52/35 lis/c=52/52 les/c/f=53/53/0 sis=90) [1]/[0] r=0 lpr=90 pi=[52,90)/1 crt=42'1010 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Mar  1 04:45:55 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 90 pg[9.10( v 42'1010 (0'0,42'1010] local-lis/les=52/53 n=2 ec=52/35 lis/c=52/52 les/c/f=53/53/0 sis=90) [1]/[0] r=0 lpr=90 pi=[52,90)/1 crt=42'1010 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Mar  1 04:45:55 np0005634532 ceph-osd[84309]: log_channel(cluster) log [DBG] : 9.1 scrub ok
Mar  1 04:45:55 np0005634532 python3.9[108789]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Mar  1 04:45:55 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:45:55 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:45:55 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:45:55.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:45:56 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:45:56 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:45:56 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:45:56.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:45:56 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[98708]: 01/03/2026 09:45:56 : epoch 69a40a75 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4504004000 fd 49 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:45:56 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]': finished
Mar  1 04:45:56 np0005634532 python3.9[108941]: ansible-ansible.builtin.service_facts Invoked
Mar  1 04:45:56 np0005634532 network[108959]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Mar  1 04:45:56 np0005634532 network[108960]: 'network-scripts' will be removed from distribution in near future.
Mar  1 04:45:56 np0005634532 network[108961]: It is advised to switch to 'NetworkManager' instead for network management.
Mar  1 04:45:56 np0005634532 ceph-osd[84309]: log_channel(cluster) log [DBG] : 9.4 scrub starts
Mar  1 04:45:56 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e90 do_prune osdmap full prune enabled
Mar  1 04:45:56 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e91 e91: 3 total, 3 up, 3 in
Mar  1 04:45:56 np0005634532 ceph-mon[75825]: log_channel(cluster) log [DBG] : osdmap e91: 3 total, 3 up, 3 in
Mar  1 04:45:56 np0005634532 ceph-osd[84309]: log_channel(cluster) log [DBG] : 9.4 scrub ok
Mar  1 04:45:56 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[98708]: 01/03/2026 09:45:56 : epoch 69a40a75 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4528004440 fd 49 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:45:57 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 91 pg[9.10( v 42'1010 (0'0,42'1010] local-lis/les=90/91 n=2 ec=52/35 lis/c=52/52 les/c/f=53/53/0 sis=90) [1]/[0] async=[1] r=0 lpr=90 pi=[52,90)/1 crt=42'1010 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Mar  1 04:45:57 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: ::ffff:192.168.122.100 - - [01/Mar/2026:09:45:57] "GET /metrics HTTP/1.1" 200 48284 "" "Prometheus/2.51.0"
Mar  1 04:45:57 np0005634532 ceph-mgr[76134]: [prometheus INFO cherrypy.access.140604152982112] ::ffff:192.168.122.100 - - [01/Mar/2026:09:45:57] "GET /metrics HTTP/1.1" 200 48284 "" "Prometheus/2.51.0"
Mar  1 04:45:57 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[98708]: 01/03/2026 09:45:57 : epoch 69a40a75 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f45080014d0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:45:57 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v50: 353 pgs: 353 active+clean; 457 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Mar  1 04:45:57 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"} v 0)
Mar  1 04:45:57 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]: dispatch
Mar  1 04:45:57 np0005634532 ceph-osd[84309]: log_channel(cluster) log [DBG] : 9.1c deep-scrub starts
Mar  1 04:45:57 np0005634532 ceph-osd[84309]: log_channel(cluster) log [DBG] : 9.1c deep-scrub ok
Mar  1 04:45:57 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e91 do_prune osdmap full prune enabled
Mar  1 04:45:57 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]': finished
Mar  1 04:45:57 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e92 e92: 3 total, 3 up, 3 in
Mar  1 04:45:57 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]: dispatch
Mar  1 04:45:57 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 92 pg[9.10( v 42'1010 (0'0,42'1010] local-lis/les=90/91 n=2 ec=52/35 lis/c=90/52 les/c/f=91/53/0 sis=92 pruub=15.197815895s) [1] async=[1] r=-1 lpr=92 pi=[52,92)/1 crt=42'1010 lcod 0'0 mlcod 0'0 active pruub 232.798263550s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Mar  1 04:45:57 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 92 pg[9.11( v 42'1010 (0'0,42'1010] local-lis/les=52/53 n=5 ec=52/35 lis/c=52/52 les/c/f=53/53/0 sis=92 pruub=12.136138916s) [1] r=-1 lpr=92 pi=[52,92)/1 crt=42'1010 lcod 0'0 mlcod 0'0 active pruub 229.736602783s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Mar  1 04:45:57 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 92 pg[9.10( v 42'1010 (0'0,42'1010] local-lis/les=90/91 n=2 ec=52/35 lis/c=90/52 les/c/f=91/53/0 sis=92 pruub=15.197769165s) [1] r=-1 lpr=92 pi=[52,92)/1 crt=42'1010 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 232.798263550s@ mbc={}] state<Start>: transitioning to Stray
Mar  1 04:45:57 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 92 pg[9.11( v 42'1010 (0'0,42'1010] local-lis/les=52/53 n=5 ec=52/35 lis/c=52/52 les/c/f=53/53/0 sis=92 pruub=12.136094093s) [1] r=-1 lpr=92 pi=[52,92)/1 crt=42'1010 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 229.736602783s@ mbc={}] state<Start>: transitioning to Stray
Mar  1 04:45:57 np0005634532 ceph-mon[75825]: log_channel(cluster) log [DBG] : osdmap e92: 3 total, 3 up, 3 in
Mar  1 04:45:57 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:45:57 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000023s ======
Mar  1 04:45:57 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:45:57.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Mar  1 04:45:58 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:45:58 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000023s ======
Mar  1 04:45:58 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:45:58.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Mar  1 04:45:58 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[98708]: 01/03/2026 09:45:58 : epoch 69a40a75 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f451c001080 fd 49 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:45:58 np0005634532 ceph-osd[84309]: log_channel(cluster) log [DBG] : 9.12 scrub starts
Mar  1 04:45:58 np0005634532 ceph-osd[84309]: log_channel(cluster) log [DBG] : 9.12 scrub ok
Mar  1 04:45:58 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e92 do_prune osdmap full prune enabled
Mar  1 04:45:58 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e93 e93: 3 total, 3 up, 3 in
Mar  1 04:45:58 np0005634532 ceph-mon[75825]: log_channel(cluster) log [DBG] : osdmap e93: 3 total, 3 up, 3 in
Mar  1 04:45:58 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 93 pg[9.11( v 42'1010 (0'0,42'1010] local-lis/les=52/53 n=5 ec=52/35 lis/c=52/52 les/c/f=53/53/0 sis=93) [1]/[0] r=0 lpr=93 pi=[52,93)/1 crt=42'1010 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Mar  1 04:45:58 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 93 pg[9.11( v 42'1010 (0'0,42'1010] local-lis/les=52/53 n=5 ec=52/35 lis/c=52/52 les/c/f=53/53/0 sis=93) [1]/[0] r=0 lpr=93 pi=[52,93)/1 crt=42'1010 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Mar  1 04:45:58 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[98708]: 01/03/2026 09:45:58 : epoch 69a40a75 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4528004440 fd 49 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:45:58 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]': finished
Mar  1 04:45:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[98708]: 01/03/2026 09:45:59 : epoch 69a40a75 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4528004440 fd 49 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:45:59 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v53: 353 pgs: 1 unknown, 1 peering, 351 active+clean; 457 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Mar  1 04:45:59 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e93 do_prune osdmap full prune enabled
Mar  1 04:45:59 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e94 e94: 3 total, 3 up, 3 in
Mar  1 04:45:59 np0005634532 ceph-mon[75825]: log_channel(cluster) log [DBG] : osdmap e94: 3 total, 3 up, 3 in
Mar  1 04:45:59 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 94 pg[9.11( v 42'1010 (0'0,42'1010] local-lis/les=93/94 n=5 ec=52/35 lis/c=52/52 les/c/f=53/53/0 sis=93) [1]/[0] async=[1] r=0 lpr=93 pi=[52,93)/1 crt=42'1010 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Mar  1 04:45:59 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:45:59 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000023s ======
Mar  1 04:45:59 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:45:59.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Mar  1 04:46:00 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:46:00 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:46:00 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:46:00.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:46:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[98708]: 01/03/2026 09:46:00 : epoch 69a40a75 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4508002ec0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:46:00 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e94 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Mar  1 04:46:00 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e94 do_prune osdmap full prune enabled
Mar  1 04:46:00 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e95 e95: 3 total, 3 up, 3 in
Mar  1 04:46:00 np0005634532 ceph-mon[75825]: log_channel(cluster) log [DBG] : osdmap e95: 3 total, 3 up, 3 in
Mar  1 04:46:00 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 95 pg[9.11( v 42'1010 (0'0,42'1010] local-lis/les=93/94 n=5 ec=52/35 lis/c=93/52 les/c/f=94/53/0 sis=95 pruub=15.050878525s) [1] async=[1] r=-1 lpr=95 pi=[52,95)/1 crt=42'1010 lcod 0'0 mlcod 0'0 active pruub 235.633407593s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Mar  1 04:46:00 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 95 pg[9.11( v 42'1010 (0'0,42'1010] local-lis/les=93/94 n=5 ec=52/35 lis/c=93/52 les/c/f=94/53/0 sis=95 pruub=15.050796509s) [1] r=-1 lpr=95 pi=[52,95)/1 crt=42'1010 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 235.633407593s@ mbc={}] state<Start>: transitioning to Stray
Mar  1 04:46:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[98708]: 01/03/2026 09:46:00 : epoch 69a40a75 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4508002ec0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:46:01 np0005634532 python3.9[109226]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:46:01 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[98708]: 01/03/2026 09:46:01 : epoch 69a40a75 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4528004440 fd 49 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:46:01 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v56: 353 pgs: 1 unknown, 1 peering, 351 active+clean; 457 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Mar  1 04:46:01 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-haproxy-nfs-cephfs-compute-0-wdbjdw[99156]: [WARNING] 059/094601 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 1ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Mar  1 04:46:01 np0005634532 python3.9[109376]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Mar  1 04:46:01 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e95 do_prune osdmap full prune enabled
Mar  1 04:46:01 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e96 e96: 3 total, 3 up, 3 in
Mar  1 04:46:01 np0005634532 ceph-mon[75825]: log_channel(cluster) log [DBG] : osdmap e96: 3 total, 3 up, 3 in
Mar  1 04:46:01 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:46:01 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000023s ======
Mar  1 04:46:01 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:46:01.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Mar  1 04:46:02 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:46:02 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:46:02 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:46:02.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:46:02 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Mar  1 04:46:02 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Mar  1 04:46:02 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[98708]: 01/03/2026 09:46:02 : epoch 69a40a75 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4504004000 fd 49 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:46:02 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[98708]: 01/03/2026 09:46:02 : epoch 69a40a75 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4504004000 fd 49 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:46:03 np0005634532 python3.9[109532]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Mar  1 04:46:03 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[98708]: 01/03/2026 09:46:03 : epoch 69a40a75 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4508002ec0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:46:03 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v58: 353 pgs: 1 unknown, 1 peering, 351 active+clean; 457 KiB data, 147 MiB used, 60 GiB / 60 GiB avail
Mar  1 04:46:03 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:46:03 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000023s ======
Mar  1 04:46:03 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:46:03.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Mar  1 04:46:04 np0005634532 python3.9[109692]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Mar  1 04:46:04 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:46:04 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:46:04 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:46:04.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:46:04 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[98708]: 01/03/2026 09:46:04 : epoch 69a40a75 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4528004440 fd 49 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:46:04 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[98708]: 01/03/2026 09:46:04 : epoch 69a40a75 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4504004000 fd 49 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:46:05 np0005634532 python3.9[109778]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Mar  1 04:46:05 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[98708]: 01/03/2026 09:46:05 : epoch 69a40a75 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f451c002680 fd 49 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:46:05 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v59: 353 pgs: 353 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 18 B/s, 0 objects/s recovering
Mar  1 04:46:05 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"} v 0)
Mar  1 04:46:05 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]: dispatch
Mar  1 04:46:05 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e96 do_prune osdmap full prune enabled
Mar  1 04:46:05 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]': finished
Mar  1 04:46:05 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e97 e97: 3 total, 3 up, 3 in
Mar  1 04:46:05 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]: dispatch
Mar  1 04:46:05 np0005634532 ceph-mon[75825]: log_channel(cluster) log [DBG] : osdmap e97: 3 total, 3 up, 3 in
Mar  1 04:46:05 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 97 pg[9.12( v 42'1010 (0'0,42'1010] local-lis/les=52/53 n=4 ec=52/35 lis/c=52/52 les/c/f=53/53/0 sis=97 pruub=12.316893578s) [1] r=-1 lpr=97 pi=[52,97)/1 crt=42'1010 lcod 0'0 mlcod 0'0 active pruub 237.741821289s@ mbc={}] PeeringState::start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Mar  1 04:46:05 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 97 pg[9.12( v 42'1010 (0'0,42'1010] local-lis/les=52/53 n=4 ec=52/35 lis/c=52/52 les/c/f=53/53/0 sis=97 pruub=12.316737175s) [1] r=-1 lpr=97 pi=[52,97)/1 crt=42'1010 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 237.741821289s@ mbc={}] state<Start>: transitioning to Stray
Mar  1 04:46:05 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e97 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Mar  1 04:46:05 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e97 do_prune osdmap full prune enabled
Mar  1 04:46:05 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e98 e98: 3 total, 3 up, 3 in
Mar  1 04:46:05 np0005634532 ceph-mon[75825]: log_channel(cluster) log [DBG] : osdmap e98: 3 total, 3 up, 3 in
Mar  1 04:46:05 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 98 pg[9.12( v 42'1010 (0'0,42'1010] local-lis/les=52/53 n=4 ec=52/35 lis/c=52/52 les/c/f=53/53/0 sis=98) [1]/[0] r=0 lpr=98 pi=[52,98)/1 crt=42'1010 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] PeeringState::start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Mar  1 04:46:05 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 98 pg[9.12( v 42'1010 (0'0,42'1010] local-lis/les=52/53 n=4 ec=52/35 lis/c=52/52 les/c/f=53/53/0 sis=98) [1]/[0] r=0 lpr=98 pi=[52,98)/1 crt=42'1010 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Mar  1 04:46:05 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:46:05 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000023s ======
Mar  1 04:46:05 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:46:05.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Mar  1 04:46:06 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:46:06 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000023s ======
Mar  1 04:46:06 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:46:06.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Mar  1 04:46:06 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]': finished
Mar  1 04:46:06 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[98708]: 01/03/2026 09:46:06 : epoch 69a40a75 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4508002ec0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:46:06 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e98 do_prune osdmap full prune enabled
Mar  1 04:46:06 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e99 e99: 3 total, 3 up, 3 in
Mar  1 04:46:06 np0005634532 ceph-mon[75825]: log_channel(cluster) log [DBG] : osdmap e99: 3 total, 3 up, 3 in
Mar  1 04:46:06 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[98708]: 01/03/2026 09:46:06 : epoch 69a40a75 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4528004440 fd 49 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:46:06 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 99 pg[9.12( v 42'1010 (0'0,42'1010] local-lis/les=98/99 n=4 ec=52/35 lis/c=52/52 les/c/f=53/53/0 sis=98) [1]/[0] async=[1] r=0 lpr=98 pi=[52,98)/1 crt=42'1010 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Mar  1 04:46:07 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: ::ffff:192.168.122.100 - - [01/Mar/2026:09:46:07] "GET /metrics HTTP/1.1" 200 48252 "" "Prometheus/2.51.0"
Mar  1 04:46:07 np0005634532 ceph-mgr[76134]: [prometheus INFO cherrypy.access.140604152982112] ::ffff:192.168.122.100 - - [01/Mar/2026:09:46:07] "GET /metrics HTTP/1.1" 200 48252 "" "Prometheus/2.51.0"
Mar  1 04:46:07 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[98708]: 01/03/2026 09:46:07 : epoch 69a40a75 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4504004000 fd 49 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:46:07 np0005634532 systemd[94532]: Starting Mark boot as successful...
Mar  1 04:46:07 np0005634532 systemd[94532]: Finished Mark boot as successful.
Mar  1 04:46:07 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v63: 353 pgs: 353 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 19 B/s, 0 objects/s recovering
Mar  1 04:46:07 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"} v 0)
Mar  1 04:46:07 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]: dispatch
Mar  1 04:46:07 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e99 do_prune osdmap full prune enabled
Mar  1 04:46:07 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]: dispatch
Mar  1 04:46:07 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]': finished
Mar  1 04:46:07 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e100 e100: 3 total, 3 up, 3 in
Mar  1 04:46:07 np0005634532 ceph-mon[75825]: log_channel(cluster) log [DBG] : osdmap e100: 3 total, 3 up, 3 in
Mar  1 04:46:07 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 100 pg[9.12( v 42'1010 (0'0,42'1010] local-lis/les=98/99 n=4 ec=52/35 lis/c=98/52 les/c/f=99/53/0 sis=100 pruub=15.115078926s) [1] async=[1] r=-1 lpr=100 pi=[52,100)/1 crt=42'1010 lcod 0'0 mlcod 0'0 active pruub 242.738540649s@ mbc={255={}}] PeeringState::start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Mar  1 04:46:07 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 100 pg[9.12( v 42'1010 (0'0,42'1010] local-lis/les=98/99 n=4 ec=52/35 lis/c=98/52 les/c/f=99/53/0 sis=100 pruub=15.115015984s) [1] r=-1 lpr=100 pi=[52,100)/1 crt=42'1010 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 242.738540649s@ mbc={}] state<Start>: transitioning to Stray
Mar  1 04:46:07 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:46:07 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:46:07 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:46:07.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:46:08 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:46:08 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:46:08 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:46:08.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:46:08 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[98708]: 01/03/2026 09:46:08 : epoch 69a40a75 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f451c002fa0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:46:08 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e100 do_prune osdmap full prune enabled
Mar  1 04:46:08 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e101 e101: 3 total, 3 up, 3 in
Mar  1 04:46:08 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[98708]: 01/03/2026 09:46:08 : epoch 69a40a75 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f45080037a0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:46:08 np0005634532 ceph-mon[75825]: log_channel(cluster) log [DBG] : osdmap e101: 3 total, 3 up, 3 in
Mar  1 04:46:08 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]': finished
Mar  1 04:46:09 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[98708]: 01/03/2026 09:46:09 : epoch 69a40a75 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4528004440 fd 49 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:46:09 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v66: 353 pgs: 1 peering, 352 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 260 B/s wr, 1 op/s; 28 B/s, 1 objects/s recovering
Mar  1 04:46:09 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:46:09 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000023s ======
Mar  1 04:46:09 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:46:09.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Mar  1 04:46:10 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:46:10 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000023s ======
Mar  1 04:46:10 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:46:10.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Mar  1 04:46:10 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[98708]: 01/03/2026 09:46:10 : epoch 69a40a75 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4504004000 fd 49 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:46:10 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[98708]: 01/03/2026 09:46:10 : epoch 69a40a75 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Mar  1 04:46:10 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e101 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Mar  1 04:46:10 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[98708]: 01/03/2026 09:46:10 : epoch 69a40a75 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f451c002fa0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:46:11 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[98708]: 01/03/2026 09:46:11 : epoch 69a40a75 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f45080037a0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:46:11 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v67: 353 pgs: 1 peering, 352 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 914 B/s rd, 182 B/s wr, 1 op/s; 19 B/s, 0 objects/s recovering
Mar  1 04:46:11 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:46:11 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000023s ======
Mar  1 04:46:11 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:46:11.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Mar  1 04:46:12 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:46:12 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:46:12 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:46:12.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:46:12 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[98708]: 01/03/2026 09:46:12 : epoch 69a40a75 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4528004440 fd 49 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:46:12 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[98708]: 01/03/2026 09:46:12 : epoch 69a40a75 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4504004000 fd 49 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:46:13 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[98708]: 01/03/2026 09:46:13 : epoch 69a40a75 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f451c003cb0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:46:13 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v68: 353 pgs: 1 peering, 352 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 776 B/s rd, 155 B/s wr, 1 op/s; 16 B/s, 0 objects/s recovering
Mar  1 04:46:13 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[98708]: 01/03/2026 09:46:13 : epoch 69a40a75 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Mar  1 04:46:13 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[98708]: 01/03/2026 09:46:13 : epoch 69a40a75 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Mar  1 04:46:13 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:46:13 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000023s ======
Mar  1 04:46:13 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:46:13.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Mar  1 04:46:14 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:46:14 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:46:14 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:46:14.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:46:14 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[98708]: 01/03/2026 09:46:14 : epoch 69a40a75 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f45080044b0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:46:14 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[98708]: 01/03/2026 09:46:14 : epoch 69a40a75 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4528004440 fd 49 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:46:15 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[98708]: 01/03/2026 09:46:15 : epoch 69a40a75 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4504004000 fd 49 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:46:15 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v69: 353 pgs: 353 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 1.6 KiB/s rd, 895 B/s wr, 2 op/s; 13 B/s, 0 objects/s recovering
Mar  1 04:46:15 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"} v 0)
Mar  1 04:46:15 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]: dispatch
Mar  1 04:46:15 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e101 do_prune osdmap full prune enabled
Mar  1 04:46:15 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]: dispatch
Mar  1 04:46:15 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]': finished
Mar  1 04:46:15 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e102 e102: 3 total, 3 up, 3 in
Mar  1 04:46:15 np0005634532 ceph-mon[75825]: log_channel(cluster) log [DBG] : osdmap e102: 3 total, 3 up, 3 in
Mar  1 04:46:15 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e102 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Mar  1 04:46:15 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:46:15 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:46:15 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:46:15.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:46:16 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:46:16 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:46:16 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:46:16.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:46:16 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[98708]: 01/03/2026 09:46:16 : epoch 69a40a75 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f451c003cb0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:46:16 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]': finished
Mar  1 04:46:16 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[98708]: 01/03/2026 09:46:16 : epoch 69a40a75 : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Mar  1 04:46:16 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[98708]: 01/03/2026 09:46:16 : epoch 69a40a75 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f45080044b0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:46:17 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: ::ffff:192.168.122.100 - - [01/Mar/2026:09:46:17] "GET /metrics HTTP/1.1" 200 48252 "" "Prometheus/2.51.0"
Mar  1 04:46:17 np0005634532 ceph-mgr[76134]: [prometheus INFO cherrypy.access.140604152982112] ::ffff:192.168.122.100 - - [01/Mar/2026:09:46:17] "GET /metrics HTTP/1.1" 200 48252 "" "Prometheus/2.51.0"
Mar  1 04:46:17 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[98708]: 01/03/2026 09:46:17 : epoch 69a40a75 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4528004440 fd 49 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:46:17 np0005634532 ceph-mgr[76134]: [balancer INFO root] Optimize plan auto_2026-03-01_09:46:17
Mar  1 04:46:17 np0005634532 ceph-mgr[76134]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Mar  1 04:46:17 np0005634532 ceph-mgr[76134]: [balancer INFO root] do_upmap
Mar  1 04:46:17 np0005634532 ceph-mgr[76134]: [balancer INFO root] pools ['.nfs', '.rgw.root', '.mgr', 'images', 'backups', 'default.rgw.control', 'cephfs.cephfs.data', 'default.rgw.log', 'default.rgw.meta', 'vms', 'cephfs.cephfs.meta', 'volumes']
Mar  1 04:46:17 np0005634532 ceph-mgr[76134]: [balancer INFO root] prepared 0/10 upmap changes
Mar  1 04:46:17 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v71: 353 pgs: 353 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 958 B/s rd, 718 B/s wr, 1 op/s
Mar  1 04:46:17 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"} v 0)
Mar  1 04:46:17 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]: dispatch
Mar  1 04:46:17 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Mar  1 04:46:17 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Mar  1 04:46:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] _maybe_adjust
Mar  1 04:46:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 04:46:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Mar  1 04:46:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 04:46:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Mar  1 04:46:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 04:46:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Mar  1 04:46:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 04:46:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Mar  1 04:46:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 04:46:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Mar  1 04:46:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 04:46:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Mar  1 04:46:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 04:46:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Mar  1 04:46:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 04:46:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Mar  1 04:46:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 04:46:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Mar  1 04:46:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 04:46:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Mar  1 04:46:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 04:46:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Mar  1 04:46:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 04:46:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Mar  1 04:46:17 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e102 do_prune osdmap full prune enabled
Mar  1 04:46:17 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Mar  1 04:46:17 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Mar  1 04:46:17 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Mar  1 04:46:17 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e103 e103: 3 total, 3 up, 3 in
Mar  1 04:46:17 np0005634532 ceph-mon[75825]: log_channel(cluster) log [DBG] : osdmap e103: 3 total, 3 up, 3 in
Mar  1 04:46:17 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]: dispatch
Mar  1 04:46:17 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Mar  1 04:46:17 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: vms, start_after=
Mar  1 04:46:17 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: volumes, start_after=
Mar  1 04:46:17 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: backups, start_after=
Mar  1 04:46:17 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: images, start_after=
Mar  1 04:46:17 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Mar  1 04:46:17 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Mar  1 04:46:17 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Mar  1 04:46:17 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Mar  1 04:46:17 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Mar  1 04:46:17 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: vms, start_after=
Mar  1 04:46:17 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: volumes, start_after=
Mar  1 04:46:17 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: backups, start_after=
Mar  1 04:46:17 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: images, start_after=
Mar  1 04:46:17 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:46:17 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:46:17 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:46:17.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:46:18 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:46:18 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:46:18 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:46:18.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:46:18 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[98708]: 01/03/2026 09:46:18 : epoch 69a40a75 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4504004000 fd 49 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:46:18 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e103 do_prune osdmap full prune enabled
Mar  1 04:46:18 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e104 e104: 3 total, 3 up, 3 in
Mar  1 04:46:18 np0005634532 ceph-mon[75825]: log_channel(cluster) log [DBG] : osdmap e104: 3 total, 3 up, 3 in
Mar  1 04:46:18 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Mar  1 04:46:18 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[98708]: 01/03/2026 09:46:18 : epoch 69a40a75 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f451c003cb0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:46:19 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[98708]: 01/03/2026 09:46:19 : epoch 69a40a75 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f45080044b0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:46:19 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v74: 353 pgs: 1 unknown, 352 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 3.5 KiB/s rd, 1.8 KiB/s wr, 5 op/s
Mar  1 04:46:19 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e104 do_prune osdmap full prune enabled
Mar  1 04:46:19 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e105 e105: 3 total, 3 up, 3 in
Mar  1 04:46:19 np0005634532 ceph-mon[75825]: log_channel(cluster) log [DBG] : osdmap e105: 3 total, 3 up, 3 in
Mar  1 04:46:19 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:46:19 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000023s ======
Mar  1 04:46:19 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:46:19.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Mar  1 04:46:20 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:46:20 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:46:20 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:46:20.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:46:20 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[98708]: 01/03/2026 09:46:20 : epoch 69a40a75 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4528004440 fd 49 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:46:20 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e105 do_prune osdmap full prune enabled
Mar  1 04:46:20 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e106 e106: 3 total, 3 up, 3 in
Mar  1 04:46:20 np0005634532 ceph-mon[75825]: log_channel(cluster) log [DBG] : osdmap e106: 3 total, 3 up, 3 in
Mar  1 04:46:20 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e106 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Mar  1 04:46:20 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[98708]: 01/03/2026 09:46:20 : epoch 69a40a75 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4504004000 fd 49 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:46:21 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[98708]: 01/03/2026 09:46:21 : epoch 69a40a75 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f451c003cb0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:46:21 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v77: 353 pgs: 1 unknown, 352 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 3.2 KiB/s rd, 1.2 KiB/s wr, 4 op/s
Mar  1 04:46:21 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e106 do_prune osdmap full prune enabled
Mar  1 04:46:21 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e107 e107: 3 total, 3 up, 3 in
Mar  1 04:46:21 np0005634532 ceph-mon[75825]: log_channel(cluster) log [DBG] : osdmap e107: 3 total, 3 up, 3 in
Mar  1 04:46:21 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:46:21 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:46:21 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:46:21.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:46:22 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:46:22 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:46:22 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:46:22.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:46:22 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[98708]: 01/03/2026 09:46:22 : epoch 69a40a75 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f45080044b0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:46:22 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[98708]: 01/03/2026 09:46:22 : epoch 69a40a75 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f45080044b0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:46:23 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[98708]: 01/03/2026 09:46:23 : epoch 69a40a75 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4504004000 fd 49 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:46:23 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v79: 353 pgs: 1 unknown, 352 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Mar  1 04:46:23 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-haproxy-nfs-cephfs-compute-0-wdbjdw[99156]: [WARNING] 059/094623 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Mar  1 04:46:23 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:46:23 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:46:23 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:46:23.958 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:46:24 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:46:24 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000022s ======
Mar  1 04:46:24 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:46:24.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000022s
Mar  1 04:46:24 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[98708]: 01/03/2026 09:46:24 : epoch 69a40a75 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f451c003cb0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:46:24 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[98708]: 01/03/2026 09:46:24 : epoch 69a40a75 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f45080044b0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:46:25 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[98708]: 01/03/2026 09:46:25 : epoch 69a40a75 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4528004440 fd 49 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:46:25 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v80: 353 pgs: 353 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 B/s wr, 0 op/s; 36 B/s, 1 objects/s recovering
Mar  1 04:46:25 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"} v 0)
Mar  1 04:46:25 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]: dispatch
Mar  1 04:46:25 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e107 do_prune osdmap full prune enabled
Mar  1 04:46:25 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]': finished
Mar  1 04:46:25 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e108 e108: 3 total, 3 up, 3 in
Mar  1 04:46:25 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]: dispatch
Mar  1 04:46:25 np0005634532 ceph-mon[75825]: log_channel(cluster) log [DBG] : osdmap e108: 3 total, 3 up, 3 in
Mar  1 04:46:25 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Mar  1 04:46:25 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e108 do_prune osdmap full prune enabled
Mar  1 04:46:25 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e109 e109: 3 total, 3 up, 3 in
Mar  1 04:46:25 np0005634532 ceph-mon[75825]: log_channel(cluster) log [DBG] : osdmap e109: 3 total, 3 up, 3 in
Mar  1 04:46:25 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:46:25 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000023s ======
Mar  1 04:46:25 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:46:25.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Mar  1 04:46:26 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:46:26 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:46:26 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:46:26.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:46:26 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]': finished
Mar  1 04:46:26 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[98708]: 01/03/2026 09:46:26 : epoch 69a40a75 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4504004000 fd 49 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:46:26 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e109 do_prune osdmap full prune enabled
Mar  1 04:46:26 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e110 e110: 3 total, 3 up, 3 in
Mar  1 04:46:26 np0005634532 ceph-mon[75825]: log_channel(cluster) log [DBG] : osdmap e110: 3 total, 3 up, 3 in
Mar  1 04:46:26 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[98708]: 01/03/2026 09:46:26 : epoch 69a40a75 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f451c003cb0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:46:27 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: ::ffff:192.168.122.100 - - [01/Mar/2026:09:46:27] "GET /metrics HTTP/1.1" 200 48251 "" "Prometheus/2.51.0"
Mar  1 04:46:27 np0005634532 ceph-mgr[76134]: [prometheus INFO cherrypy.access.140604152982112] ::ffff:192.168.122.100 - - [01/Mar/2026:09:46:27] "GET /metrics HTTP/1.1" 200 48251 "" "Prometheus/2.51.0"
Mar  1 04:46:27 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[98708]: 01/03/2026 09:46:27 : epoch 69a40a75 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f450c001230 fd 49 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:46:27 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v84: 353 pgs: 353 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 179 B/s rd, 0 B/s wr, 0 op/s; 38 B/s, 1 objects/s recovering
Mar  1 04:46:27 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"} v 0)
Mar  1 04:46:27 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]: dispatch
Mar  1 04:46:27 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e110 do_prune osdmap full prune enabled
Mar  1 04:46:27 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]': finished
Mar  1 04:46:27 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e111 e111: 3 total, 3 up, 3 in
Mar  1 04:46:27 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]: dispatch
Mar  1 04:46:27 np0005634532 ceph-mon[75825]: log_channel(cluster) log [DBG] : osdmap e111: 3 total, 3 up, 3 in
Mar  1 04:46:27 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:46:27 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:46:27 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:46:27.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:46:28 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:46:28 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:46:28 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:46:28.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:46:28 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[98708]: 01/03/2026 09:46:28 : epoch 69a40a75 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4528004440 fd 49 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:46:28 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e111 do_prune osdmap full prune enabled
Mar  1 04:46:28 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e112 e112: 3 total, 3 up, 3 in
Mar  1 04:46:28 np0005634532 ceph-mon[75825]: log_channel(cluster) log [DBG] : osdmap e112: 3 total, 3 up, 3 in
Mar  1 04:46:28 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]': finished
Mar  1 04:46:28 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[98708]: 01/03/2026 09:46:28 : epoch 69a40a75 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4504004000 fd 49 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:46:29 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[98708]: 01/03/2026 09:46:29 : epoch 69a40a75 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4500001090 fd 49 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:46:29 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v87: 353 pgs: 1 peering, 352 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 521 B/s rd, 0 op/s; 28 B/s, 1 objects/s recovering
Mar  1 04:46:29 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:46:29 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:46:29 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:46:29.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:46:30 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:46:30 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:46:30 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:46:30.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:46:30 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[98708]: 01/03/2026 09:46:30 : epoch 69a40a75 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f450c001230 fd 49 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:46:30 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e112 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Mar  1 04:46:30 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[98708]: 01/03/2026 09:46:30 : epoch 69a40a75 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4528004440 fd 49 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:46:31 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[98708]: 01/03/2026 09:46:31 : epoch 69a40a75 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4504004000 fd 49 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:46:31 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v88: 353 pgs: 1 peering, 352 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 365 B/s rd, 0 op/s; 19 B/s, 0 objects/s recovering
Mar  1 04:46:31 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:46:31 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:46:31 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:46:31.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:46:32 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:46:32 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:46:32 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:46:32.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:46:32 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Mar  1 04:46:32 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Mar  1 04:46:32 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[98708]: 01/03/2026 09:46:32 : epoch 69a40a75 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4500001090 fd 49 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:46:32 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[98708]: 01/03/2026 09:46:32 : epoch 69a40a75 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f450c001230 fd 49 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:46:33 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[98708]: 01/03/2026 09:46:33 : epoch 69a40a75 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4528004440 fd 49 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:46:33 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v89: 353 pgs: 1 peering, 352 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 310 B/s rd, 0 op/s; 16 B/s, 0 objects/s recovering
Mar  1 04:46:33 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:46:33 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:46:33 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:46:33.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:46:34 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:46:34 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:46:34 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:46:34.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:46:34 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[98708]: 01/03/2026 09:46:34 : epoch 69a40a75 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4504004000 fd 49 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:46:34 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[98708]: 01/03/2026 09:46:34 : epoch 69a40a75 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4504004000 fd 49 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:46:35 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[98708]: 01/03/2026 09:46:35 : epoch 69a40a75 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4504004000 fd 49 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:46:35 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v90: 353 pgs: 353 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 383 B/s rd, 0 op/s; 13 B/s, 0 objects/s recovering
Mar  1 04:46:35 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"} v 0)
Mar  1 04:46:35 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]: dispatch
Mar  1 04:46:35 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e112 do_prune osdmap full prune enabled
Mar  1 04:46:35 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]: dispatch
Mar  1 04:46:35 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]': finished
Mar  1 04:46:35 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e113 e113: 3 total, 3 up, 3 in
Mar  1 04:46:35 np0005634532 ceph-mon[75825]: log_channel(cluster) log [DBG] : osdmap e113: 3 total, 3 up, 3 in
Mar  1 04:46:35 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e113 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Mar  1 04:46:35 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:46:35 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:46:35 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:46:35.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:46:36 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:46:36 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:46:36 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:46:36.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:46:36 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[98708]: 01/03/2026 09:46:36 : epoch 69a40a75 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4528004440 fd 49 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:46:36 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]': finished
Mar  1 04:46:36 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[98708]: 01/03/2026 09:46:36 : epoch 69a40a75 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4504004000 fd 49 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:46:37 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: ::ffff:192.168.122.100 - - [01/Mar/2026:09:46:37] "GET /metrics HTTP/1.1" 200 48255 "" "Prometheus/2.51.0"
Mar  1 04:46:37 np0005634532 ceph-mgr[76134]: [prometheus INFO cherrypy.access.140604152982112] ::ffff:192.168.122.100 - - [01/Mar/2026:09:46:37] "GET /metrics HTTP/1.1" 200 48255 "" "Prometheus/2.51.0"
Mar  1 04:46:37 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[98708]: 01/03/2026 09:46:37 : epoch 69a40a75 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4504004000 fd 49 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:46:37 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v92: 353 pgs: 353 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 119 B/s rd, 0 op/s
Mar  1 04:46:37 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"} v 0)
Mar  1 04:46:37 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]: dispatch
Mar  1 04:46:37 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e113 do_prune osdmap full prune enabled
Mar  1 04:46:37 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]: dispatch
Mar  1 04:46:37 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]': finished
Mar  1 04:46:37 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e114 e114: 3 total, 3 up, 3 in
Mar  1 04:46:37 np0005634532 ceph-mon[75825]: log_channel(cluster) log [DBG] : osdmap e114: 3 total, 3 up, 3 in
Mar  1 04:46:37 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 114 pg[9.19( empty local-lis/les=0/0 n=0 ec=52/35 lis/c=75/75 les/c/f=76/76/0 sis=114) [0] r=0 lpr=114 pi=[75,114)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Mar  1 04:46:37 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:46:37 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000023s ======
Mar  1 04:46:37 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:46:37.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Mar  1 04:46:38 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:46:38 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000023s ======
Mar  1 04:46:38 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:46:38.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Mar  1 04:46:38 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[98708]: 01/03/2026 09:46:38 : epoch 69a40a75 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4504004000 fd 49 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:46:38 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[98708]: 01/03/2026 09:46:38 : epoch 69a40a75 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f44f8000b60 fd 49 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:46:38 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e114 do_prune osdmap full prune enabled
Mar  1 04:46:38 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]': finished
Mar  1 04:46:39 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[98708]: 01/03/2026 09:46:39 : epoch 69a40a75 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f451c003cb0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:46:39 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e115 e115: 3 total, 3 up, 3 in
Mar  1 04:46:39 np0005634532 ceph-mon[75825]: log_channel(cluster) log [DBG] : osdmap e115: 3 total, 3 up, 3 in
Mar  1 04:46:39 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 115 pg[9.19( empty local-lis/les=0/0 n=0 ec=52/35 lis/c=75/75 les/c/f=76/76/0 sis=115) [0]/[2] r=-1 lpr=115 pi=[75,115)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Mar  1 04:46:39 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 115 pg[9.19( empty local-lis/les=0/0 n=0 ec=52/35 lis/c=75/75 les/c/f=76/76/0 sis=115) [0]/[2] r=-1 lpr=115 pi=[75,115)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Mar  1 04:46:39 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v95: 353 pgs: 1 unknown, 352 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Mar  1 04:46:39 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:46:39 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000023s ======
Mar  1 04:46:39 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:46:39.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Mar  1 04:46:40 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e115 do_prune osdmap full prune enabled
Mar  1 04:46:40 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e116 e116: 3 total, 3 up, 3 in
Mar  1 04:46:40 np0005634532 ceph-mon[75825]: log_channel(cluster) log [DBG] : osdmap e116: 3 total, 3 up, 3 in
Mar  1 04:46:40 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:46:40 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:46:40 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:46:40.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:46:40 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[98708]: 01/03/2026 09:46:40 : epoch 69a40a75 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4528004440 fd 49 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:46:40 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e116 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Mar  1 04:46:40 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e116 do_prune osdmap full prune enabled
Mar  1 04:46:40 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[98708]: 01/03/2026 09:46:40 : epoch 69a40a75 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4504004000 fd 49 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:46:40 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e117 e117: 3 total, 3 up, 3 in
Mar  1 04:46:40 np0005634532 ceph-mon[75825]: log_channel(cluster) log [DBG] : osdmap e117: 3 total, 3 up, 3 in
Mar  1 04:46:40 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 117 pg[9.19( v 42'1010 (0'0,42'1010] local-lis/les=0/0 n=7 ec=52/35 lis/c=115/75 les/c/f=116/76/0 sis=117) [0] r=0 lpr=117 pi=[75,117)/1 luod=0'0 crt=42'1010 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Mar  1 04:46:40 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 117 pg[9.19( v 42'1010 (0'0,42'1010] local-lis/les=0/0 n=7 ec=52/35 lis/c=115/75 les/c/f=116/76/0 sis=117) [0] r=0 lpr=117 pi=[75,117)/1 crt=42'1010 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Mar  1 04:46:41 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[98708]: 01/03/2026 09:46:41 : epoch 69a40a75 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f44f80016a0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:46:41 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v98: 353 pgs: 1 unknown, 352 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Mar  1 04:46:41 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e117 do_prune osdmap full prune enabled
Mar  1 04:46:41 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:46:41 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:46:41 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:46:41.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:46:42 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e118 e118: 3 total, 3 up, 3 in
Mar  1 04:46:42 np0005634532 ceph-mon[75825]: log_channel(cluster) log [DBG] : osdmap e118: 3 total, 3 up, 3 in
Mar  1 04:46:42 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 118 pg[9.19( v 42'1010 (0'0,42'1010] local-lis/les=117/118 n=7 ec=52/35 lis/c=115/75 les/c/f=116/76/0 sis=117) [0] r=0 lpr=117 pi=[75,117)/1 crt=42'1010 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Mar  1 04:46:42 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:46:42 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:46:42 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:46:42.339 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:46:42 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[98708]: 01/03/2026 09:46:42 : epoch 69a40a75 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f451c003cb0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:46:42 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[98708]: 01/03/2026 09:46:42 : epoch 69a40a75 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4528004440 fd 49 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:46:43 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[98708]: 01/03/2026 09:46:43 : epoch 69a40a75 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4504004000 fd 49 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:46:43 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v100: 353 pgs: 1 unknown, 352 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Mar  1 04:46:43 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:46:43 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000023s ======
Mar  1 04:46:43 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:46:43.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Mar  1 04:46:44 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:46:44 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:46:44 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:46:44.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:46:44 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[98708]: 01/03/2026 09:46:44 : epoch 69a40a75 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f44f80016a0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:46:44 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[98708]: 01/03/2026 09:46:44 : epoch 69a40a75 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f451c003cb0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:46:45 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[98708]: 01/03/2026 09:46:45 : epoch 69a40a75 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4528004440 fd 49 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:46:45 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v101: 353 pgs: 353 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s; 18 B/s, 1 objects/s recovering
Mar  1 04:46:45 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"} v 0)
Mar  1 04:46:45 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]: dispatch
Mar  1 04:46:45 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e118 do_prune osdmap full prune enabled
Mar  1 04:46:45 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]': finished
Mar  1 04:46:45 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e119 e119: 3 total, 3 up, 3 in
Mar  1 04:46:45 np0005634532 ceph-mon[75825]: log_channel(cluster) log [DBG] : osdmap e119: 3 total, 3 up, 3 in
Mar  1 04:46:45 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]: dispatch
Mar  1 04:46:45 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 04:46:46 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:46:46 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000022s ======
Mar  1 04:46:46 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:46:45.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000022s
Mar  1 04:46:46 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:46:46 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:46:46 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:46:46.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:46:46 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 119 pg[9.1a( empty local-lis/les=0/0 n=0 ec=52/35 lis/c=77/77 les/c/f=78/78/0 sis=119) [0] r=0 lpr=119 pi=[77,119)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Mar  1 04:46:46 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[98708]: 01/03/2026 09:46:46 : epoch 69a40a75 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4504004020 fd 49 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:46:46 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e119 do_prune osdmap full prune enabled
Mar  1 04:46:46 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]': finished
Mar  1 04:46:46 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e120 e120: 3 total, 3 up, 3 in
Mar  1 04:46:46 np0005634532 ceph-mon[75825]: log_channel(cluster) log [DBG] : osdmap e120: 3 total, 3 up, 3 in
Mar  1 04:46:46 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 120 pg[9.1a( empty local-lis/les=0/0 n=0 ec=52/35 lis/c=77/77 les/c/f=78/78/0 sis=120) [0]/[1] r=-1 lpr=120 pi=[77,120)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Mar  1 04:46:46 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 120 pg[9.1a( empty local-lis/les=0/0 n=0 ec=52/35 lis/c=77/77 les/c/f=78/78/0 sis=120) [0]/[1] r=-1 lpr=120 pi=[77,120)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Mar  1 04:46:46 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[98708]: 01/03/2026 09:46:46 : epoch 69a40a75 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f44f80016a0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:46:47 np0005634532 ceph-mgr[76134]: [dashboard INFO request] [192.168.122.100:55830] [POST] [200] [0.136s] [4.0B] [f2b8cc7b-2d11-4863-9c0b-5059dd6053ae] /api/prometheus_receiver
Mar  1 04:46:47 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: ::ffff:192.168.122.100 - - [01/Mar/2026:09:46:47] "GET /metrics HTTP/1.1" 200 48255 "" "Prometheus/2.51.0"
Mar  1 04:46:47 np0005634532 ceph-mgr[76134]: [prometheus INFO cherrypy.access.140604152982112] ::ffff:192.168.122.100 - - [01/Mar/2026:09:46:47] "GET /metrics HTTP/1.1" 200 48255 "" "Prometheus/2.51.0"
Mar  1 04:46:47 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[98708]: 01/03/2026 09:46:47 : epoch 69a40a75 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f451c003cb0 fd 49 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:46:47 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v104: 353 pgs: 353 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s; 18 B/s, 1 objects/s recovering
Mar  1 04:46:47 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"} v 0)
Mar  1 04:46:47 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Mar  1 04:46:47 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Mar  1 04:46:47 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Mar  1 04:46:47 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Mar  1 04:46:47 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Mar  1 04:46:47 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Mar  1 04:46:47 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: [('cephfs', <mgr_util.CephfsConnectionPool.Connection object at 0x7fe0f91a0280>)]
Mar  1 04:46:47 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] disconnecting from cephfs 'cephfs'
Mar  1 04:46:47 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Mar  1 04:46:47 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: [('cephfs', <mgr_util.CephfsConnectionPool.Connection object at 0x7fe0e5ec8460>)]
Mar  1 04:46:47 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] disconnecting from cephfs 'cephfs'
Mar  1 04:46:47 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e120 do_prune osdmap full prune enabled
Mar  1 04:46:47 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Mar  1 04:46:47 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Mar  1 04:46:47 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e121 e121: 3 total, 3 up, 3 in
Mar  1 04:46:47 np0005634532 ceph-mon[75825]: log_channel(cluster) log [DBG] : osdmap e121: 3 total, 3 up, 3 in
Mar  1 04:46:47 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 121 pg[9.1b( empty local-lis/les=0/0 n=0 ec=52/35 lis/c=62/62 les/c/f=63/63/0 sis=121) [0] r=0 lpr=121 pi=[62,121)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Mar  1 04:46:48 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:46:48 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000023s ======
Mar  1 04:46:48 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:46:48.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000023s
Mar  1 04:46:48 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:46:48 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:46:48 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:46:48.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:46:48 np0005634532 python3.9[110187]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Mar  1 04:46:48 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[98708]: 01/03/2026 09:46:48 : epoch 69a40a75 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4528004440 fd 49 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:46:48 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e121 do_prune osdmap full prune enabled
Mar  1 04:46:48 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Mar  1 04:46:48 np0005634532 kernel: ganesha.nfsd[100242]: segfault at 50 ip 00007f45ae5ef32e sp 00007f4516ffc210 error 4 in libntirpc.so.5.8[7f45ae5d4000+2c000] likely on CPU 4 (core 0, socket 4)
Mar  1 04:46:48 np0005634532 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Mar  1 04:46:48 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[98708]: 01/03/2026 09:46:48 : epoch 69a40a75 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f4504004040 fd 49 proxy ignored for local
Mar  1 04:46:48 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e122 e122: 3 total, 3 up, 3 in
Mar  1 04:46:48 np0005634532 ceph-mon[75825]: log_channel(cluster) log [DBG] : osdmap e122: 3 total, 3 up, 3 in
Mar  1 04:46:48 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 122 pg[9.1b( empty local-lis/les=0/0 n=0 ec=52/35 lis/c=62/62 les/c/f=63/63/0 sis=122) [0]/[2] r=-1 lpr=122 pi=[62,122)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Mar  1 04:46:48 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 122 pg[9.1b( empty local-lis/les=0/0 n=0 ec=52/35 lis/c=62/62 les/c/f=63/63/0 sis=122) [0]/[2] r=-1 lpr=122 pi=[62,122)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Mar  1 04:46:48 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 122 pg[9.1a( v 42'1010 (0'0,42'1010] local-lis/les=0/0 n=4 ec=52/35 lis/c=120/77 les/c/f=121/78/0 sis=122) [0] r=0 lpr=122 pi=[77,122)/1 luod=0'0 crt=42'1010 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Mar  1 04:46:48 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 122 pg[9.1a( v 42'1010 (0'0,42'1010] local-lis/les=0/0 n=4 ec=52/35 lis/c=120/77 les/c/f=121/78/0 sis=122) [0] r=0 lpr=122 pi=[77,122)/1 crt=42'1010 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Mar  1 04:46:48 np0005634532 systemd[1]: Created slice Slice /system/systemd-coredump.
Mar  1 04:46:49 np0005634532 systemd[1]: Started Process Core Dump (PID 110194/UID 0).
Mar  1 04:46:49 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v107: 353 pgs: 1 unknown, 1 active+remapped, 351 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 767 B/s wr, 1 op/s; 27 B/s, 0 objects/s recovering
Mar  1 04:46:50 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:46:50 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:46:50 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:46:50.005 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:46:50 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:46:50 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:46:50 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:46:50.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:46:51 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e122 do_prune osdmap full prune enabled
Mar  1 04:46:51 np0005634532 python3.9[110505]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Mar  1 04:46:51 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v108: 353 pgs: 1 unknown, 1 active+remapped, 351 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 351 B/s rd, 527 B/s wr, 0 op/s; 18 B/s, 0 objects/s recovering
Mar  1 04:46:52 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:46:52 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 04:46:52 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:46:52.007 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 04:46:52 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:46:52 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:46:52 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:46:52.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:46:52 np0005634532 python3.9[110662]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Mar  1 04:46:52 np0005634532 systemd-coredump[110195]: Process 98712 (ganesha.nfsd) of user 0 dumped core.
Mar  1 04:46:52 np0005634532 systemd-coredump[110195]: Stack trace of thread 54:
Mar  1 04:46:52 np0005634532 systemd-coredump[110195]: #0  0x00007f45ae5ef32e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
Mar  1 04:46:52 np0005634532 systemd-coredump[110195]: #1  0x0000000000000000 n/a (n/a + 0x0)
Mar  1 04:46:52 np0005634532 systemd-coredump[110195]: #2  0x00007f45ae5f9900 n/a (/usr/lib64/libntirpc.so.5.8 + 0x2c900)
Mar  1 04:46:52 np0005634532 systemd-coredump[110195]: ELF object binary architecture: AMD x86-64
Mar  1 04:46:52 np0005634532 systemd[1]: systemd-coredump@0-110194-0.service: Deactivated successfully.
Mar  1 04:46:52 np0005634532 podman[110691]: 2026-03-01 09:46:52.679582055 +0000 UTC m=+0.026441876 container died ddbb100a053bd1c5872d5920a93f96a6167721638261082337a0485339967db0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Mar  1 04:46:52 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e123 e123: 3 total, 3 up, 3 in
Mar  1 04:46:52 np0005634532 systemd[1]: var-lib-containers-storage-overlay-66ba47b0fcac64a5f83edb16222d2fde1be569942a3302a74a1b384c737fed06-merged.mount: Deactivated successfully.
Mar  1 04:46:52 np0005634532 ceph-mon[75825]: log_channel(cluster) log [DBG] : mgrmap e34: compute-0.ebwufc(active, since 95s), standbys: compute-2.dikzlj, compute-1.uyojxx
Mar  1 04:46:52 np0005634532 ceph-mon[75825]: log_channel(cluster) log [DBG] : osdmap e123: 3 total, 3 up, 3 in
Mar  1 04:46:52 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 123 pg[9.1a( v 42'1010 (0'0,42'1010] local-lis/les=122/123 n=4 ec=52/35 lis/c=120/77 les/c/f=121/78/0 sis=122) [0] r=0 lpr=122 pi=[77,122)/1 crt=42'1010 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Mar  1 04:46:52 np0005634532 podman[110691]: 2026-03-01 09:46:52.900155816 +0000 UTC m=+0.247015637 container remove ddbb100a053bd1c5872d5920a93f96a6167721638261082337a0485339967db0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Mar  1 04:46:52 np0005634532 systemd[1]: ceph-437b1e74-f995-5d64-af1d-257ce01d77ab@nfs.cephfs.2.0.compute-0.ljexyw.service: Main process exited, code=exited, status=139/n/a
Mar  1 04:46:53 np0005634532 systemd[1]: ceph-437b1e74-f995-5d64-af1d-257ce01d77ab@nfs.cephfs.2.0.compute-0.ljexyw.service: Failed with result 'exit-code'.
Mar  1 04:46:53 np0005634532 systemd[1]: ceph-437b1e74-f995-5d64-af1d-257ce01d77ab@nfs.cephfs.2.0.compute-0.ljexyw.service: Consumed 1.458s CPU time.
Mar  1 04:46:53 np0005634532 python3.9[110911]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:46:53 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v110: 353 pgs: 1 active+remapped, 352 active+clean; 458 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 509 B/s rd, 509 B/s wr, 0 op/s; 36 B/s, 0 objects/s recovering
Mar  1 04:46:53 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"} v 0)
Mar  1 04:46:53 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]: dispatch
Mar  1 04:46:53 np0005634532 podman[111007]: 2026-03-01 09:46:53.549248746 +0000 UTC m=+0.099001223 container exec 6664049ace048b4adddae1365cde7c16773d892ae197c1350f58a5e5b1183392 (image=quay.io/ceph/ceph:v19, name=ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mon-compute-0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Mar  1 04:46:53 np0005634532 podman[111007]: 2026-03-01 09:46:53.6524248 +0000 UTC m=+0.202177257 container exec_died 6664049ace048b4adddae1365cde7c16773d892ae197c1350f58a5e5b1183392 (image=quay.io/ceph/ceph:v19, name=ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mon-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Mar  1 04:46:53 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e123 do_prune osdmap full prune enabled
Mar  1 04:46:53 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]: dispatch
Mar  1 04:46:53 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]': finished
Mar  1 04:46:53 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e124 e124: 3 total, 3 up, 3 in
Mar  1 04:46:53 np0005634532 ceph-mon[75825]: log_channel(cluster) log [DBG] : osdmap e124: 3 total, 3 up, 3 in
Mar  1 04:46:53 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 124 pg[9.1b( v 42'1010 (0'0,42'1010] local-lis/les=0/0 n=2 ec=52/35 lis/c=122/62 les/c/f=123/63/0 sis=124) [0] r=0 lpr=124 pi=[62,124)/1 luod=0'0 crt=42'1010 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Mar  1 04:46:53 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 124 pg[9.1b( v 42'1010 (0'0,42'1010] local-lis/les=0/0 n=2 ec=52/35 lis/c=122/62 les/c/f=123/63/0 sis=124) [0] r=0 lpr=124 pi=[62,124)/1 crt=42'1010 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Mar  1 04:46:54 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:46:54 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:46:54 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:46:54.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:46:54 np0005634532 podman[111247]: 2026-03-01 09:46:54.183403886 +0000 UTC m=+0.057663419 container exec 1fcbd8a251307aedf47cfda71fbf2d9d43c45db96f7135811488f98478336155 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Mar  1 04:46:54 np0005634532 podman[111247]: 2026-03-01 09:46:54.218386432 +0000 UTC m=+0.092645945 container exec_died 1fcbd8a251307aedf47cfda71fbf2d9d43c45db96f7135811488f98478336155 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Mar  1 04:46:54 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:46:54 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:46:54 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:46:54.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:46:54 np0005634532 python3.9[111288]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Mar  1 04:46:54 np0005634532 podman[111420]: 2026-03-01 09:46:54.714373452 +0000 UTC m=+0.057572157 container exec ae199754250e4490488a372482082794890684d286b81ae5dbba7ee71da86544 (image=quay.io/ceph/haproxy:2.3, name=ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-haproxy-nfs-cephfs-compute-0-wdbjdw)
Mar  1 04:46:54 np0005634532 podman[111420]: 2026-03-01 09:46:54.749481961 +0000 UTC m=+0.092680706 container exec_died ae199754250e4490488a372482082794890684d286b81ae5dbba7ee71da86544 (image=quay.io/ceph/haproxy:2.3, name=ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-haproxy-nfs-cephfs-compute-0-wdbjdw)
Mar  1 04:46:54 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e124 do_prune osdmap full prune enabled
Mar  1 04:46:54 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e125 e125: 3 total, 3 up, 3 in
Mar  1 04:46:54 np0005634532 ceph-mon[75825]: log_channel(cluster) log [DBG] : osdmap e125: 3 total, 3 up, 3 in
Mar  1 04:46:54 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]': finished
Mar  1 04:46:54 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 125 pg[9.1b( v 42'1010 (0'0,42'1010] local-lis/les=124/125 n=2 ec=52/35 lis/c=122/62 les/c/f=123/63/0 sis=124) [0] r=0 lpr=124 pi=[62,124)/1 crt=42'1010 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Mar  1 04:46:54 np0005634532 podman[111488]: 2026-03-01 09:46:54.993359449 +0000 UTC m=+0.068998439 container exec 05be6442ca5291b879ba04bc347755b274cffc38684767a6255c033b9adb2149 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-keepalived-nfs-cephfs-compute-0-qbujzh, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, version=2.2.4, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Keepalived on RHEL 9, name=keepalived, io.openshift.tags=Ceph keepalived, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, summary=Provides keepalived on RHEL 9 for Ceph., vcs-type=git, io.openshift.expose-services=, build-date=2023-02-22T09:23:20, com.redhat.component=keepalived-container, description=keepalived for Ceph, distribution-scope=public, release=1793, io.buildah.version=1.28.2, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, architecture=x86_64)
Mar  1 04:46:55 np0005634532 podman[111488]: 2026-03-01 09:46:55.007416777 +0000 UTC m=+0.083055797 container exec_died 05be6442ca5291b879ba04bc347755b274cffc38684767a6255c033b9adb2149 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-keepalived-nfs-cephfs-compute-0-qbujzh, architecture=x86_64, com.redhat.component=keepalived-container, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.buildah.version=1.28.2, version=2.2.4, vendor=Red Hat, Inc., release=1793, build-date=2023-02-22T09:23:20, io.openshift.tags=Ceph keepalived, summary=Provides keepalived on RHEL 9 for Ceph., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, vcs-type=git, io.openshift.expose-services=, name=keepalived, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.k8s.display-name=Keepalived on RHEL 9, description=keepalived for Ceph, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793)
Mar  1 04:46:55 np0005634532 podman[111553]: 2026-03-01 09:46:55.217430007 +0000 UTC m=+0.052652145 container exec 42a0f9f88cfb3d053b26a487d34447f1453d64f75d2848aac14e4c543e6921a8 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Mar  1 04:46:55 np0005634532 podman[111553]: 2026-03-01 09:46:55.246408744 +0000 UTC m=+0.081630892 container exec_died 42a0f9f88cfb3d053b26a487d34447f1453d64f75d2848aac14e4c543e6921a8 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Mar  1 04:46:55 np0005634532 podman[111697]: 2026-03-01 09:46:55.439543436 +0000 UTC m=+0.048104902 container exec 40811b6273ff1144683076a6db09789183012c587046fdcbb700953f5b5ef2ca (image=quay.io/ceph/grafana:10.4.0, name=ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Mar  1 04:46:55 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v113: 353 pgs: 1 active+remapped, 352 active+clean; 458 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 169 B/s rd, 0 op/s; 18 B/s, 0 objects/s recovering
Mar  1 04:46:55 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"} v 0)
Mar  1 04:46:55 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]: dispatch
Mar  1 04:46:55 np0005634532 podman[111697]: 2026-03-01 09:46:55.62752089 +0000 UTC m=+0.236082336 container exec_died 40811b6273ff1144683076a6db09789183012c587046fdcbb700953f5b5ef2ca (image=quay.io/ceph/grafana:10.4.0, name=ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Mar  1 04:46:55 np0005634532 python3.9[111785]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Mar  1 04:46:55 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e125 do_prune osdmap full prune enabled
Mar  1 04:46:55 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]': finished
Mar  1 04:46:55 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e126 e126: 3 total, 3 up, 3 in
Mar  1 04:46:55 np0005634532 ceph-mon[75825]: log_channel(cluster) log [DBG] : osdmap e126: 3 total, 3 up, 3 in
Mar  1 04:46:55 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]: dispatch
Mar  1 04:46:56 np0005634532 podman[111917]: 2026-03-01 09:46:56.032022324 +0000 UTC m=+0.072655279 container exec 225e705b62c2d5344d04ef96534eb44bff62165a23fc4cc7bc75fa8452e91ac1 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Mar  1 04:46:56 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:46:56 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 04:46:56 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:46:56.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 04:46:56 np0005634532 podman[111917]: 2026-03-01 09:46:56.066277133 +0000 UTC m=+0.106910098 container exec_died 225e705b62c2d5344d04ef96534eb44bff62165a23fc4cc7bc75fa8452e91ac1 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Mar  1 04:46:56 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 04:46:56 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e126 do_prune osdmap full prune enabled
Mar  1 04:46:56 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Mar  1 04:46:56 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e127 e127: 3 total, 3 up, 3 in
Mar  1 04:46:56 np0005634532 ceph-mon[75825]: log_channel(cluster) log [DBG] : osdmap e127: 3 total, 3 up, 3 in
Mar  1 04:46:56 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 04:46:56 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Mar  1 04:46:56 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 04:46:56 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:46:56 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:46:56 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:46:56.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:46:56 np0005634532 python3.9[112089]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Mar  1 04:46:56 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Mar  1 04:46:56 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Mar  1 04:46:56 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Mar  1 04:46:56 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Mar  1 04:46:56 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"} v 0)
Mar  1 04:46:56 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]: dispatch
Mar  1 04:46:56 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v116: 353 pgs: 1 active+remapped, 352 active+clean; 458 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Mar  1 04:46:56 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Mar  1 04:46:56 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 04:46:56 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Mar  1 04:46:56 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 04:46:56 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Mar  1 04:46:56 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Mar  1 04:46:56 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Mar  1 04:46:56 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Mar  1 04:46:56 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Mar  1 04:46:56 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Mar  1 04:46:56 np0005634532 python3.9[112220]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem _original_basename=tls-ca-bundle.pem recurse=False state=file path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:46:56 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-haproxy-nfs-cephfs-compute-0-wdbjdw[99156]: [WARNING] 059/094656 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Mar  1 04:46:56 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]': finished
Mar  1 04:46:56 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 04:46:56 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 04:46:56 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Mar  1 04:46:56 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]: dispatch
Mar  1 04:46:56 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 04:46:56 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 04:46:56 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Mar  1 04:46:56 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T09:46:56.955Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Mar  1 04:46:56 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T09:46:56.955Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Mar  1 04:46:56 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T09:46:56.955Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Mar  1 04:46:57 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: ::ffff:192.168.122.100 - - [01/Mar/2026:09:46:57] "GET /metrics HTTP/1.1" 200 48257 "" "Prometheus/2.51.0"
Mar  1 04:46:57 np0005634532 ceph-mgr[76134]: [prometheus INFO cherrypy.access.140604152982112] ::ffff:192.168.122.100 - - [01/Mar/2026:09:46:57] "GET /metrics HTTP/1.1" 200 48257 "" "Prometheus/2.51.0"
Mar  1 04:46:57 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e127 do_prune osdmap full prune enabled
Mar  1 04:46:57 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]': finished
Mar  1 04:46:57 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e128 e128: 3 total, 3 up, 3 in
Mar  1 04:46:57 np0005634532 ceph-mon[75825]: log_channel(cluster) log [DBG] : osdmap e128: 3 total, 3 up, 3 in
Mar  1 04:46:57 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 128 pg[9.1e( empty local-lis/les=0/0 n=0 ec=52/35 lis/c=69/69 les/c/f=70/70/0 sis=128) [0] r=0 lpr=128 pi=[69,128)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Mar  1 04:46:57 np0005634532 ceph-mon[75825]: log_channel(cluster) log [WRN] : Health check failed: 1 failed cephadm daemon(s) (CEPHADM_FAILED_DAEMON)
Mar  1 04:46:57 np0005634532 podman[112339]: 2026-03-01 09:46:57.356086965 +0000 UTC m=+0.049482876 container create d4ec0a27212c0a556fcec02955cd9391a7e6826b979280f66f4e9533fd4b9160 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_edison, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Mar  1 04:46:57 np0005634532 systemd[1]: Started libpod-conmon-d4ec0a27212c0a556fcec02955cd9391a7e6826b979280f66f4e9533fd4b9160.scope.
Mar  1 04:46:57 np0005634532 podman[112339]: 2026-03-01 09:46:57.331862745 +0000 UTC m=+0.025258626 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Mar  1 04:46:57 np0005634532 systemd[1]: Started libcrun container.
Mar  1 04:46:57 np0005634532 podman[112339]: 2026-03-01 09:46:57.452788079 +0000 UTC m=+0.146183950 container init d4ec0a27212c0a556fcec02955cd9391a7e6826b979280f66f4e9533fd4b9160 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_edison, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Mar  1 04:46:57 np0005634532 podman[112339]: 2026-03-01 09:46:57.46129732 +0000 UTC m=+0.154693191 container start d4ec0a27212c0a556fcec02955cd9391a7e6826b979280f66f4e9533fd4b9160 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_edison, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Mar  1 04:46:57 np0005634532 podman[112339]: 2026-03-01 09:46:57.464566071 +0000 UTC m=+0.157961932 container attach d4ec0a27212c0a556fcec02955cd9391a7e6826b979280f66f4e9533fd4b9160 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_edison, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True)
Mar  1 04:46:57 np0005634532 hardcore_edison[112356]: 167 167
Mar  1 04:46:57 np0005634532 systemd[1]: libpod-d4ec0a27212c0a556fcec02955cd9391a7e6826b979280f66f4e9533fd4b9160.scope: Deactivated successfully.
Mar  1 04:46:57 np0005634532 podman[112339]: 2026-03-01 09:46:57.468769135 +0000 UTC m=+0.162164996 container died d4ec0a27212c0a556fcec02955cd9391a7e6826b979280f66f4e9533fd4b9160 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_edison, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Mar  1 04:46:57 np0005634532 systemd[1]: var-lib-containers-storage-overlay-e6d268009cddd05a17e17ad789c7778433f0c699e654518d60405d7098748236-merged.mount: Deactivated successfully.
Mar  1 04:46:57 np0005634532 podman[112339]: 2026-03-01 09:46:57.514179889 +0000 UTC m=+0.207575770 container remove d4ec0a27212c0a556fcec02955cd9391a7e6826b979280f66f4e9533fd4b9160 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_edison, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Mar  1 04:46:57 np0005634532 systemd[1]: libpod-conmon-d4ec0a27212c0a556fcec02955cd9391a7e6826b979280f66f4e9533fd4b9160.scope: Deactivated successfully.
Mar  1 04:46:57 np0005634532 podman[112382]: 2026-03-01 09:46:57.651614102 +0000 UTC m=+0.040094754 container create d07842a86a005a2ee765cb5dbc9f0a64e47280726d0a4fd6ff046ae206fc35bd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_raman, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Mar  1 04:46:57 np0005634532 systemd[1]: Started libpod-conmon-d07842a86a005a2ee765cb5dbc9f0a64e47280726d0a4fd6ff046ae206fc35bd.scope.
Mar  1 04:46:57 np0005634532 systemd[1]: Started libcrun container.
Mar  1 04:46:57 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ab3578deb7d141e999063a26ea475aa07a14ef80bb929142fc7272406ab5929/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Mar  1 04:46:57 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ab3578deb7d141e999063a26ea475aa07a14ef80bb929142fc7272406ab5929/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Mar  1 04:46:57 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ab3578deb7d141e999063a26ea475aa07a14ef80bb929142fc7272406ab5929/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Mar  1 04:46:57 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ab3578deb7d141e999063a26ea475aa07a14ef80bb929142fc7272406ab5929/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Mar  1 04:46:57 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ab3578deb7d141e999063a26ea475aa07a14ef80bb929142fc7272406ab5929/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Mar  1 04:46:57 np0005634532 podman[112382]: 2026-03-01 09:46:57.631551625 +0000 UTC m=+0.020032277 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Mar  1 04:46:57 np0005634532 podman[112382]: 2026-03-01 09:46:57.754460198 +0000 UTC m=+0.142940900 container init d07842a86a005a2ee765cb5dbc9f0a64e47280726d0a4fd6ff046ae206fc35bd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_raman, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Mar  1 04:46:57 np0005634532 podman[112382]: 2026-03-01 09:46:57.76989982 +0000 UTC m=+0.158380502 container start d07842a86a005a2ee765cb5dbc9f0a64e47280726d0a4fd6ff046ae206fc35bd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_raman, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Mar  1 04:46:57 np0005634532 podman[112382]: 2026-03-01 09:46:57.773859568 +0000 UTC m=+0.162340230 container attach d07842a86a005a2ee765cb5dbc9f0a64e47280726d0a4fd6ff046ae206fc35bd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_raman, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Mar  1 04:46:58 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:46:58 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:46:58 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:46:58.049 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:46:58 np0005634532 compassionate_raman[112398]: --> passed data devices: 0 physical, 1 LVM
Mar  1 04:46:58 np0005634532 compassionate_raman[112398]: --> All data devices are unavailable
Mar  1 04:46:58 np0005634532 systemd[1]: libpod-d07842a86a005a2ee765cb5dbc9f0a64e47280726d0a4fd6ff046ae206fc35bd.scope: Deactivated successfully.
Mar  1 04:46:58 np0005634532 podman[112382]: 2026-03-01 09:46:58.097890981 +0000 UTC m=+0.486371633 container died d07842a86a005a2ee765cb5dbc9f0a64e47280726d0a4fd6ff046ae206fc35bd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_raman, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Mar  1 04:46:58 np0005634532 systemd[1]: var-lib-containers-storage-overlay-7ab3578deb7d141e999063a26ea475aa07a14ef80bb929142fc7272406ab5929-merged.mount: Deactivated successfully.
Mar  1 04:46:58 np0005634532 podman[112382]: 2026-03-01 09:46:58.138899876 +0000 UTC m=+0.527380528 container remove d07842a86a005a2ee765cb5dbc9f0a64e47280726d0a4fd6ff046ae206fc35bd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_raman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Mar  1 04:46:58 np0005634532 systemd[1]: libpod-conmon-d07842a86a005a2ee765cb5dbc9f0a64e47280726d0a4fd6ff046ae206fc35bd.scope: Deactivated successfully.
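
The six podman events above (create, init, start, attach, died, remove) trace one short-lived `podman run --rm` of the Ceph image; cephadm uses such throwaway containers (here auto-named compassionate_raman) to run ceph-volume probes, and the "passed data devices: 0 physical, 1 LVM / All data devices are unavailable" output is the probe reporting that the single LVM data device is already consumed. A minimal sketch of that pattern, where the exact ceph-volume arguments are an assumption:

    import subprocess

    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec")

    def run_in_ceph_image(args):
        # One-shot container; --rm produces the final 'container remove' event.
        cmd = ["podman", "run", "--rm", IMAGE] + args
        return subprocess.run(cmd, capture_output=True, text=True, check=False)

    # Hypothetical probe consistent with the "passed data devices" lines above:
    report = run_in_ceph_image(["ceph-volume", "lvm", "batch", "--report",
                                "--format", "json", "/dev/ceph_vg0/ceph_lv0"])
    print(report.stdout or report.stderr)
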
Mar  1 04:46:58 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e128 do_prune osdmap full prune enabled
Mar  1 04:46:58 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e129 e129: 3 total, 3 up, 3 in
Mar  1 04:46:58 np0005634532 ceph-mon[75825]: log_channel(cluster) log [DBG] : osdmap e129: 3 total, 3 up, 3 in
Mar  1 04:46:58 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 129 pg[9.1e( empty local-lis/les=0/0 n=0 ec=52/35 lis/c=69/69 les/c/f=70/70/0 sis=129) [0]/[1] r=-1 lpr=129 pi=[69,129)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Mar  1 04:46:58 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 129 pg[9.1e( empty local-lis/les=0/0 n=0 ec=52/35 lis/c=69/69 les/c/f=70/70/0 sis=129) [0]/[1] r=-1 lpr=129 pi=[69,129)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Mar  1 04:46:58 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]': finished
Mar  1 04:46:58 np0005634532 ceph-mon[75825]: Health check failed: 1 failed cephadm daemon(s) (CEPHADM_FAILED_DAEMON)
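
The CEPHADM_FAILED_DAEMON warning means cephadm's last refresh found at least one daemon in an error state on some host, consistent with the nfs.cephfs restart a few seconds later. A minimal sketch for listing the offending daemons, assuming the ceph CLI and an admin keyring are available on this node:

    import json
    import subprocess

    out = subprocess.check_output(["ceph", "health", "detail", "--format", "json"])
    check = json.loads(out).get("checks", {}).get("CEPHADM_FAILED_DAEMON")
    if check:
        for item in check.get("detail", []):
            print(item["message"])  # e.g. which daemon on which host is in error state
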
Mar  1 04:46:58 np0005634532 python3.9[112543]: ansible-ansible.builtin.stat Invoked with path=/etc/lvm/devices/system.devices follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Mar  1 04:46:58 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:46:58 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:46:58 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:46:58.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
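
The anonymous "HEAD / HTTP/1.0" requests arriving every two seconds from 192.168.122.100 and 192.168.122.102 are load-balancer health checks against radosgw's beast frontend; a 200 with an empty body and near-zero latency is the healthy steady state. A sketch of the same probe, with the port an assumption since the log shows only the client IPs:

    import http.client

    conn = http.client.HTTPConnection("192.168.122.102", 8080, timeout=2)
    conn.request("HEAD", "/")
    print(conn.getresponse().status)  # radosgw answers 200 with a zero-length body
    conn.close()
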
Mar  1 04:46:58 np0005634532 podman[112671]: 2026-03-01 09:46:58.733523988 +0000 UTC m=+0.048205195 container create e42aad675d7fb28f11d29155eea44c69c93a43f456b44402b40457aa18c898bb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_heyrovsky, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Mar  1 04:46:58 np0005634532 systemd[1]: Started libpod-conmon-e42aad675d7fb28f11d29155eea44c69c93a43f456b44402b40457aa18c898bb.scope.
Mar  1 04:46:58 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v119: 353 pgs: 1 remapped+peering, 1 peering, 351 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 307 B/s rd, 0 op/s; 66 B/s, 1 objects/s recovering
Mar  1 04:46:58 np0005634532 podman[112671]: 2026-03-01 09:46:58.711346729 +0000 UTC m=+0.026027986 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Mar  1 04:46:58 np0005634532 systemd[1]: Started libcrun container.
Mar  1 04:46:58 np0005634532 podman[112671]: 2026-03-01 09:46:58.828691694 +0000 UTC m=+0.143372951 container init e42aad675d7fb28f11d29155eea44c69c93a43f456b44402b40457aa18c898bb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_heyrovsky, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Mar  1 04:46:58 np0005634532 podman[112671]: 2026-03-01 09:46:58.840584018 +0000 UTC m=+0.155265235 container start e42aad675d7fb28f11d29155eea44c69c93a43f456b44402b40457aa18c898bb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_heyrovsky, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Mar  1 04:46:58 np0005634532 podman[112671]: 2026-03-01 09:46:58.844917916 +0000 UTC m=+0.159599183 container attach e42aad675d7fb28f11d29155eea44c69c93a43f456b44402b40457aa18c898bb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_heyrovsky, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Mar  1 04:46:58 np0005634532 musing_heyrovsky[112687]: 167 167
Mar  1 04:46:58 np0005634532 systemd[1]: libpod-e42aad675d7fb28f11d29155eea44c69c93a43f456b44402b40457aa18c898bb.scope: Deactivated successfully.
Mar  1 04:46:58 np0005634532 podman[112671]: 2026-03-01 09:46:58.847746186 +0000 UTC m=+0.162427383 container died e42aad675d7fb28f11d29155eea44c69c93a43f456b44402b40457aa18c898bb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_heyrovsky, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Mar  1 04:46:58 np0005634532 systemd[1]: var-lib-containers-storage-overlay-292253fd83025dc217da2638a11e1122a5351e1c4236fdb857a8e656cb4f8b9c-merged.mount: Deactivated successfully.
Mar  1 04:46:58 np0005634532 podman[112671]: 2026-03-01 09:46:58.885510111 +0000 UTC m=+0.200191288 container remove e42aad675d7fb28f11d29155eea44c69c93a43f456b44402b40457aa18c898bb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_heyrovsky, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Mar  1 04:46:58 np0005634532 systemd[1]: libpod-conmon-e42aad675d7fb28f11d29155eea44c69c93a43f456b44402b40457aa18c898bb.scope: Deactivated successfully.
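
This second throwaway container (musing_heyrovsky) exists only to print "167 167", the uid and gid of the ceph user inside the image, which cephadm needs before writing daemon directories on the host. Note also that the "image pull" event above carries an earlier timestamp than the "create" it follows in the journal; podman emits events asynchronously, so journal order need not match event time. A sketch of the uid/gid probe, where the exact command is an assumption but the result matches the log:

    import subprocess

    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec")

    out = subprocess.check_output(
        ["podman", "run", "--rm", IMAGE, "stat", "-c", "%u %g", "/var/lib/ceph"],
        text=True,
    )
    uid, gid = (int(x) for x in out.split())
    print(uid, gid)  # expected: 167 167
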
Mar  1 04:46:59 np0005634532 podman[112763]: 2026-03-01 09:46:58.999163875 +0000 UTC m=+0.032467085 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Mar  1 04:46:59 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e129 do_prune osdmap full prune enabled
Mar  1 04:46:59 np0005634532 podman[112763]: 2026-03-01 09:46:59.28494339 +0000 UTC m=+0.318246520 container create 31ce3be6aedd11a5c6da6b34e477ee5cf411cbadedaa6f9d9a145fcfc736a494 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_curie, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Mar  1 04:46:59 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e130 e130: 3 total, 3 up, 3 in
Mar  1 04:46:59 np0005634532 ceph-mon[75825]: log_channel(cluster) log [DBG] : osdmap e130: 3 total, 3 up, 3 in
Mar  1 04:46:59 np0005634532 systemd[1]: Started libpod-conmon-31ce3be6aedd11a5c6da6b34e477ee5cf411cbadedaa6f9d9a145fcfc736a494.scope.
Mar  1 04:46:59 np0005634532 systemd[1]: Started libcrun container.
Mar  1 04:46:59 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c23665b25a027f10d4dec52a42ab69dd2e07e75c802425bdefb76b14769130c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Mar  1 04:46:59 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c23665b25a027f10d4dec52a42ab69dd2e07e75c802425bdefb76b14769130c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Mar  1 04:46:59 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c23665b25a027f10d4dec52a42ab69dd2e07e75c802425bdefb76b14769130c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Mar  1 04:46:59 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c23665b25a027f10d4dec52a42ab69dd2e07e75c802425bdefb76b14769130c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
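
These kernel lines are informational: the overlay bind mounts sit on an XFS filesystem created without the bigtime feature, so inode timestamps saturate at 0x7fffffff, the classic 32-bit epoch limit. The cutoff is easy to confirm:

    from datetime import datetime, timezone

    print(datetime.fromtimestamp(0x7FFFFFFF, tz=timezone.utc))
    # 2038-01-19 03:14:07+00:00
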
Mar  1 04:46:59 np0005634532 podman[112763]: 2026-03-01 09:46:59.398602184 +0000 UTC m=+0.431905334 container init 31ce3be6aedd11a5c6da6b34e477ee5cf411cbadedaa6f9d9a145fcfc736a494 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_curie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1)
Mar  1 04:46:59 np0005634532 podman[112763]: 2026-03-01 09:46:59.411076203 +0000 UTC m=+0.444379363 container start 31ce3be6aedd11a5c6da6b34e477ee5cf411cbadedaa6f9d9a145fcfc736a494 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_curie, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Mar  1 04:46:59 np0005634532 podman[112763]: 2026-03-01 09:46:59.417340838 +0000 UTC m=+0.450643968 container attach 31ce3be6aedd11a5c6da6b34e477ee5cf411cbadedaa6f9d9a145fcfc736a494 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_curie, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Mar  1 04:46:59 np0005634532 python3.9[112854]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Mar  1 04:46:59 np0005634532 festive_curie[112857]: {
Mar  1 04:46:59 np0005634532 festive_curie[112857]:    "0": [
Mar  1 04:46:59 np0005634532 festive_curie[112857]:        {
Mar  1 04:46:59 np0005634532 festive_curie[112857]:            "devices": [
Mar  1 04:46:59 np0005634532 festive_curie[112857]:                "/dev/loop3"
Mar  1 04:46:59 np0005634532 festive_curie[112857]:            ],
Mar  1 04:46:59 np0005634532 festive_curie[112857]:            "lv_name": "ceph_lv0",
Mar  1 04:46:59 np0005634532 festive_curie[112857]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Mar  1 04:46:59 np0005634532 festive_curie[112857]:            "lv_size": "21470642176",
Mar  1 04:46:59 np0005634532 festive_curie[112857]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=0RN1y3-WDwf-vmRn-3Uec-9cdU-WGcX-8Z6LEG,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=437b1e74-f995-5d64-af1d-257ce01d77ab,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=e5da778e-73b7-4ea1-8a91-750fe3f6aa68,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Mar  1 04:46:59 np0005634532 festive_curie[112857]:            "lv_uuid": "0RN1y3-WDwf-vmRn-3Uec-9cdU-WGcX-8Z6LEG",
Mar  1 04:46:59 np0005634532 festive_curie[112857]:            "name": "ceph_lv0",
Mar  1 04:46:59 np0005634532 festive_curie[112857]:            "path": "/dev/ceph_vg0/ceph_lv0",
Mar  1 04:46:59 np0005634532 festive_curie[112857]:            "tags": {
Mar  1 04:46:59 np0005634532 festive_curie[112857]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Mar  1 04:46:59 np0005634532 festive_curie[112857]:                "ceph.block_uuid": "0RN1y3-WDwf-vmRn-3Uec-9cdU-WGcX-8Z6LEG",
Mar  1 04:46:59 np0005634532 festive_curie[112857]:                "ceph.cephx_lockbox_secret": "",
Mar  1 04:46:59 np0005634532 festive_curie[112857]:                "ceph.cluster_fsid": "437b1e74-f995-5d64-af1d-257ce01d77ab",
Mar  1 04:46:59 np0005634532 festive_curie[112857]:                "ceph.cluster_name": "ceph",
Mar  1 04:46:59 np0005634532 festive_curie[112857]:                "ceph.crush_device_class": "",
Mar  1 04:46:59 np0005634532 festive_curie[112857]:                "ceph.encrypted": "0",
Mar  1 04:46:59 np0005634532 festive_curie[112857]:                "ceph.osd_fsid": "e5da778e-73b7-4ea1-8a91-750fe3f6aa68",
Mar  1 04:46:59 np0005634532 festive_curie[112857]:                "ceph.osd_id": "0",
Mar  1 04:46:59 np0005634532 festive_curie[112857]:                "ceph.osdspec_affinity": "default_drive_group",
Mar  1 04:46:59 np0005634532 festive_curie[112857]:                "ceph.type": "block",
Mar  1 04:46:59 np0005634532 festive_curie[112857]:                "ceph.vdo": "0",
Mar  1 04:46:59 np0005634532 festive_curie[112857]:                "ceph.with_tpm": "0"
Mar  1 04:46:59 np0005634532 festive_curie[112857]:            },
Mar  1 04:46:59 np0005634532 festive_curie[112857]:            "type": "block",
Mar  1 04:46:59 np0005634532 festive_curie[112857]:            "vg_name": "ceph_vg0"
Mar  1 04:46:59 np0005634532 festive_curie[112857]:        }
Mar  1 04:46:59 np0005634532 festive_curie[112857]:    ]
Mar  1 04:46:59 np0005634532 festive_curie[112857]: }
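
The JSON that festive_curie printed is `ceph-volume lvm list`-style output: OSD "0" is backed by logical volume ceph_vg0/ceph_lv0 on /dev/loop3, and the lv_tags carry the cluster fsid, osd fsid, and osdspec affinity that cephadm matches against its service spec. A small sketch that parses a trimmed copy of that report:

    import json

    # Trimmed copy of the report printed above.
    raw = """
    {"0": [{"devices": ["/dev/loop3"],
            "lv_path": "/dev/ceph_vg0/ceph_lv0",
            "tags": {"ceph.osd_fsid": "e5da778e-73b7-4ea1-8a91-750fe3f6aa68",
                     "ceph.osd_id": "0", "ceph.type": "block"}}]}
    """
    for osd_id, lvs in json.loads(raw).items():
        for lv in lvs:
            print(osd_id, lv["lv_path"], lv["devices"], lv["tags"]["ceph.osd_fsid"])
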
Mar  1 04:46:59 np0005634532 systemd[1]: libpod-31ce3be6aedd11a5c6da6b34e477ee5cf411cbadedaa6f9d9a145fcfc736a494.scope: Deactivated successfully.
Mar  1 04:46:59 np0005634532 podman[112763]: 2026-03-01 09:46:59.711174073 +0000 UTC m=+0.744477233 container died 31ce3be6aedd11a5c6da6b34e477ee5cf411cbadedaa6f9d9a145fcfc736a494 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_curie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Mar  1 04:46:59 np0005634532 systemd[1]: var-lib-containers-storage-overlay-1c23665b25a027f10d4dec52a42ab69dd2e07e75c802425bdefb76b14769130c-merged.mount: Deactivated successfully.
Mar  1 04:46:59 np0005634532 podman[112763]: 2026-03-01 09:46:59.770673686 +0000 UTC m=+0.803976846 container remove 31ce3be6aedd11a5c6da6b34e477ee5cf411cbadedaa6f9d9a145fcfc736a494 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_curie, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Mar  1 04:46:59 np0005634532 systemd[1]: libpod-conmon-31ce3be6aedd11a5c6da6b34e477ee5cf411cbadedaa6f9d9a145fcfc736a494.scope: Deactivated successfully.
Mar  1 04:47:00 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:47:00 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 04:47:00 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:47:00.051 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 04:47:00 np0005634532 python3.9[113085]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Mar  1 04:47:00 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e130 do_prune osdmap full prune enabled
Mar  1 04:47:00 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e131 e131: 3 total, 3 up, 3 in
Mar  1 04:47:00 np0005634532 ceph-mon[75825]: log_channel(cluster) log [DBG] : osdmap e131: 3 total, 3 up, 3 in
Mar  1 04:47:00 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 131 pg[9.1e( v 42'1010 (0'0,42'1010] local-lis/les=0/0 n=5 ec=52/35 lis/c=129/69 les/c/f=130/70/0 sis=131) [0] r=0 lpr=131 pi=[69,131)/1 luod=0'0 crt=42'1010 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Mar  1 04:47:00 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 131 pg[9.1e( v 42'1010 (0'0,42'1010] local-lis/les=0/0 n=5 ec=52/35 lis/c=129/69 les/c/f=130/70/0 sis=131) [0] r=0 lpr=131 pi=[69,131)/1 crt=42'1010 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
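
The churn above follows the mgr stepping pgp_num_actual on the default.rgw.log pool (the earlier finished command set it to 31). Each step remaps a PG, so osd.0 logs start_peering_interval twice for pg 9.1e: first the acting set moves [0] -> [1] and the PG goes Stray, then it returns [1] -> [0] and the PG becomes Primary again, with the osdmap advancing from e129 to e131. A sketch that summarizes PG states the way the pgmap lines do, assuming the ceph CLI is usable here:

    import json
    import subprocess

    status = json.loads(subprocess.check_output(["ceph", "status", "--format", "json"]))
    for entry in status["pgmap"]["pgs_by_state"]:
        print(entry["count"], entry["state_name"])  # e.g. "351 active+clean"
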
Mar  1 04:47:00 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:47:00 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 04:47:00 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:47:00.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 04:47:00 np0005634532 podman[113129]: 2026-03-01 09:47:00.37064445 +0000 UTC m=+0.042310908 container create e1c6e0427fcabbf46376e67f2cc908e1d05e0aaa38605e938a88de53cd567b71 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_bose, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, io.buildah.version=1.40.1)
Mar  1 04:47:00 np0005634532 systemd[1]: Started libpod-conmon-e1c6e0427fcabbf46376e67f2cc908e1d05e0aaa38605e938a88de53cd567b71.scope.
Mar  1 04:47:00 np0005634532 systemd[1]: Started libcrun container.
Mar  1 04:47:00 np0005634532 podman[113129]: 2026-03-01 09:47:00.353091136 +0000 UTC m=+0.024757494 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Mar  1 04:47:00 np0005634532 podman[113129]: 2026-03-01 09:47:00.450842776 +0000 UTC m=+0.122509184 container init e1c6e0427fcabbf46376e67f2cc908e1d05e0aaa38605e938a88de53cd567b71 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_bose, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Mar  1 04:47:00 np0005634532 podman[113129]: 2026-03-01 09:47:00.457317836 +0000 UTC m=+0.128984234 container start e1c6e0427fcabbf46376e67f2cc908e1d05e0aaa38605e938a88de53cd567b71 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_bose, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Mar  1 04:47:00 np0005634532 podman[113129]: 2026-03-01 09:47:00.46109918 +0000 UTC m=+0.132765578 container attach e1c6e0427fcabbf46376e67f2cc908e1d05e0aaa38605e938a88de53cd567b71 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_bose, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Mar  1 04:47:00 np0005634532 hopeful_bose[113169]: 167 167
Mar  1 04:47:00 np0005634532 systemd[1]: libpod-e1c6e0427fcabbf46376e67f2cc908e1d05e0aaa38605e938a88de53cd567b71.scope: Deactivated successfully.
Mar  1 04:47:00 np0005634532 conmon[113169]: conmon e1c6e0427fcabbf46376 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e1c6e0427fcabbf46376e67f2cc908e1d05e0aaa38605e938a88de53cd567b71.scope/container/memory.events
Mar  1 04:47:00 np0005634532 podman[113129]: 2026-03-01 09:47:00.464894304 +0000 UTC m=+0.136560702 container died e1c6e0427fcabbf46376e67f2cc908e1d05e0aaa38605e938a88de53cd567b71 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_bose, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Mar  1 04:47:00 np0005634532 systemd[1]: var-lib-containers-storage-overlay-8aa8ca5a49bed7a9236447536065bd2f082f62ae4f5d156025e9ed59859e5312-merged.mount: Deactivated successfully.
Mar  1 04:47:00 np0005634532 podman[113129]: 2026-03-01 09:47:00.508951424 +0000 UTC m=+0.180617812 container remove e1c6e0427fcabbf46376e67f2cc908e1d05e0aaa38605e938a88de53cd567b71 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_bose, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid)
Mar  1 04:47:00 np0005634532 systemd[1]: libpod-conmon-e1c6e0427fcabbf46376e67f2cc908e1d05e0aaa38605e938a88de53cd567b71.scope: Deactivated successfully.
Mar  1 04:47:00 np0005634532 podman[113192]: 2026-03-01 09:47:00.627503569 +0000 UTC m=+0.047558239 container create c029c5af1601a6819b615d206c5bc4888ac20ccd043efda2b5fe52a5f9e42bf2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_swanson, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Mar  1 04:47:00 np0005634532 systemd[1]: Started libpod-conmon-c029c5af1601a6819b615d206c5bc4888ac20ccd043efda2b5fe52a5f9e42bf2.scope.
Mar  1 04:47:00 np0005634532 podman[113192]: 2026-03-01 09:47:00.603161936 +0000 UTC m=+0.023216686 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Mar  1 04:47:00 np0005634532 systemd[1]: Started libcrun container.
Mar  1 04:47:00 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4daa9c35c3bd8ffdc602b0673a8f6826548b047db76b8a487dad896734115948/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Mar  1 04:47:00 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4daa9c35c3bd8ffdc602b0673a8f6826548b047db76b8a487dad896734115948/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Mar  1 04:47:00 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4daa9c35c3bd8ffdc602b0673a8f6826548b047db76b8a487dad896734115948/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Mar  1 04:47:00 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4daa9c35c3bd8ffdc602b0673a8f6826548b047db76b8a487dad896734115948/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Mar  1 04:47:00 np0005634532 podman[113192]: 2026-03-01 09:47:00.722337506 +0000 UTC m=+0.142392176 container init c029c5af1601a6819b615d206c5bc4888ac20ccd043efda2b5fe52a5f9e42bf2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_swanson, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Mar  1 04:47:00 np0005634532 podman[113192]: 2026-03-01 09:47:00.72774036 +0000 UTC m=+0.147795040 container start c029c5af1601a6819b615d206c5bc4888ac20ccd043efda2b5fe52a5f9e42bf2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_swanson, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Mar  1 04:47:00 np0005634532 podman[113192]: 2026-03-01 09:47:00.731902033 +0000 UTC m=+0.151956703 container attach c029c5af1601a6819b615d206c5bc4888ac20ccd043efda2b5fe52a5f9e42bf2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_swanson, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default)
Mar  1 04:47:00 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v122: 353 pgs: 1 remapped+peering, 1 peering, 351 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s; 54 B/s, 1 objects/s recovering
Mar  1 04:47:01 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
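
_set_new_cache_sizes is the monitor auto-tuning its cache budget, splitting it between incremental osdmaps, full osdmaps, and the RocksDB cache; the three allocations always sum to no more than cache_size. Checking the numbers from this line:

    MiB = 1 << 20
    cache_size = 1_020_054_731
    inc_alloc = full_alloc = 348_127_232
    kv_alloc = 318_767_104

    print(inc_alloc // MiB, full_alloc // MiB, kv_alloc // MiB)  # 332 332 304
    print(inc_alloc + full_alloc + kv_alloc <= cache_size)       # True
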
Mar  1 04:47:01 np0005634532 python3.9[113378]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Mar  1 04:47:01 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e131 do_prune osdmap full prune enabled
Mar  1 04:47:01 np0005634532 lvm[113434]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Mar  1 04:47:01 np0005634532 lvm[113434]: VG ceph_vg0 finished
Mar  1 04:47:01 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e132 e132: 3 total, 3 up, 3 in
Mar  1 04:47:01 np0005634532 ceph-mon[75825]: log_channel(cluster) log [DBG] : osdmap e132: 3 total, 3 up, 3 in
Mar  1 04:47:01 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 132 pg[9.1e( v 42'1010 (0'0,42'1010] local-lis/les=131/132 n=5 ec=52/35 lis/c=129/69 les/c/f=130/70/0 sis=131) [0] r=0 lpr=131 pi=[69,131)/1 crt=42'1010 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Mar  1 04:47:01 np0005634532 compassionate_swanson[113238]: {}
Mar  1 04:47:01 np0005634532 systemd[1]: libpod-c029c5af1601a6819b615d206c5bc4888ac20ccd043efda2b5fe52a5f9e42bf2.scope: Deactivated successfully.
Mar  1 04:47:01 np0005634532 systemd[1]: libpod-c029c5af1601a6819b615d206c5bc4888ac20ccd043efda2b5fe52a5f9e42bf2.scope: Consumed 1.002s CPU time.
Mar  1 04:47:01 np0005634532 podman[113192]: 2026-03-01 09:47:01.460579874 +0000 UTC m=+0.880634614 container died c029c5af1601a6819b615d206c5bc4888ac20ccd043efda2b5fe52a5f9e42bf2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_swanson, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Mar  1 04:47:01 np0005634532 systemd[1]: var-lib-containers-storage-overlay-4daa9c35c3bd8ffdc602b0673a8f6826548b047db76b8a487dad896734115948-merged.mount: Deactivated successfully.
Mar  1 04:47:01 np0005634532 podman[113192]: 2026-03-01 09:47:01.507593038 +0000 UTC m=+0.927647728 container remove c029c5af1601a6819b615d206c5bc4888ac20ccd043efda2b5fe52a5f9e42bf2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_swanson, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Mar  1 04:47:01 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Mar  1 04:47:01 np0005634532 systemd[1]: libpod-conmon-c029c5af1601a6819b615d206c5bc4888ac20ccd043efda2b5fe52a5f9e42bf2.scope: Deactivated successfully.
Mar  1 04:47:01 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 04:47:01 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Mar  1 04:47:01 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 04:47:02 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:47:02 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000024s ======
Mar  1 04:47:02 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:47:02.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Mar  1 04:47:02 np0005634532 python3.9[113605]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Mar  1 04:47:02 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:47:02 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:47:02 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:47:02.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:47:02 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 04:47:02 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 04:47:02 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/prometheus/health_history}] v 0)
Mar  1 04:47:02 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 04:47:02 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Mar  1 04:47:02 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Mar  1 04:47:02 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v124: 353 pgs: 1 remapped+peering, 1 peering, 351 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 221 B/s rd, 0 op/s; 47 B/s, 1 objects/s recovering
Mar  1 04:47:03 np0005634532 python3.9[113758]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Mar  1 04:47:03 np0005634532 systemd[1]: ceph-437b1e74-f995-5d64-af1d-257ce01d77ab@nfs.cephfs.2.0.compute-0.ljexyw.service: Scheduled restart job, restart counter is at 1.
Mar  1 04:47:03 np0005634532 systemd[1]: Stopped Ceph nfs.cephfs.2.0.compute-0.ljexyw for 437b1e74-f995-5d64-af1d-257ce01d77ab.
Mar  1 04:47:03 np0005634532 systemd[1]: ceph-437b1e74-f995-5d64-af1d-257ce01d77ab@nfs.cephfs.2.0.compute-0.ljexyw.service: Consumed 1.458s CPU time.
Mar  1 04:47:03 np0005634532 systemd[1]: Starting Ceph nfs.cephfs.2.0.compute-0.ljexyw for 437b1e74-f995-5d64-af1d-257ce01d77ab...
Mar  1 04:47:03 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 04:47:03 np0005634532 podman[113809]: 2026-03-01 09:47:03.548812745 +0000 UTC m=+0.043439087 container create 665efd34b6f8ba58c026a42cebb164d63f9fbfee3cb7ebe6634f523e8cd32b85 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/)
Mar  1 04:47:03 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1ab43381e7b5a515a778ff6e7a708dce7669ee1ea8ef1aa8ac685a518ea738c/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Mar  1 04:47:03 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1ab43381e7b5a515a778ff6e7a708dce7669ee1ea8ef1aa8ac685a518ea738c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Mar  1 04:47:03 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1ab43381e7b5a515a778ff6e7a708dce7669ee1ea8ef1aa8ac685a518ea738c/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Mar  1 04:47:03 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1ab43381e7b5a515a778ff6e7a708dce7669ee1ea8ef1aa8ac685a518ea738c/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.2.0.compute-0.ljexyw-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Mar  1 04:47:03 np0005634532 podman[113809]: 2026-03-01 09:47:03.609928648 +0000 UTC m=+0.104555000 container init 665efd34b6f8ba58c026a42cebb164d63f9fbfee3cb7ebe6634f523e8cd32b85 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Mar  1 04:47:03 np0005634532 podman[113809]: 2026-03-01 09:47:03.614270125 +0000 UTC m=+0.108896467 container start 665efd34b6f8ba58c026a42cebb164d63f9fbfee3cb7ebe6634f523e8cd32b85 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Mar  1 04:47:03 np0005634532 bash[113809]: 665efd34b6f8ba58c026a42cebb164d63f9fbfee3cb7ebe6634f523e8cd32b85
Mar  1 04:47:03 np0005634532 podman[113809]: 2026-03-01 09:47:03.529854155 +0000 UTC m=+0.024480527 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Mar  1 04:47:03 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[113825]: 01/03/2026 09:47:03 : epoch 69a40b17 : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Mar  1 04:47:03 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[113825]: 01/03/2026 09:47:03 : epoch 69a40b17 : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Mar  1 04:47:03 np0005634532 systemd[1]: Started Ceph nfs.cephfs.2.0.compute-0.ljexyw for 437b1e74-f995-5d64-af1d-257ce01d77ab.
Mar  1 04:47:03 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[113825]: 01/03/2026 09:47:03 : epoch 69a40b17 : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Mar  1 04:47:03 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[113825]: 01/03/2026 09:47:03 : epoch 69a40b17 : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Mar  1 04:47:03 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[113825]: 01/03/2026 09:47:03 : epoch 69a40b17 : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Mar  1 04:47:03 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[113825]: 01/03/2026 09:47:03 : epoch 69a40b17 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Mar  1 04:47:03 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[113825]: 01/03/2026 09:47:03 : epoch 69a40b17 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Mar  1 04:47:03 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[113825]: 01/03/2026 09:47:03 : epoch 69a40b17 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
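
The restarted nfs.cephfs ganesha daemon (systemd restart counter 1, matching the failed-daemon health check earlier) comes up, parses its config, starts a monitoring listener on 0.0.0.0:9587, and enters a 90-second grace period during which NFS clients may reclaim locks and opens before new state is granted. A sketch that pokes the announced monitoring port; the /metrics path is an assumption, since the log shows only the bind address:

    import urllib.request

    with urllib.request.urlopen("http://127.0.0.1:9587/metrics", timeout=2) as r:
        print(r.status, len(r.read()), "bytes")
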
Mar  1 04:47:04 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:47:04 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:47:04 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:47:04.057 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:47:04 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:47:04 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:47:04 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:47:04.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:47:04 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v125: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s; 0 B/s, 0 objects/s recovering
Mar  1 04:47:04 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"} v 0)
Mar  1 04:47:04 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]: dispatch
Mar  1 04:47:04 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e132 do_prune osdmap full prune enabled
Mar  1 04:47:04 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]': finished
Mar  1 04:47:04 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e133 e133: 3 total, 3 up, 3 in
Mar  1 04:47:04 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]: dispatch
Mar  1 04:47:04 np0005634532 ceph-mon[75825]: log_channel(cluster) log [DBG] : osdmap e133: 3 total, 3 up, 3 in
Mar  1 04:47:04 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 133 pg[9.1f( empty local-lis/les=0/0 n=0 ec=52/35 lis/c=87/87 les/c/f=88/88/0 sis=133) [0] r=0 lpr=133 pi=[87,133)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Mar  1 04:47:05 np0005634532 python3.9[114021]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Mar  1 04:47:05 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e133 do_prune osdmap full prune enabled
Mar  1 04:47:05 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]': finished
Mar  1 04:47:05 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e134 e134: 3 total, 3 up, 3 in
Mar  1 04:47:05 np0005634532 ceph-mon[75825]: log_channel(cluster) log [DBG] : osdmap e134: 3 total, 3 up, 3 in
Mar  1 04:47:05 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 134 pg[9.1f( empty local-lis/les=0/0 n=0 ec=52/35 lis/c=87/87 les/c/f=88/88/0 sis=134) [0]/[1] r=-1 lpr=134 pi=[87,134)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Mar  1 04:47:05 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 134 pg[9.1f( empty local-lis/les=0/0 n=0 ec=52/35 lis/c=87/87 les/c/f=88/88/0 sis=134) [0]/[1] r=-1 lpr=134 pi=[87,134)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Mar  1 04:47:06 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:47:06 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:47:06 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:47:06.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:47:06 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 04:47:06 np0005634532 python3.9[114177]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Mar  1 04:47:06 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:47:06 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:47:06 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:47:06.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:47:06 np0005634532 python3.9[114257]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/modules-load.d/99-edpm.conf _original_basename=edpm-modprobe.conf.j2 recurse=False state=file path=/etc/modules-load.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Mar  1 04:47:06 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v128: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s; 0 B/s, 0 objects/s recovering
Mar  1 04:47:06 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e134 do_prune osdmap full prune enabled
Mar  1 04:47:06 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e135 e135: 3 total, 3 up, 3 in
Mar  1 04:47:06 np0005634532 ceph-mon[75825]: log_channel(cluster) log [DBG] : osdmap e135: 3 total, 3 up, 3 in
Mar  1 04:47:06 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T09:47:06.955Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Mar  1 04:47:06 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T09:47:06.956Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Mar  1 04:47:06 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T09:47:06.956Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Mar  1 04:47:07 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: ::ffff:192.168.122.100 - - [01/Mar/2026:09:47:07] "GET /metrics HTTP/1.1" 200 48330 "" "Prometheus/2.51.0"
Mar  1 04:47:07 np0005634532 ceph-mgr[76134]: [prometheus INFO cherrypy.access.140604152982112] ::ffff:192.168.122.100 - - [01/Mar/2026:09:47:07] "GET /metrics HTTP/1.1" 200 48330 "" "Prometheus/2.51.0"
Mar  1 04:47:07 np0005634532 python3.9[114410]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Mar  1 04:47:07 np0005634532 python3.9[114489]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/sysctl.d/99-edpm.conf _original_basename=edpm-sysctl.conf.j2 recurse=False state=file path=/etc/sysctl.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Mar  1 04:47:07 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e135 do_prune osdmap full prune enabled
Mar  1 04:47:07 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e136 e136: 3 total, 3 up, 3 in
Mar  1 04:47:07 np0005634532 ceph-mon[75825]: log_channel(cluster) log [DBG] : osdmap e136: 3 total, 3 up, 3 in
Mar  1 04:47:07 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 136 pg[9.1f( v 42'1010 (0'0,42'1010] local-lis/les=0/0 n=5 ec=52/35 lis/c=134/87 les/c/f=135/88/0 sis=136) [0] r=0 lpr=136 pi=[87,136)/1 luod=0'0 crt=42'1010 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Mar  1 04:47:07 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 136 pg[9.1f( v 42'1010 (0'0,42'1010] local-lis/les=0/0 n=5 ec=52/35 lis/c=134/87 les/c/f=135/88/0 sis=136) [0] r=0 lpr=136 pi=[87,136)/1 crt=42'1010 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Mar  1 04:47:08 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:47:08 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:47:08 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:47:08.063 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:47:08 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:47:08 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 04:47:08 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:47:08.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 04:47:08 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v131: 353 pgs: 1 peering, 352 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 3.2 KiB/s rd, 1.7 KiB/s wr, 5 op/s; 27 B/s, 1 objects/s recovering
Mar  1 04:47:08 np0005634532 python3.9[114644]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Mar  1 04:47:08 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e136 do_prune osdmap full prune enabled
Mar  1 04:47:08 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e137 e137: 3 total, 3 up, 3 in
Mar  1 04:47:08 np0005634532 ceph-mon[75825]: log_channel(cluster) log [DBG] : osdmap e137: 3 total, 3 up, 3 in
Mar  1 04:47:09 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 137 pg[9.1f( v 42'1010 (0'0,42'1010] local-lis/les=136/137 n=5 ec=52/35 lis/c=134/87 les/c/f=135/88/0 sis=136) [0] r=0 lpr=136 pi=[87,136)/1 crt=42'1010 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Mar  1 04:47:09 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[113825]: 01/03/2026 09:47:09 : epoch 69a40b17 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Mar  1 04:47:09 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[113825]: 01/03/2026 09:47:09 : epoch 69a40b17 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Mar  1 04:47:10 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:47:10 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000024s ======
Mar  1 04:47:10 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:47:10.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Mar  1 04:47:10 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:47:10 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:47:10 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:47:10.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:47:10 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v133: 353 pgs: 1 peering, 352 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 1.4 KiB/s wr, 4 op/s; 22 B/s, 1 objects/s recovering
Mar  1 04:47:10 np0005634532 python3.9[114822]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Mar  1 04:47:11 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 04:47:11 np0005634532 python3.9[114974]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Mar  1 04:47:12 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:47:12 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 04:47:12 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:47:12.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 04:47:12 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:47:12 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 04:47:12 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:47:12.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 04:47:12 np0005634532 python3.9[115126]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Mar  1 04:47:12 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v134: 353 pgs: 1 peering, 352 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1.2 KiB/s wr, 3 op/s; 18 B/s, 0 objects/s recovering
Mar  1 04:47:13 np0005634532 python3.9[115279]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Mar  1 04:47:13 np0005634532 systemd[1]: Stopping Dynamic System Tuning Daemon...
Mar  1 04:47:14 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:47:14 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:47:14 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:47:14.073 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:47:14 np0005634532 systemd[1]: tuned.service: Deactivated successfully.
Mar  1 04:47:14 np0005634532 systemd[1]: Stopped Dynamic System Tuning Daemon.
Mar  1 04:47:14 np0005634532 systemd[1]: Starting Dynamic System Tuning Daemon...
Mar  1 04:47:14 np0005634532 systemd[1]: Started Dynamic System Tuning Daemon.
Mar  1 04:47:14 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:47:14 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:47:14 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:47:14.374 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:47:14 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v135: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 3.5 KiB/s rd, 1.5 KiB/s wr, 5 op/s; 13 B/s, 0 objects/s recovering
Mar  1 04:47:14 np0005634532 python3.9[115442]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Mar  1 04:47:15 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[113825]: 01/03/2026 09:47:15 : epoch 69a40b17 : compute-0 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Mar  1 04:47:15 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[113825]: 01/03/2026 09:47:15 : epoch 69a40b17 : compute-0 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Mar  1 04:47:15 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[113825]: 01/03/2026 09:47:15 : epoch 69a40b17 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Mar  1 04:47:15 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[113825]: 01/03/2026 09:47:15 : epoch 69a40b17 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Mar  1 04:47:15 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[113825]: 01/03/2026 09:47:15 : epoch 69a40b17 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Mar  1 04:47:15 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[113825]: 01/03/2026 09:47:15 : epoch 69a40b17 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Mar  1 04:47:15 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[113825]: 01/03/2026 09:47:15 : epoch 69a40b17 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Mar  1 04:47:15 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[113825]: 01/03/2026 09:47:15 : epoch 69a40b17 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Mar  1 04:47:15 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[113825]: 01/03/2026 09:47:15 : epoch 69a40b17 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Mar  1 04:47:15 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[113825]: 01/03/2026 09:47:15 : epoch 69a40b17 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Mar  1 04:47:15 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[113825]: 01/03/2026 09:47:15 : epoch 69a40b17 : compute-0 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Mar  1 04:47:15 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[113825]: 01/03/2026 09:47:15 : epoch 69a40b17 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Mar  1 04:47:15 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[113825]: 01/03/2026 09:47:15 : epoch 69a40b17 : compute-0 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Mar  1 04:47:15 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[113825]: 01/03/2026 09:47:15 : epoch 69a40b17 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Mar  1 04:47:15 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[113825]: 01/03/2026 09:47:15 : epoch 69a40b17 : compute-0 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Mar  1 04:47:15 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[113825]: 01/03/2026 09:47:15 : epoch 69a40b17 : compute-0 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Mar  1 04:47:15 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[113825]: 01/03/2026 09:47:15 : epoch 69a40b17 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Mar  1 04:47:15 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[113825]: 01/03/2026 09:47:15 : epoch 69a40b17 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Mar  1 04:47:15 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[113825]: 01/03/2026 09:47:15 : epoch 69a40b17 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Mar  1 04:47:15 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[113825]: 01/03/2026 09:47:15 : epoch 69a40b17 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Mar  1 04:47:15 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[113825]: 01/03/2026 09:47:15 : epoch 69a40b17 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Mar  1 04:47:15 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[113825]: 01/03/2026 09:47:15 : epoch 69a40b17 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Mar  1 04:47:15 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[113825]: 01/03/2026 09:47:15 : epoch 69a40b17 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Mar  1 04:47:15 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[113825]: 01/03/2026 09:47:15 : epoch 69a40b17 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Mar  1 04:47:15 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[113825]: 01/03/2026 09:47:15 : epoch 69a40b17 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Mar  1 04:47:15 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[113825]: 01/03/2026 09:47:15 : epoch 69a40b17 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Mar  1 04:47:15 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[113825]: 01/03/2026 09:47:15 : epoch 69a40b17 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Mar  1 04:47:16 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:47:16 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:47:16 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:47:16.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:47:16 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 04:47:16 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:47:16 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:47:16 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:47:16.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:47:16 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[113825]: 01/03/2026 09:47:16 : epoch 69a40b17 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc67c000df0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:47:16 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v136: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 3.2 KiB/s rd, 1.4 KiB/s wr, 4 op/s; 12 B/s, 0 objects/s recovering
Mar  1 04:47:16 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[113825]: 01/03/2026 09:47:16 : epoch 69a40b17 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc6680016e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:47:16 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T09:47:16.957Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Mar  1 04:47:17 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: ::ffff:192.168.122.100 - - [01/Mar/2026:09:47:17] "GET /metrics HTTP/1.1" 200 48330 "" "Prometheus/2.51.0"
Mar  1 04:47:17 np0005634532 ceph-mgr[76134]: [prometheus INFO cherrypy.access.140604152982112] ::ffff:192.168.122.100 - - [01/Mar/2026:09:47:17] "GET /metrics HTTP/1.1" 200 48330 "" "Prometheus/2.51.0"
Mar  1 04:47:17 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[113825]: 01/03/2026 09:47:17 : epoch 69a40b17 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc654000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:47:17 np0005634532 ceph-mgr[76134]: [balancer INFO root] Optimize plan auto_2026-03-01_09:47:17
Mar  1 04:47:17 np0005634532 ceph-mgr[76134]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Mar  1 04:47:17 np0005634532 ceph-mgr[76134]: [balancer INFO root] do_upmap
Mar  1 04:47:17 np0005634532 ceph-mgr[76134]: [balancer INFO root] pools ['volumes', 'default.rgw.control', 'images', '.mgr', 'cephfs.cephfs.data', '.rgw.root', '.nfs', 'vms', 'backups', 'default.rgw.log', 'default.rgw.meta', 'cephfs.cephfs.meta']
Mar  1 04:47:17 np0005634532 ceph-mgr[76134]: [balancer INFO root] prepared 0/10 upmap changes
Mar  1 04:47:17 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Mar  1 04:47:17 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Mar  1 04:47:17 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[106394]: logger=infra.usagestats t=2026-03-01T09:47:17.558044615Z level=info msg="Usage stats are ready to report"
Mar  1 04:47:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] _maybe_adjust
Mar  1 04:47:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 04:47:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Mar  1 04:47:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 04:47:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Mar  1 04:47:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 04:47:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Mar  1 04:47:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 04:47:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Mar  1 04:47:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 04:47:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Mar  1 04:47:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 04:47:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Mar  1 04:47:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 04:47:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Mar  1 04:47:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 04:47:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Mar  1 04:47:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 04:47:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Mar  1 04:47:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 04:47:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Mar  1 04:47:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 04:47:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Mar  1 04:47:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 04:47:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Mar  1 04:47:17 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Mar  1 04:47:17 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Mar  1 04:47:17 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Mar  1 04:47:17 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: vms, start_after=
Mar  1 04:47:17 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: volumes, start_after=
Mar  1 04:47:17 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: backups, start_after=
Mar  1 04:47:17 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: images, start_after=
Mar  1 04:47:17 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Mar  1 04:47:17 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Mar  1 04:47:17 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Mar  1 04:47:17 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: vms, start_after=
Mar  1 04:47:17 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Mar  1 04:47:17 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Mar  1 04:47:17 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: volumes, start_after=
Mar  1 04:47:17 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: backups, start_after=
Mar  1 04:47:17 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: images, start_after=
Mar  1 04:47:18 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:47:18 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000024s ======
Mar  1 04:47:18 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:47:18.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Mar  1 04:47:18 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:47:18 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000024s ======
Mar  1 04:47:18 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:47:18.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Mar  1 04:47:18 np0005634532 python3.9[115615]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Mar  1 04:47:18 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[113825]: 01/03/2026 09:47:18 : epoch 69a40b17 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc658000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:47:18 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v137: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.6 KiB/s rd, 511 B/s wr, 2 op/s
Mar  1 04:47:18 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-haproxy-nfs-cephfs-compute-0-wdbjdw[99156]: [WARNING] 059/094718 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Mar  1 04:47:18 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[113825]: 01/03/2026 09:47:18 : epoch 69a40b17 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc65c000fa0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:47:19 np0005634532 python3.9[115770]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Mar  1 04:47:19 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[113825]: 01/03/2026 09:47:19 : epoch 69a40b17 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc6680016e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:47:19 np0005634532 systemd[1]: session-39.scope: Deactivated successfully.
Mar  1 04:47:19 np0005634532 systemd[1]: session-39.scope: Consumed 1min 4.603s CPU time.
Mar  1 04:47:19 np0005634532 systemd-logind[832]: Session 39 logged out. Waiting for processes to exit.
Mar  1 04:47:19 np0005634532 systemd-logind[832]: Removed session 39.
Mar  1 04:47:20 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:47:20 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:47:20 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:47:20.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:47:20 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:47:20 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:47:20 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:47:20.380 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:47:20 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[113825]: 01/03/2026 09:47:20 : epoch 69a40b17 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc6540016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:47:20 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v138: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 432 B/s wr, 2 op/s
Mar  1 04:47:20 np0005634532 ceph-mon[75825]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #24. Immutable memtables: 0.
Mar  1 04:47:20 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-09:47:20.907675) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Mar  1 04:47:20 np0005634532 ceph-mon[75825]: rocksdb: [db/flush_job.cc:856] [default] [JOB 7] Flushing memtable with next log file: 24
Mar  1 04:47:20 np0005634532 ceph-mon[75825]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772358440907749, "job": 7, "event": "flush_started", "num_memtables": 1, "num_entries": 2776, "num_deletes": 251, "total_data_size": 5463574, "memory_usage": 5544368, "flush_reason": "Manual Compaction"}
Mar  1 04:47:20 np0005634532 ceph-mon[75825]: rocksdb: [db/flush_job.cc:885] [default] [JOB 7] Level-0 flush table #25: started
Mar  1 04:47:20 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[113825]: 01/03/2026 09:47:20 : epoch 69a40b17 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc6580016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:47:21 np0005634532 ceph-mon[75825]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772358441023415, "cf_name": "default", "job": 7, "event": "table_file_creation", "file_number": 25, "file_size": 5090932, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 8016, "largest_seqno": 10791, "table_properties": {"data_size": 5077579, "index_size": 8695, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3589, "raw_key_size": 31182, "raw_average_key_size": 21, "raw_value_size": 5049300, "raw_average_value_size": 3560, "num_data_blocks": 379, "num_entries": 1418, "num_filter_entries": 1418, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772358322, "oldest_key_time": 1772358322, "file_creation_time": 1772358440, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d85a3bc5-3dc5-432f-9fab-fa926ce32d3d", "db_session_id": "FJWJGIYC2V5ZEQGX709M", "orig_file_number": 25, "seqno_to_time_mapping": "N/A"}}
Mar  1 04:47:21 np0005634532 ceph-mon[75825]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 7] Flush lasted 115814 microseconds, and 9596 cpu microseconds.
Mar  1 04:47:21 np0005634532 ceph-mon[75825]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Mar  1 04:47:21 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-09:47:21.023495) [db/flush_job.cc:967] [default] [JOB 7] Level-0 flush table #25: 5090932 bytes OK
Mar  1 04:47:21 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-09:47:21.023525) [db/memtable_list.cc:519] [default] Level-0 commit table #25 started
Mar  1 04:47:21 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-09:47:21.031290) [db/memtable_list.cc:722] [default] Level-0 commit table #25: memtable #1 done
Mar  1 04:47:21 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-09:47:21.031321) EVENT_LOG_v1 {"time_micros": 1772358441031313, "job": 7, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Mar  1 04:47:21 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-09:47:21.031348) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Mar  1 04:47:21 np0005634532 ceph-mon[75825]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 7] Try to delete WAL files size 5451043, prev total WAL file size 5451747, number of live WAL files 2.
Mar  1 04:47:21 np0005634532 ceph-mon[75825]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000021.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Mar  1 04:47:21 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-09:47:21.032703) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300323531' seq:72057594037927935, type:22 .. '7061786F7300353033' seq:0, type:0; will stop at (end)
Mar  1 04:47:21 np0005634532 ceph-mon[75825]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 8] Compacting 1@0 + 1@6 files to L6, score -1.00
Mar  1 04:47:21 np0005634532 ceph-mon[75825]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 7 Base level 0, inputs: [25(4971KB)], [23(13MB)]
Mar  1 04:47:21 np0005634532 ceph-mon[75825]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772358441032757, "job": 8, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [25], "files_L6": [23], "score": -1, "input_data_size": 18923638, "oldest_snapshot_seqno": -1}
Mar  1 04:47:21 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 04:47:21 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[113825]: 01/03/2026 09:47:21 : epoch 69a40b17 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc65c001ac0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:47:21 np0005634532 ceph-mon[75825]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 8] Generated table #26: 4037 keys, 14428141 bytes, temperature: kUnknown
Mar  1 04:47:21 np0005634532 ceph-mon[75825]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772358441263282, "cf_name": "default", "job": 8, "event": "table_file_creation", "file_number": 26, "file_size": 14428141, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14395533, "index_size": 21426, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10117, "raw_key_size": 103116, "raw_average_key_size": 25, "raw_value_size": 14315944, "raw_average_value_size": 3546, "num_data_blocks": 920, "num_entries": 4037, "num_filter_entries": 4037, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772358052, "oldest_key_time": 0, "file_creation_time": 1772358441, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d85a3bc5-3dc5-432f-9fab-fa926ce32d3d", "db_session_id": "FJWJGIYC2V5ZEQGX709M", "orig_file_number": 26, "seqno_to_time_mapping": "N/A"}}
Mar  1 04:47:21 np0005634532 ceph-mon[75825]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Mar  1 04:47:21 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-09:47:21.263606) [db/compaction/compaction_job.cc:1663] [default] [JOB 8] Compacted 1@0 + 1@6 files to L6 => 14428141 bytes
Mar  1 04:47:21 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-09:47:21.274075) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 82.1 rd, 62.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(4.9, 13.2 +0.0 blob) out(13.8 +0.0 blob), read-write-amplify(6.6) write-amplify(2.8) OK, records in: 4568, records dropped: 531 output_compression: NoCompression
Mar  1 04:47:21 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-09:47:21.274127) EVENT_LOG_v1 {"time_micros": 1772358441274107, "job": 8, "event": "compaction_finished", "compaction_time_micros": 230618, "compaction_time_cpu_micros": 23466, "output_level": 6, "num_output_files": 1, "total_output_size": 14428141, "num_input_records": 4568, "num_output_records": 4037, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Mar  1 04:47:21 np0005634532 ceph-mon[75825]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000025.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Mar  1 04:47:21 np0005634532 ceph-mon[75825]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772358441275226, "job": 8, "event": "table_file_deletion", "file_number": 25}
Mar  1 04:47:21 np0005634532 ceph-mon[75825]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000023.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Mar  1 04:47:21 np0005634532 ceph-mon[75825]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772358441277307, "job": 8, "event": "table_file_deletion", "file_number": 23}
Mar  1 04:47:21 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-09:47:21.032641) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Mar  1 04:47:21 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-09:47:21.277452) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Mar  1 04:47:21 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-09:47:21.277465) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Mar  1 04:47:21 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-09:47:21.277470) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Mar  1 04:47:21 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-09:47:21.277475) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Mar  1 04:47:21 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-09:47:21.277480) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Mar  1 04:47:22 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:47:22 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:47:22 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:47:22.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:47:22 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:47:22 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:47:22 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:47:22.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:47:22 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[113825]: 01/03/2026 09:47:22 : epoch 69a40b17 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc6680016e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:47:22 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v139: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Mar  1 04:47:22 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[113825]: 01/03/2026 09:47:22 : epoch 69a40b17 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc6540016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:47:23 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[113825]: 01/03/2026 09:47:23 : epoch 69a40b17 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc6580016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:47:24 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:47:24 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:47:24 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:47:24.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:47:24 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:47:24 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:47:24 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:47:24.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:47:24 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[113825]: 01/03/2026 09:47:24 : epoch 69a40b17 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc65c001ac0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:47:24 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v140: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 426 B/s wr, 2 op/s
Mar  1 04:47:24 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[113825]: 01/03/2026 09:47:24 : epoch 69a40b17 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc6680016e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:47:25 np0005634532 systemd-logind[832]: New session 40 of user zuul.
Mar  1 04:47:25 np0005634532 systemd[1]: Started Session 40 of User zuul.
Mar  1 04:47:25 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[113825]: 01/03/2026 09:47:25 : epoch 69a40b17 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc6540016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:47:26 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:47:26 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 04:47:26 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:47:26.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 04:47:26 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 04:47:26 np0005634532 python3.9[115957]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Mar  1 04:47:26 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:47:26 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:47:26 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:47:26.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:47:26 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[113825]: 01/03/2026 09:47:26 : epoch 69a40b17 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc6580016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:47:26 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v141: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Mar  1 04:47:26 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[113825]: 01/03/2026 09:47:26 : epoch 69a40b17 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc65c001ac0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:47:26 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T09:47:26.958Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Mar  1 04:47:27 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: ::ffff:192.168.122.100 - - [01/Mar/2026:09:47:27] "GET /metrics HTTP/1.1" 200 48331 "" "Prometheus/2.51.0"
Mar  1 04:47:27 np0005634532 ceph-mgr[76134]: [prometheus INFO cherrypy.access.140604152982112] ::ffff:192.168.122.100 - - [01/Mar/2026:09:47:27] "GET /metrics HTTP/1.1" 200 48331 "" "Prometheus/2.51.0"
Mar  1 04:47:27 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[113825]: 01/03/2026 09:47:27 : epoch 69a40b17 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc6680016e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:47:27 np0005634532 python3.9[116115]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Mar  1 04:47:28 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:47:28 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:47:28 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:47:28.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:47:28 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:47:28 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:47:28 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:47:28.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:47:28 np0005634532 python3.9[116271]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Mar  1 04:47:28 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[113825]: 01/03/2026 09:47:28 : epoch 69a40b17 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc654002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:47:28 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v142: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Mar  1 04:47:28 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[113825]: 01/03/2026 09:47:28 : epoch 69a40b17 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc658002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:47:29 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[113825]: 01/03/2026 09:47:29 : epoch 69a40b17 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc65c002f50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:47:29 np0005634532 python3.9[116356]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Mar  1 04:47:30 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:47:30 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 04:47:30 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:47:30.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 04:47:30 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:47:30 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:47:30 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:47:30.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:47:30 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[113825]: 01/03/2026 09:47:30 : epoch 69a40b17 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc6680016e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:47:30 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v143: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Mar  1 04:47:30 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[113825]: 01/03/2026 09:47:30 : epoch 69a40b17 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc654002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:47:31 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 04:47:31 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[113825]: 01/03/2026 09:47:31 : epoch 69a40b17 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc658002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:47:31 np0005634532 python3.9[116537]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Mar  1 04:47:32 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:47:32 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:47:32 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:47:32.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:47:32 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:47:32 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:47:32 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:47:32.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:47:32 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Mar  1 04:47:32 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Mar  1 04:47:32 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[113825]: 01/03/2026 09:47:32 : epoch 69a40b17 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc65c002f50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:47:32 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v144: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Mar  1 04:47:32 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[113825]: 01/03/2026 09:47:32 : epoch 69a40b17 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc6680016e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:47:33 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[113825]: 01/03/2026 09:47:33 : epoch 69a40b17 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc6680016e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:47:34 np0005634532 python3.9[116694]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Mar  1 04:47:34 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:47:34 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:47:34 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:47:34.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:47:34 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:47:34 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.002000050s ======
Mar  1 04:47:34 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:47:34.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000050s
Mar  1 04:47:34 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[113825]: 01/03/2026 09:47:34 : epoch 69a40b17 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc654003820 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:47:34 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v145: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Mar  1 04:47:34 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[113825]: 01/03/2026 09:47:34 : epoch 69a40b17 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc65c002f50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:47:34 np0005634532 python3.9[116850]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Mar  1 04:47:35 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[113825]: 01/03/2026 09:47:35 : epoch 69a40b17 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc658002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:47:35 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-haproxy-nfs-cephfs-compute-0-wdbjdw[99156]: [WARNING] 059/094735 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 1ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Mar  1 04:47:35 np0005634532 python3.9[117003]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Mar  1 04:47:36 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:47:36 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:47:36 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:47:36.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:47:36 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 04:47:36 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:47:36 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:47:36 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:47:36.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:47:36 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[113825]: 01/03/2026 09:47:36 : epoch 69a40b17 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc6680016e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:47:36 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v146: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Mar  1 04:47:36 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[113825]: 01/03/2026 09:47:36 : epoch 69a40b17 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc6680016e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:47:36 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T09:47:36.959Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Mar  1 04:47:36 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T09:47:36.959Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Mar  1 04:47:37 np0005634532 python3.9[117155]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Mar  1 04:47:37 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: ::ffff:192.168.122.100 - - [01/Mar/2026:09:47:37] "GET /metrics HTTP/1.1" 200 48332 "" "Prometheus/2.51.0"
Mar  1 04:47:37 np0005634532 ceph-mgr[76134]: [prometheus INFO cherrypy.access.140604152982112] ::ffff:192.168.122.100 - - [01/Mar/2026:09:47:37] "GET /metrics HTTP/1.1" 200 48332 "" "Prometheus/2.51.0"
Mar  1 04:47:37 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[113825]: 01/03/2026 09:47:37 : epoch 69a40b17 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc65c004050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:47:38 np0005634532 python3.9[117315]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Mar  1 04:47:38 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:47:38 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 04:47:38 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:47:38.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 04:47:38 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:47:38 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:47:38 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:47:38.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:47:38 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[113825]: 01/03/2026 09:47:38 : epoch 69a40b17 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc658003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:47:38 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v147: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Mar  1 04:47:38 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[113825]: 01/03/2026 09:47:38 : epoch 69a40b17 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc654004140 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:47:39 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[113825]: 01/03/2026 09:47:39 : epoch 69a40b17 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc6680016e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:47:40 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:47:40 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:47:40 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:47:40.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:47:40 np0005634532 python3.9[117471]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Mar  1 04:47:40 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:47:40 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:47:40 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:47:40.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:47:40 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[113825]: 01/03/2026 09:47:40 : epoch 69a40b17 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc65c004050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:47:40 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v148: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Mar  1 04:47:40 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[113825]: 01/03/2026 09:47:40 : epoch 69a40b17 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc658003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:47:41 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 04:47:41 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[113825]: 01/03/2026 09:47:41 : epoch 69a40b17 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc654004140 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:47:41 np0005634532 python3.9[117760]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None attributes=None
Mar  1 04:47:42 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:47:42 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 04:47:42 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:47:42.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 04:47:42 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:47:42 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:47:42 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:47:42.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:47:42 np0005634532 python3.9[117912]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Mar  1 04:47:42 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[113825]: 01/03/2026 09:47:42 : epoch 69a40b17 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc6680016e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:47:42 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v149: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Mar  1 04:47:42 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[113825]: 01/03/2026 09:47:42 : epoch 69a40b17 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc65c004050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:47:43 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[113825]: 01/03/2026 09:47:43 : epoch 69a40b17 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc658003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:47:43 np0005634532 python3.9[118067]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Mar  1 04:47:44 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:47:44 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:47:44 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:47:44.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:47:44 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:47:44 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:47:44 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:47:44.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:47:44 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[113825]: 01/03/2026 09:47:44 : epoch 69a40b17 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc654004140 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:47:44 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[113825]: 01/03/2026 09:47:44 : epoch 69a40b17 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Mar  1 04:47:44 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v150: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 596 B/s rd, 85 B/s wr, 0 op/s
Mar  1 04:47:44 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[113825]: 01/03/2026 09:47:44 : epoch 69a40b17 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc6680037a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:47:45 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[113825]: 01/03/2026 09:47:45 : epoch 69a40b17 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc65c004050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:47:45 np0005634532 python3.9[118226]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Mar  1 04:47:46 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 04:47:46 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:47:46 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:47:46 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:47:46.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:47:46 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:47:46 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:47:46 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:47:46.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:47:46 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[113825]: 01/03/2026 09:47:46 : epoch 69a40b17 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc6680037a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:47:46 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v151: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Mar  1 04:47:46 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[113825]: 01/03/2026 09:47:46 : epoch 69a40b17 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc658003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:47:46 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T09:47:46.960Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Mar  1 04:47:46 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T09:47:46.960Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Mar  1 04:47:46 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T09:47:46.960Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Mar  1 04:47:47 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: ::ffff:192.168.122.100 - - [01/Mar/2026:09:47:47] "GET /metrics HTTP/1.1" 200 48332 "" "Prometheus/2.51.0"
Mar  1 04:47:47 np0005634532 ceph-mgr[76134]: [prometheus INFO cherrypy.access.140604152982112] ::ffff:192.168.122.100 - - [01/Mar/2026:09:47:47] "GET /metrics HTTP/1.1" 200 48332 "" "Prometheus/2.51.0"
Mar  1 04:47:47 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[113825]: 01/03/2026 09:47:47 : epoch 69a40b17 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc654004140 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:47:47 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Mar  1 04:47:47 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Mar  1 04:47:47 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Mar  1 04:47:47 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Mar  1 04:47:47 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Mar  1 04:47:47 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Mar  1 04:47:47 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Mar  1 04:47:47 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Mar  1 04:47:47 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[113825]: 01/03/2026 09:47:47 : epoch 69a40b17 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Mar  1 04:47:47 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[113825]: 01/03/2026 09:47:47 : epoch 69a40b17 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Mar  1 04:47:47 np0005634532 python3.9[118382]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Mar  1 04:47:48 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:47:48 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 04:47:48 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:47:48.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 04:47:48 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:47:48 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:47:48 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:47:48.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:47:48 np0005634532 python3.9[118539]: ansible-ansible.builtin.slurp Invoked with path=/var/lib/edpm-config/os-net-config.returncode src=/var/lib/edpm-config/os-net-config.returncode
Mar  1 04:47:48 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[113825]: 01/03/2026 09:47:48 : epoch 69a40b17 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc65c004050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:47:48 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v152: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Mar  1 04:47:48 np0005634532 ceph-mgr[76134]: [dashboard INFO request] [192.168.122.100:37454] [POST] [200] [0.002s] [4.0B] [20817af2-5b82-457a-867b-2a80c00c18bb] /api/prometheus_receiver
Mar  1 04:47:48 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[113825]: 01/03/2026 09:47:48 : epoch 69a40b17 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc6680037a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:47:49 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[113825]: 01/03/2026 09:47:49 : epoch 69a40b17 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc64c000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:47:49 np0005634532 systemd[1]: session-40.scope: Deactivated successfully.
Mar  1 04:47:49 np0005634532 systemd[1]: session-40.scope: Consumed 18.066s CPU time.
Mar  1 04:47:49 np0005634532 systemd-logind[832]: Session 40 logged out. Waiting for processes to exit.
Mar  1 04:47:49 np0005634532 systemd-logind[832]: Removed session 40.
Mar  1 04:47:50 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:47:50 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:47:50 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:47:50.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:47:50 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:47:50 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:47:50 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:47:50.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:47:50 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[113825]: 01/03/2026 09:47:50 : epoch 69a40b17 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc654004140 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:47:50 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[113825]: 01/03/2026 09:47:50 : epoch 69a40b17 : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Mar  1 04:47:50 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v153: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Mar  1 04:47:50 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[113825]: 01/03/2026 09:47:50 : epoch 69a40b17 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc67c000df0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:47:51 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 04:47:51 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[113825]: 01/03/2026 09:47:51 : epoch 69a40b17 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc6680037a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:47:52 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:47:52 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:47:52 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:47:52.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:47:52 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:47:52 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:47:52 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:47:52.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:47:52 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[113825]: 01/03/2026 09:47:52 : epoch 69a40b17 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc64c0016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:47:52 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v154: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Mar  1 04:47:52 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[113825]: 01/03/2026 09:47:52 : epoch 69a40b17 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc654004140 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:47:53 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[113825]: 01/03/2026 09:47:53 : epoch 69a40b17 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc67c0020a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:47:54 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:47:54 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 04:47:54 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:47:54.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 04:47:54 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:47:54 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:47:54 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:47:54.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:47:54 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[113825]: 01/03/2026 09:47:54 : epoch 69a40b17 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc6680037a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:47:54 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v155: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 1023 B/s wr, 3 op/s
Mar  1 04:47:54 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[113825]: 01/03/2026 09:47:54 : epoch 69a40b17 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc64c0016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:47:55 np0005634532 systemd-logind[832]: New session 41 of user zuul.
Mar  1 04:47:55 np0005634532 systemd[1]: Started Session 41 of User zuul.
Mar  1 04:47:55 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[113825]: 01/03/2026 09:47:55 : epoch 69a40b17 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc654004140 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:47:56 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 04:47:56 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:47:56 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 04:47:56 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:47:56.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 04:47:56 np0005634532 python3.9[118751]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Mar  1 04:47:56 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:47:56 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:47:56 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:47:56.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:47:56 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[113825]: 01/03/2026 09:47:56 : epoch 69a40b17 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc67c0020a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:47:56 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v156: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 938 B/s wr, 2 op/s
Mar  1 04:47:56 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[113825]: 01/03/2026 09:47:56 : epoch 69a40b17 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc6680037a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:47:56 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T09:47:56.961Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Mar  1 04:47:56 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T09:47:56.962Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Mar  1 04:47:57 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: ::ffff:192.168.122.100 - - [01/Mar/2026:09:47:57] "GET /metrics HTTP/1.1" 200 48335 "" "Prometheus/2.51.0"
Mar  1 04:47:57 np0005634532 ceph-mgr[76134]: [prometheus INFO cherrypy.access.140604152982112] ::ffff:192.168.122.100 - - [01/Mar/2026:09:47:57] "GET /metrics HTTP/1.1" 200 48335 "" "Prometheus/2.51.0"
Mar  1 04:47:57 np0005634532 python3.9[118906]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Mar  1 04:47:57 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[113825]: 01/03/2026 09:47:57 : epoch 69a40b17 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc64c0016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:47:57 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-haproxy-nfs-cephfs-compute-0-wdbjdw[99156]: [WARNING] 059/094757 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 1ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Mar  1 04:47:58 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:47:58 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:47:58 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:47:58.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:47:58 np0005634532 python3.9[119101]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Mar  1 04:47:58 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:47:58 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 04:47:58 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:47:58.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 04:47:58 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[113825]: 01/03/2026 09:47:58 : epoch 69a40b17 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc654004140 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:47:58 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v157: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 938 B/s wr, 3 op/s
Mar  1 04:47:58 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T09:47:58.845Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Mar  1 04:47:58 np0005634532 systemd[1]: session-41.scope: Deactivated successfully.
Mar  1 04:47:58 np0005634532 systemd[1]: session-41.scope: Consumed 2.303s CPU time.
Mar  1 04:47:58 np0005634532 systemd-logind[832]: Session 41 logged out. Waiting for processes to exit.
Mar  1 04:47:58 np0005634532 systemd-logind[832]: Removed session 41.
Mar  1 04:47:58 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[113825]: 01/03/2026 09:47:58 : epoch 69a40b17 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc67c002240 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:47:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[113825]: 01/03/2026 09:47:59 : epoch 69a40b17 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc6680037a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:48:00 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:48:00 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000024s ======
Mar  1 04:48:00 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:48:00.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Mar  1 04:48:00 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:48:00 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:48:00 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:48:00.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:48:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[113825]: 01/03/2026 09:48:00 : epoch 69a40b17 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc64c002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:48:00 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v158: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Mar  1 04:48:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[113825]: 01/03/2026 09:48:00 : epoch 69a40b17 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc654004140 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:48:01 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 04:48:01 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[113825]: 01/03/2026 09:48:01 : epoch 69a40b17 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc67c0023e0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:48:02 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:48:02 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 04:48:02 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:48:02.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 04:48:02 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:48:02 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:48:02 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:48:02.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:48:02 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Mar  1 04:48:02 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Mar  1 04:48:02 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[113825]: 01/03/2026 09:48:02 : epoch 69a40b17 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc6680037a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:48:02 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v159: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Mar  1 04:48:02 np0005634532 podman[119254]: 2026-03-01 09:48:02.944696407 +0000 UTC m=+0.426420076 container exec 6664049ace048b4adddae1365cde7c16773d892ae197c1350f58a5e5b1183392 (image=quay.io/ceph/ceph:v19, name=ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mon-compute-0, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Mar  1 04:48:02 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[113825]: 01/03/2026 09:48:02 : epoch 69a40b17 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc64c002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:48:03 np0005634532 podman[119254]: 2026-03-01 09:48:03.036406381 +0000 UTC m=+0.518130040 container exec_died 6664049ace048b4adddae1365cde7c16773d892ae197c1350f58a5e5b1183392 (image=quay.io/ceph/ceph:v19, name=ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Mar  1 04:48:03 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[113825]: 01/03/2026 09:48:03 : epoch 69a40b17 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc654004140 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:48:04 np0005634532 podman[119389]: 2026-03-01 09:48:04.014964804 +0000 UTC m=+0.068605475 container exec 1fcbd8a251307aedf47cfda71fbf2d9d43c45db96f7135811488f98478336155 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Mar  1 04:48:04 np0005634532 podman[119389]: 2026-03-01 09:48:04.024345875 +0000 UTC m=+0.077986536 container exec_died 1fcbd8a251307aedf47cfda71fbf2d9d43c45db96f7135811488f98478336155 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Mar  1 04:48:04 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:48:04 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:48:04 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:48:04.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:48:04 np0005634532 podman[119466]: 2026-03-01 09:48:04.303839604 +0000 UTC m=+0.066890762 container exec 665efd34b6f8ba58c026a42cebb164d63f9fbfee3cb7ebe6634f523e8cd32b85 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Mar  1 04:48:04 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:48:04 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:48:04 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:48:04.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:48:04 np0005634532 podman[119487]: 2026-03-01 09:48:04.578197125 +0000 UTC m=+0.252970284 container exec_died 665efd34b6f8ba58c026a42cebb164d63f9fbfee3cb7ebe6634f523e8cd32b85 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Mar  1 04:48:04 np0005634532 podman[119466]: 2026-03-01 09:48:04.620746906 +0000 UTC m=+0.383797964 container exec_died 665efd34b6f8ba58c026a42cebb164d63f9fbfee3cb7ebe6634f523e8cd32b85 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Mar  1 04:48:04 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[113825]: 01/03/2026 09:48:04 : epoch 69a40b17 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc67c009990 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:48:04 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v160: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 85 B/s wr, 0 op/s
Mar  1 04:48:04 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[113825]: 01/03/2026 09:48:04 : epoch 69a40b17 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc6680037a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:48:05 np0005634532 systemd-logind[832]: New session 42 of user zuul.
Mar  1 04:48:05 np0005634532 systemd[1]: Started Session 42 of User zuul.
Mar  1 04:48:05 np0005634532 podman[119533]: 2026-03-01 09:48:05.141849556 +0000 UTC m=+0.251289212 container exec ae199754250e4490488a372482082794890684d286b81ae5dbba7ee71da86544 (image=quay.io/ceph/haproxy:2.3, name=ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-haproxy-nfs-cephfs-compute-0-wdbjdw)
Mar  1 04:48:05 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[113825]: 01/03/2026 09:48:05 : epoch 69a40b17 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc64c002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:48:05 np0005634532 podman[119533]: 2026-03-01 09:48:05.396693866 +0000 UTC m=+0.506133453 container exec_died ae199754250e4490488a372482082794890684d286b81ae5dbba7ee71da86544 (image=quay.io/ceph/haproxy:2.3, name=ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-haproxy-nfs-cephfs-compute-0-wdbjdw)
Mar  1 04:48:06 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 04:48:06 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:48:06 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 04:48:06 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:48:06.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 04:48:06 np0005634532 python3.9[119720]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Mar  1 04:48:06 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:48:06 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 04:48:06 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:48:06.426 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 04:48:06 np0005634532 podman[119759]: 2026-03-01 09:48:06.547357627 +0000 UTC m=+0.223547658 container exec 05be6442ca5291b879ba04bc347755b274cffc38684767a6255c033b9adb2149 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-keepalived-nfs-cephfs-compute-0-qbujzh, vcs-type=git, io.buildah.version=1.28.2, summary=Provides keepalived on RHEL 9 for Ceph., vendor=Red Hat, Inc., architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.openshift.tags=Ceph keepalived, version=2.2.4, build-date=2023-02-22T09:23:20, description=keepalived for Ceph, io.k8s.display-name=Keepalived on RHEL 9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, release=1793, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, com.redhat.component=keepalived-container, name=keepalived)
Mar  1 04:48:06 np0005634532 podman[119759]: 2026-03-01 09:48:06.58999061 +0000 UTC m=+0.266180681 container exec_died 05be6442ca5291b879ba04bc347755b274cffc38684767a6255c033b9adb2149 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-keepalived-nfs-cephfs-compute-0-qbujzh, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, release=1793, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, build-date=2023-02-22T09:23:20, description=keepalived for Ceph, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.buildah.version=1.28.2, summary=Provides keepalived on RHEL 9 for Ceph., vendor=Red Hat, Inc., version=2.2.4, com.redhat.component=keepalived-container, name=keepalived, vcs-type=git, io.k8s.display-name=Keepalived on RHEL 9, io.openshift.tags=Ceph keepalived, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=)
Mar  1 04:48:06 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[113825]: 01/03/2026 09:48:06 : epoch 69a40b17 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc654004140 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:48:06 np0005634532 podman[119933]: 2026-03-01 09:48:06.81972146 +0000 UTC m=+0.048305223 container exec 42a0f9f88cfb3d053b26a487d34447f1453d64f75d2848aac14e4c543e6921a8 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Mar  1 04:48:06 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v161: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Mar  1 04:48:06 np0005634532 podman[119933]: 2026-03-01 09:48:06.845947747 +0000 UTC m=+0.074531480 container exec_died 42a0f9f88cfb3d053b26a487d34447f1453d64f75d2848aac14e4c543e6921a8 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Mar  1 04:48:06 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[113825]: 01/03/2026 09:48:06 : epoch 69a40b17 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc67c009990 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:48:06 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T09:48:06.963Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Mar  1 04:48:07 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: ::ffff:192.168.122.100 - - [01/Mar/2026:09:48:07] "GET /metrics HTTP/1.1" 200 48335 "" "Prometheus/2.51.0"
Mar  1 04:48:07 np0005634532 ceph-mgr[76134]: [prometheus INFO cherrypy.access.140604152982112] ::ffff:192.168.122.100 - - [01/Mar/2026:09:48:07] "GET /metrics HTTP/1.1" 200 48335 "" "Prometheus/2.51.0"
Mar  1 04:48:07 np0005634532 podman[120052]: 2026-03-01 09:48:07.078333603 +0000 UTC m=+0.069224940 container exec 40811b6273ff1144683076a6db09789183012c587046fdcbb700953f5b5ef2ca (image=quay.io/ceph/grafana:10.4.0, name=ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Mar  1 04:48:07 np0005634532 python3.9[120000]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Mar  1 04:48:07 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[113825]: 01/03/2026 09:48:07 : epoch 69a40b17 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc6680037a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:48:07 np0005634532 podman[120052]: 2026-03-01 09:48:07.269792199 +0000 UTC m=+0.260683446 container exec_died 40811b6273ff1144683076a6db09789183012c587046fdcbb700953f5b5ef2ca (image=quay.io/ceph/grafana:10.4.0, name=ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Mar  1 04:48:07 np0005634532 podman[120243]: 2026-03-01 09:48:07.667549996 +0000 UTC m=+0.061147670 container exec 225e705b62c2d5344d04ef96534eb44bff62165a23fc4cc7bc75fa8452e91ac1 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Mar  1 04:48:07 np0005634532 podman[120243]: 2026-03-01 09:48:07.701405092 +0000 UTC m=+0.095002766 container exec_died 225e705b62c2d5344d04ef96534eb44bff62165a23fc4cc7bc75fa8452e91ac1 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Mar  1 04:48:07 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Mar  1 04:48:07 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 04:48:07 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Mar  1 04:48:07 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 04:48:08 np0005634532 python3.9[120384]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Mar  1 04:48:08 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:48:08 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 04:48:08 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:48:08.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 04:48:08 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Mar  1 04:48:08 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Mar  1 04:48:08 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Mar  1 04:48:08 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Mar  1 04:48:08 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Mar  1 04:48:08 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v162: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 266 B/s rd, 0 B/s wr, 0 op/s
Mar  1 04:48:08 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 04:48:08 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Mar  1 04:48:08 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 04:48:08 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Mar  1 04:48:08 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Mar  1 04:48:08 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Mar  1 04:48:08 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Mar  1 04:48:08 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Mar  1 04:48:08 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Mar  1 04:48:08 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:48:08 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 04:48:08 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:48:08.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 04:48:08 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[113825]: 01/03/2026 09:48:08 : epoch 69a40b17 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc64c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:48:08 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 04:48:08 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 04:48:08 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Mar  1 04:48:08 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 04:48:08 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 04:48:08 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Mar  1 04:48:08 np0005634532 ceph-mon[75825]: log_channel(cluster) log [INF] : Health check cleared: CEPHADM_FAILED_DAEMON (was: 1 failed cephadm daemon(s))
Mar  1 04:48:08 np0005634532 ceph-mon[75825]: log_channel(cluster) log [INF] : Cluster is now healthy
Mar  1 04:48:08 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T09:48:08.846Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Mar  1 04:48:08 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T09:48:08.847Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Mar  1 04:48:08 np0005634532 podman[120624]: 2026-03-01 09:48:08.908468924 +0000 UTC m=+0.036412080 container create d13c3066e68567a0bb10ca20a0951c68d7c83f705fe8e71a0666839c1c53e602 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_mestorf, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Mar  1 04:48:08 np0005634532 systemd[1]: Started libpod-conmon-d13c3066e68567a0bb10ca20a0951c68d7c83f705fe8e71a0666839c1c53e602.scope.
Mar  1 04:48:08 np0005634532 python3.9[120580]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Mar  1 04:48:08 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[113825]: 01/03/2026 09:48:08 : epoch 69a40b17 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc654004140 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:48:08 np0005634532 systemd[1]: Started libcrun container.
Mar  1 04:48:08 np0005634532 podman[120624]: 2026-03-01 09:48:08.892182942 +0000 UTC m=+0.020125878 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Mar  1 04:48:08 np0005634532 podman[120624]: 2026-03-01 09:48:08.993160884 +0000 UTC m=+0.121103830 container init d13c3066e68567a0bb10ca20a0951c68d7c83f705fe8e71a0666839c1c53e602 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_mestorf, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Mar  1 04:48:09 np0005634532 podman[120624]: 2026-03-01 09:48:09.001396267 +0000 UTC m=+0.129339183 container start d13c3066e68567a0bb10ca20a0951c68d7c83f705fe8e71a0666839c1c53e602 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_mestorf, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2)
Mar  1 04:48:09 np0005634532 podman[120624]: 2026-03-01 09:48:09.00514672 +0000 UTC m=+0.133089656 container attach d13c3066e68567a0bb10ca20a0951c68d7c83f705fe8e71a0666839c1c53e602 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_mestorf, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Mar  1 04:48:09 np0005634532 competent_mestorf[120641]: 167 167
Mar  1 04:48:09 np0005634532 systemd[1]: libpod-d13c3066e68567a0bb10ca20a0951c68d7c83f705fe8e71a0666839c1c53e602.scope: Deactivated successfully.
Mar  1 04:48:09 np0005634532 podman[120624]: 2026-03-01 09:48:09.006860592 +0000 UTC m=+0.134803518 container died d13c3066e68567a0bb10ca20a0951c68d7c83f705fe8e71a0666839c1c53e602 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_mestorf, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Mar  1 04:48:09 np0005634532 systemd[1]: var-lib-containers-storage-overlay-c0fc54b05950d0492ad2c1e81b980d5fe6ae0f169b92a87b05692cfb9b0ca36e-merged.mount: Deactivated successfully.
Mar  1 04:48:09 np0005634532 podman[120624]: 2026-03-01 09:48:09.053962645 +0000 UTC m=+0.181905591 container remove d13c3066e68567a0bb10ca20a0951c68d7c83f705fe8e71a0666839c1c53e602 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_mestorf, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Mar  1 04:48:09 np0005634532 systemd[1]: libpod-conmon-d13c3066e68567a0bb10ca20a0951c68d7c83f705fe8e71a0666839c1c53e602.scope: Deactivated successfully.
Mar  1 04:48:09 np0005634532 podman[120668]: 2026-03-01 09:48:09.218054805 +0000 UTC m=+0.046797246 container create 95093dd817fc90c98758a7ef26b3ff515de25bfcf7296817124f0931dfcab4f8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_sinoussi, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Mar  1 04:48:09 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[113825]: 01/03/2026 09:48:09 : epoch 69a40b17 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc67c00a6a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:48:09 np0005634532 systemd[1]: Started libpod-conmon-95093dd817fc90c98758a7ef26b3ff515de25bfcf7296817124f0931dfcab4f8.scope.
Mar  1 04:48:09 np0005634532 systemd[1]: Started libcrun container.
Mar  1 04:48:09 np0005634532 podman[120668]: 2026-03-01 09:48:09.196341889 +0000 UTC m=+0.025084350 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Mar  1 04:48:09 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/248da2e2dc888023c734f3046c313c56912ffaa53b4d8c925d94c8a0d697f7d1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Mar  1 04:48:09 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/248da2e2dc888023c734f3046c313c56912ffaa53b4d8c925d94c8a0d697f7d1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Mar  1 04:48:09 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/248da2e2dc888023c734f3046c313c56912ffaa53b4d8c925d94c8a0d697f7d1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Mar  1 04:48:09 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/248da2e2dc888023c734f3046c313c56912ffaa53b4d8c925d94c8a0d697f7d1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Mar  1 04:48:09 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/248da2e2dc888023c734f3046c313c56912ffaa53b4d8c925d94c8a0d697f7d1/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Mar  1 04:48:09 np0005634532 podman[120668]: 2026-03-01 09:48:09.317216112 +0000 UTC m=+0.145958573 container init 95093dd817fc90c98758a7ef26b3ff515de25bfcf7296817124f0931dfcab4f8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_sinoussi, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0)
Mar  1 04:48:09 np0005634532 podman[120668]: 2026-03-01 09:48:09.326939262 +0000 UTC m=+0.155681733 container start 95093dd817fc90c98758a7ef26b3ff515de25bfcf7296817124f0931dfcab4f8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_sinoussi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid)
Mar  1 04:48:09 np0005634532 podman[120668]: 2026-03-01 09:48:09.330957362 +0000 UTC m=+0.159699813 container attach 95093dd817fc90c98758a7ef26b3ff515de25bfcf7296817124f0931dfcab4f8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_sinoussi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Mar  1 04:48:09 np0005634532 tender_sinoussi[120684]: --> passed data devices: 0 physical, 1 LVM
Mar  1 04:48:09 np0005634532 tender_sinoussi[120684]: --> All data devices are unavailable
Mar  1 04:48:09 np0005634532 systemd[1]: libpod-95093dd817fc90c98758a7ef26b3ff515de25bfcf7296817124f0931dfcab4f8.scope: Deactivated successfully.
Mar  1 04:48:09 np0005634532 podman[120668]: 2026-03-01 09:48:09.674527412 +0000 UTC m=+0.503269873 container died 95093dd817fc90c98758a7ef26b3ff515de25bfcf7296817124f0931dfcab4f8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_sinoussi, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Mar  1 04:48:09 np0005634532 systemd[1]: var-lib-containers-storage-overlay-248da2e2dc888023c734f3046c313c56912ffaa53b4d8c925d94c8a0d697f7d1-merged.mount: Deactivated successfully.
Mar  1 04:48:09 np0005634532 podman[120668]: 2026-03-01 09:48:09.724767602 +0000 UTC m=+0.553510053 container remove 95093dd817fc90c98758a7ef26b3ff515de25bfcf7296817124f0931dfcab4f8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_sinoussi, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Mar  1 04:48:09 np0005634532 systemd[1]: libpod-conmon-95093dd817fc90c98758a7ef26b3ff515de25bfcf7296817124f0931dfcab4f8.scope: Deactivated successfully.
Mar  1 04:48:09 np0005634532 ceph-mon[75825]: Health check cleared: CEPHADM_FAILED_DAEMON (was: 1 failed cephadm daemon(s))
Mar  1 04:48:09 np0005634532 ceph-mon[75825]: Cluster is now healthy
Mar  1 04:48:10 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:48:10 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 04:48:10 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:48:10.164 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 04:48:10 np0005634532 podman[120802]: 2026-03-01 09:48:10.329103018 +0000 UTC m=+0.065790615 container create 10131ee91c14560d69f9d0afd8b73ceea2221ad14739c67115c1c0f5ffeab9e0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_nobel, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Mar  1 04:48:10 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v163: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 266 B/s rd, 0 op/s
Mar  1 04:48:10 np0005634532 systemd[1]: Started libpod-conmon-10131ee91c14560d69f9d0afd8b73ceea2221ad14739c67115c1c0f5ffeab9e0.scope.
Mar  1 04:48:10 np0005634532 podman[120802]: 2026-03-01 09:48:10.300715697 +0000 UTC m=+0.037403344 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Mar  1 04:48:10 np0005634532 systemd[1]: Started libcrun container.
Mar  1 04:48:10 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:48:10 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 04:48:10 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:48:10.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 04:48:10 np0005634532 podman[120802]: 2026-03-01 09:48:10.435190686 +0000 UTC m=+0.171878293 container init 10131ee91c14560d69f9d0afd8b73ceea2221ad14739c67115c1c0f5ffeab9e0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_nobel, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Mar  1 04:48:10 np0005634532 podman[120802]: 2026-03-01 09:48:10.445284075 +0000 UTC m=+0.181971642 container start 10131ee91c14560d69f9d0afd8b73ceea2221ad14739c67115c1c0f5ffeab9e0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_nobel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Mar  1 04:48:10 np0005634532 podman[120802]: 2026-03-01 09:48:10.44950621 +0000 UTC m=+0.186193817 container attach 10131ee91c14560d69f9d0afd8b73ceea2221ad14739c67115c1c0f5ffeab9e0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_nobel, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Mar  1 04:48:10 np0005634532 naughty_nobel[120821]: 167 167
Mar  1 04:48:10 np0005634532 systemd[1]: libpod-10131ee91c14560d69f9d0afd8b73ceea2221ad14739c67115c1c0f5ffeab9e0.scope: Deactivated successfully.
Mar  1 04:48:10 np0005634532 podman[120802]: 2026-03-01 09:48:10.451780966 +0000 UTC m=+0.188468523 container died 10131ee91c14560d69f9d0afd8b73ceea2221ad14739c67115c1c0f5ffeab9e0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_nobel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Mar  1 04:48:10 np0005634532 systemd[1]: var-lib-containers-storage-overlay-b65e580b42ed9aadfd930b2777d4b6ec7ec63dda56a21849017aff2e3b2fca41-merged.mount: Deactivated successfully.
Mar  1 04:48:10 np0005634532 podman[120802]: 2026-03-01 09:48:10.488183734 +0000 UTC m=+0.224871291 container remove 10131ee91c14560d69f9d0afd8b73ceea2221ad14739c67115c1c0f5ffeab9e0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_nobel, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Mar  1 04:48:10 np0005634532 systemd[1]: libpod-conmon-10131ee91c14560d69f9d0afd8b73ceea2221ad14739c67115c1c0f5ffeab9e0.scope: Deactivated successfully.
Mar  1 04:48:10 np0005634532 podman[120943]: 2026-03-01 09:48:10.649216539 +0000 UTC m=+0.044875149 container create 442204bf44d1e19ae8f5e871791f3d93fa48cb730b5134b8e3b1813dbf465718 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_moore, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Mar  1 04:48:10 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[113825]: 01/03/2026 09:48:10 : epoch 69a40b17 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc6680037a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:48:10 np0005634532 systemd[1]: Started libpod-conmon-442204bf44d1e19ae8f5e871791f3d93fa48cb730b5134b8e3b1813dbf465718.scope.
Mar  1 04:48:10 np0005634532 systemd[1]: Started libcrun container.
Mar  1 04:48:10 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b1264bf05c077705a7e4ecfbb7b36288e81dab49dabcd8c0de5d605e689c76bd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Mar  1 04:48:10 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b1264bf05c077705a7e4ecfbb7b36288e81dab49dabcd8c0de5d605e689c76bd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Mar  1 04:48:10 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b1264bf05c077705a7e4ecfbb7b36288e81dab49dabcd8c0de5d605e689c76bd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Mar  1 04:48:10 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b1264bf05c077705a7e4ecfbb7b36288e81dab49dabcd8c0de5d605e689c76bd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Mar  1 04:48:10 np0005634532 podman[120943]: 2026-03-01 09:48:10.722505598 +0000 UTC m=+0.118164228 container init 442204bf44d1e19ae8f5e871791f3d93fa48cb730b5134b8e3b1813dbf465718 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_moore, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Mar  1 04:48:10 np0005634532 podman[120943]: 2026-03-01 09:48:10.629608475 +0000 UTC m=+0.025267095 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Mar  1 04:48:10 np0005634532 podman[120943]: 2026-03-01 09:48:10.732164726 +0000 UTC m=+0.127823366 container start 442204bf44d1e19ae8f5e871791f3d93fa48cb730b5134b8e3b1813dbf465718 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_moore, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Mar  1 04:48:10 np0005634532 podman[120943]: 2026-03-01 09:48:10.737499328 +0000 UTC m=+0.133157928 container attach 442204bf44d1e19ae8f5e871791f3d93fa48cb730b5134b8e3b1813dbf465718 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_moore, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Mar  1 04:48:10 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[113825]: 01/03/2026 09:48:10 : epoch 69a40b17 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc64c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:48:11 np0005634532 jolly_moore[120984]: {
Mar  1 04:48:11 np0005634532 jolly_moore[120984]:    "0": [
Mar  1 04:48:11 np0005634532 jolly_moore[120984]:        {
Mar  1 04:48:11 np0005634532 jolly_moore[120984]:            "devices": [
Mar  1 04:48:11 np0005634532 jolly_moore[120984]:                "/dev/loop3"
Mar  1 04:48:11 np0005634532 jolly_moore[120984]:            ],
Mar  1 04:48:11 np0005634532 jolly_moore[120984]:            "lv_name": "ceph_lv0",
Mar  1 04:48:11 np0005634532 jolly_moore[120984]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Mar  1 04:48:11 np0005634532 jolly_moore[120984]:            "lv_size": "21470642176",
Mar  1 04:48:11 np0005634532 jolly_moore[120984]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=0RN1y3-WDwf-vmRn-3Uec-9cdU-WGcX-8Z6LEG,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=437b1e74-f995-5d64-af1d-257ce01d77ab,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=e5da778e-73b7-4ea1-8a91-750fe3f6aa68,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Mar  1 04:48:11 np0005634532 jolly_moore[120984]:            "lv_uuid": "0RN1y3-WDwf-vmRn-3Uec-9cdU-WGcX-8Z6LEG",
Mar  1 04:48:11 np0005634532 jolly_moore[120984]:            "name": "ceph_lv0",
Mar  1 04:48:11 np0005634532 jolly_moore[120984]:            "path": "/dev/ceph_vg0/ceph_lv0",
Mar  1 04:48:11 np0005634532 jolly_moore[120984]:            "tags": {
Mar  1 04:48:11 np0005634532 jolly_moore[120984]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Mar  1 04:48:11 np0005634532 jolly_moore[120984]:                "ceph.block_uuid": "0RN1y3-WDwf-vmRn-3Uec-9cdU-WGcX-8Z6LEG",
Mar  1 04:48:11 np0005634532 jolly_moore[120984]:                "ceph.cephx_lockbox_secret": "",
Mar  1 04:48:11 np0005634532 jolly_moore[120984]:                "ceph.cluster_fsid": "437b1e74-f995-5d64-af1d-257ce01d77ab",
Mar  1 04:48:11 np0005634532 jolly_moore[120984]:                "ceph.cluster_name": "ceph",
Mar  1 04:48:11 np0005634532 jolly_moore[120984]:                "ceph.crush_device_class": "",
Mar  1 04:48:11 np0005634532 jolly_moore[120984]:                "ceph.encrypted": "0",
Mar  1 04:48:11 np0005634532 jolly_moore[120984]:                "ceph.osd_fsid": "e5da778e-73b7-4ea1-8a91-750fe3f6aa68",
Mar  1 04:48:11 np0005634532 jolly_moore[120984]:                "ceph.osd_id": "0",
Mar  1 04:48:11 np0005634532 jolly_moore[120984]:                "ceph.osdspec_affinity": "default_drive_group",
Mar  1 04:48:11 np0005634532 jolly_moore[120984]:                "ceph.type": "block",
Mar  1 04:48:11 np0005634532 jolly_moore[120984]:                "ceph.vdo": "0",
Mar  1 04:48:11 np0005634532 jolly_moore[120984]:                "ceph.with_tpm": "0"
Mar  1 04:48:11 np0005634532 jolly_moore[120984]:            },
Mar  1 04:48:11 np0005634532 jolly_moore[120984]:            "type": "block",
Mar  1 04:48:11 np0005634532 jolly_moore[120984]:            "vg_name": "ceph_vg0"
Mar  1 04:48:11 np0005634532 jolly_moore[120984]:        }
Mar  1 04:48:11 np0005634532 jolly_moore[120984]:    ]
Mar  1 04:48:11 np0005634532 jolly_moore[120984]: }
Mar  1 04:48:11 np0005634532 systemd[1]: libpod-442204bf44d1e19ae8f5e871791f3d93fa48cb730b5134b8e3b1813dbf465718.scope: Deactivated successfully.
Mar  1 04:48:11 np0005634532 podman[120943]: 2026-03-01 09:48:11.042168058 +0000 UTC m=+0.437826678 container died 442204bf44d1e19ae8f5e871791f3d93fa48cb730b5134b8e3b1813dbf465718 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_moore, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Mar  1 04:48:11 np0005634532 systemd[1]: var-lib-containers-storage-overlay-b1264bf05c077705a7e4ecfbb7b36288e81dab49dabcd8c0de5d605e689c76bd-merged.mount: Deactivated successfully.
Mar  1 04:48:11 np0005634532 podman[120943]: 2026-03-01 09:48:11.098234912 +0000 UTC m=+0.493893522 container remove 442204bf44d1e19ae8f5e871791f3d93fa48cb730b5134b8e3b1813dbf465718 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_moore, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Mar  1 04:48:11 np0005634532 systemd[1]: libpod-conmon-442204bf44d1e19ae8f5e871791f3d93fa48cb730b5134b8e3b1813dbf465718.scope: Deactivated successfully.
Mar  1 04:48:11 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 04:48:11 np0005634532 python3.9[121042]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Mar  1 04:48:11 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[113825]: 01/03/2026 09:48:11 : epoch 69a40b17 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc654004140 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:48:11 np0005634532 podman[121218]: 2026-03-01 09:48:11.629154616 +0000 UTC m=+0.023350667 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Mar  1 04:48:11 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-haproxy-nfs-cephfs-compute-0-wdbjdw[99156]: [WARNING] 059/094811 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 1ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Mar  1 04:48:11 np0005634532 podman[121218]: 2026-03-01 09:48:11.76179304 +0000 UTC m=+0.155989071 container create eafb2794925b6277836cf2ddace94a74f177c4570f61290dad4bd783a43a045a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_rhodes, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Mar  1 04:48:11 np0005634532 systemd[1]: Started libpod-conmon-eafb2794925b6277836cf2ddace94a74f177c4570f61290dad4bd783a43a045a.scope.
Mar  1 04:48:11 np0005634532 systemd[1]: Started libcrun container.
Mar  1 04:48:12 np0005634532 podman[121218]: 2026-03-01 09:48:12.122805289 +0000 UTC m=+0.517001340 container init eafb2794925b6277836cf2ddace94a74f177c4570f61290dad4bd783a43a045a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_rhodes, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Mar  1 04:48:12 np0005634532 podman[121218]: 2026-03-01 09:48:12.130207352 +0000 UTC m=+0.524403383 container start eafb2794925b6277836cf2ddace94a74f177c4570f61290dad4bd783a43a045a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_rhodes, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Mar  1 04:48:12 np0005634532 systemd[1]: libpod-eafb2794925b6277836cf2ddace94a74f177c4570f61290dad4bd783a43a045a.scope: Deactivated successfully.
Mar  1 04:48:12 np0005634532 stupefied_rhodes[121291]: 167 167
Mar  1 04:48:12 np0005634532 conmon[121291]: conmon eafb2794925b6277836c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-eafb2794925b6277836cf2ddace94a74f177c4570f61290dad4bd783a43a045a.scope/container/memory.events
Mar  1 04:48:12 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:48:12 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000024s ======
Mar  1 04:48:12 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:48:12.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Mar  1 04:48:12 np0005634532 podman[121218]: 2026-03-01 09:48:12.256212842 +0000 UTC m=+0.650408903 container attach eafb2794925b6277836cf2ddace94a74f177c4570f61290dad4bd783a43a045a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_rhodes, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Mar  1 04:48:12 np0005634532 podman[121218]: 2026-03-01 09:48:12.257841092 +0000 UTC m=+0.652037163 container died eafb2794925b6277836cf2ddace94a74f177c4570f61290dad4bd783a43a045a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_rhodes, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Mar  1 04:48:12 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v164: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 266 B/s rd, 0 op/s
Mar  1 04:48:12 np0005634532 python3.9[121373]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:48:12 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:48:12 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:48:12 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:48:12.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:48:12 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[113825]: 01/03/2026 09:48:12 : epoch 69a40b17 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc67c00a6a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:48:12 np0005634532 systemd[1]: var-lib-containers-storage-overlay-5e6964dc800f8c3b724caaf5cf26b95d8e59d42f912bde7a8c146327a3622238-merged.mount: Deactivated successfully.
Mar  1 04:48:12 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[113825]: 01/03/2026 09:48:12 : epoch 69a40b17 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc6680037a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:48:13 np0005634532 podman[121218]: 2026-03-01 09:48:13.00284051 +0000 UTC m=+1.397036541 container remove eafb2794925b6277836cf2ddace94a74f177c4570f61290dad4bd783a43a045a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stupefied_rhodes, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Mar  1 04:48:13 np0005634532 systemd[1]: libpod-conmon-eafb2794925b6277836cf2ddace94a74f177c4570f61290dad4bd783a43a045a.scope: Deactivated successfully.
Mar  1 04:48:13 np0005634532 python3.9[121538]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Mar  1 04:48:13 np0005634532 podman[121546]: 2026-03-01 09:48:13.157937279 +0000 UTC m=+0.062067293 container create 507bf9fe0ad14594042d651358729e16aa864d5b90274b353faf116b747a6024 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_gates, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Mar  1 04:48:13 np0005634532 systemd[1]: Started libpod-conmon-507bf9fe0ad14594042d651358729e16aa864d5b90274b353faf116b747a6024.scope.
Mar  1 04:48:13 np0005634532 podman[121546]: 2026-03-01 09:48:13.126934743 +0000 UTC m=+0.031064837 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Mar  1 04:48:13 np0005634532 systemd[1]: Started libcrun container.
Mar  1 04:48:13 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b6c14d0b55cba2dc5417fd01abcd38193dc120fc8cddbe0ea29f521c0ef4f4f8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Mar  1 04:48:13 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b6c14d0b55cba2dc5417fd01abcd38193dc120fc8cddbe0ea29f521c0ef4f4f8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Mar  1 04:48:13 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b6c14d0b55cba2dc5417fd01abcd38193dc120fc8cddbe0ea29f521c0ef4f4f8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Mar  1 04:48:13 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b6c14d0b55cba2dc5417fd01abcd38193dc120fc8cddbe0ea29f521c0ef4f4f8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Mar  1 04:48:13 np0005634532 podman[121546]: 2026-03-01 09:48:13.26293944 +0000 UTC m=+0.167069494 container init 507bf9fe0ad14594042d651358729e16aa864d5b90274b353faf116b747a6024 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_gates, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True)
Mar  1 04:48:13 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[113825]: 01/03/2026 09:48:13 : epoch 69a40b17 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc64c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:48:13 np0005634532 podman[121546]: 2026-03-01 09:48:13.272468655 +0000 UTC m=+0.176598690 container start 507bf9fe0ad14594042d651358729e16aa864d5b90274b353faf116b747a6024 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_gates, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Mar  1 04:48:13 np0005634532 podman[121546]: 2026-03-01 09:48:13.276736711 +0000 UTC m=+0.180866735 container attach 507bf9fe0ad14594042d651358729e16aa864d5b90274b353faf116b747a6024 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_gates, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Mar  1 04:48:13 np0005634532 lvm[121804]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Mar  1 04:48:13 np0005634532 lvm[121804]: VG ceph_vg0 finished
Mar  1 04:48:14 np0005634532 gifted_gates[121576]: {}
Mar  1 04:48:14 np0005634532 systemd[1]: libpod-507bf9fe0ad14594042d651358729e16aa864d5b90274b353faf116b747a6024.scope: Deactivated successfully.
Mar  1 04:48:14 np0005634532 systemd[1]: libpod-507bf9fe0ad14594042d651358729e16aa864d5b90274b353faf116b747a6024.scope: Consumed 1.122s CPU time.
Mar  1 04:48:14 np0005634532 podman[121546]: 2026-03-01 09:48:14.09145269 +0000 UTC m=+0.995582714 container died 507bf9fe0ad14594042d651358729e16aa864d5b90274b353faf116b747a6024 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_gates, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True)
Mar  1 04:48:14 np0005634532 python3.9[121802]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Mar  1 04:48:14 np0005634532 systemd[1]: var-lib-containers-storage-overlay-b6c14d0b55cba2dc5417fd01abcd38193dc120fc8cddbe0ea29f521c0ef4f4f8-merged.mount: Deactivated successfully.
Mar  1 04:48:14 np0005634532 podman[121546]: 2026-03-01 09:48:14.148964629 +0000 UTC m=+1.053094643 container remove 507bf9fe0ad14594042d651358729e16aa864d5b90274b353faf116b747a6024 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gifted_gates, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Mar  1 04:48:14 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:48:14 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:48:14 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:48:14.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:48:14 np0005634532 systemd[1]: libpod-conmon-507bf9fe0ad14594042d651358729e16aa864d5b90274b353faf116b747a6024.scope: Deactivated successfully.
Mar  1 04:48:14 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Mar  1 04:48:14 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 04:48:14 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Mar  1 04:48:14 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 04:48:14 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v165: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 443 B/s rd, 0 op/s
Mar  1 04:48:14 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:48:14 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:48:14 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:48:14.435 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:48:14 np0005634532 python3.9[121922]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/containers/networks/podman.json _original_basename=podman_network_config.j2 recurse=False state=file path=/etc/containers/networks/podman.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:48:14 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[113825]: 01/03/2026 09:48:14 : epoch 69a40b17 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc654004140 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:48:14 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[113825]: 01/03/2026 09:48:14 : epoch 69a40b17 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc67c00a6a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:48:15 np0005634532 python3.9[122075]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Mar  1 04:48:15 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 04:48:15 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 04:48:15 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[113825]: 01/03/2026 09:48:15 : epoch 69a40b17 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc6680037a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:48:15 np0005634532 python3.9[122154]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf _original_basename=registries.conf.j2 recurse=False state=file path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Mar  1 04:48:16 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 04:48:16 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:48:16 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:48:16 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:48:16.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:48:16 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v166: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 266 B/s rd, 0 op/s
Mar  1 04:48:16 np0005634532 python3.9[122309]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Mar  1 04:48:16 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:48:16 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:48:16 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:48:16.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:48:16 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[113825]: 01/03/2026 09:48:16 : epoch 69a40b17 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc64c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:48:16 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[113825]: 01/03/2026 09:48:16 : epoch 69a40b17 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc654004140 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:48:16 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T09:48:16.964Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Mar  1 04:48:17 np0005634532 python3.9[122462]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Mar  1 04:48:17 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: ::ffff:192.168.122.100 - - [01/Mar/2026:09:48:17] "GET /metrics HTTP/1.1" 200 48335 "" "Prometheus/2.51.0"
Mar  1 04:48:17 np0005634532 ceph-mgr[76134]: [prometheus INFO cherrypy.access.140604152982112] ::ffff:192.168.122.100 - - [01/Mar/2026:09:48:17] "GET /metrics HTTP/1.1" 200 48335 "" "Prometheus/2.51.0"
Mar  1 04:48:17 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[113825]: 01/03/2026 09:48:17 : epoch 69a40b17 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc67c00a6a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:48:17 np0005634532 ceph-mgr[76134]: [balancer INFO root] Optimize plan auto_2026-03-01_09:48:17
Mar  1 04:48:17 np0005634532 ceph-mgr[76134]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Mar  1 04:48:17 np0005634532 ceph-mgr[76134]: [balancer INFO root] do_upmap
Mar  1 04:48:17 np0005634532 ceph-mgr[76134]: [balancer INFO root] pools ['default.rgw.log', '.nfs', '.mgr', 'cephfs.cephfs.data', 'images', 'default.rgw.control', 'backups', 'vms', 'cephfs.cephfs.meta', '.rgw.root', 'volumes', 'default.rgw.meta']
Mar  1 04:48:17 np0005634532 ceph-mgr[76134]: [balancer INFO root] prepared 0/10 upmap changes
Mar  1 04:48:17 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/prometheus/health_history}] v 0)
Mar  1 04:48:17 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 04:48:17 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Mar  1 04:48:17 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Mar  1 04:48:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] _maybe_adjust
Mar  1 04:48:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 04:48:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Mar  1 04:48:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 04:48:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Mar  1 04:48:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 04:48:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Mar  1 04:48:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 04:48:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Mar  1 04:48:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 04:48:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Mar  1 04:48:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 04:48:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Mar  1 04:48:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 04:48:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Mar  1 04:48:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 04:48:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Mar  1 04:48:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 04:48:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Mar  1 04:48:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 04:48:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Mar  1 04:48:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 04:48:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Mar  1 04:48:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 04:48:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Mar  1 04:48:17 np0005634532 python3.9[122615]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Mar  1 04:48:17 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Mar  1 04:48:17 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Mar  1 04:48:17 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Mar  1 04:48:17 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: vms, start_after=
Mar  1 04:48:17 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: volumes, start_after=
Mar  1 04:48:17 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: backups, start_after=
Mar  1 04:48:17 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: images, start_after=
Mar  1 04:48:17 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Mar  1 04:48:17 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Mar  1 04:48:17 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Mar  1 04:48:17 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Mar  1 04:48:17 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Mar  1 04:48:17 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: vms, start_after=
Mar  1 04:48:17 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: volumes, start_after=
Mar  1 04:48:17 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: backups, start_after=
Mar  1 04:48:17 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: images, start_after=
Mar  1 04:48:18 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:48:18 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 04:48:18 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:48:18.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 04:48:18 np0005634532 python3.9[122769]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Mar  1 04:48:18 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v167: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 266 B/s rd, 0 op/s
Mar  1 04:48:18 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:48:18 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:48:18 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:48:18.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:48:18 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 04:48:18 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[113825]: 01/03/2026 09:48:18 : epoch 69a40b17 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc6680037a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:48:18 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T09:48:18.848Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Mar  1 04:48:18 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T09:48:18.848Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Mar  1 04:48:18 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[113825]: 01/03/2026 09:48:18 : epoch 69a40b17 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc64c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:48:19 np0005634532 python3.9[122924]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Mar  1 04:48:19 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[113825]: 01/03/2026 09:48:19 : epoch 69a40b17 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc64c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:48:20 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:48:20 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 04:48:20 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:48:20.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 04:48:20 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v168: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 85 B/s wr, 0 op/s
Mar  1 04:48:20 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:48:20 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:48:20 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:48:20.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:48:20 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[113825]: 01/03/2026 09:48:20 : epoch 69a40b17 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc65c002ad0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:48:20 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[113825]: 01/03/2026 09:48:20 : epoch 69a40b17 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Mar  1 04:48:20 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[113825]: 01/03/2026 09:48:20 : epoch 69a40b17 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc6680037a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:48:21 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 04:48:21 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[113825]: 01/03/2026 09:48:21 : epoch 69a40b17 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc658002690 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:48:21 np0005634532 python3.9[123081]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Mar  1 04:48:22 np0005634532 python3.9[123237]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Mar  1 04:48:22 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:48:22 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:48:22 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:48:22.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:48:22 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v169: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Mar  1 04:48:22 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:48:22 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:48:22 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:48:22.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:48:22 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[113825]: 01/03/2026 09:48:22 : epoch 69a40b17 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc64c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:48:22 np0005634532 python3.9[123391]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Mar  1 04:48:22 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[113825]: 01/03/2026 09:48:22 : epoch 69a40b17 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc65c002ad0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:48:23 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[113825]: 01/03/2026 09:48:23 : epoch 69a40b17 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc6680037a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:48:23 np0005634532 python3.9[123544]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-system-running _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Mar  1 04:48:23 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[113825]: 01/03/2026 09:48:23 : epoch 69a40b17 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Mar  1 04:48:23 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[113825]: 01/03/2026 09:48:23 : epoch 69a40b17 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Mar  1 04:48:24 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:48:24 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:48:24 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:48:24.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:48:24 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v170: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Mar  1 04:48:24 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:48:24 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 04:48:24 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:48:24.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 04:48:24 np0005634532 python3.9[123700]: ansible-service_facts Invoked
Mar  1 04:48:24 np0005634532 network[123717]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Mar  1 04:48:24 np0005634532 network[123718]: 'network-scripts' will be removed from distribution in near future.
Mar  1 04:48:24 np0005634532 network[123719]: It is advised to switch to 'NetworkManager' instead for network management.
Mar  1 04:48:24 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[113825]: 01/03/2026 09:48:24 : epoch 69a40b17 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc658002690 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:48:24 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[113825]: 01/03/2026 09:48:24 : epoch 69a40b17 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc64c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:48:25 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[113825]: 01/03/2026 09:48:25 : epoch 69a40b17 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc65c002ad0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:48:26 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 04:48:26 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:48:26 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:48:26 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:48:26.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:48:26 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v171: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 597 B/s wr, 1 op/s
Mar  1 04:48:26 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:48:26 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:48:26 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:48:26.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:48:26 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[113825]: 01/03/2026 09:48:26 : epoch 69a40b17 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc6680037a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:48:26 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[113825]: 01/03/2026 09:48:26 : epoch 69a40b17 : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Mar  1 04:48:26 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[113825]: 01/03/2026 09:48:26 : epoch 69a40b17 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc658002690 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:48:26 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T09:48:26.966Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Mar  1 04:48:27 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: ::ffff:192.168.122.100 - - [01/Mar/2026:09:48:27] "GET /metrics HTTP/1.1" 200 48331 "" "Prometheus/2.51.0"
Mar  1 04:48:27 np0005634532 ceph-mgr[76134]: [prometheus INFO cherrypy.access.140604152982112] ::ffff:192.168.122.100 - - [01/Mar/2026:09:48:27] "GET /metrics HTTP/1.1" 200 48331 "" "Prometheus/2.51.0"
Mar  1 04:48:27 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[113825]: 01/03/2026 09:48:27 : epoch 69a40b17 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc64c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:48:28 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:48:28 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:48:28 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:48:28.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:48:28 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v172: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 597 B/s wr, 1 op/s
Mar  1 04:48:28 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:48:28 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000024s ======
Mar  1 04:48:28 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:48:28.448 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Mar  1 04:48:28 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[113825]: 01/03/2026 09:48:28 : epoch 69a40b17 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc65c002ad0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:48:28 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T09:48:28.849Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Mar  1 04:48:28 np0005634532 ceph-mgr[76134]: [dashboard INFO request] [192.168.122.100:32990] [POST] [200] [0.002s] [4.0B] [48fcbc21-cc14-4ce2-b4d9-c9ad84ed5947] /api/prometheus_receiver
Mar  1 04:48:28 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[113825]: 01/03/2026 09:48:28 : epoch 69a40b17 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc6680037a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:48:29 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[113825]: 01/03/2026 09:48:29 : epoch 69a40b17 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc658002690 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:48:30 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:48:30 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:48:30 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:48:30.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:48:30 np0005634532 python3.9[124183]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Mar  1 04:48:30 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v173: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Mar  1 04:48:30 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:48:30 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:48:30 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:48:30.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:48:30 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[113825]: 01/03/2026 09:48:30 : epoch 69a40b17 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc64c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:48:30 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[113825]: 01/03/2026 09:48:30 : epoch 69a40b17 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc65c002ad0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:48:31 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 04:48:31 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[113825]: 01/03/2026 09:48:31 : epoch 69a40b17 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc6680037a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:48:31 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-haproxy-nfs-cephfs-compute-0-wdbjdw[99156]: [WARNING] 059/094831 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Mar  1 04:48:32 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:48:32 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:48:32 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:48:32.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:48:32 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v174: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 938 B/s wr, 2 op/s
Mar  1 04:48:32 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:48:32 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:48:32 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:48:32.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:48:32 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Mar  1 04:48:32 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Mar  1 04:48:32 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[113825]: 01/03/2026 09:48:32 : epoch 69a40b17 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc658002690 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:48:32 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[113825]: 01/03/2026 09:48:32 : epoch 69a40b17 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc64c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:48:33 np0005634532 python3.9[124365]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Mar  1 04:48:33 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[113825]: 01/03/2026 09:48:33 : epoch 69a40b17 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc65c0038d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:48:34 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:48:34 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:48:34 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:48:34.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:48:34 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v175: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 938 B/s wr, 2 op/s
Mar  1 04:48:34 np0005634532 python3.9[124520]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Mar  1 04:48:34 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:48:34 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 04:48:34 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:48:34.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 04:48:34 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[113825]: 01/03/2026 09:48:34 : epoch 69a40b17 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc6680037a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:48:34 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[113825]: 01/03/2026 09:48:34 : epoch 69a40b17 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc658003490 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:48:35 np0005634532 python3.9[124599]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/chrony.conf _original_basename=chrony.conf.j2 recurse=False state=file path=/etc/chrony.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:48:35 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[113825]: 01/03/2026 09:48:35 : epoch 69a40b17 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc64c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:48:35 np0005634532 python3.9[124752]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Mar  1 04:48:36 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 04:48:36 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:48:36 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000024s ======
Mar  1 04:48:36 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:48:36.205 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Mar  1 04:48:36 np0005634532 python3.9[124833]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/chronyd _original_basename=chronyd.sysconfig.j2 recurse=False state=file path=/etc/sysconfig/chronyd force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:48:36 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v176: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Mar  1 04:48:36 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:48:36 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:48:36 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:48:36.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:48:36 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[113825]: 01/03/2026 09:48:36 : epoch 69a40b17 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc65c0038d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:48:36 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T09:48:36.967Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Mar  1 04:48:36 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[113825]: 01/03/2026 09:48:36 : epoch 69a40b17 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc6680037a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:48:37 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: ::ffff:192.168.122.100 - - [01/Mar/2026:09:48:37] "GET /metrics HTTP/1.1" 200 48333 "" "Prometheus/2.51.0"
Mar  1 04:48:37 np0005634532 ceph-mgr[76134]: [prometheus INFO cherrypy.access.140604152982112] ::ffff:192.168.122.100 - - [01/Mar/2026:09:48:37] "GET /metrics HTTP/1.1" 200 48333 "" "Prometheus/2.51.0"
Mar  1 04:48:37 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[113825]: 01/03/2026 09:48:37 : epoch 69a40b17 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc658003490 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:48:37 np0005634532 python3.9[124986]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network encoding=utf-8 backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:48:38 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:48:38 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000024s ======
Mar  1 04:48:38 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:48:38.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Mar  1 04:48:38 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v177: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Mar  1 04:48:38 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:48:38 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:48:38 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:48:38.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:48:38 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[113825]: 01/03/2026 09:48:38 : epoch 69a40b17 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc64c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:48:38 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[113825]: 01/03/2026 09:48:38 : epoch 69a40b17 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc65c0038d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:48:39 np0005634532 python3.9[125141]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Mar  1 04:48:39 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[113825]: 01/03/2026 09:48:39 : epoch 69a40b17 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc6680037a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:48:40 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:48:40 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:48:40 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:48:40.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:48:40 np0005634532 python3.9[125228]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Mar  1 04:48:40 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v178: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Mar  1 04:48:40 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:48:40 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:48:40 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:48:40.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:48:40 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[113825]: 01/03/2026 09:48:40 : epoch 69a40b17 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc6680037a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:48:40 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[113825]: 01/03/2026 09:48:40 : epoch 69a40b17 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc64c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:48:41 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 04:48:41 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[113825]: 01/03/2026 09:48:41 : epoch 69a40b17 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc65c0038d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:48:41 np0005634532 systemd[1]: session-42.scope: Deactivated successfully.
Mar  1 04:48:41 np0005634532 systemd[1]: session-42.scope: Consumed 23.367s CPU time.
Mar  1 04:48:41 np0005634532 systemd-logind[832]: Session 42 logged out. Waiting for processes to exit.
Mar  1 04:48:41 np0005634532 systemd-logind[832]: Removed session 42.
Mar  1 04:48:42 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:48:42 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:48:42 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:48:42.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:48:42 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v179: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Mar  1 04:48:42 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:48:42 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:48:42 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:48:42.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:48:42 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[113825]: 01/03/2026 09:48:42 : epoch 69a40b17 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc65c0038d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:48:42 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[113825]: 01/03/2026 09:48:42 : epoch 69a40b17 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc658003490 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:48:43 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[113825]: 01/03/2026 09:48:43 : epoch 69a40b17 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc64c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:48:44 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:48:44 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:48:44 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:48:44.220 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:48:44 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v180: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Mar  1 04:48:44 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:48:44 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:48:44 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:48:44.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:48:44 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[113825]: 01/03/2026 09:48:44 : epoch 69a40b17 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc6680037a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:48:44 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[113825]: 01/03/2026 09:48:44 : epoch 69a40b17 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc65c0038d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:48:45 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[113825]: 01/03/2026 09:48:45 : epoch 69a40b17 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc65c0038d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:48:46 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 04:48:46 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:48:46 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000024s ======
Mar  1 04:48:46 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:48:46.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Mar  1 04:48:46 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v181: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Mar  1 04:48:46 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:48:46 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:48:46 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:48:46.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:48:46 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[113825]: 01/03/2026 09:48:46 : epoch 69a40b17 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc64c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:48:46 np0005634532 systemd-logind[832]: New session 43 of user zuul.
Mar  1 04:48:46 np0005634532 systemd[1]: Started Session 43 of User zuul.
Mar  1 04:48:46 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T09:48:46.968Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Mar  1 04:48:46 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T09:48:46.970Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Mar  1 04:48:46 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[113825]: 01/03/2026 09:48:46 : epoch 69a40b17 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc6680037a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:48:47 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: ::ffff:192.168.122.100 - - [01/Mar/2026:09:48:47] "GET /metrics HTTP/1.1" 200 48333 "" "Prometheus/2.51.0"
Mar  1 04:48:47 np0005634532 ceph-mgr[76134]: [prometheus INFO cherrypy.access.140604152982112] ::ffff:192.168.122.100 - - [01/Mar/2026:09:48:47] "GET /metrics HTTP/1.1" 200 48333 "" "Prometheus/2.51.0"
Mar  1 04:48:47 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[113825]: 01/03/2026 09:48:47 : epoch 69a40b17 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc658003490 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:48:47 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Mar  1 04:48:47 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
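
These audit entries show the mgr periodically asking the monitor for the OSD blocklist. The same query can be reproduced out of band; a sketch using the ceph CLI (assumes a reachable cluster, an admin keyring, and that the JSON output parses as a single document):

    import json
    import subprocess

    def osd_blocklist() -> list:
        # Same query the mgr dispatches above, issued via the ceph CLI.
        out = subprocess.run(
            ["ceph", "osd", "blocklist", "ls", "--format", "json"],
            capture_output=True, text=True, check=True,
        ).stdout
        return json.loads(out)
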
Mar  1 04:48:47 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Mar  1 04:48:47 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Mar  1 04:48:47 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Mar  1 04:48:47 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Mar  1 04:48:47 np0005634532 python3.9[125419]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:48:47 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Mar  1 04:48:47 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Mar  1 04:48:48 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:48:48 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:48:48 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:48:48.227 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:48:48 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v182: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Mar  1 04:48:48 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:48:48 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 04:48:48 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:48:48.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 04:48:48 np0005634532 python3.9[125574]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/ceph-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Mar  1 04:48:48 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[113825]: 01/03/2026 09:48:48 : epoch 69a40b17 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc65c0038d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:48:48 np0005634532 python3.9[125653]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/ceph-networks.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/ceph-networks.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:48:48 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[113825]: 01/03/2026 09:48:48 : epoch 69a40b17 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc64c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:48:49 np0005634532 systemd[1]: session-43.scope: Deactivated successfully.
Mar  1 04:48:49 np0005634532 systemd[1]: session-43.scope: Consumed 1.594s CPU time.
Mar  1 04:48:49 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[113825]: 01/03/2026 09:48:49 : epoch 69a40b17 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc6680037a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:48:49 np0005634532 systemd-logind[832]: Session 43 logged out. Waiting for processes to exit.
Mar  1 04:48:49 np0005634532 systemd-logind[832]: Removed session 43.
Mar  1 04:48:50 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:48:50 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000024s ======
Mar  1 04:48:50 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:48:50.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Mar  1 04:48:50 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v183: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Mar  1 04:48:50 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:48:50 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:48:50 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:48:50.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:48:50 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[113825]: 01/03/2026 09:48:50 : epoch 69a40b17 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc658003490 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:48:50 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[113825]: 01/03/2026 09:48:50 : epoch 69a40b17 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc65c0038d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:48:51 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 04:48:51 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[113825]: 01/03/2026 09:48:51 : epoch 69a40b17 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc67c001320 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:48:52 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:48:52 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 04:48:52 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:48:52.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 04:48:52 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v184: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Mar  1 04:48:52 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:48:52 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 04:48:52 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:48:52.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 04:48:52 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[113825]: 01/03/2026 09:48:52 : epoch 69a40b17 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc6680037a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:48:52 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[113825]: 01/03/2026 09:48:52 : epoch 69a40b17 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc658003490 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:48:53 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[113825]: 01/03/2026 09:48:53 : epoch 69a40b17 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc654001090 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:48:54 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:48:54 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:48:54 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:48:54.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:48:54 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v185: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Mar  1 04:48:54 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:48:54 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:48:54 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:48:54.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:48:54 np0005634532 systemd-logind[832]: New session 44 of user zuul.
Mar  1 04:48:54 np0005634532 systemd[1]: Started Session 44 of User zuul.
Mar  1 04:48:54 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[113825]: 01/03/2026 09:48:54 : epoch 69a40b17 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc67c001320 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:48:54 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[113825]: 01/03/2026 09:48:54 : epoch 69a40b17 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc6680037a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:48:55 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[113825]: 01/03/2026 09:48:55 : epoch 69a40b17 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fc658003490 fd 39 proxy ignored for local
Mar  1 04:48:55 np0005634532 kernel: ganesha.nfsd[122952]: segfault at 50 ip 00007fc6ff1ac32e sp 00007fc688ff8210 error 4 in libntirpc.so.5.8[7fc6ff191000+2c000] likely on CPU 3 (core 0, socket 3)
Mar  1 04:48:55 np0005634532 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Mar  1 04:48:55 np0005634532 systemd[1]: Started Process Core Dump (PID 125795/UID 0).
Mar  1 04:48:55 np0005634532 python3.9[125866]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Mar  1 04:48:56 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 04:48:56 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:48:56 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 04:48:56 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:48:56.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 04:48:56 np0005634532 systemd-coredump[125810]: Process 113829 (ganesha.nfsd) of user 0 dumped core.
    Stack trace of thread 60:
    #0  0x00007fc6ff1ac32e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
    ELF object binary architecture: AMD x86-64
Mar  1 04:48:56 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v186: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Mar  1 04:48:56 np0005634532 systemd[1]: systemd-coredump@1-125795-0.service: Deactivated successfully.
Mar  1 04:48:56 np0005634532 systemd[1]: systemd-coredump@1-125795-0.service: Consumed 1.012s CPU time.
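
systemd-coredump captured the crash, so the stored dump can be inspected after the fact. A small sketch that shells out to coredumpctl for the ganesha.nfsd core recorded above; substituting `gdb` for `info` opens it interactively:

    import subprocess

    def ganesha_core_info(pid: int = 113829) -> str:
        # Ask systemd-coredump for the metadata and stack of the stored dump;
        # the PID default matches the crashed ganesha.nfsd logged above.
        return subprocess.run(
            ["coredumpctl", "info", str(pid)],
            capture_output=True, text=True, check=True,
        ).stdout

    # print(ganesha_core_info())
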
Mar  1 04:48:56 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:48:56 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 04:48:56 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:48:56.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 04:48:56 np0005634532 podman[125955]: 2026-03-01 09:48:56.503673162 +0000 UTC m=+0.039770872 container died 665efd34b6f8ba58c026a42cebb164d63f9fbfee3cb7ebe6634f523e8cd32b85 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Mar  1 04:48:56 np0005634532 systemd[1]: var-lib-containers-storage-overlay-c1ab43381e7b5a515a778ff6e7a708dce7669ee1ea8ef1aa8ac685a518ea738c-merged.mount: Deactivated successfully.
Mar  1 04:48:56 np0005634532 systemd[94532]: Created slice User Background Tasks Slice.
Mar  1 04:48:56 np0005634532 systemd[94532]: Starting Cleanup of User's Temporary Files and Directories...
Mar  1 04:48:56 np0005634532 podman[125955]: 2026-03-01 09:48:56.557161482 +0000 UTC m=+0.093259152 container remove 665efd34b6f8ba58c026a42cebb164d63f9fbfee3cb7ebe6634f523e8cd32b85 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Mar  1 04:48:56 np0005634532 systemd[94532]: Finished Cleanup of User's Temporary Files and Directories.
Mar  1 04:48:56 np0005634532 systemd[1]: ceph-437b1e74-f995-5d64-af1d-257ce01d77ab@nfs.cephfs.2.0.compute-0.ljexyw.service: Main process exited, code=exited, status=139/n/a
Mar  1 04:48:56 np0005634532 systemd[1]: ceph-437b1e74-f995-5d64-af1d-257ce01d77ab@nfs.cephfs.2.0.compute-0.ljexyw.service: Failed with result 'exit-code'.
Mar  1 04:48:56 np0005634532 systemd[1]: ceph-437b1e74-f995-5d64-af1d-257ce01d77ab@nfs.cephfs.2.0.compute-0.ljexyw.service: Consumed 1.535s CPU time.
Mar  1 04:48:56 np0005634532 python3.9[126073]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:48:56 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T09:48:56.971Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Mar  1 04:48:57 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: ::ffff:192.168.122.100 - - [01/Mar/2026:09:48:57] "GET /metrics HTTP/1.1" 200 48336 "" "Prometheus/2.51.0"
Mar  1 04:48:57 np0005634532 ceph-mgr[76134]: [prometheus INFO cherrypy.access.140604152982112] ::ffff:192.168.122.100 - - [01/Mar/2026:09:48:57] "GET /metrics HTTP/1.1" 200 48336 "" "Prometheus/2.51.0"
Mar  1 04:48:57 np0005634532 python3.9[126255]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Mar  1 04:48:58 np0005634532 python3.9[126335]: ansible-ansible.legacy.file Invoked with group=zuul mode=0660 owner=zuul dest=/root/.config/containers/auth.json _original_basename=.fl7mxovt recurse=False state=file path=/root/.config/containers/auth.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:48:58 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:48:58 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:48:58 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:48:58.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:48:58 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v187: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Mar  1 04:48:58 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:48:58 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 04:48:58 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:48:58.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 04:48:59 np0005634532 python3.9[126489]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Mar  1 04:48:59 np0005634532 python3.9[126568]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/podman_drop_in _original_basename=.wn1qptv_ recurse=False state=file path=/etc/sysconfig/podman_drop_in force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:49:00 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:49:00 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:49:00 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:49:00.248 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:49:00 np0005634532 python3.9[126723]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Mar  1 04:49:00 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v188: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Mar  1 04:49:00 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:49:00 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:49:00 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:49:00.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:49:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-haproxy-nfs-cephfs-compute-0-wdbjdw[99156]: [WARNING] 059/094900 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
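
haproxy's Layer4 check is a plain TCP connect, and it starts failing with "Connection refused" once ganesha.nfsd has died and nothing listens on the backend. A sketch of the same check (port 2049, the NFS default, is an assumption; the log does not record the backend port):

    import socket

    def l4_check(host: str, port: int = 2049, timeout: float = 2.0) -> bool:
        # Succeeds iff a TCP handshake completes; "Connection refused"
        # surfaces here as an OSError, matching the haproxy reason above.
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False
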
Mar  1 04:49:01 np0005634532 python3.9[126876]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Mar  1 04:49:01 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 04:49:01 np0005634532 python3.9[126955]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Mar  1 04:49:02 np0005634532 python3.9[127109]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Mar  1 04:49:02 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:49:02 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:49:02 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:49:02.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:49:02 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v189: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 op/s
Mar  1 04:49:02 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:49:02 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 04:49:02 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:49:02.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 04:49:02 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Mar  1 04:49:02 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Mar  1 04:49:02 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-haproxy-nfs-cephfs-compute-0-wdbjdw[99156]: [WARNING] 059/094902 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Mar  1 04:49:02 np0005634532 python3.9[127189]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Mar  1 04:49:03 np0005634532 python3.9[127342]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:49:03 np0005634532 python3.9[127495]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Mar  1 04:49:04 np0005634532 python3.9[127575]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:49:04 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:49:04 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:49:04 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:49:04.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:49:04 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v190: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Mar  1 04:49:04 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:49:04 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:49:04 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:49:04.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:49:05 np0005634532 python3.9[127729]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Mar  1 04:49:05 np0005634532 python3.9[127808]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:49:06 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 04:49:06 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:49:06 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 04:49:06 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:49:06.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 04:49:06 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v191: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 op/s
Mar  1 04:49:06 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:49:06 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 04:49:06 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:49:06.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 04:49:06 np0005634532 python3.9[127963]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Mar  1 04:49:06 np0005634532 systemd[1]: Reloading.
Mar  1 04:49:06 np0005634532 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Mar  1 04:49:06 np0005634532 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Mar  1 04:49:06 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T09:49:06.973Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Mar  1 04:49:06 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T09:49:06.974Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Mar  1 04:49:07 np0005634532 systemd[1]: ceph-437b1e74-f995-5d64-af1d-257ce01d77ab@nfs.cephfs.2.0.compute-0.ljexyw.service: Scheduled restart job, restart counter is at 2.
Mar  1 04:49:07 np0005634532 systemd[1]: Stopped Ceph nfs.cephfs.2.0.compute-0.ljexyw for 437b1e74-f995-5d64-af1d-257ce01d77ab.
Mar  1 04:49:07 np0005634532 systemd[1]: ceph-437b1e74-f995-5d64-af1d-257ce01d77ab@nfs.cephfs.2.0.compute-0.ljexyw.service: Consumed 1.535s CPU time.
Mar  1 04:49:07 np0005634532 systemd[1]: Starting Ceph nfs.cephfs.2.0.compute-0.ljexyw for 437b1e74-f995-5d64-af1d-257ce01d77ab...
Mar  1 04:49:07 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: ::ffff:192.168.122.100 - - [01/Mar/2026:09:49:07] "GET /metrics HTTP/1.1" 200 48334 "" "Prometheus/2.51.0"
Mar  1 04:49:07 np0005634532 ceph-mgr[76134]: [prometheus INFO cherrypy.access.140604152982112] ::ffff:192.168.122.100 - - [01/Mar/2026:09:49:07] "GET /metrics HTTP/1.1" 200 48334 "" "Prometheus/2.51.0"
Mar  1 04:49:07 np0005634532 podman[128081]: 2026-03-01 09:49:07.320177708 +0000 UTC m=+0.081819707 container create 3953c0c94563a5cde5c5f30826f045fe1373de1dd7d98c784f28a80b246e4e0a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Mar  1 04:49:07 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8849cf12b3558376d34278131419d4d1dcf5316fc77becc44bf4275d36a322eb/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Mar  1 04:49:07 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8849cf12b3558376d34278131419d4d1dcf5316fc77becc44bf4275d36a322eb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Mar  1 04:49:07 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8849cf12b3558376d34278131419d4d1dcf5316fc77becc44bf4275d36a322eb/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Mar  1 04:49:07 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8849cf12b3558376d34278131419d4d1dcf5316fc77becc44bf4275d36a322eb/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.2.0.compute-0.ljexyw-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Mar  1 04:49:07 np0005634532 podman[128081]: 2026-03-01 09:49:07.38397313 +0000 UTC m=+0.145615019 container init 3953c0c94563a5cde5c5f30826f045fe1373de1dd7d98c784f28a80b246e4e0a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Mar  1 04:49:07 np0005634532 podman[128081]: 2026-03-01 09:49:07.291391669 +0000 UTC m=+0.053033618 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Mar  1 04:49:07 np0005634532 podman[128081]: 2026-03-01 09:49:07.396869098 +0000 UTC m=+0.158510957 container start 3953c0c94563a5cde5c5f30826f045fe1373de1dd7d98c784f28a80b246e4e0a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Mar  1 04:49:07 np0005634532 bash[128081]: 3953c0c94563a5cde5c5f30826f045fe1373de1dd7d98c784f28a80b246e4e0a
Mar  1 04:49:07 np0005634532 systemd[1]: Started Ceph nfs.cephfs.2.0.compute-0.ljexyw for 437b1e74-f995-5d64-af1d-257ce01d77ab.
Mar  1 04:49:07 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[128146]: 01/03/2026 09:49:07 : epoch 69a40b93 : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Mar  1 04:49:07 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[128146]: 01/03/2026 09:49:07 : epoch 69a40b93 : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Mar  1 04:49:07 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[128146]: 01/03/2026 09:49:07 : epoch 69a40b93 : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Mar  1 04:49:07 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[128146]: 01/03/2026 09:49:07 : epoch 69a40b93 : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Mar  1 04:49:07 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[128146]: 01/03/2026 09:49:07 : epoch 69a40b93 : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Mar  1 04:49:07 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[128146]: 01/03/2026 09:49:07 : epoch 69a40b93 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Mar  1 04:49:07 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[128146]: 01/03/2026 09:49:07 : epoch 69a40b93 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Mar  1 04:49:07 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[128146]: 01/03/2026 09:49:07 : epoch 69a40b93 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Mar  1 04:49:07 np0005634532 python3.9[128265]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Mar  1 04:49:08 np0005634532 python3.9[128345]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:49:08 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:49:08 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:49:08 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:49:08.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:49:08 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v192: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 op/s
Mar  1 04:49:08 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:49:08 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 04:49:08 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:49:08.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 04:49:08 np0005634532 python3.9[128499]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Mar  1 04:49:09 np0005634532 python3.9[128578]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:49:10 np0005634532 python3.9[128732]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Mar  1 04:49:10 np0005634532 systemd[1]: Reloading.
Mar  1 04:49:10 np0005634532 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Mar  1 04:49:10 np0005634532 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Mar  1 04:49:10 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:49:10 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:49:10 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:49:10.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:49:10 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v193: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 170 B/s wr, 1 op/s
Mar  1 04:49:10 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:49:10 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000024s ======
Mar  1 04:49:10 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:49:10.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Mar  1 04:49:10 np0005634532 systemd[1]: Starting Create netns directory...
Mar  1 04:49:10 np0005634532 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Mar  1 04:49:10 np0005634532 systemd[1]: netns-placeholder.service: Deactivated successfully.
Mar  1 04:49:10 np0005634532 systemd[1]: Finished Create netns directory.
Mar  1 04:49:11 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 04:49:11 np0005634532 python3.9[128956]: ansible-ansible.builtin.service_facts Invoked
Mar  1 04:49:11 np0005634532 network[128973]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Mar  1 04:49:11 np0005634532 network[128974]: 'network-scripts' will be removed from distribution in near future.
Mar  1 04:49:11 np0005634532 network[128975]: It is advised to switch to 'NetworkManager' instead for network management.
Mar  1 04:49:12 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:49:12 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 04:49:12 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:49:12.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 04:49:12 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v194: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 170 B/s wr, 1 op/s
Mar  1 04:49:12 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:49:12 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000024s ======
Mar  1 04:49:12 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:49:12.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Mar  1 04:49:13 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[128146]: 01/03/2026 09:49:13 : epoch 69a40b93 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Mar  1 04:49:13 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[128146]: 01/03/2026 09:49:13 : epoch 69a40b93 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Mar  1 04:49:14 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:49:14 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:49:14 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:49:14.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:49:14 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v195: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 597 B/s wr, 2 op/s
Mar  1 04:49:14 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:49:14 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000024s ======
Mar  1 04:49:14 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:49:14.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Mar  1 04:49:15 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Mar  1 04:49:15 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Mar  1 04:49:15 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Mar  1 04:49:15 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Mar  1 04:49:15 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Mar  1 04:49:15 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 04:49:15 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Mar  1 04:49:15 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 04:49:15 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Mar  1 04:49:15 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Mar  1 04:49:15 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Mar  1 04:49:15 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Mar  1 04:49:15 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Mar  1 04:49:15 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Mar  1 04:49:15 np0005634532 podman[129257]: 2026-03-01 09:49:15.675748866 +0000 UTC m=+0.040828907 container create f9dabf35d7a4d0084c2cb1ca6ef2c38a47d58b58604524a775d01f9e218222f4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_heisenberg, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Mar  1 04:49:15 np0005634532 systemd[1]: Started libpod-conmon-f9dabf35d7a4d0084c2cb1ca6ef2c38a47d58b58604524a775d01f9e218222f4.scope.
Mar  1 04:49:15 np0005634532 systemd[1]: Started libcrun container.
Mar  1 04:49:15 np0005634532 podman[129257]: 2026-03-01 09:49:15.659691901 +0000 UTC m=+0.024771962 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Mar  1 04:49:15 np0005634532 podman[129257]: 2026-03-01 09:49:15.766741758 +0000 UTC m=+0.131821799 container init f9dabf35d7a4d0084c2cb1ca6ef2c38a47d58b58604524a775d01f9e218222f4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_heisenberg, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True)
Mar  1 04:49:15 np0005634532 podman[129257]: 2026-03-01 09:49:15.775276739 +0000 UTC m=+0.140356820 container start f9dabf35d7a4d0084c2cb1ca6ef2c38a47d58b58604524a775d01f9e218222f4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_heisenberg, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Mar  1 04:49:15 np0005634532 podman[129257]: 2026-03-01 09:49:15.779432661 +0000 UTC m=+0.144512702 container attach f9dabf35d7a4d0084c2cb1ca6ef2c38a47d58b58604524a775d01f9e218222f4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_heisenberg, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Mar  1 04:49:15 np0005634532 optimistic_heisenberg[129273]: 167 167
Mar  1 04:49:15 np0005634532 systemd[1]: libpod-f9dabf35d7a4d0084c2cb1ca6ef2c38a47d58b58604524a775d01f9e218222f4.scope: Deactivated successfully.
Mar  1 04:49:15 np0005634532 podman[129257]: 2026-03-01 09:49:15.782122057 +0000 UTC m=+0.147202118 container died f9dabf35d7a4d0084c2cb1ca6ef2c38a47d58b58604524a775d01f9e218222f4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_heisenberg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2)
Mar  1 04:49:15 np0005634532 systemd[1]: var-lib-containers-storage-overlay-e3b3eb5b2ca642ef8ca678245a43fc952dfcfaef64b80fdaa471cc4bf8d6451a-merged.mount: Deactivated successfully.
Mar  1 04:49:15 np0005634532 podman[129257]: 2026-03-01 09:49:15.820317618 +0000 UTC m=+0.185397659 container remove f9dabf35d7a4d0084c2cb1ca6ef2c38a47d58b58604524a775d01f9e218222f4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_heisenberg, CEPH_REF=squid, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Mar  1 04:49:15 np0005634532 systemd[1]: libpod-conmon-f9dabf35d7a4d0084c2cb1ca6ef2c38a47d58b58604524a775d01f9e218222f4.scope: Deactivated successfully.
Mar  1 04:49:15 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Mar  1 04:49:15 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 04:49:15 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 04:49:15 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Mar  1 04:49:15 np0005634532 podman[129299]: 2026-03-01 09:49:15.965481785 +0000 UTC m=+0.049312826 container create cfb9b2151bee1139a7ecc9edd7340e4dc81b46bfe04628ef954cf50dbbc1a08d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_blackburn, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Mar  1 04:49:16 np0005634532 systemd[1]: Started libpod-conmon-cfb9b2151bee1139a7ecc9edd7340e4dc81b46bfe04628ef954cf50dbbc1a08d.scope.
Mar  1 04:49:16 np0005634532 systemd[1]: Started libcrun container.
Mar  1 04:49:16 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/878f5036d6f18d3f3da59ce0774834e76e94634cdb7f018962304047ef74f0a9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Mar  1 04:49:16 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/878f5036d6f18d3f3da59ce0774834e76e94634cdb7f018962304047ef74f0a9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Mar  1 04:49:16 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/878f5036d6f18d3f3da59ce0774834e76e94634cdb7f018962304047ef74f0a9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Mar  1 04:49:16 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/878f5036d6f18d3f3da59ce0774834e76e94634cdb7f018962304047ef74f0a9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Mar  1 04:49:16 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/878f5036d6f18d3f3da59ce0774834e76e94634cdb7f018962304047ef74f0a9/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Mar  1 04:49:16 np0005634532 podman[129299]: 2026-03-01 09:49:15.941365051 +0000 UTC m=+0.025196162 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Mar  1 04:49:16 np0005634532 podman[129299]: 2026-03-01 09:49:16.059636194 +0000 UTC m=+0.143467285 container init cfb9b2151bee1139a7ecc9edd7340e4dc81b46bfe04628ef954cf50dbbc1a08d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_blackburn, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Mar  1 04:49:16 np0005634532 podman[129299]: 2026-03-01 09:49:16.065809517 +0000 UTC m=+0.149640548 container start cfb9b2151bee1139a7ecc9edd7340e4dc81b46bfe04628ef954cf50dbbc1a08d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_blackburn, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Mar  1 04:49:16 np0005634532 podman[129299]: 2026-03-01 09:49:16.069229201 +0000 UTC m=+0.153060292 container attach cfb9b2151bee1139a7ecc9edd7340e4dc81b46bfe04628ef954cf50dbbc1a08d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_blackburn, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Mar  1 04:49:16 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 04:49:16 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:49:16 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 04:49:16 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:49:16.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 04:49:16 np0005634532 focused_blackburn[129315]: --> passed data devices: 0 physical, 1 LVM
Mar  1 04:49:16 np0005634532 focused_blackburn[129315]: --> All data devices are unavailable
Mar  1 04:49:16 np0005634532 systemd[1]: libpod-cfb9b2151bee1139a7ecc9edd7340e4dc81b46bfe04628ef954cf50dbbc1a08d.scope: Deactivated successfully.
Mar  1 04:49:16 np0005634532 podman[129299]: 2026-03-01 09:49:16.381023403 +0000 UTC m=+0.464854484 container died cfb9b2151bee1139a7ecc9edd7340e4dc81b46bfe04628ef954cf50dbbc1a08d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_blackburn, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Mar  1 04:49:16 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v196: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 596 B/s wr, 1 op/s
Mar  1 04:49:16 np0005634532 systemd[1]: var-lib-containers-storage-overlay-878f5036d6f18d3f3da59ce0774834e76e94634cdb7f018962304047ef74f0a9-merged.mount: Deactivated successfully.
Mar  1 04:49:16 np0005634532 podman[129299]: 2026-03-01 09:49:16.433515406 +0000 UTC m=+0.517346447 container remove cfb9b2151bee1139a7ecc9edd7340e4dc81b46bfe04628ef954cf50dbbc1a08d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_blackburn, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Mar  1 04:49:16 np0005634532 systemd[1]: libpod-conmon-cfb9b2151bee1139a7ecc9edd7340e4dc81b46bfe04628ef954cf50dbbc1a08d.scope: Deactivated successfully.
Mar  1 04:49:16 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:49:16 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 04:49:16 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:49:16.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 04:49:16 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[128146]: 01/03/2026 09:49:16 : epoch 69a40b93 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Mar  1 04:49:16 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[128146]: 01/03/2026 09:49:16 : epoch 69a40b93 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Mar  1 04:49:16 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[128146]: 01/03/2026 09:49:16 : epoch 69a40b93 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Mar  1 04:49:16 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[128146]: 01/03/2026 09:49:16 : epoch 69a40b93 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Mar  1 04:49:16 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T09:49:16.975Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Mar  1 04:49:17 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: ::ffff:192.168.122.100 - - [01/Mar/2026:09:49:17] "GET /metrics HTTP/1.1" 200 48334 "" "Prometheus/2.51.0"
Mar  1 04:49:17 np0005634532 ceph-mgr[76134]: [prometheus INFO cherrypy.access.140604152982112] ::ffff:192.168.122.100 - - [01/Mar/2026:09:49:17] "GET /metrics HTTP/1.1" 200 48334 "" "Prometheus/2.51.0"
Mar  1 04:49:17 np0005634532 podman[129464]: 2026-03-01 09:49:17.095045113 +0000 UTC m=+0.053652653 container create 6a7f9cfd5581c2c78c9c30fb255d264a38d29d92e8249d61879437c07476f346 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_hamilton, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Mar  1 04:49:17 np0005634532 systemd[1]: Started libpod-conmon-6a7f9cfd5581c2c78c9c30fb255d264a38d29d92e8249d61879437c07476f346.scope.
Mar  1 04:49:17 np0005634532 podman[129464]: 2026-03-01 09:49:17.068888779 +0000 UTC m=+0.027496369 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Mar  1 04:49:17 np0005634532 systemd[1]: Started libcrun container.
Mar  1 04:49:17 np0005634532 podman[129464]: 2026-03-01 09:49:17.185708267 +0000 UTC m=+0.144315817 container init 6a7f9cfd5581c2c78c9c30fb255d264a38d29d92e8249d61879437c07476f346 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_hamilton, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Mar  1 04:49:17 np0005634532 podman[129464]: 2026-03-01 09:49:17.198243256 +0000 UTC m=+0.156850806 container start 6a7f9cfd5581c2c78c9c30fb255d264a38d29d92e8249d61879437c07476f346 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_hamilton, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Mar  1 04:49:17 np0005634532 podman[129464]: 2026-03-01 09:49:17.202938191 +0000 UTC m=+0.161545741 container attach 6a7f9cfd5581c2c78c9c30fb255d264a38d29d92e8249d61879437c07476f346 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_hamilton, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Mar  1 04:49:17 np0005634532 distracted_hamilton[129480]: 167 167
Mar  1 04:49:17 np0005634532 systemd[1]: libpod-6a7f9cfd5581c2c78c9c30fb255d264a38d29d92e8249d61879437c07476f346.scope: Deactivated successfully.
Mar  1 04:49:17 np0005634532 podman[129464]: 2026-03-01 09:49:17.205859343 +0000 UTC m=+0.164466863 container died 6a7f9cfd5581c2c78c9c30fb255d264a38d29d92e8249d61879437c07476f346 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_hamilton, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Mar  1 04:49:17 np0005634532 systemd[1]: var-lib-containers-storage-overlay-b5eac7f70e424d54a606fb749b870cd9de69c863e9aafd59b7366238e8d5a6f0-merged.mount: Deactivated successfully.
Mar  1 04:49:17 np0005634532 podman[129464]: 2026-03-01 09:49:17.252266747 +0000 UTC m=+0.210874297 container remove 6a7f9cfd5581c2c78c9c30fb255d264a38d29d92e8249d61879437c07476f346 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_hamilton, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325)
Mar  1 04:49:17 np0005634532 systemd[1]: libpod-conmon-6a7f9cfd5581c2c78c9c30fb255d264a38d29d92e8249d61879437c07476f346.scope: Deactivated successfully.
Mar  1 04:49:17 np0005634532 podman[129506]: 2026-03-01 09:49:17.432257171 +0000 UTC m=+0.051907480 container create 09f0b543ff0d22216c365fa2889301f9a8429e9059dca4a0cb19cedfdc1831d5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_shamir, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Mar  1 04:49:17 np0005634532 ceph-mgr[76134]: [balancer INFO root] Optimize plan auto_2026-03-01_09:49:17
Mar  1 04:49:17 np0005634532 ceph-mgr[76134]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Mar  1 04:49:17 np0005634532 ceph-mgr[76134]: [balancer INFO root] do_upmap
Mar  1 04:49:17 np0005634532 ceph-mgr[76134]: [balancer INFO root] pools ['vms', 'cephfs.cephfs.data', 'default.rgw.log', 'volumes', '.nfs', '.rgw.root', '.mgr', 'cephfs.cephfs.meta', 'backups', 'default.rgw.control', 'images', 'default.rgw.meta']
Mar  1 04:49:17 np0005634532 ceph-mgr[76134]: [balancer INFO root] prepared 0/10 upmap changes
Mar  1 04:49:17 np0005634532 systemd[1]: Started libpod-conmon-09f0b543ff0d22216c365fa2889301f9a8429e9059dca4a0cb19cedfdc1831d5.scope.
Mar  1 04:49:17 np0005634532 podman[129506]: 2026-03-01 09:49:17.408785783 +0000 UTC m=+0.028436132 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Mar  1 04:49:17 np0005634532 systemd[1]: Started libcrun container.
Mar  1 04:49:17 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Mar  1 04:49:17 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Mar  1 04:49:17 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97c63b19d07471aa23277f7e930643b6368a9fe0b251d22e4f19d57634e5dda4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Mar  1 04:49:17 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97c63b19d07471aa23277f7e930643b6368a9fe0b251d22e4f19d57634e5dda4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Mar  1 04:49:17 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97c63b19d07471aa23277f7e930643b6368a9fe0b251d22e4f19d57634e5dda4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Mar  1 04:49:17 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97c63b19d07471aa23277f7e930643b6368a9fe0b251d22e4f19d57634e5dda4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Mar  1 04:49:17 np0005634532 podman[129506]: 2026-03-01 09:49:17.547973342 +0000 UTC m=+0.167623711 container init 09f0b543ff0d22216c365fa2889301f9a8429e9059dca4a0cb19cedfdc1831d5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_shamir, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Mar  1 04:49:17 np0005634532 podman[129506]: 2026-03-01 09:49:17.5584257 +0000 UTC m=+0.178075999 container start 09f0b543ff0d22216c365fa2889301f9a8429e9059dca4a0cb19cedfdc1831d5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_shamir, org.label-schema.build-date=20250325, ceph=True, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Mar  1 04:49:17 np0005634532 podman[129506]: 2026-03-01 09:49:17.563020263 +0000 UTC m=+0.182670572 container attach 09f0b543ff0d22216c365fa2889301f9a8429e9059dca4a0cb19cedfdc1831d5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_shamir, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True)
Mar  1 04:49:17 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Mar  1 04:49:17 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Mar  1 04:49:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] _maybe_adjust
Mar  1 04:49:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 04:49:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Mar  1 04:49:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 04:49:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Mar  1 04:49:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 04:49:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Mar  1 04:49:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 04:49:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Mar  1 04:49:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 04:49:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Mar  1 04:49:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 04:49:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Mar  1 04:49:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 04:49:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Mar  1 04:49:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 04:49:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Mar  1 04:49:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 04:49:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Mar  1 04:49:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 04:49:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Mar  1 04:49:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 04:49:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Mar  1 04:49:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 04:49:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Mar  1 04:49:17 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Mar  1 04:49:17 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: vms, start_after=
Mar  1 04:49:17 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: volumes, start_after=
Mar  1 04:49:17 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: backups, start_after=
Mar  1 04:49:17 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: images, start_after=
Mar  1 04:49:17 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Mar  1 04:49:17 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Mar  1 04:49:17 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Mar  1 04:49:17 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Mar  1 04:49:17 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Mar  1 04:49:17 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: vms, start_after=
Mar  1 04:49:17 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: volumes, start_after=
Mar  1 04:49:17 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: backups, start_after=
Mar  1 04:49:17 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: images, start_after=
Mar  1 04:49:17 np0005634532 unruffled_shamir[129523]: {
Mar  1 04:49:17 np0005634532 unruffled_shamir[129523]:    "0": [
Mar  1 04:49:17 np0005634532 unruffled_shamir[129523]:        {
Mar  1 04:49:17 np0005634532 unruffled_shamir[129523]:            "devices": [
Mar  1 04:49:17 np0005634532 unruffled_shamir[129523]:                "/dev/loop3"
Mar  1 04:49:17 np0005634532 unruffled_shamir[129523]:            ],
Mar  1 04:49:17 np0005634532 unruffled_shamir[129523]:            "lv_name": "ceph_lv0",
Mar  1 04:49:17 np0005634532 unruffled_shamir[129523]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Mar  1 04:49:17 np0005634532 unruffled_shamir[129523]:            "lv_size": "21470642176",
Mar  1 04:49:17 np0005634532 unruffled_shamir[129523]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=0RN1y3-WDwf-vmRn-3Uec-9cdU-WGcX-8Z6LEG,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=437b1e74-f995-5d64-af1d-257ce01d77ab,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=e5da778e-73b7-4ea1-8a91-750fe3f6aa68,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Mar  1 04:49:17 np0005634532 unruffled_shamir[129523]:            "lv_uuid": "0RN1y3-WDwf-vmRn-3Uec-9cdU-WGcX-8Z6LEG",
Mar  1 04:49:17 np0005634532 unruffled_shamir[129523]:            "name": "ceph_lv0",
Mar  1 04:49:17 np0005634532 unruffled_shamir[129523]:            "path": "/dev/ceph_vg0/ceph_lv0",
Mar  1 04:49:17 np0005634532 unruffled_shamir[129523]:            "tags": {
Mar  1 04:49:17 np0005634532 unruffled_shamir[129523]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Mar  1 04:49:17 np0005634532 unruffled_shamir[129523]:                "ceph.block_uuid": "0RN1y3-WDwf-vmRn-3Uec-9cdU-WGcX-8Z6LEG",
Mar  1 04:49:17 np0005634532 unruffled_shamir[129523]:                "ceph.cephx_lockbox_secret": "",
Mar  1 04:49:17 np0005634532 unruffled_shamir[129523]:                "ceph.cluster_fsid": "437b1e74-f995-5d64-af1d-257ce01d77ab",
Mar  1 04:49:17 np0005634532 unruffled_shamir[129523]:                "ceph.cluster_name": "ceph",
Mar  1 04:49:17 np0005634532 unruffled_shamir[129523]:                "ceph.crush_device_class": "",
Mar  1 04:49:17 np0005634532 unruffled_shamir[129523]:                "ceph.encrypted": "0",
Mar  1 04:49:17 np0005634532 unruffled_shamir[129523]:                "ceph.osd_fsid": "e5da778e-73b7-4ea1-8a91-750fe3f6aa68",
Mar  1 04:49:17 np0005634532 unruffled_shamir[129523]:                "ceph.osd_id": "0",
Mar  1 04:49:17 np0005634532 unruffled_shamir[129523]:                "ceph.osdspec_affinity": "default_drive_group",
Mar  1 04:49:17 np0005634532 unruffled_shamir[129523]:                "ceph.type": "block",
Mar  1 04:49:17 np0005634532 unruffled_shamir[129523]:                "ceph.vdo": "0",
Mar  1 04:49:17 np0005634532 unruffled_shamir[129523]:                "ceph.with_tpm": "0"
Mar  1 04:49:17 np0005634532 unruffled_shamir[129523]:            },
Mar  1 04:49:17 np0005634532 unruffled_shamir[129523]:            "type": "block",
Mar  1 04:49:17 np0005634532 unruffled_shamir[129523]:            "vg_name": "ceph_vg0"
Mar  1 04:49:17 np0005634532 unruffled_shamir[129523]:        }
Mar  1 04:49:17 np0005634532 unruffled_shamir[129523]:    ]
Mar  1 04:49:17 np0005634532 unruffled_shamir[129523]: }
Mar  1 04:49:17 np0005634532 systemd[1]: libpod-09f0b543ff0d22216c365fa2889301f9a8429e9059dca4a0cb19cedfdc1831d5.scope: Deactivated successfully.
Mar  1 04:49:17 np0005634532 podman[129506]: 2026-03-01 09:49:17.923876643 +0000 UTC m=+0.543526922 container died 09f0b543ff0d22216c365fa2889301f9a8429e9059dca4a0cb19cedfdc1831d5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_shamir, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Mar  1 04:49:17 np0005634532 systemd[1]: var-lib-containers-storage-overlay-97c63b19d07471aa23277f7e930643b6368a9fe0b251d22e4f19d57634e5dda4-merged.mount: Deactivated successfully.
Mar  1 04:49:17 np0005634532 podman[129506]: 2026-03-01 09:49:17.973150287 +0000 UTC m=+0.592800566 container remove 09f0b543ff0d22216c365fa2889301f9a8429e9059dca4a0cb19cedfdc1831d5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=unruffled_shamir, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Mar  1 04:49:17 np0005634532 systemd[1]: libpod-conmon-09f0b543ff0d22216c365fa2889301f9a8429e9059dca4a0cb19cedfdc1831d5.scope: Deactivated successfully.
Mar  1 04:49:18 np0005634532 python3.9[129660]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Mar  1 04:49:18 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:49:18 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:49:18 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:49:18.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:49:18 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v197: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Mar  1 04:49:18 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:49:18 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 04:49:18 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:49:18.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 04:49:18 np0005634532 python3.9[129805]: ansible-ansible.legacy.file Invoked with mode=0600 dest=/etc/ssh/sshd_config _original_basename=sshd_config_block.j2 recurse=False state=file path=/etc/ssh/sshd_config force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:49:18 np0005634532 podman[129846]: 2026-03-01 09:49:18.554414078 +0000 UTC m=+0.039523375 container create a010795bbffad1619c29673035ed158884c4f0ecdaa9b8becbd71f42fafb681a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_clarke, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS)
Mar  1 04:49:18 np0005634532 systemd[1]: Started libpod-conmon-a010795bbffad1619c29673035ed158884c4f0ecdaa9b8becbd71f42fafb681a.scope.
Mar  1 04:49:18 np0005634532 systemd[1]: Started libcrun container.
Mar  1 04:49:18 np0005634532 podman[129846]: 2026-03-01 09:49:18.624479094 +0000 UTC m=+0.109588421 container init a010795bbffad1619c29673035ed158884c4f0ecdaa9b8becbd71f42fafb681a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_clarke, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid)
Mar  1 04:49:18 np0005634532 podman[129846]: 2026-03-01 09:49:18.535118543 +0000 UTC m=+0.020227870 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Mar  1 04:49:18 np0005634532 podman[129846]: 2026-03-01 09:49:18.634340407 +0000 UTC m=+0.119449704 container start a010795bbffad1619c29673035ed158884c4f0ecdaa9b8becbd71f42fafb681a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_clarke, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Mar  1 04:49:18 np0005634532 podman[129846]: 2026-03-01 09:49:18.637605728 +0000 UTC m=+0.122715025 container attach a010795bbffad1619c29673035ed158884c4f0ecdaa9b8becbd71f42fafb681a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_clarke, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Mar  1 04:49:18 np0005634532 frosty_clarke[129863]: 167 167
Mar  1 04:49:18 np0005634532 systemd[1]: libpod-a010795bbffad1619c29673035ed158884c4f0ecdaa9b8becbd71f42fafb681a.scope: Deactivated successfully.
Mar  1 04:49:18 np0005634532 conmon[129863]: conmon a010795bbffad1619c29 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a010795bbffad1619c29673035ed158884c4f0ecdaa9b8becbd71f42fafb681a.scope/container/memory.events
Mar  1 04:49:18 np0005634532 podman[129846]: 2026-03-01 09:49:18.641430662 +0000 UTC m=+0.126539959 container died a010795bbffad1619c29673035ed158884c4f0ecdaa9b8becbd71f42fafb681a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_clarke, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Mar  1 04:49:18 np0005634532 systemd[1]: var-lib-containers-storage-overlay-35ae8066d92ecfe434a90a26ae8a31aebeb5bcb6c3209fccb5f8be65b25f2478-merged.mount: Deactivated successfully.
Mar  1 04:49:18 np0005634532 podman[129846]: 2026-03-01 09:49:18.672127298 +0000 UTC m=+0.157236595 container remove a010795bbffad1619c29673035ed158884c4f0ecdaa9b8becbd71f42fafb681a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_clarke, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Mar  1 04:49:18 np0005634532 systemd[1]: libpod-conmon-a010795bbffad1619c29673035ed158884c4f0ecdaa9b8becbd71f42fafb681a.scope: Deactivated successfully.
Mar  1 04:49:18 np0005634532 podman[129926]: 2026-03-01 09:49:18.833944495 +0000 UTC m=+0.058956963 container create c086b605691c14751ca98fba1c06aff8d0058f218d285da94f2570f9c94d8bd7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_franklin, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid)
Mar  1 04:49:18 np0005634532 systemd[1]: Started libpod-conmon-c086b605691c14751ca98fba1c06aff8d0058f218d285da94f2570f9c94d8bd7.scope.
Mar  1 04:49:18 np0005634532 podman[129926]: 2026-03-01 09:49:18.806859508 +0000 UTC m=+0.031872036 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Mar  1 04:49:18 np0005634532 systemd[1]: Started libcrun container.
Mar  1 04:49:18 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5740b35a3f30e676600fd33f45815d35e47c311eedb3faf2276c40814e4eee80/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Mar  1 04:49:18 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5740b35a3f30e676600fd33f45815d35e47c311eedb3faf2276c40814e4eee80/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Mar  1 04:49:18 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5740b35a3f30e676600fd33f45815d35e47c311eedb3faf2276c40814e4eee80/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Mar  1 04:49:18 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5740b35a3f30e676600fd33f45815d35e47c311eedb3faf2276c40814e4eee80/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Mar  1 04:49:18 np0005634532 podman[129926]: 2026-03-01 09:49:18.931919199 +0000 UTC m=+0.156931657 container init c086b605691c14751ca98fba1c06aff8d0058f218d285da94f2570f9c94d8bd7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_franklin, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Mar  1 04:49:18 np0005634532 podman[129926]: 2026-03-01 09:49:18.93845447 +0000 UTC m=+0.163466908 container start c086b605691c14751ca98fba1c06aff8d0058f218d285da94f2570f9c94d8bd7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_franklin, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Mar  1 04:49:18 np0005634532 podman[129926]: 2026-03-01 09:49:18.941490535 +0000 UTC m=+0.166503023 container attach c086b605691c14751ca98fba1c06aff8d0058f218d285da94f2570f9c94d8bd7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_franklin, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid)
Mar  1 04:49:19 np0005634532 python3.9[130060]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:49:19 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[128146]: 01/03/2026 09:49:19 : epoch 69a40b93 : compute-0 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Mar  1 04:49:19 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[128146]: 01/03/2026 09:49:19 : epoch 69a40b93 : compute-0 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Mar  1 04:49:19 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[128146]: 01/03/2026 09:49:19 : epoch 69a40b93 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Mar  1 04:49:19 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[128146]: 01/03/2026 09:49:19 : epoch 69a40b93 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Mar  1 04:49:19 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[128146]: 01/03/2026 09:49:19 : epoch 69a40b93 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Mar  1 04:49:19 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[128146]: 01/03/2026 09:49:19 : epoch 69a40b93 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Mar  1 04:49:19 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[128146]: 01/03/2026 09:49:19 : epoch 69a40b93 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Mar  1 04:49:19 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[128146]: 01/03/2026 09:49:19 : epoch 69a40b93 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Mar  1 04:49:19 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[128146]: 01/03/2026 09:49:19 : epoch 69a40b93 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Mar  1 04:49:19 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[128146]: 01/03/2026 09:49:19 : epoch 69a40b93 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Mar  1 04:49:19 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[128146]: 01/03/2026 09:49:19 : epoch 69a40b93 : compute-0 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Mar  1 04:49:19 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[128146]: 01/03/2026 09:49:19 : epoch 69a40b93 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Mar  1 04:49:19 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[128146]: 01/03/2026 09:49:19 : epoch 69a40b93 : compute-0 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Mar  1 04:49:19 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[128146]: 01/03/2026 09:49:19 : epoch 69a40b93 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Mar  1 04:49:19 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[128146]: 01/03/2026 09:49:19 : epoch 69a40b93 : compute-0 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Mar  1 04:49:19 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[128146]: 01/03/2026 09:49:19 : epoch 69a40b93 : compute-0 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Mar  1 04:49:19 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[128146]: 01/03/2026 09:49:19 : epoch 69a40b93 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Mar  1 04:49:19 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[128146]: 01/03/2026 09:49:19 : epoch 69a40b93 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Mar  1 04:49:19 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[128146]: 01/03/2026 09:49:19 : epoch 69a40b93 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Mar  1 04:49:19 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[128146]: 01/03/2026 09:49:19 : epoch 69a40b93 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Mar  1 04:49:19 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[128146]: 01/03/2026 09:49:19 : epoch 69a40b93 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Mar  1 04:49:19 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[128146]: 01/03/2026 09:49:19 : epoch 69a40b93 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Mar  1 04:49:19 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[128146]: 01/03/2026 09:49:19 : epoch 69a40b93 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Mar  1 04:49:19 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[128146]: 01/03/2026 09:49:19 : epoch 69a40b93 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Mar  1 04:49:19 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[128146]: 01/03/2026 09:49:19 : epoch 69a40b93 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Mar  1 04:49:19 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[128146]: 01/03/2026 09:49:19 : epoch 69a40b93 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Mar  1 04:49:19 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[128146]: 01/03/2026 09:49:19 : epoch 69a40b93 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Mar  1 04:49:19 np0005634532 lvm[130286]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Mar  1 04:49:19 np0005634532 lvm[130286]: VG ceph_vg0 finished
Mar  1 04:49:19 np0005634532 agitated_franklin[130000]: {}
Mar  1 04:49:19 np0005634532 systemd[1]: libpod-c086b605691c14751ca98fba1c06aff8d0058f218d285da94f2570f9c94d8bd7.scope: Deactivated successfully.
Mar  1 04:49:19 np0005634532 systemd[1]: libpod-c086b605691c14751ca98fba1c06aff8d0058f218d285da94f2570f9c94d8bd7.scope: Consumed 1.187s CPU time.
Mar  1 04:49:19 np0005634532 podman[129926]: 2026-03-01 09:49:19.759347975 +0000 UTC m=+0.984360423 container died c086b605691c14751ca98fba1c06aff8d0058f218d285da94f2570f9c94d8bd7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_franklin, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Mar  1 04:49:19 np0005634532 systemd[1]: var-lib-containers-storage-overlay-5740b35a3f30e676600fd33f45815d35e47c311eedb3faf2276c40814e4eee80-merged.mount: Deactivated successfully.
Mar  1 04:49:19 np0005634532 podman[129926]: 2026-03-01 09:49:19.797292929 +0000 UTC m=+1.022305367 container remove c086b605691c14751ca98fba1c06aff8d0058f218d285da94f2570f9c94d8bd7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_franklin, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Mar  1 04:49:19 np0005634532 systemd[1]: libpod-conmon-c086b605691c14751ca98fba1c06aff8d0058f218d285da94f2570f9c94d8bd7.scope: Deactivated successfully.
Mar  1 04:49:19 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Mar  1 04:49:19 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 04:49:19 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Mar  1 04:49:19 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 04:49:19 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 04:49:19 np0005634532 python3.9[130296]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Mar  1 04:49:20 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:49:20 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:49:20 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:49:20.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:49:20 np0005634532 python3.9[130415]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/var/lib/edpm-config/firewall/sshd-networks.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/sshd-networks.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:49:20 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v198: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 3.0 KiB/s rd, 1.1 KiB/s wr, 4 op/s
Mar  1 04:49:20 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:49:20 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 04:49:20 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:49:20.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 04:49:20 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[128146]: 01/03/2026 09:49:20 : epoch 69a40b93 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff6f8000df0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:49:20 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 04:49:20 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[128146]: 01/03/2026 09:49:20 : epoch 69a40b93 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff6f0001c00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:49:21 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 04:49:21 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[128146]: 01/03/2026 09:49:21 : epoch 69a40b93 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff6d8000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:49:21 np0005634532 python3.9[130571]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Mar  1 04:49:21 np0005634532 systemd[1]: Starting Time & Date Service...
Mar  1 04:49:21 np0005634532 rsyslogd[1019]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Mar  1 04:49:21 np0005634532 systemd[1]: Started Time & Date Service.
Mar  1 04:49:22 np0005634532 python3.9[130731]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:49:22 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:49:22 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:49:22 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:49:22.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:49:22 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v199: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 938 B/s wr, 3 op/s
Mar  1 04:49:22 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:49:22 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 04:49:22 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:49:22.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 04:49:22 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-haproxy-nfs-cephfs-compute-0-wdbjdw[99156]: [WARNING] 059/094922 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Mar  1 04:49:22 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[128146]: 01/03/2026 09:49:22 : epoch 69a40b93 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff6f8000df0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:49:22 np0005634532 python3.9[130884]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Mar  1 04:49:23 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-haproxy-nfs-cephfs-compute-0-wdbjdw[99156]: [WARNING] 059/094923 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Mar  1 04:49:23 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[128146]: 01/03/2026 09:49:22 : epoch 69a40b93 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff6e4000fa0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:49:23 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[128146]: 01/03/2026 09:49:23 : epoch 69a40b93 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff6f0001c00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:49:23 np0005634532 python3.9[130963]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:49:24 np0005634532 python3.9[131117]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Mar  1 04:49:24 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:49:24 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:49:24 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:49:24.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:49:24 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v200: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 938 B/s wr, 3 op/s
Mar  1 04:49:24 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:49:24 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:49:24 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:49:24.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:49:24 np0005634532 python3.9[131197]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.nd4k6q2h recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:49:24 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[128146]: 01/03/2026 09:49:24 : epoch 69a40b93 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff6d80016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:49:25 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[128146]: 01/03/2026 09:49:25 : epoch 69a40b93 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff6f80021f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:49:25 np0005634532 python3.9[131350]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Mar  1 04:49:25 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[128146]: 01/03/2026 09:49:25 : epoch 69a40b93 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff6e4001ac0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:49:25 np0005634532 python3.9[131429]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:49:26 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 04:49:26 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:49:26 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:49:26 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:49:26.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:49:26 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v201: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 511 B/s wr, 2 op/s
Mar  1 04:49:26 np0005634532 python3.9[131584]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Mar  1 04:49:26 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:49:26 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000024s ======
Mar  1 04:49:26 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:49:26.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Mar  1 04:49:26 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[128146]: 01/03/2026 09:49:26 : epoch 69a40b93 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff6f0001c00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:49:26 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T09:49:26.977Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Mar  1 04:49:26 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T09:49:26.977Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Mar  1 04:49:27 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[128146]: 01/03/2026 09:49:27 : epoch 69a40b93 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff6f0001c00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:49:27 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: ::ffff:192.168.122.100 - - [01/Mar/2026:09:49:27] "GET /metrics HTTP/1.1" 200 48331 "" "Prometheus/2.51.0"
Mar  1 04:49:27 np0005634532 ceph-mgr[76134]: [prometheus INFO cherrypy.access.140604152982112] ::ffff:192.168.122.100 - - [01/Mar/2026:09:49:27] "GET /metrics HTTP/1.1" 200 48331 "" "Prometheus/2.51.0"
Mar  1 04:49:27 np0005634532 python3[131738]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Mar  1 04:49:27 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[128146]: 01/03/2026 09:49:27 : epoch 69a40b93 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff6f80021f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:49:28 np0005634532 python3.9[131892]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Mar  1 04:49:28 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:49:28 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 04:49:28 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:49:28.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 04:49:28 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v202: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 511 B/s wr, 2 op/s
Mar  1 04:49:28 np0005634532 python3.9[131972]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:49:28 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:49:28 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000024s ======
Mar  1 04:49:28 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:49:28.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Mar  1 04:49:28 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[128146]: 01/03/2026 09:49:28 : epoch 69a40b93 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff6e4001ac0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:49:29 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[128146]: 01/03/2026 09:49:29 : epoch 69a40b93 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff6d8001fc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:49:29 np0005634532 python3.9[132125]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Mar  1 04:49:29 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[128146]: 01/03/2026 09:49:29 : epoch 69a40b93 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff6f0001c00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:49:29 np0005634532 python3.9[132251]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1772358568.7026598-894-7060493198737/.source.nft follow=False _original_basename=jump-chain.j2 checksum=3ce353c89bce3b135a0ed688d4e338b2efb15185 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:49:30 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:49:30 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 04:49:30 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:49:30.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 04:49:30 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v203: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 511 B/s wr, 2 op/s
Mar  1 04:49:30 np0005634532 python3.9[132408]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Mar  1 04:49:30 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:49:30 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 04:49:30 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:49:30.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 04:49:30 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[128146]: 01/03/2026 09:49:30 : epoch 69a40b93 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff6f80021f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:49:30 np0005634532 python3.9[132487]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:49:31 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[128146]: 01/03/2026 09:49:31 : epoch 69a40b93 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff6e4001ac0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:49:31 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 04:49:31 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[128146]: 01/03/2026 09:49:31 : epoch 69a40b93 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff6d8001fc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:49:31 np0005634532 python3.9[132665]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Mar  1 04:49:32 np0005634532 python3.9[132745]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:49:32 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:49:32 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000024s ======
Mar  1 04:49:32 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:49:32.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Mar  1 04:49:32 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v204: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Mar  1 04:49:32 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Mar  1 04:49:32 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Mar  1 04:49:32 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:49:32 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 04:49:32 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:49:32.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 04:49:32 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[128146]: 01/03/2026 09:49:32 : epoch 69a40b93 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff6f0001c00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:49:32 np0005634532 python3.9[132899]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Mar  1 04:49:33 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[128146]: 01/03/2026 09:49:33 : epoch 69a40b93 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff6f80095a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:49:33 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[128146]: 01/03/2026 09:49:33 : epoch 69a40b93 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff6e4002f50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:49:33 np0005634532 python3.9[132978]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-rules.nft _original_basename=ruleset.j2 recurse=False state=file path=/etc/nftables/edpm-rules.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:49:34 np0005634532 python3.9[133132]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Mar  1 04:49:34 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:49:34 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:49:34 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:49:34.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:49:34 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v205: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 B/s wr, 0 op/s
Mar  1 04:49:34 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:49:34 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000024s ======
Mar  1 04:49:34 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:49:34.516 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Mar  1 04:49:34 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[128146]: 01/03/2026 09:49:34 : epoch 69a40b93 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff6d8001fc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:49:34 np0005634532 python3.9[133289]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:49:35 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[128146]: 01/03/2026 09:49:35 : epoch 69a40b93 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff6f0001c00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:49:35 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[128146]: 01/03/2026 09:49:35 : epoch 69a40b93 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff6f80095a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:49:35 np0005634532 python3.9[133442]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:49:36 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 04:49:36 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:49:36 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:49:36 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:49:36.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:49:36 np0005634532 python3.9[133597]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:49:36 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v206: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Mar  1 04:49:36 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:49:36 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 04:49:36 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:49:36.518 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 04:49:36 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[128146]: 01/03/2026 09:49:36 : epoch 69a40b93 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff6e4002f50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:49:36 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T09:49:36.978Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Mar  1 04:49:37 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[128146]: 01/03/2026 09:49:37 : epoch 69a40b93 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff6d80032f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:49:37 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: ::ffff:192.168.122.100 - - [01/Mar/2026:09:49:37] "GET /metrics HTTP/1.1" 200 48330 "" "Prometheus/2.51.0"
Mar  1 04:49:37 np0005634532 ceph-mgr[76134]: [prometheus INFO cherrypy.access.140604152982112] ::ffff:192.168.122.100 - - [01/Mar/2026:09:49:37] "GET /metrics HTTP/1.1" 200 48330 "" "Prometheus/2.51.0"
Mar  1 04:49:37 np0005634532 python3.9[133750]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Mar  1 04:49:37 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[128146]: 01/03/2026 09:49:37 : epoch 69a40b93 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff6f0001c00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:49:37 np0005634532 python3.9[133903]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Mar  1 04:49:38 np0005634532 systemd[1]: session-44.scope: Deactivated successfully.
Mar  1 04:49:38 np0005634532 systemd[1]: session-44.scope: Consumed 29.424s CPU time.
Mar  1 04:49:38 np0005634532 systemd-logind[832]: Session 44 logged out. Waiting for processes to exit.
Mar  1 04:49:38 np0005634532 systemd-logind[832]: Removed session 44.
Mar  1 04:49:38 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:49:38 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:49:38 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:49:38.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:49:38 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v207: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Mar  1 04:49:38 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:49:38 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 04:49:38 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:49:38.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 04:49:38 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[128146]: 01/03/2026 09:49:38 : epoch 69a40b93 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff6f800a2b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:49:39 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[128146]: 01/03/2026 09:49:39 : epoch 69a40b93 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff6e4002f50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:49:39 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[128146]: 01/03/2026 09:49:39 : epoch 69a40b93 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff6d80032f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:49:40 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:49:40 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 04:49:40 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:49:40.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 04:49:40 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v208: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Mar  1 04:49:40 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:49:40 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:49:40 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:49:40.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:49:40 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[128146]: 01/03/2026 09:49:40 : epoch 69a40b93 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff6f0001c00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:49:41 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[128146]: 01/03/2026 09:49:41 : epoch 69a40b93 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff6f800a2b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:49:41 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 04:49:41 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[128146]: 01/03/2026 09:49:41 : epoch 69a40b93 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff6e4004050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:49:42 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:49:42 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:49:42 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:49:42.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:49:42 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v209: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Mar  1 04:49:42 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:49:42 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 04:49:42 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:49:42.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 04:49:42 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[128146]: 01/03/2026 09:49:42 : epoch 69a40b93 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff6d8004000 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:49:43 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[128146]: 01/03/2026 09:49:43 : epoch 69a40b93 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff6f0001c00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:49:43 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[128146]: 01/03/2026 09:49:43 : epoch 69a40b93 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff6f800a2b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:49:44 np0005634532 systemd-logind[832]: New session 45 of user zuul.
Mar  1 04:49:44 np0005634532 systemd[1]: Started Session 45 of User zuul.
Mar  1 04:49:44 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:49:44 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:49:44 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:49:44.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:49:44 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v210: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Mar  1 04:49:44 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:49:44 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 04:49:44 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:49:44.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 04:49:44 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[128146]: 01/03/2026 09:49:44 : epoch 69a40b93 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff6e4004050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:49:45 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[128146]: 01/03/2026 09:49:45 : epoch 69a40b93 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff6d8004000 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:49:45 np0005634532 python3.9[134092]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Mar  1 04:49:45 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[128146]: 01/03/2026 09:49:45 : epoch 69a40b93 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff6f0001c00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:49:45 np0005634532 python3.9[134245]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Mar  1 04:49:46 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 04:49:46 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:49:46 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:49:46 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:49:46.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:49:46 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v211: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Mar  1 04:49:46 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:49:46 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:49:46 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:49:46.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:49:46 np0005634532 python3.9[134402]: ansible-ansible.builtin.slurp Invoked with src=/etc/ssh/ssh_known_hosts
Mar  1 04:49:46 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[128146]: 01/03/2026 09:49:46 : epoch 69a40b93 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff6f800a2b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:49:46 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T09:49:46.979Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Mar  1 04:49:47 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[128146]: 01/03/2026 09:49:47 : epoch 69a40b93 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff6e4004050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:49:47 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: ::ffff:192.168.122.100 - - [01/Mar/2026:09:49:47] "GET /metrics HTTP/1.1" 200 48330 "" "Prometheus/2.51.0"
Mar  1 04:49:47 np0005634532 ceph-mgr[76134]: [prometheus INFO cherrypy.access.140604152982112] ::ffff:192.168.122.100 - - [01/Mar/2026:09:49:47] "GET /metrics HTTP/1.1" 200 48330 "" "Prometheus/2.51.0"
Mar  1 04:49:47 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[128146]: 01/03/2026 09:49:47 : epoch 69a40b93 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff6d8004000 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:49:47 np0005634532 python3.9[134555]: ansible-ansible.legacy.stat Invoked with path=/tmp/ansible.k2y57la7 follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Mar  1 04:49:47 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Mar  1 04:49:47 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Mar  1 04:49:47 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Mar  1 04:49:47 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Mar  1 04:49:47 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Mar  1 04:49:47 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Mar  1 04:49:47 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Mar  1 04:49:47 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Mar  1 04:49:48 np0005634532 python3.9[134682]: ansible-ansible.legacy.copy Invoked with dest=/tmp/ansible.k2y57la7 mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1772358587.0101552-102-71517526559743/.source.k2y57la7 _original_basename=.tyzbsu2y follow=False checksum=35ce90e33d3c340d14465be44574a0689f5ddbae backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:49:48 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:49:48 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:49:48 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:49:48.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:49:48 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v212: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Mar  1 04:49:48 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:49:48 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:49:48 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:49:48.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:49:48 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[128146]: 01/03/2026 09:49:48 : epoch 69a40b93 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff6f0003cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:49:49 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[128146]: 01/03/2026 09:49:49 : epoch 69a40b93 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff6f800a2b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:49:49 np0005634532 python3.9[134836]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Mar  1 04:49:49 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[128146]: 01/03/2026 09:49:49 : epoch 69a40b93 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff6e4004050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:49:49 np0005634532 python3.9[134990]: ansible-ansible.builtin.blockinfile Invoked with block=compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDU08zL9v37ramJn0dw/GX+LnniiZDLVr1ufHB/vIoDMP6yH59FLpNVDjEXrrtecyTkPLwKua9rkeH+LqTXMiB0tr9HYoJOEb348Hyr67QWxzKTdjGFGgEqL0L3VPIjZyqza4c/Idsc23VWcoG2BVjC2P1FvakwqeAGDyD3k9CO4JDwxUZk06JC0RBFaU2R2iQ8B3MpTcuymIJj64xDxFYOChy5pBE+Uhx7TYHKeTsgYvBNYsV8TF4h8RtzMxr4uyTRqlyj/AhdZZRDli02ht2fN9xBLfKoGujAuk0NAUXmUT30qYbWjLdFiLOrwS+9Yk9N/YsXZG7sz14WjxJIarNBUwsfSccZx3STdibyj6N9EOTVjTl1FZEM9XR2DybfWf+gPuXgSYOKGsVN90ATw5TnzAO7AMalEjigAv92Mpvas4SGgqfj/0MQ16DqrdjTF4lcOPC4PWrdyT8oJHM/FoUPA0tv+jdeJxqyAWt7SOCBmACsMG8zQwc5prDRdvaQNdE=#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPd7MfC4EfSXNqkx3ZGh1BTjDNatkRaRNQxJVpfOH7en#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKygj9MIB5PtB8xqboDNvY9c1sHw6GGIMIlKuX4Vf22zEAE/0z3WEsO6MS3bJQKJ0YbOlDB8FirRHsR4xyD5G6g=#012compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDOJ164QLG+v0QCrqtWeikJkAAqlq5h4ZiMVzeVdQOaEkqQhYc/2VA3QSCqAFSmG7JdVFkN/LVtEx5g4NHEpVpppXz8yVYkQPsBR2XYD6WZsiWbaOefR+spqTVxOhMg/I/q5rJ4u1gDDVVc+UK/d99tOEugxHzXGIDFeH5NkXCD1ZYOPISetLGdcqRWIkasLnAEpx/FT+ObN03Tglla0WDb+62BSR0zaPhy8lLS6Q57KfiGZpmDQsbzlXJjAorS1T4XKzyvcDnMeQubWI7IL4YWasSie9Xag/ejOje77NietOgR7VJ/6VTXUTo6m+DvTVsibFfdrpb+a2Lf7YUqcAia2r5ukcmhbckf90+2bvBg0s+w6TJQp2CfsMPbiu9XQAZ1jlkej2GQm5DinOrKxpfv7pi05w4ngJd+IltL7rwIp9SQNh9ywSmua6xPMSMQZgwjK4N1j6ztvQNNppNC47CSv5a3lVSO8A4KnjyW9SSxQm0byALScVy0w5nqFXnta68=#012compute-1.ctlplane.example.com,192.168.122.101,compute-1* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOUemQYd3c9hI8oDWBIKmGQ5QqvNjLewcRYP6Hv4PK7N#012compute-1.ctlplane.example.com,192.168.122.101,compute-1* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJME/3XoZEzUtru+DrmzuJsB9ikDG73pBPEngHZK244wqqblcgz9hmV+MIHN8QeqtxjaJFT4WxbJGxe84tZ/wGU=#012compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDBmOUMfQBRkdlCmLdVTvO1yRyjEcD4t0OKIEpbh4ryAc2GWzRMRNDLU60+739+UcQsrdVyxTiXF+1b4R27n+WM5m+PSUaUS47kzM15ltgEEBIwbd8kDwFei103QQ6PPs1fkZNPV4IY5g6yaqTRNygE3+8d4WEAIkBERkGRuKYKK28m/GirDbl7l9VIuQCla39ATTqNIAuB55hGGVkoC+TE5DA0lgQNdUHCvuTNNhYMozVQCbj0TWAW6LGA6TyLOAmowQp6xPhpY9CkvE12YdSx9sF96i6qh8RI/l/w/F0bwaUWLp/Bd4sC5TSiZHeatJnSxjfxf2Z+hi6yyVBiy1zRmyvgrn/40B3pT/sihT/7GEWNaTXopKzOJTOCXF+R1vIjwO6J6u/e6Vk1RG79gX7agHwtKoRVYzed99IaBe2d7JF1rlq6oXPaPpowgr0cdLi25GovhNGA9h8/y/M2MBBk4ls/Pzhjqj+VNmj0sJtLKAdCOWKchWN8lVG4mz/m/R8=#012compute-2.ctlplane.example.com,192.168.122.102,compute-2* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIL01Ha5eRQ6w0kkkdALy1Rwciw5vN8MWCQgukICidqXU#012compute-2.ctlplane.example.com,192.168.122.102,compute-2* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBI/N47g90Bj7eIRfJGkuhjkyR6CMjBlH0FE3oL+RNHXqGcdV4sHpT/3R+7aiSZj+EXGyAG7KQXVmh9UoTuwFT5k=#012 create=True mode=0644 path=/tmp/ansible.k2y57la7 state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:49:50 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:49:50 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 04:49:50 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:49:50.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 04:49:50 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v213: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Mar  1 04:49:50 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:49:50 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:49:50 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:49:50.533 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:49:50 np0005634532 python3.9[135144]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.k2y57la7' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Mar  1 04:49:50 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[128146]: 01/03/2026 09:49:50 : epoch 69a40b93 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff6d8004000 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:49:51 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[128146]: 01/03/2026 09:49:51 : epoch 69a40b93 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff6f0003cc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:49:51 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 04:49:51 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[128146]: 01/03/2026 09:49:51 : epoch 69a40b93 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff6f800a2b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:49:51 np0005634532 python3.9[135326]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.k2y57la7 state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:49:51 np0005634532 systemd[1]: systemd-timedated.service: Deactivated successfully.
Mar  1 04:49:51 np0005634532 systemd[1]: session-45.scope: Deactivated successfully.
Mar  1 04:49:51 np0005634532 systemd[1]: session-45.scope: Consumed 4.683s CPU time.
Mar  1 04:49:51 np0005634532 systemd-logind[832]: Session 45 logged out. Waiting for processes to exit.
Mar  1 04:49:51 np0005634532 systemd-logind[832]: Removed session 45.
Mar  1 04:49:52 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:49:52 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:49:52 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:49:52.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:49:52 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v214: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Mar  1 04:49:52 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:49:52 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:49:52 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:49:52.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:49:52 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[128146]: 01/03/2026 09:49:52 : epoch 69a40b93 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff6e4004050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:49:53 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[128146]: 01/03/2026 09:49:53 : epoch 69a40b93 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff6d8004000 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:49:53 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[128146]: 01/03/2026 09:49:53 : epoch 69a40b93 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff6d8004000 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:49:54 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:49:54 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:49:54 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:49:54.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:49:54 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v215: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Mar  1 04:49:54 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:49:54 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:49:54 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:49:54.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:49:54 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[128146]: 01/03/2026 09:49:54 : epoch 69a40b93 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff6f800a2b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:49:55 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[128146]: 01/03/2026 09:49:55 : epoch 69a40b93 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff6e4004050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:49:55 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[128146]: 01/03/2026 09:49:55 : epoch 69a40b93 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff6fc0013a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:49:56 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 04:49:56 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:49:56 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 04:49:56 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:49:56.396 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 04:49:56 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v216: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Mar  1 04:49:56 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:49:56 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:49:56 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:49:56.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:49:56 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[128146]: 01/03/2026 09:49:56 : epoch 69a40b93 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff6d8004000 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:49:56 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T09:49:56.980Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Mar  1 04:49:56 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T09:49:56.981Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Mar  1 04:49:57 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[128146]: 01/03/2026 09:49:57 : epoch 69a40b93 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff6f800a2b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:49:57 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: ::ffff:192.168.122.100 - - [01/Mar/2026:09:49:57] "GET /metrics HTTP/1.1" 200 48333 "" "Prometheus/2.51.0"
Mar  1 04:49:57 np0005634532 ceph-mgr[76134]: [prometheus INFO cherrypy.access.140604152982112] ::ffff:192.168.122.100 - - [01/Mar/2026:09:49:57] "GET /metrics HTTP/1.1" 200 48333 "" "Prometheus/2.51.0"
Mar  1 04:49:57 np0005634532 systemd-logind[832]: New session 46 of user zuul.
Mar  1 04:49:57 np0005634532 systemd[1]: Started Session 46 of User zuul.
Mar  1 04:49:57 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[128146]: 01/03/2026 09:49:57 : epoch 69a40b93 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff6e4004050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:49:58 np0005634532 python3.9[135521]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Mar  1 04:49:58 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:49:58 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 04:49:58 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:49:58.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 04:49:58 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v217: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Mar  1 04:49:58 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:49:58 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 04:49:58 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:49:58.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 04:49:58 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[128146]: 01/03/2026 09:49:58 : epoch 69a40b93 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff6fc002090 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:49:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[128146]: 01/03/2026 09:49:59 : epoch 69a40b93 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff6d8004000 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:49:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[128146]: 01/03/2026 09:49:59 : epoch 69a40b93 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff6f800a2b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:49:59 np0005634532 python3.9[135679]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Mar  1 04:50:00 np0005634532 ceph-mon[75825]: log_channel(cluster) log [INF] : overall HEALTH_OK
Mar  1 04:50:00 np0005634532 python3.9[135835]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Mar  1 04:50:00 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:50:00 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:50:00 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:50:00.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:50:00 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v218: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Mar  1 04:50:00 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:50:00 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:50:00 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:50:00.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:50:00 np0005634532 ceph-mon[75825]: overall HEALTH_OK
Mar  1 04:50:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[128146]: 01/03/2026 09:50:00 : epoch 69a40b93 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff6f800a2b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:50:01 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[128146]: 01/03/2026 09:50:01 : epoch 69a40b93 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff6f800a2b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:50:01 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 04:50:01 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[128146]: 01/03/2026 09:50:01 : epoch 69a40b93 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff6d8004000 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:50:02 np0005634532 python3.9[135991]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Mar  1 04:50:02 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:50:02 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 04:50:02 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:50:02.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 04:50:02 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v219: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Mar  1 04:50:02 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Mar  1 04:50:02 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Mar  1 04:50:02 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:50:02 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 04:50:02 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:50:02.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 04:50:02 np0005634532 systemd-logind[832]: Session 18 logged out. Waiting for processes to exit.
Mar  1 04:50:02 np0005634532 systemd[1]: session-18.scope: Deactivated successfully.
Mar  1 04:50:02 np0005634532 systemd[1]: session-18.scope: Consumed 1min 36.615s CPU time.
Mar  1 04:50:02 np0005634532 systemd-logind[832]: Removed session 18.
Mar  1 04:50:02 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[128146]: 01/03/2026 09:50:02 : epoch 69a40b93 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff6fc002090 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:50:02 np0005634532 python3.9[136146]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Mar  1 04:50:03 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[128146]: 01/03/2026 09:50:03 : epoch 69a40b93 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff6fc002090 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:50:03 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[128146]: 01/03/2026 09:50:03 : epoch 69a40b93 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff6f800a2b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:50:03 np0005634532 python3.9[136299]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:50:04 np0005634532 systemd[1]: session-46.scope: Deactivated successfully.
Mar  1 04:50:04 np0005634532 systemd[1]: session-46.scope: Consumed 3.924s CPU time.
Mar  1 04:50:04 np0005634532 systemd-logind[832]: Session 46 logged out. Waiting for processes to exit.
Mar  1 04:50:04 np0005634532 systemd-logind[832]: Removed session 46.
Mar  1 04:50:04 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:50:04 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:50:04 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:50:04.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:50:04 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v220: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Mar  1 04:50:04 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:50:04 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:50:04 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:50:04.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:50:04 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[128146]: 01/03/2026 09:50:04 : epoch 69a40b93 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff6d8004000 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:50:05 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[128146]: 01/03/2026 09:50:05 : epoch 69a40b93 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff6d8004000 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:50:05 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[128146]: 01/03/2026 09:50:05 : epoch 69a40b93 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff6e40041f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:50:06 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 04:50:06 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:50:06 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:50:06 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:50:06.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:50:06 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v221: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Mar  1 04:50:06 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:50:06 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:50:06 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:50:06.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:50:06 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[128146]: 01/03/2026 09:50:06 : epoch 69a40b93 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff6f800a2b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:50:06 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T09:50:06.981Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Mar  1 04:50:06 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T09:50:06.981Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Mar  1 04:50:06 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T09:50:06.982Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Mar  1 04:50:07 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[128146]: 01/03/2026 09:50:07 : epoch 69a40b93 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff6fc003190 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:50:07 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: ::ffff:192.168.122.100 - - [01/Mar/2026:09:50:07] "GET /metrics HTTP/1.1" 200 48333 "" "Prometheus/2.51.0"
Mar  1 04:50:07 np0005634532 ceph-mgr[76134]: [prometheus INFO cherrypy.access.140604152982112] ::ffff:192.168.122.100 - - [01/Mar/2026:09:50:07] "GET /metrics HTTP/1.1" 200 48333 "" "Prometheus/2.51.0"
Mar  1 04:50:07 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[128146]: 01/03/2026 09:50:07 : epoch 69a40b93 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff6d8004000 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:50:08 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:50:08 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 04:50:08 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:50:08.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 04:50:08 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v222: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Mar  1 04:50:08 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:50:08 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:50:08 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:50:08.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:50:08 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[128146]: 01/03/2026 09:50:08 : epoch 69a40b93 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff6e40041f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:50:09 np0005634532 systemd-logind[832]: New session 47 of user zuul.
Mar  1 04:50:09 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[128146]: 01/03/2026 09:50:09 : epoch 69a40b93 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff6f800a2b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:50:09 np0005634532 systemd[1]: Started Session 47 of User zuul.
Mar  1 04:50:09 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[128146]: 01/03/2026 09:50:09 : epoch 69a40b93 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff6fc003190 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:50:09 np0005634532 python3.9[136483]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Mar  1 04:50:10 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:50:10 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:50:10 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:50:10.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:50:10 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v223: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Mar  1 04:50:10 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:50:10 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000024s ======
Mar  1 04:50:10 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:50:10.552 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Mar  1 04:50:10 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[128146]: 01/03/2026 09:50:10 : epoch 69a40b93 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff6d8004000 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:50:11 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[128146]: 01/03/2026 09:50:11 : epoch 69a40b93 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff6e40041f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:50:11 np0005634532 python3.9[136642]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Mar  1 04:50:11 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 04:50:11 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[128146]: 01/03/2026 09:50:11 : epoch 69a40b93 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff6f800a2b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:50:12 np0005634532 python3.9[136755]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Mar  1 04:50:12 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:50:12 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 04:50:12 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:50:12.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 04:50:12 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v224: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Mar  1 04:50:12 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:50:12 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 04:50:12 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:50:12.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 04:50:12 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[128146]: 01/03/2026 09:50:12 : epoch 69a40b93 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff6fc003ea0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:50:13 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[128146]: 01/03/2026 09:50:13 : epoch 69a40b93 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff6d8004000 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:50:13 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[128146]: 01/03/2026 09:50:13 : epoch 69a40b93 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff6d8004000 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:50:14 np0005634532 python3.9[136910]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Mar  1 04:50:14 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:50:14 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:50:14 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:50:14.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:50:14 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v225: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Mar  1 04:50:14 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:50:14 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:50:14 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:50:14.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:50:14 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[128146]: 01/03/2026 09:50:14 : epoch 69a40b93 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff6f800a2b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:50:15 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[128146]: 01/03/2026 09:50:15 : epoch 69a40b93 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff6e40041f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:50:15 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[128146]: 01/03/2026 09:50:15 : epoch 69a40b93 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff6fc003ea0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:50:15 np0005634532 python3.9[137062]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Mar  1 04:50:16 np0005634532 python3.9[137213]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Mar  1 04:50:16 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 04:50:16 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:50:16 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000024s ======
Mar  1 04:50:16 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:50:16.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Mar  1 04:50:16 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v226: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Mar  1 04:50:16 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:50:16 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000024s ======
Mar  1 04:50:16 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:50:16.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Mar  1 04:50:16 np0005634532 python3.9[137364]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/nova follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Mar  1 04:50:16 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[128146]: 01/03/2026 09:50:16 : epoch 69a40b93 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff6d8004000 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:50:16 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T09:50:16.984Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Mar  1 04:50:17 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[128146]: 01/03/2026 09:50:17 : epoch 69a40b93 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff6d8004000 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:50:17 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: ::ffff:192.168.122.100 - - [01/Mar/2026:09:50:17] "GET /metrics HTTP/1.1" 200 48333 "" "Prometheus/2.51.0"
Mar  1 04:50:17 np0005634532 ceph-mgr[76134]: [prometheus INFO cherrypy.access.140604152982112] ::ffff:192.168.122.100 - - [01/Mar/2026:09:50:17] "GET /metrics HTTP/1.1" 200 48333 "" "Prometheus/2.51.0"
Mar  1 04:50:17 np0005634532 systemd[1]: session-47.scope: Deactivated successfully.
Mar  1 04:50:17 np0005634532 systemd[1]: session-47.scope: Consumed 5.718s CPU time.
Mar  1 04:50:17 np0005634532 systemd-logind[832]: Session 47 logged out. Waiting for processes to exit.
Mar  1 04:50:17 np0005634532 systemd-logind[832]: Removed session 47.
Mar  1 04:50:17 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[128146]: 01/03/2026 09:50:17 : epoch 69a40b93 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff6e40041f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:50:17 np0005634532 ceph-mgr[76134]: [balancer INFO root] Optimize plan auto_2026-03-01_09:50:17
Mar  1 04:50:17 np0005634532 ceph-mgr[76134]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Mar  1 04:50:17 np0005634532 ceph-mgr[76134]: [balancer INFO root] do_upmap
Mar  1 04:50:17 np0005634532 ceph-mgr[76134]: [balancer INFO root] pools ['cephfs.cephfs.data', 'cephfs.cephfs.meta', 'default.rgw.log', 'images', 'backups', '.rgw.root', 'default.rgw.control', '.nfs', 'default.rgw.meta', '.mgr', 'vms', 'volumes']
Mar  1 04:50:17 np0005634532 ceph-mgr[76134]: [balancer INFO root] prepared 0/10 upmap changes
Mar  1 04:50:17 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Mar  1 04:50:17 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Mar  1 04:50:17 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Mar  1 04:50:17 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Mar  1 04:50:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] _maybe_adjust
Mar  1 04:50:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 04:50:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Mar  1 04:50:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 04:50:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Mar  1 04:50:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 04:50:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Mar  1 04:50:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 04:50:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Mar  1 04:50:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 04:50:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Mar  1 04:50:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 04:50:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Mar  1 04:50:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 04:50:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Mar  1 04:50:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 04:50:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Mar  1 04:50:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 04:50:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Mar  1 04:50:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 04:50:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Mar  1 04:50:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 04:50:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Mar  1 04:50:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 04:50:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Mar  1 04:50:17 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Mar  1 04:50:17 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: vms, start_after=
Mar  1 04:50:17 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: volumes, start_after=
Mar  1 04:50:17 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Mar  1 04:50:17 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Mar  1 04:50:17 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: backups, start_after=
Mar  1 04:50:17 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: images, start_after=
Mar  1 04:50:17 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Mar  1 04:50:17 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Mar  1 04:50:17 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Mar  1 04:50:17 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: vms, start_after=
Mar  1 04:50:17 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: volumes, start_after=
Mar  1 04:50:17 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: backups, start_after=
Mar  1 04:50:17 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: images, start_after=
Mar  1 04:50:18 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:50:18 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:50:18 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:50:18.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:50:18 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v227: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Mar  1 04:50:18 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:50:18 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 04:50:18 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:50:18.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
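The radosgw "beast" lines are the frontend's access log: request pointer, client, user (anonymous here, since these are the HAProxy HEAD / health probes arriving from .100 and .102 every two seconds), timestamp, request line, status, byte count and latency. A sketch of pulling the fields out with a regex; the field layout is inferred from the lines above, not taken from radosgw documentation:

    import re

    BEAST_RE = re.compile(
        r'beast: \S+: (?P<client>\S+) - (?P<user>\S+) '
        r'\[(?P<ts>[^\]]+)\] "(?P<request>[^"]+)" '
        r'(?P<status>\d+) (?P<bytes>\d+) .*latency=(?P<latency>[\d.]+)s')

    line = ('beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous '
            '[01/Mar/2026:09:50:18.559 +0000] "HEAD / HTTP/1.0" 200 0 '
            '- - - latency=0.001000025s')
    m = BEAST_RE.search(line)
    print(m.group('client'), m.group('status'), m.group('latency'))
    # 192.168.122.100 200 0.001000025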
Mar  1 04:50:18 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[128146]: 01/03/2026 09:50:18 : epoch 69a40b93 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff6fc003ea0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:50:19 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[128146]: 01/03/2026 09:50:19 : epoch 69a40b93 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff6f800a2b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:50:19 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[128146]: 01/03/2026 09:50:19 : epoch 69a40b93 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff6d8004000 fd 39 proxy header rest len failed header rlen = % (will set dead)
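The repeating ganesha.nfsd TIRPC events are consistent with a listener that expects HAProxy's PROXY protocol preamble receiving connections that close without sending one (bare Layer4 health checks, for example), so svc_vc_recv fails the proxy-header read and marks the transport dead. For contrast, a hedged sketch of what a well-formed PROXY v1 preamble looks like on the wire; the addresses and ports are illustrative assumptions:

    import socket

    # Hypothetical probe: announce the real client via PROXY protocol v1
    # before speaking NFS, which is what a PROXY-enabled listener is
    # waiting for when it logs "proxy header rest len failed" on bare
    # connects.
    preamble = b"PROXY TCP4 192.168.122.100 192.168.122.102 40000 2049\r\n"
    with socket.create_connection(("192.168.122.102", 2049), timeout=2) as s:
        s.sendall(preamble)
        # a real client probe would follow with an RPC NULL call here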
Mar  1 04:50:20 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:50:20 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:50:20 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:50:20.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:50:20 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v228: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Mar  1 04:50:20 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:50:20 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:50:20 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:50:20.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:50:20 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Mar  1 04:50:20 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Mar  1 04:50:20 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Mar  1 04:50:20 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Mar  1 04:50:20 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Mar  1 04:50:20 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 04:50:20 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Mar  1 04:50:20 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 04:50:20 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Mar  1 04:50:20 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Mar  1 04:50:20 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Mar  1 04:50:20 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Mar  1 04:50:20 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Mar  1 04:50:20 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
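The handle_command/audit pairs above show the cephadm mgr module driving the mon with JSON-framed commands (config generate-minimal-conf, auth get, config-key set, osd tree). A minimal sketch of issuing the same kind of mon command through the python3-rados binding, assuming an admin keyring and ceph.conf at the usual paths:

    import json
    import rados  # python3-rados

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf', name='client.admin')
    cluster.connect()
    # Same command the mgr dispatched above, as a JSON-framed mon_command.
    cmd = json.dumps({"prefix": "osd tree",
                      "states": ["destroyed"],
                      "format": "json"})
    ret, outbuf, errs = cluster.mon_command(cmd, b'')
    print(ret, errs)
    cluster.shutdown()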
Mar  1 04:50:20 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[128146]: 01/03/2026 09:50:20 : epoch 69a40b93 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff6d8004000 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:50:21 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[128146]: 01/03/2026 09:50:21 : epoch 69a40b93 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff6fc003ea0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:50:21 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 04:50:21 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[128146]: 01/03/2026 09:50:21 : epoch 69a40b93 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff6f800a2b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:50:21 np0005634532 podman[137569]: 2026-03-01 09:50:21.409703108 +0000 UTC m=+0.051474789 container create db93292b9f5619b20511f421355267c24a821f3e1d58031a7bdbd14e380e9582 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_feistel, CEPH_REF=squid, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Mar  1 04:50:21 np0005634532 systemd[1]: Started libpod-conmon-db93292b9f5619b20511f421355267c24a821f3e1d58031a7bdbd14e380e9582.scope.
Mar  1 04:50:21 np0005634532 systemd[1]: Started libcrun container.
Mar  1 04:50:21 np0005634532 podman[137569]: 2026-03-01 09:50:21.387377498 +0000 UTC m=+0.029149259 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Mar  1 04:50:21 np0005634532 podman[137569]: 2026-03-01 09:50:21.495163753 +0000 UTC m=+0.136935534 container init db93292b9f5619b20511f421355267c24a821f3e1d58031a7bdbd14e380e9582 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_feistel, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Mar  1 04:50:21 np0005634532 podman[137569]: 2026-03-01 09:50:21.505536759 +0000 UTC m=+0.147308440 container start db93292b9f5619b20511f421355267c24a821f3e1d58031a7bdbd14e380e9582 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_feistel, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Mar  1 04:50:21 np0005634532 podman[137569]: 2026-03-01 09:50:21.509547977 +0000 UTC m=+0.151319758 container attach db93292b9f5619b20511f421355267c24a821f3e1d58031a7bdbd14e380e9582 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_feistel, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Mar  1 04:50:21 np0005634532 lucid_feistel[137585]: 167 167
Mar  1 04:50:21 np0005634532 systemd[1]: libpod-db93292b9f5619b20511f421355267c24a821f3e1d58031a7bdbd14e380e9582.scope: Deactivated successfully.
Mar  1 04:50:21 np0005634532 conmon[137585]: conmon db93292b9f5619b20511 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-db93292b9f5619b20511f421355267c24a821f3e1d58031a7bdbd14e380e9582.scope/container/memory.events
Mar  1 04:50:21 np0005634532 podman[137569]: 2026-03-01 09:50:21.513250619 +0000 UTC m=+0.155022300 container died db93292b9f5619b20511f421355267c24a821f3e1d58031a7bdbd14e380e9582 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_feistel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Mar  1 04:50:21 np0005634532 systemd[1]: var-lib-containers-storage-overlay-01d9011f5f1569727186eca063c710cc9c721a6d5e27463e5baac251d5b4ccea-merged.mount: Deactivated successfully.
Mar  1 04:50:21 np0005634532 podman[137569]: 2026-03-01 09:50:21.567673009 +0000 UTC m=+0.209444680 container remove db93292b9f5619b20511f421355267c24a821f3e1d58031a7bdbd14e380e9582 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_feistel, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1)
Mar  1 04:50:21 np0005634532 systemd[1]: libpod-conmon-db93292b9f5619b20511f421355267c24a821f3e1d58031a7bdbd14e380e9582.scope: Deactivated successfully.
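This burst of podman lines is a short-lived cephadm helper container running to completion: create, init, start and attach, died about 100 ms later, then remove (the "167 167" on stdout is the container printing the ceph uid/gid). The same create-through-remove pattern repeats below for keen_feynman, quizzical_haibt, quizzical_bartik, dazzling_fermi and modest_bhaskara. A sketch that groups such journal lines into per-container event sequences:

    import re
    from collections import defaultdict

    EVENT_RE = re.compile(
        r'container (?P<event>create|init|start|attach|died|remove) '
        r'(?P<cid>[0-9a-f]{64})')

    def lifecycles(lines):
        # Map short container ID -> ordered list of podman events,
        # matching the create->...->remove bursts seen above.
        seq = defaultdict(list)
        for line in lines:
            m = EVENT_RE.search(line)
            if m:
                seq[m.group('cid')[:12]].append(m.group('event'))
        return dict(seq)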
Mar  1 04:50:21 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Mar  1 04:50:21 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 04:50:21 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 04:50:21 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Mar  1 04:50:21 np0005634532 podman[137612]: 2026-03-01 09:50:21.720402141 +0000 UTC m=+0.048899635 container create 7b467f96b408e78c73e0aa935dc342df59c38484e66c47ed1b7622c496791232 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_feynman, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Mar  1 04:50:21 np0005634532 systemd[1]: Started libpod-conmon-7b467f96b408e78c73e0aa935dc342df59c38484e66c47ed1b7622c496791232.scope.
Mar  1 04:50:21 np0005634532 systemd[1]: Started libcrun container.
Mar  1 04:50:21 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6be9ff0cc9ed57d2a750e23e09865c10006b58f0f958775014af066593a35070/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Mar  1 04:50:21 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6be9ff0cc9ed57d2a750e23e09865c10006b58f0f958775014af066593a35070/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Mar  1 04:50:21 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6be9ff0cc9ed57d2a750e23e09865c10006b58f0f958775014af066593a35070/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Mar  1 04:50:21 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6be9ff0cc9ed57d2a750e23e09865c10006b58f0f958775014af066593a35070/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Mar  1 04:50:21 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6be9ff0cc9ed57d2a750e23e09865c10006b58f0f958775014af066593a35070/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Mar  1 04:50:21 np0005634532 podman[137612]: 2026-03-01 09:50:21.699930517 +0000 UTC m=+0.028428031 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Mar  1 04:50:21 np0005634532 podman[137612]: 2026-03-01 09:50:21.803210811 +0000 UTC m=+0.131708315 container init 7b467f96b408e78c73e0aa935dc342df59c38484e66c47ed1b7622c496791232 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_feynman, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Mar  1 04:50:21 np0005634532 podman[137612]: 2026-03-01 09:50:21.816430296 +0000 UTC m=+0.144927790 container start 7b467f96b408e78c73e0aa935dc342df59c38484e66c47ed1b7622c496791232 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_feynman, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1)
Mar  1 04:50:21 np0005634532 podman[137612]: 2026-03-01 09:50:21.820075456 +0000 UTC m=+0.148572950 container attach 7b467f96b408e78c73e0aa935dc342df59c38484e66c47ed1b7622c496791232 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_feynman, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default)
Mar  1 04:50:22 np0005634532 keen_feynman[137629]: --> passed data devices: 0 physical, 1 LVM
Mar  1 04:50:22 np0005634532 keen_feynman[137629]: --> All data devices are unavailable
Mar  1 04:50:22 np0005634532 systemd[1]: libpod-7b467f96b408e78c73e0aa935dc342df59c38484e66c47ed1b7622c496791232.scope: Deactivated successfully.
Mar  1 04:50:22 np0005634532 podman[137612]: 2026-03-01 09:50:22.155418666 +0000 UTC m=+0.483916190 container died 7b467f96b408e78c73e0aa935dc342df59c38484e66c47ed1b7622c496791232 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_feynman, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Mar  1 04:50:22 np0005634532 systemd[1]: var-lib-containers-storage-overlay-6be9ff0cc9ed57d2a750e23e09865c10006b58f0f958775014af066593a35070-merged.mount: Deactivated successfully.
Mar  1 04:50:22 np0005634532 podman[137612]: 2026-03-01 09:50:22.248667533 +0000 UTC m=+0.577165047 container remove 7b467f96b408e78c73e0aa935dc342df59c38484e66c47ed1b7622c496791232 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_feynman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Mar  1 04:50:22 np0005634532 systemd[1]: libpod-conmon-7b467f96b408e78c73e0aa935dc342df59c38484e66c47ed1b7622c496791232.scope: Deactivated successfully.
Mar  1 04:50:22 np0005634532 systemd-logind[832]: New session 48 of user zuul.
Mar  1 04:50:22 np0005634532 systemd[1]: Started Session 48 of User zuul.
Mar  1 04:50:22 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:50:22 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:50:22 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:50:22.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:50:22 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v229: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Mar  1 04:50:22 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:50:22 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:50:22 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:50:22.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:50:22 np0005634532 podman[137841]: 2026-03-01 09:50:22.861727793 +0000 UTC m=+0.047965373 container create aeaf3792826e7d3e98730e900a249d660c39bc17e4e28162b377a9292dd3e506 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_haibt, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid)
Mar  1 04:50:22 np0005634532 systemd[1]: Started libpod-conmon-aeaf3792826e7d3e98730e900a249d660c39bc17e4e28162b377a9292dd3e506.scope.
Mar  1 04:50:22 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[128146]: 01/03/2026 09:50:22 : epoch 69a40b93 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff6e40041f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:50:22 np0005634532 systemd[1]: Started libcrun container.
Mar  1 04:50:22 np0005634532 podman[137841]: 2026-03-01 09:50:22.834451271 +0000 UTC m=+0.020688831 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Mar  1 04:50:22 np0005634532 podman[137841]: 2026-03-01 09:50:22.947919486 +0000 UTC m=+0.134157036 container init aeaf3792826e7d3e98730e900a249d660c39bc17e4e28162b377a9292dd3e506 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_haibt, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True)
Mar  1 04:50:22 np0005634532 podman[137841]: 2026-03-01 09:50:22.957901612 +0000 UTC m=+0.144139172 container start aeaf3792826e7d3e98730e900a249d660c39bc17e4e28162b377a9292dd3e506 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_haibt, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Mar  1 04:50:22 np0005634532 quizzical_haibt[137891]: 167 167
Mar  1 04:50:22 np0005634532 systemd[1]: libpod-aeaf3792826e7d3e98730e900a249d660c39bc17e4e28162b377a9292dd3e506.scope: Deactivated successfully.
Mar  1 04:50:22 np0005634532 podman[137841]: 2026-03-01 09:50:22.970199175 +0000 UTC m=+0.156436775 container attach aeaf3792826e7d3e98730e900a249d660c39bc17e4e28162b377a9292dd3e506 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_haibt, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Mar  1 04:50:22 np0005634532 podman[137841]: 2026-03-01 09:50:22.970737528 +0000 UTC m=+0.156975098 container died aeaf3792826e7d3e98730e900a249d660c39bc17e4e28162b377a9292dd3e506 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_haibt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Mar  1 04:50:23 np0005634532 systemd[1]: var-lib-containers-storage-overlay-25d219de32486cbfe04c0319a6fa559073cc762275208eec937376b475b41175-merged.mount: Deactivated successfully.
Mar  1 04:50:23 np0005634532 podman[137841]: 2026-03-01 09:50:23.027077986 +0000 UTC m=+0.213315556 container remove aeaf3792826e7d3e98730e900a249d660c39bc17e4e28162b377a9292dd3e506 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_haibt, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Mar  1 04:50:23 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[128146]: 01/03/2026 09:50:23 : epoch 69a40b93 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff6d8004000 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:50:23 np0005634532 systemd[1]: libpod-conmon-aeaf3792826e7d3e98730e900a249d660c39bc17e4e28162b377a9292dd3e506.scope: Deactivated successfully.
Mar  1 04:50:23 np0005634532 podman[137943]: 2026-03-01 09:50:23.22052482 +0000 UTC m=+0.078832732 container create 4ff42abeddc044800ce4a20ad1b36ff8044f0788db0377c2b496f2ca535a73e4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_bartik, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid)
Mar  1 04:50:23 np0005634532 systemd[1]: Started libpod-conmon-4ff42abeddc044800ce4a20ad1b36ff8044f0788db0377c2b496f2ca535a73e4.scope.
Mar  1 04:50:23 np0005634532 python3.9[137921]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Mar  1 04:50:23 np0005634532 podman[137943]: 2026-03-01 09:50:23.184096523 +0000 UTC m=+0.042404525 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Mar  1 04:50:23 np0005634532 systemd[1]: Started libcrun container.
Mar  1 04:50:23 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bdf8b6331ee3a49eccbadab3c2a313a434ab19095b53d2c4cc903f48d2f97744/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Mar  1 04:50:23 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bdf8b6331ee3a49eccbadab3c2a313a434ab19095b53d2c4cc903f48d2f97744/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Mar  1 04:50:23 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bdf8b6331ee3a49eccbadab3c2a313a434ab19095b53d2c4cc903f48d2f97744/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Mar  1 04:50:23 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bdf8b6331ee3a49eccbadab3c2a313a434ab19095b53d2c4cc903f48d2f97744/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Mar  1 04:50:23 np0005634532 podman[137943]: 2026-03-01 09:50:23.346410261 +0000 UTC m=+0.204718203 container init 4ff42abeddc044800ce4a20ad1b36ff8044f0788db0377c2b496f2ca535a73e4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_bartik, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Mar  1 04:50:23 np0005634532 podman[137943]: 2026-03-01 09:50:23.365802879 +0000 UTC m=+0.224110791 container start 4ff42abeddc044800ce4a20ad1b36ff8044f0788db0377c2b496f2ca535a73e4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_bartik, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Mar  1 04:50:23 np0005634532 podman[137943]: 2026-03-01 09:50:23.381732781 +0000 UTC m=+0.240040693 container attach 4ff42abeddc044800ce4a20ad1b36ff8044f0788db0377c2b496f2ca535a73e4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_bartik, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Mar  1 04:50:23 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[128146]: 01/03/2026 09:50:23 : epoch 69a40b93 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff6f0001080 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:50:23 np0005634532 quizzical_bartik[137959]: {
Mar  1 04:50:23 np0005634532 quizzical_bartik[137959]:    "0": [
Mar  1 04:50:23 np0005634532 quizzical_bartik[137959]:        {
Mar  1 04:50:23 np0005634532 quizzical_bartik[137959]:            "devices": [
Mar  1 04:50:23 np0005634532 quizzical_bartik[137959]:                "/dev/loop3"
Mar  1 04:50:23 np0005634532 quizzical_bartik[137959]:            ],
Mar  1 04:50:23 np0005634532 quizzical_bartik[137959]:            "lv_name": "ceph_lv0",
Mar  1 04:50:23 np0005634532 quizzical_bartik[137959]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Mar  1 04:50:23 np0005634532 quizzical_bartik[137959]:            "lv_size": "21470642176",
Mar  1 04:50:23 np0005634532 quizzical_bartik[137959]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=0RN1y3-WDwf-vmRn-3Uec-9cdU-WGcX-8Z6LEG,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=437b1e74-f995-5d64-af1d-257ce01d77ab,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=e5da778e-73b7-4ea1-8a91-750fe3f6aa68,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Mar  1 04:50:23 np0005634532 quizzical_bartik[137959]:            "lv_uuid": "0RN1y3-WDwf-vmRn-3Uec-9cdU-WGcX-8Z6LEG",
Mar  1 04:50:23 np0005634532 quizzical_bartik[137959]:            "name": "ceph_lv0",
Mar  1 04:50:23 np0005634532 quizzical_bartik[137959]:            "path": "/dev/ceph_vg0/ceph_lv0",
Mar  1 04:50:23 np0005634532 quizzical_bartik[137959]:            "tags": {
Mar  1 04:50:23 np0005634532 quizzical_bartik[137959]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Mar  1 04:50:23 np0005634532 quizzical_bartik[137959]:                "ceph.block_uuid": "0RN1y3-WDwf-vmRn-3Uec-9cdU-WGcX-8Z6LEG",
Mar  1 04:50:23 np0005634532 quizzical_bartik[137959]:                "ceph.cephx_lockbox_secret": "",
Mar  1 04:50:23 np0005634532 quizzical_bartik[137959]:                "ceph.cluster_fsid": "437b1e74-f995-5d64-af1d-257ce01d77ab",
Mar  1 04:50:23 np0005634532 quizzical_bartik[137959]:                "ceph.cluster_name": "ceph",
Mar  1 04:50:23 np0005634532 quizzical_bartik[137959]:                "ceph.crush_device_class": "",
Mar  1 04:50:23 np0005634532 quizzical_bartik[137959]:                "ceph.encrypted": "0",
Mar  1 04:50:23 np0005634532 quizzical_bartik[137959]:                "ceph.osd_fsid": "e5da778e-73b7-4ea1-8a91-750fe3f6aa68",
Mar  1 04:50:23 np0005634532 quizzical_bartik[137959]:                "ceph.osd_id": "0",
Mar  1 04:50:23 np0005634532 quizzical_bartik[137959]:                "ceph.osdspec_affinity": "default_drive_group",
Mar  1 04:50:23 np0005634532 quizzical_bartik[137959]:                "ceph.type": "block",
Mar  1 04:50:23 np0005634532 quizzical_bartik[137959]:                "ceph.vdo": "0",
Mar  1 04:50:23 np0005634532 quizzical_bartik[137959]:                "ceph.with_tpm": "0"
Mar  1 04:50:23 np0005634532 quizzical_bartik[137959]:            },
Mar  1 04:50:23 np0005634532 quizzical_bartik[137959]:            "type": "block",
Mar  1 04:50:23 np0005634532 quizzical_bartik[137959]:            "vg_name": "ceph_vg0"
Mar  1 04:50:23 np0005634532 quizzical_bartik[137959]:        }
Mar  1 04:50:23 np0005634532 quizzical_bartik[137959]:    ]
Mar  1 04:50:23 np0005634532 quizzical_bartik[137959]: }
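The quizzical_bartik container is ceph-volume reporting the host's OSD inventory as JSON: OSD 0 on a single LVM data device backed by /dev/loop3, with the OSD's identity carried entirely in LV tags. A sketch of consuming that report, with the payload trimmed to the fields that matter for matching an LV to its OSD:

    import json

    report = json.loads('''{
      "0": [
        {"lv_path": "/dev/ceph_vg0/ceph_lv0",
         "tags": {"ceph.osd_id": "0",
                  "ceph.osd_fsid": "e5da778e-73b7-4ea1-8a91-750fe3f6aa68",
                  "ceph.cluster_fsid": "437b1e74-f995-5d64-af1d-257ce01d77ab",
                  "ceph.type": "block"}}
      ]
    }''')
    for osd_id, lvs in report.items():
        for lv in lvs:
            tags = lv["tags"]
            print(osd_id, lv["lv_path"],
                  tags["ceph.osd_fsid"], tags["ceph.type"])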
Mar  1 04:50:23 np0005634532 systemd[1]: libpod-4ff42abeddc044800ce4a20ad1b36ff8044f0788db0377c2b496f2ca535a73e4.scope: Deactivated successfully.
Mar  1 04:50:23 np0005634532 podman[137943]: 2026-03-01 09:50:23.727437226 +0000 UTC m=+0.585745168 container died 4ff42abeddc044800ce4a20ad1b36ff8044f0788db0377c2b496f2ca535a73e4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_bartik, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Mar  1 04:50:23 np0005634532 systemd[1]: var-lib-containers-storage-overlay-bdf8b6331ee3a49eccbadab3c2a313a434ab19095b53d2c4cc903f48d2f97744-merged.mount: Deactivated successfully.
Mar  1 04:50:23 np0005634532 podman[137943]: 2026-03-01 09:50:23.797256286 +0000 UTC m=+0.655564208 container remove 4ff42abeddc044800ce4a20ad1b36ff8044f0788db0377c2b496f2ca535a73e4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_bartik, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Mar  1 04:50:23 np0005634532 systemd[1]: libpod-conmon-4ff42abeddc044800ce4a20ad1b36ff8044f0788db0377c2b496f2ca535a73e4.scope: Deactivated successfully.
Mar  1 04:50:24 np0005634532 podman[138153]: 2026-03-01 09:50:24.42387498 +0000 UTC m=+0.042039476 container create c9e841c6c1240bf8e732c466fede8038b4fede9f28ce42b2fbef6270521c0c39 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_fermi, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Mar  1 04:50:24 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:50:24 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:50:24 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:50:24.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:50:24 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v230: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Mar  1 04:50:24 np0005634532 systemd[1]: Started libpod-conmon-c9e841c6c1240bf8e732c466fede8038b4fede9f28ce42b2fbef6270521c0c39.scope.
Mar  1 04:50:24 np0005634532 systemd[1]: Started libcrun container.
Mar  1 04:50:24 np0005634532 podman[138153]: 2026-03-01 09:50:24.403135719 +0000 UTC m=+0.021300235 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Mar  1 04:50:24 np0005634532 podman[138153]: 2026-03-01 09:50:24.507133081 +0000 UTC m=+0.125297597 container init c9e841c6c1240bf8e732c466fede8038b4fede9f28ce42b2fbef6270521c0c39 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_fermi, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Mar  1 04:50:24 np0005634532 podman[138153]: 2026-03-01 09:50:24.513898717 +0000 UTC m=+0.132063203 container start c9e841c6c1240bf8e732c466fede8038b4fede9f28ce42b2fbef6270521c0c39 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_fermi, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Mar  1 04:50:24 np0005634532 dazzling_fermi[138171]: 167 167
Mar  1 04:50:24 np0005634532 podman[138153]: 2026-03-01 09:50:24.518197463 +0000 UTC m=+0.136361959 container attach c9e841c6c1240bf8e732c466fede8038b4fede9f28ce42b2fbef6270521c0c39 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_fermi, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Mar  1 04:50:24 np0005634532 systemd[1]: libpod-c9e841c6c1240bf8e732c466fede8038b4fede9f28ce42b2fbef6270521c0c39.scope: Deactivated successfully.
Mar  1 04:50:24 np0005634532 podman[138153]: 2026-03-01 09:50:24.519382042 +0000 UTC m=+0.137546528 container died c9e841c6c1240bf8e732c466fede8038b4fede9f28ce42b2fbef6270521c0c39 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_fermi, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Mar  1 04:50:24 np0005634532 systemd[1]: var-lib-containers-storage-overlay-465a418d0fb9809b59467310fae5db020c09a97eb6ccdd9c4a6efec0ff162319-merged.mount: Deactivated successfully.
Mar  1 04:50:24 np0005634532 podman[138153]: 2026-03-01 09:50:24.560514685 +0000 UTC m=+0.178679181 container remove c9e841c6c1240bf8e732c466fede8038b4fede9f28ce42b2fbef6270521c0c39 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_fermi, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Mar  1 04:50:24 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:50:24 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:50:24 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:50:24.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:50:24 np0005634532 systemd[1]: libpod-conmon-c9e841c6c1240bf8e732c466fede8038b4fede9f28ce42b2fbef6270521c0c39.scope: Deactivated successfully.
Mar  1 04:50:24 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-haproxy-nfs-cephfs-compute-0-wdbjdw[99156]: [WARNING] 059/095024 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
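The haproxy warning ties the threads together: its Layer4 check got "Connection refused" from the nfs.cephfs.1 backend, so that server is marked DOWN while the two remaining backends keep serving. A Layer4 check is essentially a bare TCP connect, roughly:

    import socket

    def l4_check(host: str, port: int, timeout: float = 1.0) -> bool:
        # Bare TCP connect, as haproxy's Layer4 health check performs;
        # ECONNREFUSED is the failure that marked nfs.cephfs.1 DOWN above.
        try:
            socket.create_connection((host, port), timeout=timeout).close()
            return True
        except OSError:
            return False

    print(l4_check("192.168.122.102", 2049))  # host/port are assumptions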
Mar  1 04:50:24 np0005634532 podman[138241]: 2026-03-01 09:50:24.724529474 +0000 UTC m=+0.048243429 container create 52cd18c0ec06e283881b1c1730480b46a8e0be5e084a3a6caa81f5c30a5a9f81 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_bhaskara, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Mar  1 04:50:24 np0005634532 systemd[1]: Started libpod-conmon-52cd18c0ec06e283881b1c1730480b46a8e0be5e084a3a6caa81f5c30a5a9f81.scope.
Mar  1 04:50:24 np0005634532 systemd[1]: Started libcrun container.
Mar  1 04:50:24 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a198770086de528245146e190ee0875c5fac61d45d5f38c91171a3ede2a16a0c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Mar  1 04:50:24 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a198770086de528245146e190ee0875c5fac61d45d5f38c91171a3ede2a16a0c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Mar  1 04:50:24 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a198770086de528245146e190ee0875c5fac61d45d5f38c91171a3ede2a16a0c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Mar  1 04:50:24 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a198770086de528245146e190ee0875c5fac61d45d5f38c91171a3ede2a16a0c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Mar  1 04:50:24 np0005634532 podman[138241]: 2026-03-01 09:50:24.702072831 +0000 UTC m=+0.025786796 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Mar  1 04:50:24 np0005634532 podman[138241]: 2026-03-01 09:50:24.818297584 +0000 UTC m=+0.142011549 container init 52cd18c0ec06e283881b1c1730480b46a8e0be5e084a3a6caa81f5c30a5a9f81 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_bhaskara, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Mar  1 04:50:24 np0005634532 podman[138241]: 2026-03-01 09:50:24.829380287 +0000 UTC m=+0.153094232 container start 52cd18c0ec06e283881b1c1730480b46a8e0be5e084a3a6caa81f5c30a5a9f81 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_bhaskara, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Mar  1 04:50:24 np0005634532 podman[138241]: 2026-03-01 09:50:24.845977226 +0000 UTC m=+0.169691171 container attach 52cd18c0ec06e283881b1c1730480b46a8e0be5e084a3a6caa81f5c30a5a9f81 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_bhaskara, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Mar  1 04:50:24 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[128146]: 01/03/2026 09:50:24 : epoch 69a40b93 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff6fc003ea0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:50:24 np0005634532 python3.9[138284]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Mar  1 04:50:25 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[128146]: 01/03/2026 09:50:25 : epoch 69a40b93 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff6e40041f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:50:25 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[128146]: 01/03/2026 09:50:25 : epoch 69a40b93 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff6d0000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:50:25 np0005634532 lvm[138516]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Mar  1 04:50:25 np0005634532 lvm[138516]: VG ceph_vg0 finished
Mar  1 04:50:25 np0005634532 python3.9[138493]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Mar  1 04:50:25 np0005634532 lvm[138518]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Mar  1 04:50:25 np0005634532 lvm[138518]: VG ceph_vg0 finished
Mar  1 04:50:25 np0005634532 modest_bhaskara[138287]: {}
Mar  1 04:50:25 np0005634532 systemd[1]: libpod-52cd18c0ec06e283881b1c1730480b46a8e0be5e084a3a6caa81f5c30a5a9f81.scope: Deactivated successfully.
Mar  1 04:50:25 np0005634532 podman[138241]: 2026-03-01 09:50:25.545697491 +0000 UTC m=+0.869411456 container died 52cd18c0ec06e283881b1c1730480b46a8e0be5e084a3a6caa81f5c30a5a9f81 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_bhaskara, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Mar  1 04:50:25 np0005634532 systemd[1]: var-lib-containers-storage-overlay-a198770086de528245146e190ee0875c5fac61d45d5f38c91171a3ede2a16a0c-merged.mount: Deactivated successfully.
Mar  1 04:50:25 np0005634532 podman[138241]: 2026-03-01 09:50:25.596539483 +0000 UTC m=+0.920253428 container remove 52cd18c0ec06e283881b1c1730480b46a8e0be5e084a3a6caa81f5c30a5a9f81 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=modest_bhaskara, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Mar  1 04:50:25 np0005634532 systemd[1]: libpod-conmon-52cd18c0ec06e283881b1c1730480b46a8e0be5e084a3a6caa81f5c30a5a9f81.scope: Deactivated successfully.
Mar  1 04:50:25 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Mar  1 04:50:25 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 04:50:25 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Mar  1 04:50:25 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 04:50:25 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 04:50:25 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 04:50:26 np0005634532 python3.9[138713]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Mar  1 04:50:26 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 04:50:26 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v231: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Mar  1 04:50:26 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:50:26 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 04:50:26 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:50:26.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 04:50:26 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:50:26 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 04:50:26 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:50:26.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 04:50:26 np0005634532 python3.9[138838]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1772358625.623988-149-121253228805377/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=6c1032a915c2a73e28d3426ca06d496bd1601856 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:50:26 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[128146]: 01/03/2026 09:50:26 : epoch 69a40b93 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff6f0001080 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:50:26 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T09:50:26.985Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Mar  1 04:50:26 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T09:50:26.985Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Mar  1 04:50:26 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T09:50:26.986Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Mar  1 04:50:27 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[128146]: 01/03/2026 09:50:27 : epoch 69a40b93 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff6fc003ea0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:50:27 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: ::ffff:192.168.122.100 - - [01/Mar/2026:09:50:27] "GET /metrics HTTP/1.1" 200 48335 "" "Prometheus/2.51.0"
Mar  1 04:50:27 np0005634532 ceph-mgr[76134]: [prometheus INFO cherrypy.access.140604152982112] ::ffff:192.168.122.100 - - [01/Mar/2026:09:50:27] "GET /metrics HTTP/1.1" 200 48335 "" "Prometheus/2.51.0"
Mar  1 04:50:27 np0005634532 python3.9[138991]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Mar  1 04:50:27 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[128146]: 01/03/2026 09:50:27 : epoch 69a40b93 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff6e4004210 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:50:27 np0005634532 python3.9[139115]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1772358626.8691735-149-214106473170268/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=1a7c1a57089797aab977557a49307aae00101928 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:50:28 np0005634532 python3.9[139270]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Mar  1 04:50:28 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v232: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Mar  1 04:50:28 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:50:28 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:50:28 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:50:28.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:50:28 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:50:28 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000024s ======
Mar  1 04:50:28 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:50:28.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Mar  1 04:50:28 np0005634532 python3.9[139394]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1772358627.8921316-149-235742750120639/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=ca5502a5b62fbdd02c31f0c2b3f640dd945e05ca backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:50:28 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[128146]: 01/03/2026 09:50:28 : epoch 69a40b93 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff6d00016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:50:29 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[128146]: 01/03/2026 09:50:29 : epoch 69a40b93 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff6f0002420 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:50:29 np0005634532 python3.9[139547]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Mar  1 04:50:29 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[128146]: 01/03/2026 09:50:29 : epoch 69a40b93 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff6fc003ea0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:50:29 np0005634532 python3.9[139700]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Mar  1 04:50:30 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v233: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Mar  1 04:50:30 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:50:30 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000024s ======
Mar  1 04:50:30 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:50:30.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Mar  1 04:50:30 np0005634532 python3.9[139855]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Mar  1 04:50:30 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:50:30 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 04:50:30 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:50:30.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 04:50:30 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[128146]: 01/03/2026 09:50:30 : epoch 69a40b93 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff6e4004230 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:50:30 np0005634532 python3.9[139979]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1772358630.1110914-326-113447142761858/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=c934b9d3e621c69a9e88fbb1d7a64b7c4350c64a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:50:31 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[128146]: 01/03/2026 09:50:31 : epoch 69a40b93 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff6d00016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:50:31 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 04:50:31 np0005634532 ceph-mon[75825]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #27. Immutable memtables: 0.
Mar  1 04:50:31 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-09:50:31.186392) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Mar  1 04:50:31 np0005634532 ceph-mon[75825]: rocksdb: [db/flush_job.cc:856] [default] [JOB 9] Flushing memtable with next log file: 27
Mar  1 04:50:31 np0005634532 ceph-mon[75825]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772358631186471, "job": 9, "event": "flush_started", "num_memtables": 1, "num_entries": 1843, "num_deletes": 250, "total_data_size": 3631973, "memory_usage": 3687368, "flush_reason": "Manual Compaction"}
Mar  1 04:50:31 np0005634532 ceph-mon[75825]: rocksdb: [db/flush_job.cc:885] [default] [JOB 9] Level-0 flush table #28: started
Mar  1 04:50:31 np0005634532 ceph-mon[75825]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772358631196036, "cf_name": "default", "job": 9, "event": "table_file_creation", "file_number": 28, "file_size": 2088700, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 10792, "largest_seqno": 12634, "table_properties": {"data_size": 2082724, "index_size": 2987, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1925, "raw_key_size": 15116, "raw_average_key_size": 20, "raw_value_size": 2069613, "raw_average_value_size": 2755, "num_data_blocks": 133, "num_entries": 751, "num_filter_entries": 751, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772358441, "oldest_key_time": 1772358441, "file_creation_time": 1772358631, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d85a3bc5-3dc5-432f-9fab-fa926ce32d3d", "db_session_id": "FJWJGIYC2V5ZEQGX709M", "orig_file_number": 28, "seqno_to_time_mapping": "N/A"}}
Mar  1 04:50:31 np0005634532 ceph-mon[75825]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 9] Flush lasted 9681 microseconds, and 4187 cpu microseconds.
Mar  1 04:50:31 np0005634532 ceph-mon[75825]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Mar  1 04:50:31 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-09:50:31.196085) [db/flush_job.cc:967] [default] [JOB 9] Level-0 flush table #28: 2088700 bytes OK
Mar  1 04:50:31 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-09:50:31.196109) [db/memtable_list.cc:519] [default] Level-0 commit table #28 started
Mar  1 04:50:31 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-09:50:31.198105) [db/memtable_list.cc:722] [default] Level-0 commit table #28: memtable #1 done
Mar  1 04:50:31 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-09:50:31.198122) EVENT_LOG_v1 {"time_micros": 1772358631198117, "job": 9, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Mar  1 04:50:31 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-09:50:31.198143) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Mar  1 04:50:31 np0005634532 ceph-mon[75825]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 9] Try to delete WAL files size 3624486, prev total WAL file size 3624486, number of live WAL files 2.
Mar  1 04:50:31 np0005634532 ceph-mon[75825]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000024.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Mar  1 04:50:31 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-09:50:31.198875) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740030' seq:72057594037927935, type:22 .. '6D67727374617400323531' seq:0, type:0; will stop at (end)
Mar  1 04:50:31 np0005634532 ceph-mon[75825]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 10] Compacting 1@0 + 1@6 files to L6, score -1.00
Mar  1 04:50:31 np0005634532 ceph-mon[75825]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 9 Base level 0, inputs: [28(2039KB)], [26(13MB)]
Mar  1 04:50:31 np0005634532 ceph-mon[75825]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772358631198954, "job": 10, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [28], "files_L6": [26], "score": -1, "input_data_size": 16516841, "oldest_snapshot_seqno": -1}
Mar  1 04:50:31 np0005634532 ceph-mon[75825]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 10] Generated table #29: 4362 keys, 14546443 bytes, temperature: kUnknown
Mar  1 04:50:31 np0005634532 ceph-mon[75825]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772358631270934, "cf_name": "default", "job": 10, "event": "table_file_creation", "file_number": 29, "file_size": 14546443, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14513062, "index_size": 21367, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10949, "raw_key_size": 110109, "raw_average_key_size": 25, "raw_value_size": 14429237, "raw_average_value_size": 3307, "num_data_blocks": 917, "num_entries": 4362, "num_filter_entries": 4362, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772358052, "oldest_key_time": 0, "file_creation_time": 1772358631, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d85a3bc5-3dc5-432f-9fab-fa926ce32d3d", "db_session_id": "FJWJGIYC2V5ZEQGX709M", "orig_file_number": 29, "seqno_to_time_mapping": "N/A"}}
Mar  1 04:50:31 np0005634532 ceph-mon[75825]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Mar  1 04:50:31 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-09:50:31.271324) [db/compaction/compaction_job.cc:1663] [default] [JOB 10] Compacted 1@0 + 1@6 files to L6 => 14546443 bytes
Mar  1 04:50:31 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-09:50:31.273036) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 228.8 rd, 201.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.0, 13.8 +0.0 blob) out(13.9 +0.0 blob), read-write-amplify(14.9) write-amplify(7.0) OK, records in: 4788, records dropped: 426 output_compression: NoCompression
Mar  1 04:50:31 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-09:50:31.273058) EVENT_LOG_v1 {"time_micros": 1772358631273049, "job": 10, "event": "compaction_finished", "compaction_time_micros": 72200, "compaction_time_cpu_micros": 35926, "output_level": 6, "num_output_files": 1, "total_output_size": 14546443, "num_input_records": 4788, "num_output_records": 4362, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Mar  1 04:50:31 np0005634532 ceph-mon[75825]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000028.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Mar  1 04:50:31 np0005634532 ceph-mon[75825]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772358631273602, "job": 10, "event": "table_file_deletion", "file_number": 28}
Mar  1 04:50:31 np0005634532 ceph-mon[75825]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000026.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Mar  1 04:50:31 np0005634532 ceph-mon[75825]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772358631275564, "job": 10, "event": "table_file_deletion", "file_number": 26}
Mar  1 04:50:31 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-09:50:31.198701) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Mar  1 04:50:31 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-09:50:31.275643) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Mar  1 04:50:31 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-09:50:31.275648) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Mar  1 04:50:31 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-09:50:31.275650) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Mar  1 04:50:31 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-09:50:31.275651) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Mar  1 04:50:31 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-09:50:31.275653) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Mar  1 04:50:31 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[128146]: 01/03/2026 09:50:31 : epoch 69a40b93 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff6f0002420 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:50:31 np0005634532 python3.9[140157]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Mar  1 04:50:32 np0005634532 python3.9[140282]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1772358631.1533113-326-93336030180848/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=5e3121e0163bd6c4d4a61d4f57ce3b0d9d5c8127 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:50:32 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v234: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Mar  1 04:50:32 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:50:32 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000024s ======
Mar  1 04:50:32 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:50:32.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Mar  1 04:50:32 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Mar  1 04:50:32 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Mar  1 04:50:32 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[128146]: 01/03/2026 09:50:32 : epoch 69a40b93 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Mar  1 04:50:32 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:50:32 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000024s ======
Mar  1 04:50:32 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:50:32.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Mar  1 04:50:32 np0005634532 python3.9[140436]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Mar  1 04:50:32 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[128146]: 01/03/2026 09:50:32 : epoch 69a40b93 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff6fc003ea0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:50:33 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[128146]: 01/03/2026 09:50:33 : epoch 69a40b93 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff6e4004250 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:50:33 np0005634532 python3.9[140560]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1772358632.3276622-326-213243009129715/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=e8ee0acb2a5ed109a6346b48eae0ff040945710f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:50:33 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[128146]: 01/03/2026 09:50:33 : epoch 69a40b93 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff6d00016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:50:33 np0005634532 python3.9[140713]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Mar  1 04:50:34 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v235: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 597 B/s wr, 2 op/s
Mar  1 04:50:34 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:50:34 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:50:34 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:50:34.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:50:34 np0005634532 python3.9[140868]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Mar  1 04:50:34 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:50:34 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:50:34 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:50:34.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:50:34 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[128146]: 01/03/2026 09:50:34 : epoch 69a40b93 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff6f0003130 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:50:35 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[128146]: 01/03/2026 09:50:35 : epoch 69a40b93 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff6fc003ea0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:50:35 np0005634532 python3.9[141021]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Mar  1 04:50:35 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[128146]: 01/03/2026 09:50:35 : epoch 69a40b93 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff6e4004270 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:50:35 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[128146]: 01/03/2026 09:50:35 : epoch 69a40b93 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Mar  1 04:50:35 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[128146]: 01/03/2026 09:50:35 : epoch 69a40b93 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Mar  1 04:50:35 np0005634532 python3.9[141145]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1772358634.7213786-508-235780975363299/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=0c16a1de4df09b9c17aa373fe94a26e845a77b53 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:50:36 np0005634532 python3.9[141299]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Mar  1 04:50:36 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 04:50:36 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v236: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 597 B/s wr, 1 op/s
Mar  1 04:50:36 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:50:36 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:50:36 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:50:36.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:50:36 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:50:36 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 04:50:36 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:50:36.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 04:50:36 np0005634532 python3.9[141424]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1772358635.7722223-508-62859221166942/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=5e3121e0163bd6c4d4a61d4f57ce3b0d9d5c8127 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:50:36 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[128146]: 01/03/2026 09:50:36 : epoch 69a40b93 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff6e4004270 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:50:36 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T09:50:36.987Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Mar  1 04:50:36 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T09:50:36.987Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Mar  1 04:50:36 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T09:50:36.988Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Mar  1 04:50:37 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[128146]: 01/03/2026 09:50:37 : epoch 69a40b93 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff6f0003130 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:50:37 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: ::ffff:192.168.122.100 - - [01/Mar/2026:09:50:37] "GET /metrics HTTP/1.1" 200 48333 "" "Prometheus/2.51.0"
Mar  1 04:50:37 np0005634532 ceph-mgr[76134]: [prometheus INFO cherrypy.access.140604152982112] ::ffff:192.168.122.100 - - [01/Mar/2026:09:50:37] "GET /metrics HTTP/1.1" 200 48333 "" "Prometheus/2.51.0"
Mar  1 04:50:37 np0005634532 python3.9[141577]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Mar  1 04:50:37 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[128146]: 01/03/2026 09:50:37 : epoch 69a40b93 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff6fc003ea0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:50:37 np0005634532 python3.9[141701]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1772358636.806331-508-205730064640564/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=96a006deb094a84107eb3443d7b296b9c96f0ffc backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:50:38 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v237: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 597 B/s wr, 1 op/s
Mar  1 04:50:38 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:50:38 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 04:50:38 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:50:38.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 04:50:38 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:50:38 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:50:38 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:50:38.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:50:38 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[128146]: 01/03/2026 09:50:38 : epoch 69a40b93 : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Mar  1 04:50:38 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[128146]: 01/03/2026 09:50:38 : epoch 69a40b93 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff6e4004270 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:50:39 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[128146]: 01/03/2026 09:50:39 : epoch 69a40b93 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff6d0002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:50:39 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[128146]: 01/03/2026 09:50:39 : epoch 69a40b93 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff6f0003130 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:50:39 np0005634532 python3.9[141856]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Mar  1 04:50:40 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v238: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Mar  1 04:50:40 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:50:40 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:50:40 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:50:40.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:50:40 np0005634532 python3.9[142011]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Mar  1 04:50:40 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:50:40 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000024s ======
Mar  1 04:50:40 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:50:40.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Mar  1 04:50:40 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[128146]: 01/03/2026 09:50:40 : epoch 69a40b93 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff6fc003ea0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:50:41 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[128146]: 01/03/2026 09:50:41 : epoch 69a40b93 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff6e4004290 fd 39 proxy ignored for local
Mar  1 04:50:41 np0005634532 kernel: ganesha.nfsd[130492]: segfault at 50 ip 00007ff78404f32e sp 00007ff6ed7f9210 error 4 in libntirpc.so.5.8[7ff784034000+2c000] likely on CPU 3 (core 0, socket 3)
Mar  1 04:50:41 np0005634532 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Mar  1 04:50:41 np0005634532 python3.9[142135]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1772358640.0628853-714-82821688512023/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=e7087dcbd00c474c0b71f894339b789f0dd6e51a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:50:41 np0005634532 systemd[1]: Started Process Core Dump (PID 142136/UID 0).
Mar  1 04:50:41 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 04:50:41 np0005634532 python3.9[142290]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Mar  1 04:50:41 np0005634532 systemd-coredump[142137]: Process 128159 (ganesha.nfsd) of user 0 dumped core.
                                                       
                                                       Stack trace of thread 52:
                                                       #0  0x00007ff78404f32e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
                                                       #1  0x0000000000000000 n/a (n/a + 0x0)
                                                       #2  0x00007ff784059900 n/a (/usr/lib64/libntirpc.so.5.8 + 0x2c900)
                                                       ELF object binary architecture: AMD x86-64
Mar  1 04:50:42 np0005634532 systemd[1]: systemd-coredump@2-142136-0.service: Deactivated successfully.
Mar  1 04:50:42 np0005634532 podman[142364]: 2026-03-01 09:50:42.091142684 +0000 UTC m=+0.043849551 container died 3953c0c94563a5cde5c5f30826f045fe1373de1dd7d98c784f28a80b246e4e0a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Mar  1 04:50:42 np0005634532 systemd[1]: var-lib-containers-storage-overlay-8849cf12b3558376d34278131419d4d1dcf5316fc77becc44bf4275d36a322eb-merged.mount: Deactivated successfully.
Mar  1 04:50:42 np0005634532 podman[142364]: 2026-03-01 09:50:42.135207039 +0000 UTC m=+0.087913836 container remove 3953c0c94563a5cde5c5f30826f045fe1373de1dd7d98c784f28a80b246e4e0a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Mar  1 04:50:42 np0005634532 systemd[1]: ceph-437b1e74-f995-5d64-af1d-257ce01d77ab@nfs.cephfs.2.0.compute-0.ljexyw.service: Main process exited, code=exited, status=139/n/a
Mar  1 04:50:42 np0005634532 systemd[1]: ceph-437b1e74-f995-5d64-af1d-257ce01d77ab@nfs.cephfs.2.0.compute-0.ljexyw.service: Failed with result 'exit-code'.
Mar  1 04:50:42 np0005634532 systemd[1]: ceph-437b1e74-f995-5d64-af1d-257ce01d77ab@nfs.cephfs.2.0.compute-0.ljexyw.service: Consumed 1.296s CPU time.
Mar  1 04:50:42 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v239: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Mar  1 04:50:42 np0005634532 python3.9[142483]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Mar  1 04:50:42 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:50:42 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:50:42 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:50:42.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:50:42 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:50:42 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 04:50:42 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:50:42.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 04:50:43 np0005634532 python3.9[142615]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1772358641.9891875-781-52059632139101/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=e7087dcbd00c474c0b71f894339b789f0dd6e51a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:50:43 np0005634532 python3.9[142768]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/neutron-metadata setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Mar  1 04:50:44 np0005634532 python3.9[142923]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Mar  1 04:50:44 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v240: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 1023 B/s wr, 3 op/s
Mar  1 04:50:44 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:50:44 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:50:44 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:50:44.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:50:44 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:50:44 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 04:50:44 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:50:44.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 04:50:44 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-haproxy-nfs-cephfs-compute-0-wdbjdw[99156]: [WARNING] 059/095044 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Mar  1 04:50:44 np0005634532 python3.9[143047]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1772358643.896244-853-232819888735710/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=e7087dcbd00c474c0b71f894339b789f0dd6e51a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:50:45 np0005634532 python3.9[143200]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/bootstrap setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Mar  1 04:50:46 np0005634532 python3.9[143354]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Mar  1 04:50:46 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 04:50:46 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v241: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Mar  1 04:50:46 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:50:46 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 04:50:46 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:50:46.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 04:50:46 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:50:46 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:50:46 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:50:46.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:50:46 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T09:50:46.989Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Mar  1 04:50:47 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-haproxy-nfs-cephfs-compute-0-wdbjdw[99156]: [WARNING] 059/095047 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Mar  1 04:50:47 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: ::ffff:192.168.122.100 - - [01/Mar/2026:09:50:47] "GET /metrics HTTP/1.1" 200 48333 "" "Prometheus/2.51.0"
Mar  1 04:50:47 np0005634532 ceph-mgr[76134]: [prometheus INFO cherrypy.access.140604152982112] ::ffff:192.168.122.100 - - [01/Mar/2026:09:50:47] "GET /metrics HTTP/1.1" 200 48333 "" "Prometheus/2.51.0"
Mar  1 04:50:47 np0005634532 python3.9[143479]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1772358645.7216663-921-23396145971311/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=e7087dcbd00c474c0b71f894339b789f0dd6e51a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:50:47 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Mar  1 04:50:47 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Mar  1 04:50:47 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Mar  1 04:50:47 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Mar  1 04:50:47 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Mar  1 04:50:47 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Mar  1 04:50:47 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Mar  1 04:50:47 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Mar  1 04:50:47 np0005634532 python3.9[143632]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/repo-setup setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Mar  1 04:50:48 np0005634532 python3.9[143787]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Mar  1 04:50:48 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v242: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Mar  1 04:50:48 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:50:48 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 04:50:48 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:50:48.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 04:50:48 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:50:48 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000024s ======
Mar  1 04:50:48 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:50:48.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Mar  1 04:50:48 np0005634532 python3.9[143911]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1772358647.9779475-988-217741094408832/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=e7087dcbd00c474c0b71f894339b789f0dd6e51a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:50:49 np0005634532 python3.9[144064]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Mar  1 04:50:50 np0005634532 python3.9[144219]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Mar  1 04:50:50 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v243: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Mar  1 04:50:50 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:50:50 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:50:50 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:50:50.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:50:50 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:50:50 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:50:50 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:50:50.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:50:50 np0005634532 python3.9[144343]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1772358649.8789344-1059-186553626888325/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=e7087dcbd00c474c0b71f894339b789f0dd6e51a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:50:51 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 04:50:51 np0005634532 systemd[1]: session-48.scope: Deactivated successfully.
Mar  1 04:50:51 np0005634532 systemd[1]: session-48.scope: Consumed 21.059s CPU time.
Mar  1 04:50:51 np0005634532 systemd-logind[832]: Session 48 logged out. Waiting for processes to exit.
Mar  1 04:50:51 np0005634532 systemd-logind[832]: Removed session 48.
Mar  1 04:50:52 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v244: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Mar  1 04:50:52 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:50:52 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:50:52 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:50:52.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:50:52 np0005634532 systemd[1]: ceph-437b1e74-f995-5d64-af1d-257ce01d77ab@nfs.cephfs.2.0.compute-0.ljexyw.service: Scheduled restart job, restart counter is at 3.
Mar  1 04:50:52 np0005634532 systemd[1]: Stopped Ceph nfs.cephfs.2.0.compute-0.ljexyw for 437b1e74-f995-5d64-af1d-257ce01d77ab.
Mar  1 04:50:52 np0005634532 systemd[1]: ceph-437b1e74-f995-5d64-af1d-257ce01d77ab@nfs.cephfs.2.0.compute-0.ljexyw.service: Consumed 1.296s CPU time.
Mar  1 04:50:52 np0005634532 systemd[1]: Starting Ceph nfs.cephfs.2.0.compute-0.ljexyw for 437b1e74-f995-5d64-af1d-257ce01d77ab...
Mar  1 04:50:52 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:50:52 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:50:52 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:50:52.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:50:52 np0005634532 podman[144445]: 2026-03-01 09:50:52.814379423 +0000 UTC m=+0.070861086 container create 3eadbc9629d082dd6fe26a9fea753d10da2f6d410b3e6589ecd326bd65cbc166 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Mar  1 04:50:52 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f347bb919fae3fb22042e6063f0707e029f8fd9bd2382821a8c79e9b5c0a6af3/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Mar  1 04:50:52 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f347bb919fae3fb22042e6063f0707e029f8fd9bd2382821a8c79e9b5c0a6af3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Mar  1 04:50:52 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f347bb919fae3fb22042e6063f0707e029f8fd9bd2382821a8c79e9b5c0a6af3/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Mar  1 04:50:52 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f347bb919fae3fb22042e6063f0707e029f8fd9bd2382821a8c79e9b5c0a6af3/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.2.0.compute-0.ljexyw-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Mar  1 04:50:52 np0005634532 podman[144445]: 2026-03-01 09:50:52.776373287 +0000 UTC m=+0.032854750 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Mar  1 04:50:52 np0005634532 podman[144445]: 2026-03-01 09:50:52.888256413 +0000 UTC m=+0.144737916 container init 3eadbc9629d082dd6fe26a9fea753d10da2f6d410b3e6589ecd326bd65cbc166 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Mar  1 04:50:52 np0005634532 podman[144445]: 2026-03-01 09:50:52.896532767 +0000 UTC m=+0.153014220 container start 3eadbc9629d082dd6fe26a9fea753d10da2f6d410b3e6589ecd326bd65cbc166 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Mar  1 04:50:52 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[144460]: 01/03/2026 09:50:52 : epoch 69a40bfc : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Mar  1 04:50:52 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[144460]: 01/03/2026 09:50:52 : epoch 69a40bfc : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Mar  1 04:50:52 np0005634532 bash[144445]: 3eadbc9629d082dd6fe26a9fea753d10da2f6d410b3e6589ecd326bd65cbc166
Mar  1 04:50:52 np0005634532 systemd[1]: Started Ceph nfs.cephfs.2.0.compute-0.ljexyw for 437b1e74-f995-5d64-af1d-257ce01d77ab.
Mar  1 04:50:52 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[144460]: 01/03/2026 09:50:52 : epoch 69a40bfc : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Mar  1 04:50:52 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[144460]: 01/03/2026 09:50:52 : epoch 69a40bfc : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Mar  1 04:50:52 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[144460]: 01/03/2026 09:50:52 : epoch 69a40bfc : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Mar  1 04:50:52 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[144460]: 01/03/2026 09:50:52 : epoch 69a40bfc : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Mar  1 04:50:52 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[144460]: 01/03/2026 09:50:52 : epoch 69a40bfc : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Mar  1 04:50:53 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[144460]: 01/03/2026 09:50:53 : epoch 69a40bfc : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Mar  1 04:50:54 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v245: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 852 B/s rd, 170 B/s wr, 1 op/s
Mar  1 04:50:54 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:50:54 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 04:50:54 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:50:54.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 04:50:54 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:50:54 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:50:54 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:50:54.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:50:55 np0005634532 ceph-mon[75825]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Mar  1 04:50:55 np0005634532 ceph-mon[75825]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 600.0 total, 600.0 interval#012Cumulative writes: 2722 writes, 12K keys, 2722 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.04 MB/s#012Cumulative WAL: 2722 writes, 2722 syncs, 1.00 writes per sync, written: 0.02 GB, 0.04 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 2722 writes, 12K keys, 2722 commit groups, 1.0 writes per commit group, ingest: 24.89 MB, 0.04 MB/s#012Interval WAL: 2722 writes, 2722 syncs, 1.00 writes per sync, written: 0.02 GB, 0.04 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0    105.3      0.20              0.05         5    0.041       0      0       0.0       0.0#012  L6      1/0   13.87 MB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   2.5    143.0    125.3      0.42              0.12         4    0.105     16K   1783       0.0       0.0#012 Sum      1/0   13.87 MB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   3.5     96.5    118.8      0.62              0.16         9    0.069     16K   1783       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   3.5     97.0    119.3      0.62              0.16         8    0.078     16K   1783       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   0.0    143.0    125.3      0.42              0.12         4    0.105     16K   1783       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0    106.6      0.20              0.05         4    0.050       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     18.2      0.00              0.00         1    0.003       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.0 total, 600.0 interval#012Flush(GB): cumulative 0.021, interval 0.021#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.07 GB write, 0.12 MB/s write, 0.06 GB read, 0.10 MB/s read, 0.6 seconds#012Interval compaction: 0.07 GB write, 0.12 MB/s write, 0.06 GB read, 0.10 MB/s read, 0.6 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x563d94b81350#2 capacity: 304.00 MB usage: 2.38 MB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 0 last_secs: 9.5e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(163,2.22 MB,0.730208%) FilterBlock(10,55.48 KB,0.0178237%) IndexBlock(10,111.23 KB,0.0357327%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Mar  1 04:50:56 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 04:50:56 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v246: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 596 B/s rd, 85 B/s wr, 0 op/s
Mar  1 04:50:56 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:50:56 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 04:50:56 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:50:56.501 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 04:50:56 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:50:56 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:50:56 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:50:56.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:50:56 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T09:50:56.989Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Mar  1 04:50:57 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: ::ffff:192.168.122.100 - - [01/Mar/2026:09:50:57] "GET /metrics HTTP/1.1" 200 48328 "" "Prometheus/2.51.0"
Mar  1 04:50:57 np0005634532 ceph-mgr[76134]: [prometheus INFO cherrypy.access.140604152982112] ::ffff:192.168.122.100 - - [01/Mar/2026:09:50:57] "GET /metrics HTTP/1.1" 200 48328 "" "Prometheus/2.51.0"
Mar  1 04:50:58 np0005634532 systemd-logind[832]: New session 49 of user zuul.
Mar  1 04:50:58 np0005634532 systemd[1]: Started Session 49 of User zuul.
Mar  1 04:50:58 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v247: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Mar  1 04:50:58 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:50:58 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:50:58 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:50:58.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:50:58 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:50:58 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 04:50:58 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:50:58.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 04:50:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[144460]: 01/03/2026 09:50:59 : epoch 69a40bfc : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Mar  1 04:50:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[144460]: 01/03/2026 09:50:59 : epoch 69a40bfc : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Mar  1 04:50:59 np0005634532 python3.9[144664]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:50:59 np0005634532 python3.9[144819]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Mar  1 04:51:00 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v248: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Mar  1 04:51:00 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:51:00 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:51:00 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:51:00.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:51:00 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:51:00 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000024s ======
Mar  1 04:51:00 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:51:00.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Mar  1 04:51:00 np0005634532 python3.9[144945]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1772358659.3492496-57-258898422756504/.source.conf _original_basename=ceph.conf follow=False checksum=8f00bbe2e76cca8a3eadd6c31ceeb65f407c7dff backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:51:01 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 04:51:01 np0005634532 python3.9[145098]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Mar  1 04:51:01 np0005634532 python3.9[145222]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1772358660.8743508-57-56414197318212/.source.keyring _original_basename=ceph.client.openstack.keyring follow=False checksum=735ad2809d0818ba20e2faa55e343c8cd4b2faa0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:51:02 np0005634532 systemd[1]: session-49.scope: Deactivated successfully.
Mar  1 04:51:02 np0005634532 systemd[1]: session-49.scope: Consumed 2.522s CPU time.
Mar  1 04:51:02 np0005634532 systemd-logind[832]: Session 49 logged out. Waiting for processes to exit.
Mar  1 04:51:02 np0005634532 systemd-logind[832]: Removed session 49.
Mar  1 04:51:02 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v249: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 596 B/s wr, 1 op/s
Mar  1 04:51:02 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Mar  1 04:51:02 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Mar  1 04:51:02 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:51:02 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:51:02 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:51:02.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:51:02 np0005634532 ceph-mon[75825]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #30. Immutable memtables: 0.
Mar  1 04:51:02 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-09:51:02.577872) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Mar  1 04:51:02 np0005634532 ceph-mon[75825]: rocksdb: [db/flush_job.cc:856] [default] [JOB 11] Flushing memtable with next log file: 30
Mar  1 04:51:02 np0005634532 ceph-mon[75825]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772358662577922, "job": 11, "event": "flush_started", "num_memtables": 1, "num_entries": 498, "num_deletes": 251, "total_data_size": 562607, "memory_usage": 571864, "flush_reason": "Manual Compaction"}
Mar  1 04:51:02 np0005634532 ceph-mon[75825]: rocksdb: [db/flush_job.cc:885] [default] [JOB 11] Level-0 flush table #31: started
Mar  1 04:51:02 np0005634532 ceph-mon[75825]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772358662583054, "cf_name": "default", "job": 11, "event": "table_file_creation", "file_number": 31, "file_size": 556969, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 12636, "largest_seqno": 13132, "table_properties": {"data_size": 554226, "index_size": 779, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 901, "raw_key_size": 6234, "raw_average_key_size": 18, "raw_value_size": 548834, "raw_average_value_size": 1586, "num_data_blocks": 35, "num_entries": 346, "num_filter_entries": 346, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772358631, "oldest_key_time": 1772358631, "file_creation_time": 1772358662, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d85a3bc5-3dc5-432f-9fab-fa926ce32d3d", "db_session_id": "FJWJGIYC2V5ZEQGX709M", "orig_file_number": 31, "seqno_to_time_mapping": "N/A"}}
Mar  1 04:51:02 np0005634532 ceph-mon[75825]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 11] Flush lasted 5230 microseconds, and 2231 cpu microseconds.
Mar  1 04:51:02 np0005634532 ceph-mon[75825]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Mar  1 04:51:02 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-09:51:02.583106) [db/flush_job.cc:967] [default] [JOB 11] Level-0 flush table #31: 556969 bytes OK
Mar  1 04:51:02 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-09:51:02.583131) [db/memtable_list.cc:519] [default] Level-0 commit table #31 started
Mar  1 04:51:02 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-09:51:02.584764) [db/memtable_list.cc:722] [default] Level-0 commit table #31: memtable #1 done
Mar  1 04:51:02 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-09:51:02.584839) EVENT_LOG_v1 {"time_micros": 1772358662584826, "job": 11, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Mar  1 04:51:02 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-09:51:02.584873) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Mar  1 04:51:02 np0005634532 ceph-mon[75825]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 11] Try to delete WAL files size 559770, prev total WAL file size 559770, number of live WAL files 2.
Mar  1 04:51:02 np0005634532 ceph-mon[75825]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000027.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Mar  1 04:51:02 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-09:51:02.585695) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300353032' seq:72057594037927935, type:22 .. '7061786F7300373534' seq:0, type:0; will stop at (end)
Mar  1 04:51:02 np0005634532 ceph-mon[75825]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 12] Compacting 1@0 + 1@6 files to L6, score -1.00
Mar  1 04:51:02 np0005634532 ceph-mon[75825]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 11 Base level 0, inputs: [31(543KB)], [29(13MB)]
Mar  1 04:51:02 np0005634532 ceph-mon[75825]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772358662585780, "job": 12, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [31], "files_L6": [29], "score": -1, "input_data_size": 15103412, "oldest_snapshot_seqno": -1}
Mar  1 04:51:02 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:51:02 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:51:02 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:51:02.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:51:02 np0005634532 ceph-mon[75825]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 12] Generated table #32: 4198 keys, 12364093 bytes, temperature: kUnknown
Mar  1 04:51:02 np0005634532 ceph-mon[75825]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772358662663038, "cf_name": "default", "job": 12, "event": "table_file_creation", "file_number": 32, "file_size": 12364093, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12333496, "index_size": 19031, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10501, "raw_key_size": 107591, "raw_average_key_size": 25, "raw_value_size": 12254163, "raw_average_value_size": 2919, "num_data_blocks": 805, "num_entries": 4198, "num_filter_entries": 4198, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772358052, "oldest_key_time": 0, "file_creation_time": 1772358662, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d85a3bc5-3dc5-432f-9fab-fa926ce32d3d", "db_session_id": "FJWJGIYC2V5ZEQGX709M", "orig_file_number": 32, "seqno_to_time_mapping": "N/A"}}
Mar  1 04:51:02 np0005634532 ceph-mon[75825]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Mar  1 04:51:02 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-09:51:02.663397) [db/compaction/compaction_job.cc:1663] [default] [JOB 12] Compacted 1@0 + 1@6 files to L6 => 12364093 bytes
Mar  1 04:51:02 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-09:51:02.665262) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 195.2 rd, 159.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.5, 13.9 +0.0 blob) out(11.8 +0.0 blob), read-write-amplify(49.3) write-amplify(22.2) OK, records in: 4708, records dropped: 510 output_compression: NoCompression
Mar  1 04:51:02 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-09:51:02.665292) EVENT_LOG_v1 {"time_micros": 1772358662665276, "job": 12, "event": "compaction_finished", "compaction_time_micros": 77383, "compaction_time_cpu_micros": 32093, "output_level": 6, "num_output_files": 1, "total_output_size": 12364093, "num_input_records": 4708, "num_output_records": 4198, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Mar  1 04:51:02 np0005634532 ceph-mon[75825]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000031.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Mar  1 04:51:02 np0005634532 ceph-mon[75825]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772358662665611, "job": 12, "event": "table_file_deletion", "file_number": 31}
Mar  1 04:51:02 np0005634532 ceph-mon[75825]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000029.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Mar  1 04:51:02 np0005634532 ceph-mon[75825]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772358662668295, "job": 12, "event": "table_file_deletion", "file_number": 29}
Mar  1 04:51:02 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-09:51:02.585569) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Mar  1 04:51:02 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-09:51:02.668427) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Mar  1 04:51:02 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-09:51:02.668435) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Mar  1 04:51:02 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-09:51:02.668438) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Mar  1 04:51:02 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-09:51:02.668440) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Mar  1 04:51:02 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-09:51:02.668442) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Mar  1 04:51:04 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v250: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 1023 B/s wr, 3 op/s
Mar  1 04:51:04 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:51:04 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:51:04 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:51:04.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:51:04 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:51:04 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 04:51:04 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:51:04.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 04:51:05 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[144460]: 01/03/2026 09:51:05 : epoch 69a40bfc : compute-0 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Mar  1 04:51:05 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[144460]: 01/03/2026 09:51:05 : epoch 69a40bfc : compute-0 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Mar  1 04:51:05 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[144460]: 01/03/2026 09:51:05 : epoch 69a40bfc : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Mar  1 04:51:05 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[144460]: 01/03/2026 09:51:05 : epoch 69a40bfc : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Mar  1 04:51:05 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[144460]: 01/03/2026 09:51:05 : epoch 69a40bfc : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Mar  1 04:51:05 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[144460]: 01/03/2026 09:51:05 : epoch 69a40bfc : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Mar  1 04:51:05 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[144460]: 01/03/2026 09:51:05 : epoch 69a40bfc : compute-0 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Mar  1 04:51:05 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[144460]: 01/03/2026 09:51:05 : epoch 69a40bfc : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Mar  1 04:51:05 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[144460]: 01/03/2026 09:51:05 : epoch 69a40bfc : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Mar  1 04:51:05 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[144460]: 01/03/2026 09:51:05 : epoch 69a40bfc : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Mar  1 04:51:05 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[144460]: 01/03/2026 09:51:05 : epoch 69a40bfc : compute-0 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Mar  1 04:51:05 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[144460]: 01/03/2026 09:51:05 : epoch 69a40bfc : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Mar  1 04:51:05 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[144460]: 01/03/2026 09:51:05 : epoch 69a40bfc : compute-0 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Mar  1 04:51:05 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[144460]: 01/03/2026 09:51:05 : epoch 69a40bfc : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Mar  1 04:51:05 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[144460]: 01/03/2026 09:51:05 : epoch 69a40bfc : compute-0 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Mar  1 04:51:05 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[144460]: 01/03/2026 09:51:05 : epoch 69a40bfc : compute-0 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Mar  1 04:51:05 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[144460]: 01/03/2026 09:51:05 : epoch 69a40bfc : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Mar  1 04:51:05 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[144460]: 01/03/2026 09:51:05 : epoch 69a40bfc : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Mar  1 04:51:05 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[144460]: 01/03/2026 09:51:05 : epoch 69a40bfc : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Mar  1 04:51:05 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[144460]: 01/03/2026 09:51:05 : epoch 69a40bfc : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Mar  1 04:51:05 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[144460]: 01/03/2026 09:51:05 : epoch 69a40bfc : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Mar  1 04:51:05 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[144460]: 01/03/2026 09:51:05 : epoch 69a40bfc : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Mar  1 04:51:05 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[144460]: 01/03/2026 09:51:05 : epoch 69a40bfc : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Mar  1 04:51:05 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[144460]: 01/03/2026 09:51:05 : epoch 69a40bfc : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Mar  1 04:51:05 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[144460]: 01/03/2026 09:51:05 : epoch 69a40bfc : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Mar  1 04:51:05 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[144460]: 01/03/2026 09:51:05 : epoch 69a40bfc : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Mar  1 04:51:05 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[144460]: 01/03/2026 09:51:05 : epoch 69a40bfc : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Mar  1 04:51:05 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[144460]: 01/03/2026 09:51:05 : epoch 69a40bfc : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6928000df0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:51:06 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 04:51:06 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v251: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 938 B/s wr, 2 op/s
Mar  1 04:51:06 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:51:06 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000024s ======
Mar  1 04:51:06 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:51:06.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Mar  1 04:51:06 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:51:06 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:51:06 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:51:06.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:51:06 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[144460]: 01/03/2026 09:51:06 : epoch 69a40bfc : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6920001d80 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:51:06 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T09:51:06.990Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Mar  1 04:51:06 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T09:51:06.991Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Mar  1 04:51:07 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: ::ffff:192.168.122.100 - - [01/Mar/2026:09:51:07] "GET /metrics HTTP/1.1" 200 48335 "" "Prometheus/2.51.0"
Mar  1 04:51:07 np0005634532 ceph-mgr[76134]: [prometheus INFO cherrypy.access.140604152982112] ::ffff:192.168.122.100 - - [01/Mar/2026:09:51:07] "GET /metrics HTTP/1.1" 200 48335 "" "Prometheus/2.51.0"
Mar  1 04:51:07 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[144460]: 01/03/2026 09:51:07 : epoch 69a40bfc : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6908000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:51:07 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[144460]: 01/03/2026 09:51:07 : epoch 69a40bfc : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f690c000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:51:07 np0005634532 systemd-logind[832]: New session 50 of user zuul.
Mar  1 04:51:07 np0005634532 systemd[1]: Started Session 50 of User zuul.
Mar  1 04:51:08 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v252: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 938 B/s wr, 2 op/s
Mar  1 04:51:08 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:51:08 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:51:08 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:51:08.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:51:08 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:51:08 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 04:51:08 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:51:08.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 04:51:08 np0005634532 python3.9[145425]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Mar  1 04:51:08 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[144460]: 01/03/2026 09:51:08 : epoch 69a40bfc : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6910000fa0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:51:09 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-haproxy-nfs-cephfs-compute-0-wdbjdw[99156]: [WARNING] 059/095109 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Mar  1 04:51:09 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[144460]: 01/03/2026 09:51:09 : epoch 69a40bfc : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69200028a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:51:09 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[144460]: 01/03/2026 09:51:09 : epoch 69a40bfc : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69080016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:51:09 np0005634532 python3.9[145583]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Mar  1 04:51:10 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v253: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 938 B/s wr, 3 op/s
Mar  1 04:51:10 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:51:10 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 04:51:10 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:51:10.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 04:51:10 np0005634532 python3.9[145737]: ansible-ansible.builtin.file Invoked with group=openvswitch owner=openvswitch path=/var/lib/openvswitch/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Mar  1 04:51:10 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:51:10 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 04:51:10 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:51:10.608 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 04:51:10 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[144460]: 01/03/2026 09:51:10 : epoch 69a40bfc : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f690c0016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:51:11 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[144460]: 01/03/2026 09:51:11 : epoch 69a40bfc : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6910001ac0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:51:11 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 04:51:11 np0005634532 python3.9[145910]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Mar  1 04:51:11 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[144460]: 01/03/2026 09:51:11 : epoch 69a40bfc : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69200028a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:51:12 np0005634532 python3.9[146069]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Mar  1 04:51:12 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v254: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Mar  1 04:51:12 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:51:12 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:51:12 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:51:12.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:51:12 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:51:12 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:51:12 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:51:12.611 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:51:12 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[144460]: 01/03/2026 09:51:12 : epoch 69a40bfc : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69080016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:51:13 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[144460]: 01/03/2026 09:51:13 : epoch 69a40bfc : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f690c0016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:51:13 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[144460]: 01/03/2026 09:51:13 : epoch 69a40bfc : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6910001ac0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:51:14 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v255: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 426 B/s wr, 2 op/s
Mar  1 04:51:14 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:51:14 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 04:51:14 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:51:14.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 04:51:14 np0005634532 dbus-broker-launch[823]: avc:  op=load_policy lsm=selinux seqno=9 res=1
Mar  1 04:51:14 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:51:14 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 04:51:14 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:51:14.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 04:51:14 np0005634532 python3.9[146229]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Mar  1 04:51:14 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[144460]: 01/03/2026 09:51:14 : epoch 69a40bfc : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69200028a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:51:15 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[144460]: 01/03/2026 09:51:15 : epoch 69a40bfc : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69080016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:51:15 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[144460]: 01/03/2026 09:51:15 : epoch 69a40bfc : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f690c0016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:51:15 np0005634532 python3.9[146314]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Mar  1 04:51:16 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 04:51:16 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v256: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Mar  1 04:51:16 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:51:16 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 04:51:16 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:51:16.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 04:51:16 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:51:16 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:51:16 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:51:16.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:51:16 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T09:51:16.992Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
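The dispatcher error above shows alertmanager timing out while POSTing the dashboard webhook on compute-1 and compute-2. A direct probe of the same URL (taken verbatim from the log; the empty JSON body is an assumption) separates "endpoint down" from "endpoint slow":

    import urllib.request

    URL = "http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver"

    # POST a minimal body to the receiver alertmanager could not reach.
    req = urllib.request.Request(
        URL, data=b"{}", headers={"Content-Type": "application/json"}, method="POST"
    )
    try:
        with urllib.request.urlopen(req, timeout=5) as resp:
            print("reachable:", resp.status)
    except OSError as exc:  # URLError/timeout, matching 'context deadline exceeded'
        print("unreachable:", exc)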
Mar  1 04:51:16 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[144460]: 01/03/2026 09:51:16 : epoch 69a40bfc : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6910001ac0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:51:17 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: ::ffff:192.168.122.100 - - [01/Mar/2026:09:51:17] "GET /metrics HTTP/1.1" 200 48335 "" "Prometheus/2.51.0"
Mar  1 04:51:17 np0005634532 ceph-mgr[76134]: [prometheus INFO cherrypy.access.140604152982112] ::ffff:192.168.122.100 - - [01/Mar/2026:09:51:17] "GET /metrics HTTP/1.1" 200 48335 "" "Prometheus/2.51.0"
Mar  1 04:51:17 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[144460]: 01/03/2026 09:51:17 : epoch 69a40bfc : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69200028a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:51:17 np0005634532 ceph-mgr[76134]: [balancer INFO root] Optimize plan auto_2026-03-01_09:51:17
Mar  1 04:51:17 np0005634532 ceph-mgr[76134]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Mar  1 04:51:17 np0005634532 ceph-mgr[76134]: [balancer INFO root] do_upmap
Mar  1 04:51:17 np0005634532 ceph-mgr[76134]: [balancer INFO root] pools ['backups', 'vms', '.mgr', 'default.rgw.meta', '.nfs', 'default.rgw.control', '.rgw.root', 'cephfs.cephfs.meta', 'images', 'default.rgw.log', 'cephfs.cephfs.data', 'volumes']
Mar  1 04:51:17 np0005634532 ceph-mgr[76134]: [balancer INFO root] prepared 0/10 upmap changes
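This balancer pass ran in upmap mode with a 5% max-misplaced budget and prepared 0 of an allowed 10 upmap changes, i.e. the 353 PGs already place evenly. The same state is readable from the CLI; a sketch assuming an admin keyring on the node (the -f json flag is the standard mgr output formatter):

    import json
    import subprocess

    # 'ceph balancer status' reports mode, whether the balancer is active,
    # and the last optimization, matching the mgr log lines above.
    raw = subprocess.check_output(["ceph", "balancer", "status", "-f", "json"])
    status = json.loads(raw)
    print(status.get("mode"), status.get("active"))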
Mar  1 04:51:17 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Mar  1 04:51:17 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Mar  1 04:51:17 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Mar  1 04:51:17 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Mar  1 04:51:17 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[144460]: 01/03/2026 09:51:17 : epoch 69a40bfc : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6908002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:51:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] _maybe_adjust
Mar  1 04:51:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 04:51:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Mar  1 04:51:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 04:51:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Mar  1 04:51:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 04:51:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Mar  1 04:51:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 04:51:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Mar  1 04:51:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 04:51:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Mar  1 04:51:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 04:51:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Mar  1 04:51:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 04:51:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Mar  1 04:51:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 04:51:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Mar  1 04:51:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 04:51:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Mar  1 04:51:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 04:51:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Mar  1 04:51:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 04:51:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Mar  1 04:51:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 04:51:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
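The autoscaler rows above are internally consistent: every "pg target" equals the pool's capacity ratio times its bias times 300, which is what 3 OSDs at the default mon_target_pg_per_osd of 100 would give (both the OSD count and the default are inferences; neither is logged here). A quick check against three of the logged rows:

    # pg_target = capacity_ratio * bias * (num_osds * mon_target_pg_per_osd)
    SCALE = 3 * 100  # assumed: 3 OSDs, default mon_target_pg_per_osd = 100

    # (capacity ratio, bias, logged pg target) copied from the mgr log above
    pools = {
        ".mgr":               (7.185749983720779e-06, 1.0, 0.0021557249951162337),
        "cephfs.cephfs.meta": (5.087256625643029e-07, 4.0, 0.0006104707950771635),
        "default.rgw.log":    (2.1620840658982875e-06, 1.0, 0.0006486252197694863),
    }
    for name, (ratio, bias, logged) in pools.items():
        assert abs(ratio * bias * SCALE - logged) < 1e-12, name
    print("all logged pg targets reproduce with scale =", SCALE)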
Mar  1 04:51:17 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Mar  1 04:51:17 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: vms, start_after=
Mar  1 04:51:17 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Mar  1 04:51:17 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Mar  1 04:51:17 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: volumes, start_after=
Mar  1 04:51:17 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: backups, start_after=
Mar  1 04:51:17 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: images, start_after=
Mar  1 04:51:17 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Mar  1 04:51:17 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Mar  1 04:51:17 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Mar  1 04:51:17 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: vms, start_after=
Mar  1 04:51:17 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: volumes, start_after=
Mar  1 04:51:17 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: backups, start_after=
Mar  1 04:51:17 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: images, start_after=
Mar  1 04:51:17 np0005634532 python3.9[146470]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
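Between the mgr chatter, the EDPM run installs and starts Open vSwitch: the ansible.legacy.dnf task at 04:51:15 pulls the package and the ansible.builtin.systemd task above enables and starts the unit. Outside Ansible the pair reduces to two commands (a sketch; must run as root):

    import subprocess

    # Equivalent of the dnf + systemd tasks: install the package, then
    # enable and start the service in one step.
    subprocess.run(["dnf", "-y", "install", "openvswitch"], check=True)
    subprocess.run(["systemctl", "enable", "--now", "openvswitch.service"], check=True)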
Mar  1 04:51:18 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v257: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Mar  1 04:51:18 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:51:18 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 04:51:18 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:51:18.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 04:51:18 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:51:18 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:51:18 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:51:18.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:51:18 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[144460]: 01/03/2026 09:51:18 : epoch 69a40bfc : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f690c002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:51:19 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[144460]: 01/03/2026 09:51:19 : epoch 69a40bfc : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6910002f50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:51:19 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[144460]: 01/03/2026 09:51:19 : epoch 69a40bfc : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69200028a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:51:19 np0005634532 python3[146630]: ansible-osp.edpm.edpm_nftables_snippet Invoked with content=- rule_name: 118 neutron vxlan networks#012  rule:#012    proto: udp#012    dport: 4789#012- rule_name: 119 neutron geneve networks#012  rule:#012    proto: udp#012    dport: 6081#012    state: ["UNTRACKED"]#012- rule_name: 120 neutron geneve networks no conntrack#012  rule:#012    proto: udp#012    dport: 6081#012    table: raw#012    chain: OUTPUT#012    jump: NOTRACK#012    action: append#012    state: []#012- rule_name: 121 neutron geneve networks no conntrack#012  rule:#012    proto: udp#012    dport: 6081#012    table: raw#012    chain: PREROUTING#012    jump: NOTRACK#012    action: append#012    state: []#012 dest=/var/lib/edpm-config/firewall/ovn.yaml state=present
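The #012 runs in the snippet content above are not garbage: rsyslog escapes embedded control characters as #ooo, so #012 (octal 12, i.e. '\n') marks where a multi-line YAML rule list was flattened onto one journal line. Undoing it is a one-line transform:

    # rsyslog-style control-character escaping: '#012' is octal 012 = '\n'.
    flattened = ("- rule_name: 118 neutron vxlan networks#012  rule:"
                 "#012    proto: udp#012    dport: 4789")
    print(flattened.replace("#012", "\n"))  # recovers the original YAML lines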
Mar  1 04:51:20 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v258: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Mar  1 04:51:20 np0005634532 python3.9[146785]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:51:20 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:51:20 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:51:20 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:51:20.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:51:20 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:51:20 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 04:51:20 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:51:20.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 04:51:21 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[144460]: 01/03/2026 09:51:21 : epoch 69a40bfc : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6908002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:51:21 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[144460]: 01/03/2026 09:51:21 : epoch 69a40bfc : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f690c002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:51:21 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 04:51:21 np0005634532 python3.9[146938]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Mar  1 04:51:21 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[144460]: 01/03/2026 09:51:21 : epoch 69a40bfc : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6910002f50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:51:21 np0005634532 python3.9[147017]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:51:22 np0005634532 python3.9[147172]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Mar  1 04:51:22 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v259: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Mar  1 04:51:22 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:51:22 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:51:22 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:51:22.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:51:22 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:51:22 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 04:51:22 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:51:22.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 04:51:22 np0005634532 python3.9[147251]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.k426gukp recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:51:23 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[144460]: 01/03/2026 09:51:23 : epoch 69a40bfc : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69200028a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:51:23 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[144460]: 01/03/2026 09:51:23 : epoch 69a40bfc : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6908002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:51:23 np0005634532 python3.9[147404]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Mar  1 04:51:23 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[144460]: 01/03/2026 09:51:23 : epoch 69a40bfc : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f690c002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:51:23 np0005634532 python3.9[147483]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:51:24 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v260: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Mar  1 04:51:24 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:51:24 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:51:24 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:51:24.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:51:24 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:51:24 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 04:51:24 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:51:24.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 04:51:24 np0005634532 python3.9[147638]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
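The nft -j list ruleset run above collects the firewall state in libnftables JSON: one object whose "nftables" array holds single-key entries (metainfo, table, chain, rule). A sketch that summarizes such a dump:

    import json
    import subprocess
    from collections import Counter

    # Parse the same JSON ruleset dump the Ansible command task captured.
    ruleset = json.loads(subprocess.check_output(["nft", "-j", "list", "ruleset"]))
    kinds = Counter(next(iter(entry)) for entry in ruleset["nftables"])
    print(dict(kinds))  # e.g. {'metainfo': 1, 'table': 2, 'chain': 9, 'rule': 40}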
Mar  1 04:51:25 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[144460]: 01/03/2026 09:51:25 : epoch 69a40bfc : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6910002f50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:51:25 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[144460]: 01/03/2026 09:51:25 : epoch 69a40bfc : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69200028a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:51:25 np0005634532 python3[147792]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Mar  1 04:51:25 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[144460]: 01/03/2026 09:51:25 : epoch 69a40bfc : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6908003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:51:26 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 04:51:26 np0005634532 python3.9[147995]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Mar  1 04:51:26 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Mar  1 04:51:26 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 04:51:26 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Mar  1 04:51:26 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 04:51:26 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v261: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Mar  1 04:51:26 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:51:26 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 04:51:26 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:51:26.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 04:51:26 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:51:26 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:51:26 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:51:26.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:51:26 np0005634532 python3.9[148193]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1772358685.7256153-426-122843964393476/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:51:26 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Mar  1 04:51:26 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Mar  1 04:51:26 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Mar  1 04:51:26 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Mar  1 04:51:26 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Mar  1 04:51:26 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 04:51:26 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Mar  1 04:51:26 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 04:51:26 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Mar  1 04:51:26 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Mar  1 04:51:26 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Mar  1 04:51:26 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Mar  1 04:51:26 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Mar  1 04:51:26 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
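This burst of handle_command/audit pairs is the cephadm mgr refreshing its host state through the monitor's mon_command interface: config-key writes for the device cache and service specs, "config generate-minimal-conf", and "auth get" for the admin and bootstrap-osd keyrings. The same interface is scriptable through librados; a sketch assuming /etc/ceph/ceph.conf and an admin keyring:

    import json
    import rados

    # Replay one of the logged commands through the mon_command interface.
    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
    cluster.connect()
    try:
        cmd = json.dumps({"prefix": "config generate-minimal-conf"})
        ret, outbuf, outs = cluster.mon_command(cmd, b"")
        print(outbuf.decode())  # the minimal ceph.conf the mgr asked for above
    finally:
        cluster.shutdown()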
Mar  1 04:51:26 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T09:51:26.993Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Mar  1 04:51:27 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[144460]: 01/03/2026 09:51:27 : epoch 69a40bfc : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6908003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:51:27 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: ::ffff:192.168.122.100 - - [01/Mar/2026:09:51:27] "GET /metrics HTTP/1.1" 200 48338 "" "Prometheus/2.51.0"
Mar  1 04:51:27 np0005634532 ceph-mgr[76134]: [prometheus INFO cherrypy.access.140604152982112] ::ffff:192.168.122.100 - - [01/Mar/2026:09:51:27] "GET /metrics HTTP/1.1" 200 48338 "" "Prometheus/2.51.0"
Mar  1 04:51:27 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[144460]: 01/03/2026 09:51:27 : epoch 69a40bfc : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6910004050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:51:27 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 04:51:27 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 04:51:27 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Mar  1 04:51:27 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 04:51:27 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 04:51:27 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Mar  1 04:51:27 np0005634532 podman[148469]: 2026-03-01 09:51:27.479841175 +0000 UTC m=+0.054599180 container create afa77bdfaaaffcdceb2d05a1fe539aa43bfe96499a76134436d00fd25f21bff6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_bose, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Mar  1 04:51:27 np0005634532 systemd[1]: Started libpod-conmon-afa77bdfaaaffcdceb2d05a1fe539aa43bfe96499a76134436d00fd25f21bff6.scope.
Mar  1 04:51:27 np0005634532 podman[148469]: 2026-03-01 09:51:27.451947541 +0000 UTC m=+0.026705526 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Mar  1 04:51:27 np0005634532 python3.9[148461]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Mar  1 04:51:27 np0005634532 systemd[1]: Started libcrun container.
Mar  1 04:51:27 np0005634532 podman[148469]: 2026-03-01 09:51:27.583836923 +0000 UTC m=+0.158594938 container init afa77bdfaaaffcdceb2d05a1fe539aa43bfe96499a76134436d00fd25f21bff6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_bose, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Mar  1 04:51:27 np0005634532 podman[148469]: 2026-03-01 09:51:27.590902419 +0000 UTC m=+0.165660384 container start afa77bdfaaaffcdceb2d05a1fe539aa43bfe96499a76134436d00fd25f21bff6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_bose, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Mar  1 04:51:27 np0005634532 podman[148469]: 2026-03-01 09:51:27.594680293 +0000 UTC m=+0.169438338 container attach afa77bdfaaaffcdceb2d05a1fe539aa43bfe96499a76134436d00fd25f21bff6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_bose, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid)
Mar  1 04:51:27 np0005634532 kind_bose[148485]: 167 167
Mar  1 04:51:27 np0005634532 systemd[1]: libpod-afa77bdfaaaffcdceb2d05a1fe539aa43bfe96499a76134436d00fd25f21bff6.scope: Deactivated successfully.
Mar  1 04:51:27 np0005634532 podman[148469]: 2026-03-01 09:51:27.596333894 +0000 UTC m=+0.171091859 container died afa77bdfaaaffcdceb2d05a1fe539aa43bfe96499a76134436d00fd25f21bff6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_bose, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Mar  1 04:51:27 np0005634532 systemd[1]: var-lib-containers-storage-overlay-12d5d9e1abb702cfac38965f430d552bc10d1cdb8798689afc0275859d6e8ac8-merged.mount: Deactivated successfully.
Mar  1 04:51:27 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[144460]: 01/03/2026 09:51:27 : epoch 69a40bfc : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69200028a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:51:27 np0005634532 podman[148469]: 2026-03-01 09:51:27.640946725 +0000 UTC m=+0.215704690 container remove afa77bdfaaaffcdceb2d05a1fe539aa43bfe96499a76134436d00fd25f21bff6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_bose, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Mar  1 04:51:27 np0005634532 systemd[1]: libpod-conmon-afa77bdfaaaffcdceb2d05a1fe539aa43bfe96499a76134436d00fd25f21bff6.scope: Deactivated successfully.
Mar  1 04:51:27 np0005634532 podman[148561]: 2026-03-01 09:51:27.754509301 +0000 UTC m=+0.041099334 container create ac30ea9347b9cd123fd75dcc033d3380c5fbaaa1d7469156fda498061bb59481 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_mayer, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Mar  1 04:51:27 np0005634532 systemd[1]: Started libpod-conmon-ac30ea9347b9cd123fd75dcc033d3380c5fbaaa1d7469156fda498061bb59481.scope.
Mar  1 04:51:27 np0005634532 systemd[1]: Started libcrun container.
Mar  1 04:51:27 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ef7be6a7e4a0119c40de297eb9bde46a4a1f418e8c2e02da22a2fa73fd1969f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Mar  1 04:51:27 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ef7be6a7e4a0119c40de297eb9bde46a4a1f418e8c2e02da22a2fa73fd1969f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Mar  1 04:51:27 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ef7be6a7e4a0119c40de297eb9bde46a4a1f418e8c2e02da22a2fa73fd1969f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Mar  1 04:51:27 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ef7be6a7e4a0119c40de297eb9bde46a4a1f418e8c2e02da22a2fa73fd1969f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Mar  1 04:51:27 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ef7be6a7e4a0119c40de297eb9bde46a4a1f418e8c2e02da22a2fa73fd1969f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Mar  1 04:51:27 np0005634532 podman[148561]: 2026-03-01 09:51:27.737371435 +0000 UTC m=+0.023961448 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Mar  1 04:51:27 np0005634532 podman[148561]: 2026-03-01 09:51:27.845397373 +0000 UTC m=+0.131987436 container init ac30ea9347b9cd123fd75dcc033d3380c5fbaaa1d7469156fda498061bb59481 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_mayer, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Mar  1 04:51:27 np0005634532 podman[148561]: 2026-03-01 09:51:27.852812048 +0000 UTC m=+0.139402081 container start ac30ea9347b9cd123fd75dcc033d3380c5fbaaa1d7469156fda498061bb59481 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_mayer, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Mar  1 04:51:27 np0005634532 podman[148561]: 2026-03-01 09:51:27.856483679 +0000 UTC m=+0.143073712 container attach ac30ea9347b9cd123fd75dcc033d3380c5fbaaa1d7469156fda498061bb59481 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_mayer, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Mar  1 04:51:28 np0005634532 python3.9[148659]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1772358687.040458-471-72985652230529/.source.nft follow=False _original_basename=jump-chain.j2 checksum=ac8dea350c18f51f54d48dacc09613cda4c5540c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:51:28 np0005634532 optimistic_mayer[148601]: --> passed data devices: 0 physical, 1 LVM
Mar  1 04:51:28 np0005634532 optimistic_mayer[148601]: --> All data devices are unavailable
Mar  1 04:51:28 np0005634532 systemd[1]: libpod-ac30ea9347b9cd123fd75dcc033d3380c5fbaaa1d7469156fda498061bb59481.scope: Deactivated successfully.
Mar  1 04:51:28 np0005634532 podman[148695]: 2026-03-01 09:51:28.210182342 +0000 UTC m=+0.025940936 container died ac30ea9347b9cd123fd75dcc033d3380c5fbaaa1d7469156fda498061bb59481 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_mayer, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Mar  1 04:51:28 np0005634532 systemd[1]: var-lib-containers-storage-overlay-9ef7be6a7e4a0119c40de297eb9bde46a4a1f418e8c2e02da22a2fa73fd1969f-merged.mount: Deactivated successfully.
Mar  1 04:51:28 np0005634532 podman[148695]: 2026-03-01 09:51:28.246300431 +0000 UTC m=+0.062059025 container remove ac30ea9347b9cd123fd75dcc033d3380c5fbaaa1d7469156fda498061bb59481 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_mayer, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Mar  1 04:51:28 np0005634532 systemd[1]: libpod-conmon-ac30ea9347b9cd123fd75dcc033d3380c5fbaaa1d7469156fda498061bb59481.scope: Deactivated successfully.
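The short-lived kind_bose and optimistic_mayer containers are cephadm's periodic device scan: it runs throwaway ceph containers from the pinned image, and optimistic_mayer's "passed data devices: 0 physical, 1 LVM / All data devices are unavailable" is ceph-volume reporting that no disk is free for new OSDs. An equivalent manual scan (image digest taken from the log; --privileged and the /dev mount are assumptions about what the scan needs):

    import json
    import subprocess

    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec")

    # Run a disposable ceph-volume inventory like the containers above.
    out = subprocess.check_output([
        "podman", "run", "--rm", "--privileged", "-v", "/dev:/dev",
        IMAGE, "ceph-volume", "inventory", "--format", "json",
    ])
    for dev in json.loads(out):
        print(dev["path"], dev["available"], dev.get("rejected_reasons"))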
Mar  1 04:51:28 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v262: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Mar  1 04:51:28 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:51:28 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:51:28 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:51:28.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:51:28 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:51:28 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 04:51:28 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:51:28.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 04:51:28 np0005634532 python3.9[148890]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Mar  1 04:51:28 np0005634532 podman[148931]: 2026-03-01 09:51:28.736113412 +0000 UTC m=+0.037100684 container create e94f610cd38c4f1f643816437a66d817b86de5c7ede63de81299efa639a79d55 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_yalow, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Mar  1 04:51:28 np0005634532 systemd[1]: Started libpod-conmon-e94f610cd38c4f1f643816437a66d817b86de5c7ede63de81299efa639a79d55.scope.
Mar  1 04:51:28 np0005634532 systemd[1]: Started libcrun container.
Mar  1 04:51:28 np0005634532 podman[148931]: 2026-03-01 09:51:28.80310066 +0000 UTC m=+0.104088012 container init e94f610cd38c4f1f643816437a66d817b86de5c7ede63de81299efa639a79d55 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_yalow, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Mar  1 04:51:28 np0005634532 podman[148931]: 2026-03-01 09:51:28.809892259 +0000 UTC m=+0.110879531 container start e94f610cd38c4f1f643816437a66d817b86de5c7ede63de81299efa639a79d55 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_yalow, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Mar  1 04:51:28 np0005634532 podman[148931]: 2026-03-01 09:51:28.813688413 +0000 UTC m=+0.114675775 container attach e94f610cd38c4f1f643816437a66d817b86de5c7ede63de81299efa639a79d55 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_yalow, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Mar  1 04:51:28 np0005634532 reverent_yalow[148973]: 167 167
Mar  1 04:51:28 np0005634532 systemd[1]: libpod-e94f610cd38c4f1f643816437a66d817b86de5c7ede63de81299efa639a79d55.scope: Deactivated successfully.
Mar  1 04:51:28 np0005634532 podman[148931]: 2026-03-01 09:51:28.815515909 +0000 UTC m=+0.116503181 container died e94f610cd38c4f1f643816437a66d817b86de5c7ede63de81299efa639a79d55 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_yalow, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Mar  1 04:51:28 np0005634532 podman[148931]: 2026-03-01 09:51:28.719453988 +0000 UTC m=+0.020441290 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Mar  1 04:51:28 np0005634532 systemd[1]: var-lib-containers-storage-overlay-b775c786f6a1c51bcab8e84aba92367f0a0142ef425515c6566a35fdfe5f4df0-merged.mount: Deactivated successfully.
Mar  1 04:51:28 np0005634532 podman[148931]: 2026-03-01 09:51:28.850975861 +0000 UTC m=+0.151963133 container remove e94f610cd38c4f1f643816437a66d817b86de5c7ede63de81299efa639a79d55 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_yalow, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Mar  1 04:51:28 np0005634532 systemd[1]: libpod-conmon-e94f610cd38c4f1f643816437a66d817b86de5c7ede63de81299efa639a79d55.scope: Deactivated successfully.
Mar  1 04:51:28 np0005634532 podman[149067]: 2026-03-01 09:51:28.974118515 +0000 UTC m=+0.045601675 container create ae6649f709291fb136800e363c89f061a842b5281b56aaa41c0b415a98a3aeb7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_carver, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Mar  1 04:51:29 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[144460]: 01/03/2026 09:51:29 : epoch 69a40bfc : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69200028a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:51:29 np0005634532 systemd[1]: Started libpod-conmon-ae6649f709291fb136800e363c89f061a842b5281b56aaa41c0b415a98a3aeb7.scope.
Mar  1 04:51:29 np0005634532 systemd[1]: Started libcrun container.
Mar  1 04:51:29 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/243710681cd6fdbff276d002569d5eaa16d8c2b33f0aab301c17e772d7d251d8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Mar  1 04:51:29 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/243710681cd6fdbff276d002569d5eaa16d8c2b33f0aab301c17e772d7d251d8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Mar  1 04:51:29 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/243710681cd6fdbff276d002569d5eaa16d8c2b33f0aab301c17e772d7d251d8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Mar  1 04:51:29 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/243710681cd6fdbff276d002569d5eaa16d8c2b33f0aab301c17e772d7d251d8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Mar  1 04:51:29 np0005634532 podman[149067]: 2026-03-01 09:51:28.956269132 +0000 UTC m=+0.027752322 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Mar  1 04:51:29 np0005634532 podman[149067]: 2026-03-01 09:51:29.060388292 +0000 UTC m=+0.131871512 container init ae6649f709291fb136800e363c89f061a842b5281b56aaa41c0b415a98a3aeb7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_carver, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Mar  1 04:51:29 np0005634532 podman[149067]: 2026-03-01 09:51:29.071289944 +0000 UTC m=+0.142773154 container start ae6649f709291fb136800e363c89f061a842b5281b56aaa41c0b415a98a3aeb7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_carver, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Mar  1 04:51:29 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[144460]: 01/03/2026 09:51:29 : epoch 69a40bfc : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f690c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:51:29 np0005634532 podman[149067]: 2026-03-01 09:51:29.080905613 +0000 UTC m=+0.152388833 container attach ae6649f709291fb136800e363c89f061a842b5281b56aaa41c0b415a98a3aeb7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_carver, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Mar  1 04:51:29 np0005634532 python3.9[149118]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1772358688.3216286-516-274487793928759/.source.nft follow=False _original_basename=flush-chain.j2 checksum=4d3ffec49c8eb1a9b80d2f1e8cd64070063a87b4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:51:29 np0005634532 flamboyant_carver[149115]: {
Mar  1 04:51:29 np0005634532 flamboyant_carver[149115]:    "0": [
Mar  1 04:51:29 np0005634532 flamboyant_carver[149115]:        {
Mar  1 04:51:29 np0005634532 flamboyant_carver[149115]:            "devices": [
Mar  1 04:51:29 np0005634532 flamboyant_carver[149115]:                "/dev/loop3"
Mar  1 04:51:29 np0005634532 flamboyant_carver[149115]:            ],
Mar  1 04:51:29 np0005634532 flamboyant_carver[149115]:            "lv_name": "ceph_lv0",
Mar  1 04:51:29 np0005634532 flamboyant_carver[149115]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Mar  1 04:51:29 np0005634532 flamboyant_carver[149115]:            "lv_size": "21470642176",
Mar  1 04:51:29 np0005634532 flamboyant_carver[149115]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=0RN1y3-WDwf-vmRn-3Uec-9cdU-WGcX-8Z6LEG,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=437b1e74-f995-5d64-af1d-257ce01d77ab,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=e5da778e-73b7-4ea1-8a91-750fe3f6aa68,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Mar  1 04:51:29 np0005634532 flamboyant_carver[149115]:            "lv_uuid": "0RN1y3-WDwf-vmRn-3Uec-9cdU-WGcX-8Z6LEG",
Mar  1 04:51:29 np0005634532 flamboyant_carver[149115]:            "name": "ceph_lv0",
Mar  1 04:51:29 np0005634532 flamboyant_carver[149115]:            "path": "/dev/ceph_vg0/ceph_lv0",
Mar  1 04:51:29 np0005634532 flamboyant_carver[149115]:            "tags": {
Mar  1 04:51:29 np0005634532 flamboyant_carver[149115]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Mar  1 04:51:29 np0005634532 flamboyant_carver[149115]:                "ceph.block_uuid": "0RN1y3-WDwf-vmRn-3Uec-9cdU-WGcX-8Z6LEG",
Mar  1 04:51:29 np0005634532 flamboyant_carver[149115]:                "ceph.cephx_lockbox_secret": "",
Mar  1 04:51:29 np0005634532 flamboyant_carver[149115]:                "ceph.cluster_fsid": "437b1e74-f995-5d64-af1d-257ce01d77ab",
Mar  1 04:51:29 np0005634532 flamboyant_carver[149115]:                "ceph.cluster_name": "ceph",
Mar  1 04:51:29 np0005634532 flamboyant_carver[149115]:                "ceph.crush_device_class": "",
Mar  1 04:51:29 np0005634532 flamboyant_carver[149115]:                "ceph.encrypted": "0",
Mar  1 04:51:29 np0005634532 flamboyant_carver[149115]:                "ceph.osd_fsid": "e5da778e-73b7-4ea1-8a91-750fe3f6aa68",
Mar  1 04:51:29 np0005634532 flamboyant_carver[149115]:                "ceph.osd_id": "0",
Mar  1 04:51:29 np0005634532 flamboyant_carver[149115]:                "ceph.osdspec_affinity": "default_drive_group",
Mar  1 04:51:29 np0005634532 flamboyant_carver[149115]:                "ceph.type": "block",
Mar  1 04:51:29 np0005634532 flamboyant_carver[149115]:                "ceph.vdo": "0",
Mar  1 04:51:29 np0005634532 flamboyant_carver[149115]:                "ceph.with_tpm": "0"
Mar  1 04:51:29 np0005634532 flamboyant_carver[149115]:            },
Mar  1 04:51:29 np0005634532 flamboyant_carver[149115]:            "type": "block",
Mar  1 04:51:29 np0005634532 flamboyant_carver[149115]:            "vg_name": "ceph_vg0"
Mar  1 04:51:29 np0005634532 flamboyant_carver[149115]:        }
Mar  1 04:51:29 np0005634532 flamboyant_carver[149115]:    ]
Mar  1 04:51:29 np0005634532 flamboyant_carver[149115]: }
Mar  1 04:51:29 np0005634532 systemd[1]: libpod-ae6649f709291fb136800e363c89f061a842b5281b56aaa41c0b415a98a3aeb7.scope: Deactivated successfully.
Mar  1 04:51:29 np0005634532 conmon[149115]: conmon ae6649f709291fb13680 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ae6649f709291fb136800e363c89f061a842b5281b56aaa41c0b415a98a3aeb7.scope/container/memory.events
Mar  1 04:51:29 np0005634532 podman[149067]: 2026-03-01 09:51:29.415113141 +0000 UTC m=+0.486596321 container died ae6649f709291fb136800e363c89f061a842b5281b56aaa41c0b415a98a3aeb7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_carver, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Mar  1 04:51:29 np0005634532 systemd[1]: var-lib-containers-storage-overlay-243710681cd6fdbff276d002569d5eaa16d8c2b33f0aab301c17e772d7d251d8-merged.mount: Deactivated successfully.
Mar  1 04:51:29 np0005634532 podman[149067]: 2026-03-01 09:51:29.460448339 +0000 UTC m=+0.531931529 container remove ae6649f709291fb136800e363c89f061a842b5281b56aaa41c0b415a98a3aeb7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_carver, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Mar  1 04:51:29 np0005634532 systemd[1]: libpod-conmon-ae6649f709291fb136800e363c89f061a842b5281b56aaa41c0b415a98a3aeb7.scope: Deactivated successfully.
Mar  1 04:51:29 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[144460]: 01/03/2026 09:51:29 : epoch 69a40bfc : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6910004050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:51:30 np0005634532 podman[149353]: 2026-03-01 09:51:30.03063351 +0000 UTC m=+0.050265162 container create eb196ef5c09e5395f9609d6978fd47da42f98a32e3d7a9da3a37c1cfee9fc7a8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_hermann, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Mar  1 04:51:30 np0005634532 systemd[1]: Started libpod-conmon-eb196ef5c09e5395f9609d6978fd47da42f98a32e3d7a9da3a37c1cfee9fc7a8.scope.
Mar  1 04:51:30 np0005634532 podman[149353]: 2026-03-01 09:51:30.006745016 +0000 UTC m=+0.026376648 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Mar  1 04:51:30 np0005634532 systemd[1]: Started libcrun container.
Mar  1 04:51:30 np0005634532 podman[149353]: 2026-03-01 09:51:30.131499121 +0000 UTC m=+0.151130763 container init eb196ef5c09e5395f9609d6978fd47da42f98a32e3d7a9da3a37c1cfee9fc7a8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_hermann, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Mar  1 04:51:30 np0005634532 podman[149353]: 2026-03-01 09:51:30.140820583 +0000 UTC m=+0.160452135 container start eb196ef5c09e5395f9609d6978fd47da42f98a32e3d7a9da3a37c1cfee9fc7a8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_hermann, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Mar  1 04:51:30 np0005634532 podman[149353]: 2026-03-01 09:51:30.144571276 +0000 UTC m=+0.164202858 container attach eb196ef5c09e5395f9609d6978fd47da42f98a32e3d7a9da3a37c1cfee9fc7a8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_hermann, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Mar  1 04:51:30 np0005634532 adoring_hermann[149371]: 167 167
Mar  1 04:51:30 np0005634532 systemd[1]: libpod-eb196ef5c09e5395f9609d6978fd47da42f98a32e3d7a9da3a37c1cfee9fc7a8.scope: Deactivated successfully.
Mar  1 04:51:30 np0005634532 podman[149353]: 2026-03-01 09:51:30.149773736 +0000 UTC m=+0.169405308 container died eb196ef5c09e5395f9609d6978fd47da42f98a32e3d7a9da3a37c1cfee9fc7a8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_hermann, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Mar  1 04:51:30 np0005634532 systemd[1]: var-lib-containers-storage-overlay-2fb28027b8c80e53a1d3e19f7afe1a918e132546b130855d97010b87854a3ec4-merged.mount: Deactivated successfully.
Mar  1 04:51:30 np0005634532 podman[149353]: 2026-03-01 09:51:30.202447507 +0000 UTC m=+0.222079089 container remove eb196ef5c09e5395f9609d6978fd47da42f98a32e3d7a9da3a37c1cfee9fc7a8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_hermann, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Mar  1 04:51:30 np0005634532 systemd[1]: libpod-conmon-eb196ef5c09e5395f9609d6978fd47da42f98a32e3d7a9da3a37c1cfee9fc7a8.scope: Deactivated successfully.
Mar  1 04:51:30 np0005634532 python3.9[149419]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Mar  1 04:51:30 np0005634532 podman[149427]: 2026-03-01 09:51:30.373140705 +0000 UTC m=+0.046403796 container create 34eda37304e990838898708f8fac7ed84a3f3cbc8496d4f56a04aa374bb97669 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_vaughan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Mar  1 04:51:30 np0005634532 systemd[1]: Started libpod-conmon-34eda37304e990838898708f8fac7ed84a3f3cbc8496d4f56a04aa374bb97669.scope.
Mar  1 04:51:30 np0005634532 systemd[1]: Started libcrun container.
Mar  1 04:51:30 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00cbce94662a4a88ed594ca527606a6cd9762a30acb8cb646660ab06ee1ea618/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Mar  1 04:51:30 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00cbce94662a4a88ed594ca527606a6cd9762a30acb8cb646660ab06ee1ea618/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Mar  1 04:51:30 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00cbce94662a4a88ed594ca527606a6cd9762a30acb8cb646660ab06ee1ea618/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Mar  1 04:51:30 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00cbce94662a4a88ed594ca527606a6cd9762a30acb8cb646660ab06ee1ea618/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Mar  1 04:51:30 np0005634532 podman[149427]: 2026-03-01 09:51:30.348134623 +0000 UTC m=+0.021397694 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Mar  1 04:51:30 np0005634532 podman[149427]: 2026-03-01 09:51:30.456644723 +0000 UTC m=+0.129907784 container init 34eda37304e990838898708f8fac7ed84a3f3cbc8496d4f56a04aa374bb97669 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_vaughan, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Mar  1 04:51:30 np0005634532 podman[149427]: 2026-03-01 09:51:30.461551125 +0000 UTC m=+0.134814186 container start 34eda37304e990838898708f8fac7ed84a3f3cbc8496d4f56a04aa374bb97669 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_vaughan, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Mar  1 04:51:30 np0005634532 podman[149427]: 2026-03-01 09:51:30.465076633 +0000 UTC m=+0.138339684 container attach 34eda37304e990838898708f8fac7ed84a3f3cbc8496d4f56a04aa374bb97669 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_vaughan, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Mar  1 04:51:30 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v263: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Mar  1 04:51:30 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:51:30 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 04:51:30 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:51:30.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 04:51:30 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:51:30 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000024s ======
Mar  1 04:51:30 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:51:30.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Mar  1 04:51:30 np0005634532 python3.9[149587]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1772358689.7075613-561-197707711752106/.source.nft follow=False _original_basename=chains.j2 checksum=298ada419730ec15df17ded0cc50c97a4014a591 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:51:31 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[144460]: 01/03/2026 09:51:31 : epoch 69a40bfc : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6908003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:51:31 np0005634532 lvm[149667]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Mar  1 04:51:31 np0005634532 lvm[149667]: VG ceph_vg0 finished
Mar  1 04:51:31 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[144460]: 01/03/2026 09:51:31 : epoch 69a40bfc : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69200028a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:51:31 np0005634532 vibrant_vaughan[149445]: {}
Mar  1 04:51:31 np0005634532 systemd[1]: libpod-34eda37304e990838898708f8fac7ed84a3f3cbc8496d4f56a04aa374bb97669.scope: Deactivated successfully.
Mar  1 04:51:31 np0005634532 podman[149427]: 2026-03-01 09:51:31.142871643 +0000 UTC m=+0.816134724 container died 34eda37304e990838898708f8fac7ed84a3f3cbc8496d4f56a04aa374bb97669 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_vaughan, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Mar  1 04:51:31 np0005634532 systemd[1]: var-lib-containers-storage-overlay-00cbce94662a4a88ed594ca527606a6cd9762a30acb8cb646660ab06ee1ea618-merged.mount: Deactivated successfully.
Mar  1 04:51:31 np0005634532 podman[149427]: 2026-03-01 09:51:31.192418976 +0000 UTC m=+0.865682027 container remove 34eda37304e990838898708f8fac7ed84a3f3cbc8496d4f56a04aa374bb97669 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_vaughan, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1)
Mar  1 04:51:31 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 04:51:31 np0005634532 systemd[1]: libpod-conmon-34eda37304e990838898708f8fac7ed84a3f3cbc8496d4f56a04aa374bb97669.scope: Deactivated successfully.
Mar  1 04:51:31 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Mar  1 04:51:31 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 04:51:31 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Mar  1 04:51:31 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 04:51:31 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 04:51:31 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 04:51:31 np0005634532 python3.9[149862]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Mar  1 04:51:31 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[144460]: 01/03/2026 09:51:31 : epoch 69a40bfc : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f690c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:51:31 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-crash-compute-0[81339]: ERROR:ceph-crash:Error scraping /var/lib/ceph/crash: [Errno 13] Permission denied: '/var/lib/ceph/crash'
Mar  1 04:51:32 np0005634532 python3.9[149989]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1772358691.1032329-606-49343888203661/.source.nft follow=False _original_basename=ruleset.j2 checksum=bdba38546f86123f1927359d89789bd211aba99d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:51:32 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v264: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Mar  1 04:51:32 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Mar  1 04:51:32 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Mar  1 04:51:32 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:51:32 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:51:32 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:51:32.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:51:32 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:51:32 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 04:51:32 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:51:32.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 04:51:32 np0005634532 python3.9[150143]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:51:33 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[144460]: 01/03/2026 09:51:33 : epoch 69a40bfc : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f690c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:51:33 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[144460]: 01/03/2026 09:51:33 : epoch 69a40bfc : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6908003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:51:33 np0005634532 python3.9[150296]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Mar  1 04:51:33 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[144460]: 01/03/2026 09:51:33 : epoch 69a40bfc : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69200028a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:51:34 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v265: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Mar  1 04:51:34 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:51:34 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 04:51:34 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:51:34.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 04:51:34 np0005634532 python3.9[150456]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:51:34 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:51:34 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 04:51:34 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:51:34.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 04:51:35 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[144460]: 01/03/2026 09:51:35 : epoch 69a40bfc : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6910004050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:51:35 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[144460]: 01/03/2026 09:51:35 : epoch 69a40bfc : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f690c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:51:35 np0005634532 python3.9[150609]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Mar  1 04:51:35 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[144460]: 01/03/2026 09:51:35 : epoch 69a40bfc : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6908003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:51:35 np0005634532 python3.9[150763]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Mar  1 04:51:36 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 04:51:36 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v266: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Mar  1 04:51:36 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:51:36 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:51:36 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:51:36.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:51:36 np0005634532 python3.9[150920]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Mar  1 04:51:36 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:51:36 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 04:51:36 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:51:36.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 04:51:36 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T09:51:36.996Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Mar  1 04:51:37 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[144460]: 01/03/2026 09:51:37 : epoch 69a40bfc : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69200028a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:51:37 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: ::ffff:192.168.122.100 - - [01/Mar/2026:09:51:37] "GET /metrics HTTP/1.1" 200 48335 "" "Prometheus/2.51.0"
Mar  1 04:51:37 np0005634532 ceph-mgr[76134]: [prometheus INFO cherrypy.access.140604152982112] ::ffff:192.168.122.100 - - [01/Mar/2026:09:51:37] "GET /metrics HTTP/1.1" 200 48335 "" "Prometheus/2.51.0"
Mar  1 04:51:37 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[144460]: 01/03/2026 09:51:37 : epoch 69a40bfc : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6910004050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:51:37 np0005634532 python3.9[151080]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:51:37 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[144460]: 01/03/2026 09:51:37 : epoch 69a40bfc : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6900000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:51:38 np0005634532 python3.9[151232]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'machine'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Mar  1 04:51:38 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v267: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Mar  1 04:51:38 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:51:38 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 04:51:38 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:51:38.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 04:51:38 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:51:38 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 04:51:38 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:51:38.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 04:51:39 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[144460]: 01/03/2026 09:51:39 : epoch 69a40bfc : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6908003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:51:39 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[144460]: 01/03/2026 09:51:39 : epoch 69a40bfc : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6928000df0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:51:39 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[144460]: 01/03/2026 09:51:39 : epoch 69a40bfc : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6910004050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:51:39 np0005634532 python3.9[151386]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings="datacentre:3e:0a:e0:eb:c4:a5" external_ids:ovn-encap-ip=172.19.0.100 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch #012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Mar  1 04:51:39 np0005634532 ovs-vsctl[151388]: ovs|00001|vsctl|INFO|Called as ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings=datacentre:3e:0a:e0:eb:c4:a5 external_ids:ovn-encap-ip=172.19.0.100 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch
Mar  1 04:51:40 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v268: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Mar  1 04:51:40 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:51:40 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:51:40 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:51:40.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:51:40 np0005634532 python3.9[151542]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail#012ovs-vsctl show | grep -q "Manager"#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Mar  1 04:51:40 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:51:40 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:51:40 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:51:40.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:51:41 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[144460]: 01/03/2026 09:51:41 : epoch 69a40bfc : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69000016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:51:41 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[144460]: 01/03/2026 09:51:41 : epoch 69a40bfc : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6908003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:51:41 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 04:51:41 np0005634532 python3.9[151698]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl --timeout=5 --id=@manager -- create Manager target=\"ptcp:********@manager#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Mar  1 04:51:41 np0005634532 ovs-vsctl[151699]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --timeout=5 --id=@manager -- create Manager "target=\"ptcp:6640:127.0.0.1\"" -- add Open_vSwitch . manager_options @manager
Mar  1 04:51:41 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[144460]: 01/03/2026 09:51:41 : epoch 69a40bfc : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6928000df0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:51:42 np0005634532 python3.9[151851]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Mar  1 04:51:42 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v269: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Mar  1 04:51:42 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:51:42 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:51:42 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:51:42.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:51:42 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:51:42 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 04:51:42 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:51:42.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 04:51:43 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[144460]: 01/03/2026 09:51:43 : epoch 69a40bfc : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6910004050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:51:43 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[144460]: 01/03/2026 09:51:43 : epoch 69a40bfc : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69000016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:51:43 np0005634532 python3.9[152006]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Mar  1 04:51:43 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[144460]: 01/03/2026 09:51:43 : epoch 69a40bfc : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6908003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:51:44 np0005634532 python3.9[152160]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Mar  1 04:51:44 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v270: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Mar  1 04:51:44 np0005634532 python3.9[152240]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Mar  1 04:51:44 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:51:44 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:51:44 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:51:44.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:51:44 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:51:44 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 04:51:44 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:51:44.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 04:51:45 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[144460]: 01/03/2026 09:51:45 : epoch 69a40bfc : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6928000df0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:51:45 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[144460]: 01/03/2026 09:51:45 : epoch 69a40bfc : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6910004050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:51:45 np0005634532 python3.9[152393]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Mar  1 04:51:45 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[144460]: 01/03/2026 09:51:45 : epoch 69a40bfc : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69000016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:51:45 np0005634532 python3.9[152472]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Mar  1 04:51:46 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 04:51:46 np0005634532 python3.9[152627]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:51:46 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v271: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Mar  1 04:51:46 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:51:46 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:51:46 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:51:46.575 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:51:46 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:51:46 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 04:51:46 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:51:46.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 04:51:46 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T09:51:46.997Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Mar  1 04:51:46 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T09:51:46.997Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Mar  1 04:51:46 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T09:51:46.997Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
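
NOTE: Alertmanager on compute-0 cannot deliver the ceph-dashboard webhook to compute-1/compute-2 on port 8443 (dial timeout), so the notification is dropped after its retries. A quick reachability probe from this host, reusing the receiver URLs straight from the log (the endpoint only answers while the dashboard module is active on that host):

    curl -sv --max-time 5 http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver
    curl -sv --max-time 5 http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver
    ceph mgr services   # shows where the dashboard is actually listening
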
Mar  1 04:51:47 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[144460]: 01/03/2026 09:51:47 : epoch 69a40bfc : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6908003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:51:47 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: ::ffff:192.168.122.100 - - [01/Mar/2026:09:51:47] "GET /metrics HTTP/1.1" 200 48335 "" "Prometheus/2.51.0"
Mar  1 04:51:47 np0005634532 ceph-mgr[76134]: [prometheus INFO cherrypy.access.140604152982112] ::ffff:192.168.122.100 - - [01/Mar/2026:09:51:47] "GET /metrics HTTP/1.1" 200 48335 "" "Prometheus/2.51.0"
Mar  1 04:51:47 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[144460]: 01/03/2026 09:51:47 : epoch 69a40bfc : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6928000df0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:51:47 np0005634532 python3.9[152782]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Mar  1 04:51:47 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Mar  1 04:51:47 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
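
NOTE: The handle_command/audit pair above appears to be routine polling: the mgr (entity mgr.compute-0.ebwufc) periodically asks the mon for the OSD blocklist on behalf of its modules. The same query can be run by hand to see what, if anything, is blocklisted:

    ceph osd blocklist ls
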
Mar  1 04:51:47 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Mar  1 04:51:47 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Mar  1 04:51:47 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Mar  1 04:51:47 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Mar  1 04:51:47 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[144460]: 01/03/2026 09:51:47 : epoch 69a40bfc : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6910004050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:51:47 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Mar  1 04:51:47 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Mar  1 04:51:47 np0005634532 python3.9[152861]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:51:48 np0005634532 python3.9[153016]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Mar  1 04:51:48 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-haproxy-nfs-cephfs-compute-0-wdbjdw[99156]: [WARNING] 059/095148 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
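
NOTE: haproxy's layer-4 health check to backend nfs.cephfs.0 was refused, so that ganesha instance drops out of rotation with two active servers left. If the haproxy runtime admin socket is enabled, backend state can be inspected directly; the socket path below is an assumption (cephadm's ingress service keeps it under the daemon's state directory), so adjust to your deployment:

    # admin socket path is an assumption; check the ingress daemon's haproxy.cfg
    echo "show servers state backend" | socat stdio UNIX-CONNECT:/var/lib/haproxy/admin.sock
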
Mar  1 04:51:48 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v272: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Mar  1 04:51:48 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:51:48 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 04:51:48 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:51:48.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 04:51:48 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:51:48 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000024s ======
Mar  1 04:51:48 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:51:48.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Mar  1 04:51:48 np0005634532 python3.9[153095]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:51:49 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[144460]: 01/03/2026 09:51:49 : epoch 69a40bfc : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6910004050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:51:49 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[144460]: 01/03/2026 09:51:49 : epoch 69a40bfc : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6908003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:51:49 np0005634532 python3.9[153248]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Mar  1 04:51:49 np0005634532 systemd[1]: Reloading.
Mar  1 04:51:49 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[144460]: 01/03/2026 09:51:49 : epoch 69a40bfc : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69280091b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:51:49 np0005634532 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Mar  1 04:51:49 np0005634532 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
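
NOTE: The ansible systemd task above requests a full daemon reload, and the two generator messages are routine EL9 noise (rc.local not executable, legacy network initscript). The preset file installed a moment earlier is what gives `systemctl preset` its enable policy for the unit. A minimal sketch of what such a preset usually contains; the actual content of 91-edpm-container-shutdown.preset is not shown in this log, so this is an assumption:

    # /etc/systemd/system-preset/91-edpm-container-shutdown.preset (assumed content)
    enable edpm-container-shutdown.service

    systemctl preset edpm-container-shutdown.service
    systemctl is-enabled edpm-container-shutdown.service
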
Mar  1 04:51:50 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v273: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Mar  1 04:51:50 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:51:50 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:51:50 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:51:50.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:51:50 np0005634532 python3.9[153447]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Mar  1 04:51:50 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:51:50 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:51:50 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:51:50.648 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:51:51 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[144460]: 01/03/2026 09:51:51 : epoch 69a40bfc : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f69280091b0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:51:51 np0005634532 python3.9[153526]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:51:51 np0005634532 kernel: ganesha.nfsd[145265]: segfault at 50 ip 00007f69b268932e sp 00007f691bffe210 error 4 in libntirpc.so.5.8[7f69b266e000+2c000] likely on CPU 4 (core 0, socket 4)
Mar  1 04:51:51 np0005634532 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Mar  1 04:51:51 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[144460]: 01/03/2026 09:51:51 : epoch 69a40bfc : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f6910004050 fd 39 proxy ignored for local
Mar  1 04:51:51 np0005634532 systemd[1]: Started Process Core Dump (PID 153527/UID 0).
Mar  1 04:51:51 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 04:51:51 np0005634532 python3.9[153706]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Mar  1 04:51:52 np0005634532 systemd-coredump[153529]: Process 144464 (ganesha.nfsd) of user 0 dumped core.
                                                       
                                                       Stack trace of thread 52:
                                                       #0  0x00007f69b268932e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
                                                       ELF object binary architecture: AMD x86-64
Mar  1 04:51:52 np0005634532 systemd[1]: systemd-coredump@3-153527-0.service: Deactivated successfully.
Mar  1 04:51:52 np0005634532 podman[153792]: 2026-03-01 09:51:52.130132046 +0000 UTC m=+0.031442783 container died 3eadbc9629d082dd6fe26a9fea753d10da2f6d410b3e6589ecd326bd65cbc166 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Mar  1 04:51:52 np0005634532 systemd[1]: var-lib-containers-storage-overlay-f347bb919fae3fb22042e6063f0707e029f8fd9bd2382821a8c79e9b5c0a6af3-merged.mount: Deactivated successfully.
Mar  1 04:51:52 np0005634532 podman[153792]: 2026-03-01 09:51:52.165797074 +0000 UTC m=+0.067107781 container remove 3eadbc9629d082dd6fe26a9fea753d10da2f6d410b3e6589ecd326bd65cbc166 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Mar  1 04:51:52 np0005634532 systemd[1]: ceph-437b1e74-f995-5d64-af1d-257ce01d77ab@nfs.cephfs.2.0.compute-0.ljexyw.service: Main process exited, code=exited, status=139/n/a
Mar  1 04:51:52 np0005634532 python3.9[153787]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:51:52 np0005634532 systemd[1]: ceph-437b1e74-f995-5d64-af1d-257ce01d77ab@nfs.cephfs.2.0.compute-0.ljexyw.service: Failed with result 'exit-code'.
Mar  1 04:51:52 np0005634532 systemd[1]: ceph-437b1e74-f995-5d64-af1d-257ce01d77ab@nfs.cephfs.2.0.compute-0.ljexyw.service: Consumed 1.292s CPU time.
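
NOTE: The lines above are one complete crash cycle: ganesha.nfsd faults on a read at address 0x50 (kernel page-fault "error 4" = user-mode read of a non-present page, consistent with a NULL pointer plus a small offset) inside libntirpc.so.5.8, systemd-coredump captures PID 144464's core, podman reaps the dead container, and the ceph@nfs unit exits with status=139, i.e. 128+11 (SIGSEGV). The stored core can be revisited later with coredumpctl; `coredumpctl debug` will only resolve the libntirpc frame if debuginfo matching the container's build is available on the host:

    coredumpctl list ganesha.nfsd
    coredumpctl info 144464
    coredumpctl debug 144464
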
Mar  1 04:51:52 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v274: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Mar  1 04:51:52 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:51:52 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:51:52 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:51:52.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:51:52 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:51:52 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 04:51:52 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:51:52.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 04:51:53 np0005634532 python3.9[153987]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Mar  1 04:51:53 np0005634532 systemd[1]: Reloading.
Mar  1 04:51:53 np0005634532 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Mar  1 04:51:53 np0005634532 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Mar  1 04:51:53 np0005634532 systemd[1]: Starting Create netns directory...
Mar  1 04:51:53 np0005634532 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Mar  1 04:51:53 np0005634532 systemd[1]: netns-placeholder.service: Deactivated successfully.
Mar  1 04:51:53 np0005634532 systemd[1]: Finished Create netns directory.
Mar  1 04:51:54 np0005634532 python3.9[154189]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Mar  1 04:51:54 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v275: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Mar  1 04:51:54 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:51:54 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000024s ======
Mar  1 04:51:54 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:51:54.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Mar  1 04:51:54 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:51:54 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:51:54 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:51:54.651 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:51:54 np0005634532 python3.9[154343]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_controller/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Mar  1 04:51:55 np0005634532 python3.9[154467]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_controller/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1772358714.490452-1362-256682716889893/.source _original_basename=healthcheck follow=False checksum=4098dd010265fabdf5c26b97d169fc4e575ff457 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Mar  1 04:51:56 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 04:51:56 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v276: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Mar  1 04:51:56 np0005634532 python3.9[154622]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:51:56 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:51:56 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 04:51:56 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:51:56.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 04:51:56 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:51:56 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 04:51:56 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:51:56.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 04:51:57 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T09:51:56.998Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Mar  1 04:51:57 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T09:51:56.999Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Mar  1 04:51:57 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: ::ffff:192.168.122.100 - - [01/Mar/2026:09:51:57] "GET /metrics HTTP/1.1" 200 48333 "" "Prometheus/2.51.0"
Mar  1 04:51:57 np0005634532 ceph-mgr[76134]: [prometheus INFO cherrypy.access.140604152982112] ::ffff:192.168.122.100 - - [01/Mar/2026:09:51:57] "GET /metrics HTTP/1.1" 200 48333 "" "Prometheus/2.51.0"
Mar  1 04:51:57 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-haproxy-nfs-cephfs-compute-0-wdbjdw[99156]: [WARNING] 059/095157 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Mar  1 04:51:57 np0005634532 python3.9[154775]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Mar  1 04:51:57 np0005634532 python3.9[154928]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_controller.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Mar  1 04:51:58 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v277: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Mar  1 04:51:58 np0005634532 python3.9[155054]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_controller.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1772358717.4766717-1461-107061725872685/.source.json _original_basename=.cxfm37pr follow=False checksum=2328fc98619beeb08ee32b01f15bb43094c10b61 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:51:58 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:51:58 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 04:51:58 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:51:58.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 04:51:58 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:51:58 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 04:51:58 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:51:58.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 04:51:59 np0005634532 python3.9[155204]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_controller state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:52:00 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v278: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 426 B/s wr, 1 op/s
Mar  1 04:52:00 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:52:00 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000024s ======
Mar  1 04:52:00 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:52:00.592 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Mar  1 04:52:00 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:52:00 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 04:52:00 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:52:00.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 04:52:01 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 04:52:01 np0005634532 python3.9[155632]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_controller config_pattern=*.json debug=False
Mar  1 04:52:02 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v279: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 852 B/s rd, 426 B/s wr, 1 op/s
Mar  1 04:52:02 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Mar  1 04:52:02 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Mar  1 04:52:02 np0005634532 systemd[1]: ceph-437b1e74-f995-5d64-af1d-257ce01d77ab@nfs.cephfs.2.0.compute-0.ljexyw.service: Scheduled restart job, restart counter is at 4.
Mar  1 04:52:02 np0005634532 systemd[1]: Stopped Ceph nfs.cephfs.2.0.compute-0.ljexyw for 437b1e74-f995-5d64-af1d-257ce01d77ab.
Mar  1 04:52:02 np0005634532 systemd[1]: ceph-437b1e74-f995-5d64-af1d-257ce01d77ab@nfs.cephfs.2.0.compute-0.ljexyw.service: Consumed 1.292s CPU time.
Mar  1 04:52:02 np0005634532 systemd[1]: Starting Ceph nfs.cephfs.2.0.compute-0.ljexyw for 437b1e74-f995-5d64-af1d-257ce01d77ab...
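
NOTE: systemd is now on its fourth restart of this unit; cephadm-generated units typically carry Restart=on-failure, so the segfault loop will keep cycling until the underlying ntirpc crash is fixed. The effective restart policy and counter can be confirmed with:

    systemctl show -p Restart,RestartUSec,NRestarts \
        ceph-437b1e74-f995-5d64-af1d-257ce01d77ab@nfs.cephfs.2.0.compute-0.ljexyw.service
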
Mar  1 04:52:02 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:52:02 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:52:02 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:52:02.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:52:02 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:52:02 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 04:52:02 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:52:02.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 04:52:02 np0005634532 podman[155835]: 2026-03-01 09:52:02.807318945 +0000 UTC m=+0.057157143 container create 01f02cbed471c0aaef3d106da3a0f820751a86804122aba9a7e5ca63b2a6be59 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Mar  1 04:52:02 np0005634532 python3.9[155801]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Mar  1 04:52:02 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90646d1a555101af477d779c00a7816a5ccf58df0c82fe6e5b2fc3fae8546d53/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Mar  1 04:52:02 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90646d1a555101af477d779c00a7816a5ccf58df0c82fe6e5b2fc3fae8546d53/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Mar  1 04:52:02 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90646d1a555101af477d779c00a7816a5ccf58df0c82fe6e5b2fc3fae8546d53/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Mar  1 04:52:02 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90646d1a555101af477d779c00a7816a5ccf58df0c82fe6e5b2fc3fae8546d53/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.2.0.compute-0.ljexyw-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Mar  1 04:52:02 np0005634532 podman[155835]: 2026-03-01 09:52:02.778861577 +0000 UTC m=+0.028699875 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Mar  1 04:52:02 np0005634532 podman[155835]: 2026-03-01 09:52:02.883277706 +0000 UTC m=+0.133115924 container init 01f02cbed471c0aaef3d106da3a0f820751a86804122aba9a7e5ca63b2a6be59 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Mar  1 04:52:02 np0005634532 podman[155835]: 2026-03-01 09:52:02.887638644 +0000 UTC m=+0.137476842 container start 01f02cbed471c0aaef3d106da3a0f820751a86804122aba9a7e5ca63b2a6be59 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Mar  1 04:52:02 np0005634532 bash[155835]: 01f02cbed471c0aaef3d106da3a0f820751a86804122aba9a7e5ca63b2a6be59
Mar  1 04:52:02 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[155850]: 01/03/2026 09:52:02 : epoch 69a40c42 : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Mar  1 04:52:02 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[155850]: 01/03/2026 09:52:02 : epoch 69a40c42 : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Mar  1 04:52:02 np0005634532 systemd[1]: Started Ceph nfs.cephfs.2.0.compute-0.ljexyw for 437b1e74-f995-5d64-af1d-257ce01d77ab.
Mar  1 04:52:02 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[155850]: 01/03/2026 09:52:02 : epoch 69a40c42 : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Mar  1 04:52:02 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[155850]: 01/03/2026 09:52:02 : epoch 69a40c42 : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Mar  1 04:52:02 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[155850]: 01/03/2026 09:52:02 : epoch 69a40c42 : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Mar  1 04:52:02 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[155850]: 01/03/2026 09:52:02 : epoch 69a40c42 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Mar  1 04:52:02 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[155850]: 01/03/2026 09:52:02 : epoch 69a40c42 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Mar  1 04:52:02 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[155850]: 01/03/2026 09:52:02 : epoch 69a40c42 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Mar  1 04:52:03 np0005634532 python3[156044]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_controller config_id=ovn_controller config_overrides={} config_patterns=*.json containers=['ovn_controller'] log_base_path=/var/log/containers/stdouts debug=False
Mar  1 04:52:04 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v280: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 767 B/s wr, 3 op/s
Mar  1 04:52:04 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:52:04 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000024s ======
Mar  1 04:52:04 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:52:04.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Mar  1 04:52:04 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:52:04 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:52:04 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:52:04.661 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:52:06 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 04:52:06 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v281: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 767 B/s wr, 3 op/s
Mar  1 04:52:06 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:52:06 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:52:06 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:52:06.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:52:06 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:52:06 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000024s ======
Mar  1 04:52:06 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:52:06.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Mar  1 04:52:07 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T09:52:07.001Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Mar  1 04:52:07 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: ::ffff:192.168.122.100 - - [01/Mar/2026:09:52:07] "GET /metrics HTTP/1.1" 200 48333 "" "Prometheus/2.51.0"
Mar  1 04:52:07 np0005634532 ceph-mgr[76134]: [prometheus INFO cherrypy.access.140604152982112] ::ffff:192.168.122.100 - - [01/Mar/2026:09:52:07] "GET /metrics HTTP/1.1" 200 48333 "" "Prometheus/2.51.0"
Mar  1 04:52:08 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v282: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 767 B/s wr, 3 op/s
Mar  1 04:52:08 np0005634532 podman[156059]: 2026-03-01 09:52:08.503462104 +0000 UTC m=+4.538571179 image pull ce6781f051bf092c13d84cb587c56ad7edaa58b70fcc0effc1dff15724d5232e quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:bf1f44b6bfa655654f556ae3adca2582892e72550d916a5b0a8dbacb0e210bbe
Mar  1 04:52:08 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:52:08 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:52:08 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:52:08.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:52:08 np0005634532 podman[156187]: 2026-03-01 09:52:08.660020891 +0000 UTC m=+0.049653847 container create eba5e7c5bf1cb0694498c3b5d0f9b901dec36f2482ec40aef98fed1e15d9f18d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:bf1f44b6bfa655654f556ae3adca2582892e72550d916a5b0a8dbacb0e210bbe, name=ovn_controller, config_id=ovn_controller, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '08b9467e1b7e95537191fb2fa6825d176e65d7d6be048e9a0925db064969bbde-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:bf1f44b6bfa655654f556ae3adca2582892e72550d916a5b0a8dbacb0e210bbe', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller)
Mar  1 04:52:08 np0005634532 podman[156187]: 2026-03-01 09:52:08.632011163 +0000 UTC m=+0.021644149 image pull ce6781f051bf092c13d84cb587c56ad7edaa58b70fcc0effc1dff15724d5232e quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:bf1f44b6bfa655654f556ae3adca2582892e72550d916a5b0a8dbacb0e210bbe
Mar  1 04:52:08 np0005634532 python3[156044]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_controller --conmon-pidfile /run/ovn_controller.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=08b9467e1b7e95537191fb2fa6825d176e65d7d6be048e9a0925db064969bbde-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954 --healthcheck-command /openstack/healthcheck --label config_id=ovn_controller --label container_name=ovn_controller --label managed_by=edpm_ansible --label config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '08b9467e1b7e95537191fb2fa6825d176e65d7d6be048e9a0925db064969bbde-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:bf1f44b6bfa655654f556ae3adca2582892e72550d916a5b0a8dbacb0e210bbe', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --user root --volume /lib/modules:/lib/modules:ro --volume /run:/run --volume /var/lib/openvswitch/ovn:/run/ovn:shared,z --volume /var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:bf1f44b6bfa655654f556ae3adca2582892e72550d916a5b0a8dbacb0e210bbe
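
NOTE: edpm_container_manage logs the exact podman create invocation it runs, and it also stores the whole container definition (volumes, environment, healthcheck) in the config_data label, so the running container is self-describing. The effective config can be read back from podman without consulting ansible, and the healthcheck exercised by hand:

    podman inspect ovn_controller --format '{{ index .Config.Labels "config_data" }}'
    podman healthcheck run ovn_controller
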
Mar  1 04:52:08 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:52:08 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:52:08 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:52:08.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:52:09 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[155850]: 01/03/2026 09:52:09 : epoch 69a40c42 : compute-0 : ganesha.nfsd-2[main] rados_kv_traverse :CLIENT ID :EVENT :Failed to lst kv ret=-2
Mar  1 04:52:09 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[155850]: 01/03/2026 09:52:09 : epoch 69a40c42 : compute-0 : ganesha.nfsd-2[main] rados_cluster_read_clids :CLIENT ID :EVENT :Failed to traverse recovery db: -2
Mar  1 04:52:09 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[155850]: 01/03/2026 09:52:09 : epoch 69a40c42 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Mar  1 04:52:09 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[155850]: 01/03/2026 09:52:09 : epoch 69a40c42 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Mar  1 04:52:09 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[155850]: 01/03/2026 09:52:09 : epoch 69a40c42 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Mar  1 04:52:09 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[155850]: 01/03/2026 09:52:09 : epoch 69a40c42 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Mar  1 04:52:09 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[155850]: 01/03/2026 09:52:09 : epoch 69a40c42 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Mar  1 04:52:09 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[155850]: 01/03/2026 09:52:09 : epoch 69a40c42 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
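
NOTE: ret=-2 above is -ENOENT: the rados_cluster recovery backend found no recovery object for this ganesha epoch, which is expected for a fresh start with no NFSv4 clients to reclaim, so the grace period is re-armed and can be lifted once reclaim completes with a client count of 0 (which happens further down). With cephadm-style NFS the recovery objects usually live in the .nfs pool under a namespace named after the cluster; pool and namespace here are assumptions:

    rados -p .nfs --namespace cephfs ls | grep '^rec-'
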
Mar  1 04:52:10 np0005634532 python3.9[156379]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Mar  1 04:52:10 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v283: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 1.2 KiB/s wr, 4 op/s
Mar  1 04:52:10 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:52:10 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 04:52:10 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:52:10.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 04:52:10 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:52:10 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:52:10 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:52:10.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:52:10 np0005634532 python3.9[156535]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_controller.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:52:11 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 04:52:11 np0005634532 python3.9[156612]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_controller_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Mar  1 04:52:12 np0005634532 python3.9[156790]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1772358731.4439614-1695-142465973545175/source dest=/etc/systemd/system/edpm_ovn_controller.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:52:12 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-haproxy-nfs-cephfs-compute-0-wdbjdw[99156]: [WARNING] 059/095212 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Mar  1 04:52:12 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v284: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 767 B/s wr, 2 op/s
Mar  1 04:52:12 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:52:12 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 04:52:12 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:52:12.611 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 04:52:12 np0005634532 python3.9[156868]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Mar  1 04:52:12 np0005634532 systemd[1]: Reloading.
Mar  1 04:52:12 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:52:12 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 04:52:12 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:52:12.666 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 04:52:12 np0005634532 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Mar  1 04:52:12 np0005634532 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Mar  1 04:52:13 np0005634532 python3.9[156992]: ansible-systemd Invoked with state=restarted name=edpm_ovn_controller.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Mar  1 04:52:14 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v285: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 3.8 KiB/s rd, 1.3 KiB/s wr, 5 op/s
Mar  1 04:52:14 np0005634532 systemd[1]: Reloading.
Mar  1 04:52:14 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:52:14 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.002000050s ======
Mar  1 04:52:14 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:52:14.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000050s
Mar  1 04:52:14 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:52:14 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 04:52:14 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:52:14.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 04:52:14 np0005634532 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Mar  1 04:52:14 np0005634532 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Mar  1 04:52:14 np0005634532 systemd[1]: Starting ovn_controller container...
Mar  1 04:52:15 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[155850]: 01/03/2026 09:52:15 : epoch 69a40c42 : compute-0 : ganesha.nfsd-2[main] rados_cluster_end_grace :CLIENT ID :EVENT :Failed to remove rec-000000000000000e:nfs.cephfs.2: -2
Mar  1 04:52:15 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[155850]: 01/03/2026 09:52:15 : epoch 69a40c42 : compute-0 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Mar  1 04:52:15 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[155850]: 01/03/2026 09:52:15 : epoch 69a40c42 : compute-0 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Mar  1 04:52:15 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[155850]: 01/03/2026 09:52:15 : epoch 69a40c42 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Mar  1 04:52:15 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[155850]: 01/03/2026 09:52:15 : epoch 69a40c42 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Mar  1 04:52:15 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[155850]: 01/03/2026 09:52:15 : epoch 69a40c42 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Mar  1 04:52:15 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[155850]: 01/03/2026 09:52:15 : epoch 69a40c42 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Mar  1 04:52:15 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[155850]: 01/03/2026 09:52:15 : epoch 69a40c42 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Mar  1 04:52:15 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[155850]: 01/03/2026 09:52:15 : epoch 69a40c42 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Mar  1 04:52:15 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[155850]: 01/03/2026 09:52:15 : epoch 69a40c42 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Mar  1 04:52:15 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[155850]: 01/03/2026 09:52:15 : epoch 69a40c42 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Mar  1 04:52:15 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[155850]: 01/03/2026 09:52:15 : epoch 69a40c42 : compute-0 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Mar  1 04:52:15 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[155850]: 01/03/2026 09:52:15 : epoch 69a40c42 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Mar  1 04:52:15 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[155850]: 01/03/2026 09:52:15 : epoch 69a40c42 : compute-0 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Mar  1 04:52:15 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[155850]: 01/03/2026 09:52:15 : epoch 69a40c42 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Mar  1 04:52:15 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[155850]: 01/03/2026 09:52:15 : epoch 69a40c42 : compute-0 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Mar  1 04:52:15 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[155850]: 01/03/2026 09:52:15 : epoch 69a40c42 : compute-0 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Mar  1 04:52:15 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[155850]: 01/03/2026 09:52:15 : epoch 69a40c42 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Mar  1 04:52:15 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[155850]: 01/03/2026 09:52:15 : epoch 69a40c42 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Mar  1 04:52:15 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[155850]: 01/03/2026 09:52:15 : epoch 69a40c42 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Mar  1 04:52:15 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[155850]: 01/03/2026 09:52:15 : epoch 69a40c42 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Mar  1 04:52:15 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[155850]: 01/03/2026 09:52:15 : epoch 69a40c42 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Mar  1 04:52:15 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[155850]: 01/03/2026 09:52:15 : epoch 69a40c42 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Mar  1 04:52:15 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[155850]: 01/03/2026 09:52:15 : epoch 69a40c42 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Mar  1 04:52:15 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[155850]: 01/03/2026 09:52:15 : epoch 69a40c42 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Mar  1 04:52:15 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[155850]: 01/03/2026 09:52:15 : epoch 69a40c42 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Mar  1 04:52:15 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[155850]: 01/03/2026 09:52:15 : epoch 69a40c42 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Mar  1 04:52:15 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[155850]: 01/03/2026 09:52:15 : epoch 69a40c42 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
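Every ganesha.nfsd entry above shares one fixed shape: "date time : epoch N : host : daemon[thread] function :COMPONENT :LEVEL :message", where the component name can itself contain a space (NFS STARTUP, NFS CB). A small sketch, under that layout assumption, for splitting such entries and triaging the WARN/CRIT events out of a startup like the one above:

    import re

    # Layout assumed from the entries in this log; component is matched
    # non-greedily because names like "NFS STARTUP" contain spaces.
    GANESHA_RE = re.compile(
        r'^(?P<when>\S+ \S+) : epoch (?P<epoch>\S+) : (?P<host>\S+) : '
        r'(?P<daemon>\S+)\[(?P<thread>\w+)\] (?P<func>\S+) '
        r':(?P<component>[\w ]+?) :(?P<level>\w+) :(?P<message>.*)$'
    )

    entry = ('01/03/2026 09:52:15 : epoch 69a40c42 : compute-0 : '
             'ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT '
             ':dbus_bus_get failed (Failed to connect to socket '
             '/run/dbus/system_bus_socket: No such file or directory)')

    m = GANESHA_RE.match(entry)
    if m and m.group('level') in ('WARN', 'CRIT'):
        print(m.group('level'), m.group('func'), m.group('message'))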
Mar  1 04:52:15 np0005634532 systemd[1]: Started libcrun container.
Mar  1 04:52:15 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e95f63563cdb6d6c572224d03f7f0f471cb4d4676d1abb25d38b4b94dcf79bd9/merged/run/ovn supports timestamps until 2038 (0x7fffffff)
Mar  1 04:52:15 np0005634532 systemd[1]: Started /usr/bin/podman healthcheck run eba5e7c5bf1cb0694498c3b5d0f9b901dec36f2482ec40aef98fed1e15d9f18d.
Mar  1 04:52:15 np0005634532 podman[157042]: 2026-03-01 09:52:15.478646953 +0000 UTC m=+0.543182557 container init eba5e7c5bf1cb0694498c3b5d0f9b901dec36f2482ec40aef98fed1e15d9f18d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:bf1f44b6bfa655654f556ae3adca2582892e72550d916a5b0a8dbacb0e210bbe, name=ovn_controller, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '08b9467e1b7e95537191fb2fa6825d176e65d7d6be048e9a0925db064969bbde-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:bf1f44b6bfa655654f556ae3adca2582892e72550d916a5b0a8dbacb0e210bbe', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.43.0, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, container_name=ovn_controller, org.label-schema.build-date=20260223)
Mar  1 04:52:15 np0005634532 ovn_controller[157082]: + sudo -E kolla_set_configs
Mar  1 04:52:15 np0005634532 podman[157042]: 2026-03-01 09:52:15.513156956 +0000 UTC m=+0.577692520 container start eba5e7c5bf1cb0694498c3b5d0f9b901dec36f2482ec40aef98fed1e15d9f18d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:bf1f44b6bfa655654f556ae3adca2582892e72550d916a5b0a8dbacb0e210bbe, name=ovn_controller, container_name=ovn_controller, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '08b9467e1b7e95537191fb2fa6825d176e65d7d6be048e9a0925db064969bbde-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:bf1f44b6bfa655654f556ae3adca2582892e72550d916a5b0a8dbacb0e210bbe', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Mar  1 04:52:15 np0005634532 edpm-start-podman-container[157042]: ovn_controller
Mar  1 04:52:15 np0005634532 systemd[1]: Created slice User Slice of UID 0.
Mar  1 04:52:15 np0005634532 systemd[1]: Starting User Runtime Directory /run/user/0...
Mar  1 04:52:15 np0005634532 systemd[1]: Finished User Runtime Directory /run/user/0.
Mar  1 04:52:15 np0005634532 systemd[1]: Starting User Manager for UID 0...
Mar  1 04:52:15 np0005634532 edpm-start-podman-container[157041]: Creating additional drop-in dependency for "ovn_controller" (eba5e7c5bf1cb0694498c3b5d0f9b901dec36f2482ec40aef98fed1e15d9f18d)
Mar  1 04:52:15 np0005634532 podman[157089]: 2026-03-01 09:52:15.585238031 +0000 UTC m=+0.059569779 container health_status eba5e7c5bf1cb0694498c3b5d0f9b901dec36f2482ec40aef98fed1e15d9f18d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:bf1f44b6bfa655654f556ae3adca2582892e72550d916a5b0a8dbacb0e210bbe, name=ovn_controller, health_status=starting, health_failing_streak=1, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '08b9467e1b7e95537191fb2fa6825d176e65d7d6be048e9a0925db064969bbde-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:bf1f44b6bfa655654f556ae3adca2582892e72550d916a5b0a8dbacb0e210bbe', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Mar  1 04:52:15 np0005634532 systemd[1]: eba5e7c5bf1cb0694498c3b5d0f9b901dec36f2482ec40aef98fed1e15d9f18d-9f41dc1070d6d33.service: Main process exited, code=exited, status=1/FAILURE
Mar  1 04:52:15 np0005634532 systemd[1]: eba5e7c5bf1cb0694498c3b5d0f9b901dec36f2482ec40aef98fed1e15d9f18d-9f41dc1070d6d33.service: Failed with result 'exit-code'.
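The failed transient unit above (eba5e7c5...-9f41dc1070d6d33.service) is the per-container healthcheck runner systemd fires for podman; it exits 1 here because the first probe ran while ovn_controller was still coming up (note health_status=starting, health_failing_streak=1 two entries earlier). A toy sketch of the probe-and-streak idea, using the 'test': '/openstack/healthcheck' command from the container's config_data and an invented streak threshold (the real threshold comes from the container's healthcheck retries setting):

    import subprocess

    FAILING_STREAK_LIMIT = 3  # hypothetical; the configured retries value governs this

    def run_healthcheck(cmd='/openstack/healthcheck', streak=0):
        """Run the container's healthcheck test once and update the failing streak."""
        result = subprocess.run(cmd, shell=True, capture_output=True, timeout=30)
        if result.returncode == 0:
            return 'healthy', 0
        streak += 1
        # A container is only marked unhealthy after enough consecutive failures;
        # until then a fresh container stays in 'starting', as in this log.
        status = 'unhealthy' if streak >= FAILING_STREAK_LIMIT else 'starting'
        return status, streak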
Mar  1 04:52:15 np0005634532 systemd[1]: Reloading.
Mar  1 04:52:15 np0005634532 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Mar  1 04:52:15 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[155850]: 01/03/2026 09:52:15 : epoch 69a40c42 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f01e0000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:52:15 np0005634532 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Mar  1 04:52:15 np0005634532 systemd[157113]: Queued start job for default target Main User Target.
Mar  1 04:52:15 np0005634532 systemd[157113]: Created slice User Application Slice.
Mar  1 04:52:15 np0005634532 systemd[157113]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system).
Mar  1 04:52:15 np0005634532 systemd[157113]: Started Daily Cleanup of User's Temporary Directories.
Mar  1 04:52:15 np0005634532 systemd[157113]: Reached target Paths.
Mar  1 04:52:15 np0005634532 systemd[157113]: Reached target Timers.
Mar  1 04:52:15 np0005634532 systemd[157113]: Starting D-Bus User Message Bus Socket...
Mar  1 04:52:15 np0005634532 systemd[157113]: Starting Create User's Volatile Files and Directories...
Mar  1 04:52:15 np0005634532 systemd[157113]: Finished Create User's Volatile Files and Directories.
Mar  1 04:52:15 np0005634532 systemd[157113]: Listening on D-Bus User Message Bus Socket.
Mar  1 04:52:15 np0005634532 systemd[157113]: Reached target Sockets.
Mar  1 04:52:15 np0005634532 systemd[157113]: Reached target Basic System.
Mar  1 04:52:15 np0005634532 systemd[157113]: Reached target Main User Target.
Mar  1 04:52:15 np0005634532 systemd[157113]: Startup finished in 131ms.
Mar  1 04:52:15 np0005634532 systemd[1]: Started User Manager for UID 0.
Mar  1 04:52:15 np0005634532 systemd[1]: Started ovn_controller container.
Mar  1 04:52:15 np0005634532 systemd[1]: Started Session c1 of User root.
Mar  1 04:52:15 np0005634532 ovn_controller[157082]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Mar  1 04:52:15 np0005634532 ovn_controller[157082]: INFO:__main__:Validating config file
Mar  1 04:52:15 np0005634532 ovn_controller[157082]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Mar  1 04:52:15 np0005634532 ovn_controller[157082]: INFO:__main__:Writing out command to execute
Mar  1 04:52:15 np0005634532 systemd[1]: session-c1.scope: Deactivated successfully.
Mar  1 04:52:15 np0005634532 ovn_controller[157082]: ++ cat /run_command
Mar  1 04:52:15 np0005634532 ovn_controller[157082]: + CMD='/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Mar  1 04:52:15 np0005634532 ovn_controller[157082]: + ARGS=
Mar  1 04:52:15 np0005634532 ovn_controller[157082]: + sudo kolla_copy_cacerts
Mar  1 04:52:15 np0005634532 systemd[1]: Started Session c2 of User root.
Mar  1 04:52:15 np0005634532 systemd[1]: session-c2.scope: Deactivated successfully.
Mar  1 04:52:15 np0005634532 ovn_controller[157082]: + [[ ! -n '' ]]
Mar  1 04:52:15 np0005634532 ovn_controller[157082]: + . kolla_extend_start
Mar  1 04:52:15 np0005634532 ovn_controller[157082]: Running command: '/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Mar  1 04:52:15 np0005634532 ovn_controller[157082]: + echo 'Running command: '\''/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '\'''
Mar  1 04:52:15 np0005634532 ovn_controller[157082]: + umask 0022
Mar  1 04:52:15 np0005634532 ovn_controller[157082]: + exec /usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt
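The xtrace above is the usual kolla entrypoint sequence: kolla_set_configs copies the files named in /var/lib/kolla/config_files/config.json, kolla_copy_cacerts installs the CA bundle, the command is read back from /run_command, and the shell exec's it so ovn-controller replaces the entrypoint as the container's main process. A minimal Python rendition of just that final hand-off step, assuming the same /run_command convention:

    import os
    import shlex

    # Read the command kolla wrote out (the same file the trace cats above).
    with open('/run_command') as f:
        cmd = shlex.split(f.read().strip())

    os.umask(0o022)          # matches the 'umask 0022' in the trace
    os.execvp(cmd[0], cmd)   # replace this process, like the shell's 'exec'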
Mar  1 04:52:15 np0005634532 ovn_controller[157082]: 2026-03-01T09:52:15Z|00001|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Mar  1 04:52:15 np0005634532 ovn_controller[157082]: 2026-03-01T09:52:15Z|00002|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Mar  1 04:52:16 np0005634532 ovn_controller[157082]: 2026-03-01T09:52:16Z|00003|main|INFO|OVN internal version is : [24.03.8-20.33.0-76.8]
Mar  1 04:52:16 np0005634532 ovn_controller[157082]: 2026-03-01T09:52:16Z|00004|main|INFO|OVS IDL reconnected, force recompute.
Mar  1 04:52:16 np0005634532 ovn_controller[157082]: 2026-03-01T09:52:16Z|00005|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Mar  1 04:52:16 np0005634532 ovn_controller[157082]: 2026-03-01T09:52:16Z|00006|main|INFO|OVNSB IDL reconnected, force recompute.
Mar  1 04:52:16 np0005634532 NetworkManager[49996]: <info>  [1772358736.0164] manager: (br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/16)
Mar  1 04:52:16 np0005634532 NetworkManager[49996]: <info>  [1772358736.0175] device (br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Mar  1 04:52:16 np0005634532 NetworkManager[49996]: <warn>  [1772358736.0179] device (br-int)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Mar  1 04:52:16 np0005634532 NetworkManager[49996]: <info>  [1772358736.0192] manager: (br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/17)
Mar  1 04:52:16 np0005634532 NetworkManager[49996]: <info>  [1772358736.0202] manager: (br-int): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/18)
Mar  1 04:52:16 np0005634532 NetworkManager[49996]: <info>  [1772358736.0208] device (br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Mar  1 04:52:16 np0005634532 kernel: br-int: entered promiscuous mode
Mar  1 04:52:16 np0005634532 ovn_controller[157082]: 2026-03-01T09:52:16Z|00007|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connected
Mar  1 04:52:16 np0005634532 ovn_controller[157082]: 2026-03-01T09:52:16Z|00008|features|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Mar  1 04:52:16 np0005634532 ovn_controller[157082]: 2026-03-01T09:52:16Z|00009|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Mar  1 04:52:16 np0005634532 ovn_controller[157082]: 2026-03-01T09:52:16Z|00010|features|INFO|OVS Feature: ct_zero_snat, state: supported
Mar  1 04:52:16 np0005634532 ovn_controller[157082]: 2026-03-01T09:52:16Z|00011|features|INFO|OVS Feature: ct_flush, state: supported
Mar  1 04:52:16 np0005634532 ovn_controller[157082]: 2026-03-01T09:52:16Z|00012|features|INFO|OVS Feature: dp_hash_l4_sym_support, state: supported
Mar  1 04:52:16 np0005634532 ovn_controller[157082]: 2026-03-01T09:52:16Z|00013|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Mar  1 04:52:16 np0005634532 ovn_controller[157082]: 2026-03-01T09:52:16Z|00014|main|INFO|OVS feature set changed, force recompute.
Mar  1 04:52:16 np0005634532 ovn_controller[157082]: 2026-03-01T09:52:16Z|00015|ofctrl|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Mar  1 04:52:16 np0005634532 ovn_controller[157082]: 2026-03-01T09:52:16Z|00016|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Mar  1 04:52:16 np0005634532 ovn_controller[157082]: 2026-03-01T09:52:16Z|00017|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Mar  1 04:52:16 np0005634532 ovn_controller[157082]: 2026-03-01T09:52:16Z|00018|ofctrl|INFO|ofctrl-wait-before-clear is now 8000 ms (was 0 ms)
Mar  1 04:52:16 np0005634532 ovn_controller[157082]: 2026-03-01T09:52:16Z|00019|main|INFO|OVS OpenFlow connection reconnected,force recompute.
Mar  1 04:52:16 np0005634532 systemd-udevd[157226]: Network interface NamePolicy= disabled on kernel command line.
Mar  1 04:52:16 np0005634532 ovn_controller[157082]: 2026-03-01T09:52:16Z|00020|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Mar  1 04:52:16 np0005634532 ovn_controller[157082]: 2026-03-01T09:52:16Z|00021|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Mar  1 04:52:16 np0005634532 ovn_controller[157082]: 2026-03-01T09:52:16Z|00022|features|INFO|OVS DB schema supports 4 flow table prefixes, our IDL supports: 4
Mar  1 04:52:16 np0005634532 ovn_controller[157082]: 2026-03-01T09:52:16Z|00023|main|INFO|OVS feature set changed, force recompute.
Mar  1 04:52:16 np0005634532 ovn_controller[157082]: 2026-03-01T09:52:16Z|00001|pinctrl(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Mar  1 04:52:16 np0005634532 ovn_controller[157082]: 2026-03-01T09:52:16Z|00001|statctrl(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Mar  1 04:52:16 np0005634532 ovn_controller[157082]: 2026-03-01T09:52:16Z|00002|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Mar  1 04:52:16 np0005634532 ovn_controller[157082]: 2026-03-01T09:52:16Z|00002|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Mar  1 04:52:16 np0005634532 ovn_controller[157082]: 2026-03-01T09:52:16Z|00003|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Mar  1 04:52:16 np0005634532 ovn_controller[157082]: 2026-03-01T09:52:16Z|00003|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Mar  1 04:52:16 np0005634532 ovn_controller[157082]: 2026-03-01T09:52:16Z|00024|main|INFO|Setting flow table prefixes: ip_src, ip_dst, ipv6_src, ipv6_dst.
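The reconnect/rconn messages above come from OVS's shared reconnect state machine, which retries dropped connections (to the local db.sock, the southbound DB, and br-int.mgmt) with exponential backoff and logs each connecting/connected transition. A toy illustration of that retry pattern; the constants and probe below are invented, not OVS's actual internals:

    import socket
    import time

    def connect_with_backoff(host, port, min_backoff=1.0, max_backoff=8.0):
        """Keep retrying a TCP connect, doubling the wait up to a ceiling."""
        backoff = min_backoff
        while True:
            try:
                print(f'reconnect|INFO|{host}:{port}: connecting...')
                sock = socket.create_connection((host, port), timeout=5)
                print(f'reconnect|INFO|{host}:{port}: connected')
                return sock
            except OSError:
                time.sleep(backoff)
                backoff = min(backoff * 2, max_backoff)

    # e.g. connect_with_backoff('ovsdbserver-sb.openstack.svc', 6642)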
Mar  1 04:52:16 np0005634532 NetworkManager[49996]: <info>  [1772358736.1059] manager: (ovn-dde279-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/19)
Mar  1 04:52:16 np0005634532 NetworkManager[49996]: <info>  [1772358736.1068] manager: (ovn-25551c-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/20)
Mar  1 04:52:16 np0005634532 NetworkManager[49996]: <info>  [1772358736.1074] manager: (ovn-056888-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/21)
Mar  1 04:52:16 np0005634532 kernel: genev_sys_6081: entered promiscuous mode
Mar  1 04:52:16 np0005634532 NetworkManager[49996]: <info>  [1772358736.1235] device (genev_sys_6081): carrier: link connected
Mar  1 04:52:16 np0005634532 NetworkManager[49996]: <info>  [1772358736.1240] manager: (genev_sys_6081): new Generic device (/org/freedesktop/NetworkManager/Devices/22)
Mar  1 04:52:16 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 04:52:16 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v286: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 1023 B/s wr, 3 op/s
Mar  1 04:52:16 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:52:16 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000026s ======
Mar  1 04:52:16 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:52:16.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Mar  1 04:52:16 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:52:16 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000026s ======
Mar  1 04:52:16 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:52:16.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Mar  1 04:52:16 np0005634532 python3.9[157358]: ansible-ansible.builtin.slurp Invoked with src=/var/lib/edpm-config/deployed_services.yaml
Mar  1 04:52:17 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T09:52:17.003Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Mar  1 04:52:17 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: ::ffff:192.168.122.100 - - [01/Mar/2026:09:52:17] "GET /metrics HTTP/1.1" 200 48333 "" "Prometheus/2.51.0"
Mar  1 04:52:17 np0005634532 ceph-mgr[76134]: [prometheus INFO cherrypy.access.140604152982112] ::ffff:192.168.122.100 - - [01/Mar/2026:09:52:17] "GET /metrics HTTP/1.1" 200 48333 "" "Prometheus/2.51.0"
Mar  1 04:52:17 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[155850]: 01/03/2026 09:52:17 : epoch 69a40c42 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f01d4001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:52:17 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[155850]: 01/03/2026 09:52:17 : epoch 69a40c42 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f01b8000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:52:17 np0005634532 ceph-mgr[76134]: [balancer INFO root] Optimize plan auto_2026-03-01_09:52:17
Mar  1 04:52:17 np0005634532 ceph-mgr[76134]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Mar  1 04:52:17 np0005634532 ceph-mgr[76134]: [balancer INFO root] do_upmap
Mar  1 04:52:17 np0005634532 ceph-mgr[76134]: [balancer INFO root] pools ['.nfs', 'vms', 'cephfs.cephfs.meta', 'default.rgw.meta', '.rgw.root', 'images', '.mgr', 'default.rgw.log', 'backups', 'cephfs.cephfs.data', 'volumes', 'default.rgw.control']
Mar  1 04:52:17 np0005634532 ceph-mgr[76134]: [balancer INFO root] prepared 0/10 upmap changes
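The balancer pass above runs in upmap mode with a 0.05 max-misplaced budget and ends with "prepared 0/10 upmap changes", i.e. the PG placement is already even and nothing needs moving. A simplified sketch of the gating idea only (names invented; the real logic lives in ceph-mgr's balancer module):

    MAX_MISPLACED = 0.05    # fraction of objects allowed to be moving at once
    MAX_OPTIMIZATIONS = 10  # upmap changes to attempt per pass (the '/10' above)

    def plan_upmaps(misplaced_ratio, candidate_moves):
        """Return the moves to apply this pass, or nothing if over budget."""
        if misplaced_ratio >= MAX_MISPLACED:
            return []  # cluster is still converging; don't add more movement
        return candidate_moves[:MAX_OPTIMIZATIONS]

    print(plan_upmaps(0.0, []))  # -> [] : "prepared 0/10 upmap changes"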
Mar  1 04:52:17 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Mar  1 04:52:17 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Mar  1 04:52:17 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Mar  1 04:52:17 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Mar  1 04:52:17 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Mar  1 04:52:17 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Mar  1 04:52:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] _maybe_adjust
Mar  1 04:52:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 04:52:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Mar  1 04:52:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 04:52:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Mar  1 04:52:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 04:52:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Mar  1 04:52:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 04:52:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Mar  1 04:52:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 04:52:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Mar  1 04:52:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 04:52:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Mar  1 04:52:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 04:52:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Mar  1 04:52:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 04:52:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Mar  1 04:52:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 04:52:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Mar  1 04:52:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 04:52:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Mar  1 04:52:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 04:52:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Mar  1 04:52:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 04:52:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
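Each pg_autoscaler line above derives a raw PG target from the pool's share of space times its bias, then "quantizes" it to a power of two, subject to per-pool floors, which is why near-empty pools settle at 32 (or 1 for .mgr) here. A rough sketch of just the quantization step with an assumed floor; the real autoscaler also folds in replica counts and pg_num_min before settling:

    import math

    def quantize_pg_target(raw_target, pg_min=1):
        """Round a raw PG target to the nearest power of two, never below pg_min."""
        if raw_target <= pg_min:
            return pg_min
        return 2 ** round(math.log2(raw_target))

    print(quantize_pg_target(0.0006, pg_min=16))  # -> 16, as for cephfs.cephfs.meta
    print(quantize_pg_target(0.002, pg_min=1))    # -> 1, as for .mgr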
Mar  1 04:52:17 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Mar  1 04:52:17 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: vms, start_after=
Mar  1 04:52:17 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Mar  1 04:52:17 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Mar  1 04:52:17 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: volumes, start_after=
Mar  1 04:52:17 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: backups, start_after=
Mar  1 04:52:17 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: images, start_after=
Mar  1 04:52:17 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[155850]: 01/03/2026 09:52:17 : epoch 69a40c42 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f01c0000fa0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:52:17 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Mar  1 04:52:17 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: vms, start_after=
Mar  1 04:52:17 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: volumes, start_after=
Mar  1 04:52:17 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: backups, start_after=
Mar  1 04:52:17 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: images, start_after=
Mar  1 04:52:17 np0005634532 python3.9[157511]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/deployed_services.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Mar  1 04:52:18 np0005634532 python3.9[157637]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/deployed_services.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1772358737.351731-1830-215195147668504/.source.yaml _original_basename=.lf9unosp follow=False checksum=748b0483f707b3d30a2d455a39b7e147e39bf471 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
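The stat/copy pair above is how ansible decides whether to rewrite deployed_services.yaml: it compares the SHA-1 of the file already on disk against the staged source (checksum=748b0483... in the copy entry) and only writes when they differ. The same comparison in a few lines of Python, as a sketch of the idea rather than ansible's actual code path:

    import hashlib

    def file_sha1(path):
        """Hash a file the way ansible's stat module reports its checksum."""
        h = hashlib.sha1()
        with open(path, 'rb') as f:
            for chunk in iter(lambda: f.read(65536), b''):
                h.update(chunk)
        return h.hexdigest()

    # Rewrite only when the content actually changed, e.g.:
    # if file_sha1('/var/lib/edpm-config/deployed_services.yaml') != staged_checksum:
    #     deploy_new_copy()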
Mar  1 04:52:18 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v287: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 1023 B/s wr, 3 op/s
Mar  1 04:52:18 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:52:18 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 04:52:18 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:52:18.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 04:52:18 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:52:18 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:52:18 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:52:18.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:52:19 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[155850]: 01/03/2026 09:52:19 : epoch 69a40c42 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f01c4000fa0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:52:19 np0005634532 python3.9[157790]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove open . other_config hw-offload#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Mar  1 04:52:19 np0005634532 ovs-vsctl[157791]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove open . other_config hw-offload
Mar  1 04:52:19 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-haproxy-nfs-cephfs-compute-0-wdbjdw[99156]: [WARNING] 059/095219 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Mar  1 04:52:19 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[155850]: 01/03/2026 09:52:19 : epoch 69a40c42 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f01d40025c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:52:19 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[155850]: 01/03/2026 09:52:19 : epoch 69a40c42 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f01b80016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:52:19 np0005634532 python3.9[157944]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl get Open_vSwitch . external_ids:ovn-cms-options | sed 's/\"//g'#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Mar  1 04:52:19 np0005634532 ovs-vsctl[157947]: ovs|00001|db_ctl_base|ERR|no key "ovn-cms-options" in Open_vSwitch record "." column external_ids
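The ERR above is expected: the play reads ovn-cms-options before deciding whether to clear it, and the key is already absent. ovs-vsctl's get subcommand accepts an --if-exists flag that turns the missing-key error into empty output; a hedged Python wrapper around that (the de-quoting mirrors the sed in the play):

    import subprocess

    def get_cms_options():
        """Read external_ids:ovn-cms-options without erroring when it is unset."""
        out = subprocess.run(
            ['ovs-vsctl', '--if-exists', 'get', 'Open_vSwitch', '.',
             'external_ids:ovn-cms-options'],
            capture_output=True, text=True, check=True,
        ).stdout.strip().strip('"')
        return out or None

    # get_cms_options() -> None here, instead of the db_ctl_base ERR above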
Mar  1 04:52:20 np0005634532 ceph-osd[84309]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Mar  1 04:52:20 np0005634532 ceph-osd[84309]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 600.1 total, 600.0 interval#012Cumulative writes: 8024 writes, 33K keys, 8024 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s#012Cumulative WAL: 8024 writes, 1544 syncs, 5.20 writes per sync, written: 0.02 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 8024 writes, 33K keys, 8024 commit groups, 1.0 writes per commit group, ingest: 20.93 MB, 0.03 MB/s#012Interval WAL: 8024 writes, 1544 syncs, 5.20 writes per sync, written: 0.02 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55d021e81350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.4e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55d021e81350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.4e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable
Mar  1 04:52:20 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v288: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 1023 B/s wr, 3 op/s
Mar  1 04:52:20 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:52:20 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 04:52:20 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:52:20.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 04:52:20 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:52:20 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 04:52:20 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:52:20.674 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 04:52:20 np0005634532 python3.9[158102]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Mar  1 04:52:20 np0005634532 ovs-vsctl[158103]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options
Mar  1 04:52:21 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[155850]: 01/03/2026 09:52:21 : epoch 69a40c42 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f01c0001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:52:21 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[155850]: 01/03/2026 09:52:21 : epoch 69a40c42 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f01c4001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:52:21 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 04:52:21 np0005634532 systemd[1]: session-50.scope: Deactivated successfully.
Mar  1 04:52:21 np0005634532 systemd[1]: session-50.scope: Consumed 55.788s CPU time.
Mar  1 04:52:21 np0005634532 systemd-logind[832]: Session 50 logged out. Waiting for processes to exit.
Mar  1 04:52:21 np0005634532 systemd-logind[832]: Removed session 50.
Mar  1 04:52:21 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[155850]: 01/03/2026 09:52:21 : epoch 69a40c42 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f01d40025c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:52:22 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v289: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 597 B/s wr, 2 op/s
Mar  1 04:52:22 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:52:22 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 04:52:22 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:52:22.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 04:52:22 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:52:22 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:52:22 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:52:22.676 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:52:23 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[155850]: 01/03/2026 09:52:23 : epoch 69a40c42 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f01b80016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:52:23 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[155850]: 01/03/2026 09:52:23 : epoch 69a40c42 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f01c0001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:52:23 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[155850]: 01/03/2026 09:52:23 : epoch 69a40c42 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f01c4001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:52:24 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v290: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 597 B/s wr, 2 op/s
Mar  1 04:52:24 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:52:24 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:52:24 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:52:24.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:52:24 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:52:24 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:52:24 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:52:24.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:52:25 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[155850]: 01/03/2026 09:52:25 : epoch 69a40c42 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f01d40025c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:52:25 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[155850]: 01/03/2026 09:52:25 : epoch 69a40c42 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f01b80016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:52:25 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[155850]: 01/03/2026 09:52:25 : epoch 69a40c42 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f01c0001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:52:26 np0005634532 systemd[1]: Stopping User Manager for UID 0...
Mar  1 04:52:26 np0005634532 systemd[157113]: Activating special unit Exit the Session...
Mar  1 04:52:26 np0005634532 systemd[157113]: Stopped target Main User Target.
Mar  1 04:52:26 np0005634532 systemd[157113]: Stopped target Basic System.
Mar  1 04:52:26 np0005634532 systemd[157113]: Stopped target Paths.
Mar  1 04:52:26 np0005634532 systemd[157113]: Stopped target Sockets.
Mar  1 04:52:26 np0005634532 systemd[157113]: Stopped target Timers.
Mar  1 04:52:26 np0005634532 systemd[157113]: Stopped Daily Cleanup of User's Temporary Directories.
Mar  1 04:52:26 np0005634532 systemd[157113]: Closed D-Bus User Message Bus Socket.
Mar  1 04:52:26 np0005634532 systemd[157113]: Stopped Create User's Volatile Files and Directories.
Mar  1 04:52:26 np0005634532 systemd[157113]: Removed slice User Application Slice.
Mar  1 04:52:26 np0005634532 systemd[157113]: Reached target Shutdown.
Mar  1 04:52:26 np0005634532 systemd[157113]: Finished Exit the Session.
Mar  1 04:52:26 np0005634532 systemd[157113]: Reached target Exit the Session.
Mar  1 04:52:26 np0005634532 systemd[1]: user@0.service: Deactivated successfully.
Mar  1 04:52:26 np0005634532 systemd[1]: Stopped User Manager for UID 0.
Mar  1 04:52:26 np0005634532 systemd[1]: Stopping User Runtime Directory /run/user/0...
Mar  1 04:52:26 np0005634532 systemd[1]: run-user-0.mount: Deactivated successfully.
Mar  1 04:52:26 np0005634532 systemd[1]: user-runtime-dir@0.service: Deactivated successfully.
Mar  1 04:52:26 np0005634532 systemd[1]: Stopped User Runtime Directory /run/user/0.
Mar  1 04:52:26 np0005634532 systemd[1]: Removed slice User Slice of UID 0.
Mar  1 04:52:26 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 04:52:26 np0005634532 systemd-logind[832]: New session 52 of user zuul.
Mar  1 04:52:26 np0005634532 systemd[1]: Started Session 52 of User zuul.
Mar  1 04:52:26 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v291: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 B/s wr, 0 op/s
Mar  1 04:52:26 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:52:26 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 04:52:26 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:52:26.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 04:52:26 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:52:26 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000026s ======
Mar  1 04:52:26 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:52:26.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Mar  1 04:52:27 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T09:52:27.004Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Mar  1 04:52:27 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T09:52:27.005Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Mar  1 04:52:27 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: ::ffff:192.168.122.100 - - [01/Mar/2026:09:52:27] "GET /metrics HTTP/1.1" 200 48335 "" "Prometheus/2.51.0"
Mar  1 04:52:27 np0005634532 ceph-mgr[76134]: [prometheus INFO cherrypy.access.140604152982112] ::ffff:192.168.122.100 - - [01/Mar/2026:09:52:27] "GET /metrics HTTP/1.1" 200 48335 "" "Prometheus/2.51.0"
Mar  1 04:52:27 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[155850]: 01/03/2026 09:52:27 : epoch 69a40c42 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f01c4001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:52:27 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[155850]: 01/03/2026 09:52:27 : epoch 69a40c42 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f01d40025c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:52:27 np0005634532 python3.9[158290]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Mar  1 04:52:27 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[155850]: 01/03/2026 09:52:27 : epoch 69a40c42 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f01b8002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:52:28 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-haproxy-nfs-cephfs-compute-0-wdbjdw[99156]: [WARNING] 059/095228 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Mar  1 04:52:28 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v292: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 B/s wr, 0 op/s
Mar  1 04:52:28 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:52:28 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 04:52:28 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:52:28.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 04:52:28 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:52:28 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000026s ======
Mar  1 04:52:28 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:52:28.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Mar  1 04:52:28 np0005634532 python3.9[158449]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/var/lib/openstack/neutron-ovn-metadata-agent setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Mar  1 04:52:29 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[155850]: 01/03/2026 09:52:29 : epoch 69a40c42 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f01c0002f50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:52:29 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[155850]: 01/03/2026 09:52:29 : epoch 69a40c42 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f01c4002f50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:52:29 np0005634532 python3.9[158602]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Mar  1 04:52:29 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[155850]: 01/03/2026 09:52:29 : epoch 69a40c42 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f01d40025c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:52:30 np0005634532 python3.9[158756]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/kill_scripts setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Mar  1 04:52:30 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v293: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Mar  1 04:52:30 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:52:30 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:52:30 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:52:30.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:52:30 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:52:30 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000026s ======
Mar  1 04:52:30 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:52:30.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Mar  1 04:52:30 np0005634532 python3.9[158910]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/ovn-metadata-proxy setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Mar  1 04:52:31 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[155850]: 01/03/2026 09:52:31 : epoch 69a40c42 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f01b8002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:52:31 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[155850]: 01/03/2026 09:52:31 : epoch 69a40c42 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f01c0002f50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:52:31 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 04:52:31 np0005634532 python3.9[159063]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/external/pids setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Mar  1 04:52:31 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[155850]: 01/03/2026 09:52:31 : epoch 69a40c42 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f01c4002f50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:52:31 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Mar  1 04:52:32 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Mar  1 04:52:32 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 04:52:32 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Mar  1 04:52:32 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 04:52:32 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Mar  1 04:52:32 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 04:52:32 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 04:52:32 np0005634532 python3.9[159306]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Mar  1 04:52:32 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v294: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Mar  1 04:52:32 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Mar  1 04:52:32 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Mar  1 04:52:32 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:52:32 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:52:32 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:52:32.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:52:32 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:52:32 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 04:52:32 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:52:32.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 04:52:32 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Mar  1 04:52:32 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Mar  1 04:52:32 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Mar  1 04:52:32 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Mar  1 04:52:32 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Mar  1 04:52:32 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 04:52:32 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Mar  1 04:52:32 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 04:52:32 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Mar  1 04:52:32 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Mar  1 04:52:32 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Mar  1 04:52:32 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Mar  1 04:52:32 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Mar  1 04:52:32 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Mar  1 04:52:33 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 04:52:33 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 04:52:33 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 04:52:33 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 04:52:33 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Mar  1 04:52:33 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 04:52:33 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 04:52:33 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Mar  1 04:52:33 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[155850]: 01/03/2026 09:52:33 : epoch 69a40c42 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f01d40025c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:52:33 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[155850]: 01/03/2026 09:52:33 : epoch 69a40c42 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f01b8002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:52:33 np0005634532 python3.9[159524]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Mar  1 04:52:33 np0005634532 podman[159567]: 2026-03-01 09:52:33.36165145 +0000 UTC m=+0.064562756 container create 5eaf22c16424adfb2e515188b53635d8cb5d70580356185ecac1ba368bc6a8fc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_grothendieck, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid)
Mar  1 04:52:33 np0005634532 systemd[1]: Started libpod-conmon-5eaf22c16424adfb2e515188b53635d8cb5d70580356185ecac1ba368bc6a8fc.scope.
Mar  1 04:52:33 np0005634532 podman[159567]: 2026-03-01 09:52:33.324988102 +0000 UTC m=+0.027899458 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Mar  1 04:52:33 np0005634532 systemd[1]: Started libcrun container.
Mar  1 04:52:33 np0005634532 podman[159567]: 2026-03-01 09:52:33.477247927 +0000 UTC m=+0.180159243 container init 5eaf22c16424adfb2e515188b53635d8cb5d70580356185ecac1ba368bc6a8fc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_grothendieck, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Mar  1 04:52:33 np0005634532 podman[159567]: 2026-03-01 09:52:33.48724774 +0000 UTC m=+0.190159056 container start 5eaf22c16424adfb2e515188b53635d8cb5d70580356185ecac1ba368bc6a8fc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_grothendieck, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Mar  1 04:52:33 np0005634532 podman[159567]: 2026-03-01 09:52:33.492902783 +0000 UTC m=+0.195814099 container attach 5eaf22c16424adfb2e515188b53635d8cb5d70580356185ecac1ba368bc6a8fc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_grothendieck, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Mar  1 04:52:33 np0005634532 awesome_grothendieck[159584]: 167 167
Mar  1 04:52:33 np0005634532 systemd[1]: libpod-5eaf22c16424adfb2e515188b53635d8cb5d70580356185ecac1ba368bc6a8fc.scope: Deactivated successfully.
Mar  1 04:52:33 np0005634532 podman[159567]: 2026-03-01 09:52:33.496795662 +0000 UTC m=+0.199706998 container died 5eaf22c16424adfb2e515188b53635d8cb5d70580356185ecac1ba368bc6a8fc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_grothendieck, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Mar  1 04:52:33 np0005634532 systemd[1]: var-lib-containers-storage-overlay-685113fbaf9d178ad509ff99ab4902e8817aadb08120f0bf1f558ebf457bf90e-merged.mount: Deactivated successfully.
Mar  1 04:52:33 np0005634532 podman[159567]: 2026-03-01 09:52:33.561981272 +0000 UTC m=+0.264892548 container remove 5eaf22c16424adfb2e515188b53635d8cb5d70580356185ecac1ba368bc6a8fc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_grothendieck, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid)
Mar  1 04:52:33 np0005634532 systemd[1]: libpod-conmon-5eaf22c16424adfb2e515188b53635d8cb5d70580356185ecac1ba368bc6a8fc.scope: Deactivated successfully.
Mar  1 04:52:33 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[155850]: 01/03/2026 09:52:33 : epoch 69a40c42 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f01c0002f50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:52:33 np0005634532 podman[159610]: 2026-03-01 09:52:33.737883107 +0000 UTC m=+0.074513988 container create cc8804c60cd637728374bb058b8e14eb707c6f0a64848b742bd588d7f0ee5f79 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_hawking, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Mar  1 04:52:33 np0005634532 podman[159610]: 2026-03-01 09:52:33.68823434 +0000 UTC m=+0.024865271 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Mar  1 04:52:33 np0005634532 systemd[1]: Started libpod-conmon-cc8804c60cd637728374bb058b8e14eb707c6f0a64848b742bd588d7f0ee5f79.scope.
Mar  1 04:52:33 np0005634532 systemd[1]: Started libcrun container.
Mar  1 04:52:33 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b44b70e03766a4ff5c666b0eedf968c4e7469f0d5e92c437e8a2edd4b4b4059d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Mar  1 04:52:33 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b44b70e03766a4ff5c666b0eedf968c4e7469f0d5e92c437e8a2edd4b4b4059d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Mar  1 04:52:33 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b44b70e03766a4ff5c666b0eedf968c4e7469f0d5e92c437e8a2edd4b4b4059d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Mar  1 04:52:33 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b44b70e03766a4ff5c666b0eedf968c4e7469f0d5e92c437e8a2edd4b4b4059d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Mar  1 04:52:33 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b44b70e03766a4ff5c666b0eedf968c4e7469f0d5e92c437e8a2edd4b4b4059d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Mar  1 04:52:33 np0005634532 podman[159610]: 2026-03-01 09:52:33.853608068 +0000 UTC m=+0.190238909 container init cc8804c60cd637728374bb058b8e14eb707c6f0a64848b742bd588d7f0ee5f79 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_hawking, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Mar  1 04:52:33 np0005634532 podman[159610]: 2026-03-01 09:52:33.866490554 +0000 UTC m=+0.203121425 container start cc8804c60cd637728374bb058b8e14eb707c6f0a64848b742bd588d7f0ee5f79 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_hawking, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Mar  1 04:52:33 np0005634532 podman[159610]: 2026-03-01 09:52:33.873185663 +0000 UTC m=+0.209816534 container attach cc8804c60cd637728374bb058b8e14eb707c6f0a64848b742bd588d7f0ee5f79 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_hawking, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True)
Mar  1 04:52:34 np0005634532 determined_hawking[159629]: --> passed data devices: 0 physical, 1 LVM
Mar  1 04:52:34 np0005634532 determined_hawking[159629]: --> All data devices are unavailable
Mar  1 04:52:34 np0005634532 systemd[1]: libpod-cc8804c60cd637728374bb058b8e14eb707c6f0a64848b742bd588d7f0ee5f79.scope: Deactivated successfully.
Mar  1 04:52:34 np0005634532 podman[159610]: 2026-03-01 09:52:34.26675466 +0000 UTC m=+0.603385501 container died cc8804c60cd637728374bb058b8e14eb707c6f0a64848b742bd588d7f0ee5f79 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_hawking, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0)
Mar  1 04:52:34 np0005634532 systemd[1]: var-lib-containers-storage-overlay-b44b70e03766a4ff5c666b0eedf968c4e7469f0d5e92c437e8a2edd4b4b4059d-merged.mount: Deactivated successfully.
Mar  1 04:52:34 np0005634532 podman[159610]: 2026-03-01 09:52:34.305210114 +0000 UTC m=+0.641840955 container remove cc8804c60cd637728374bb058b8e14eb707c6f0a64848b742bd588d7f0ee5f79 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_hawking, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Mar  1 04:52:34 np0005634532 systemd[1]: libpod-conmon-cc8804c60cd637728374bb058b8e14eb707c6f0a64848b742bd588d7f0ee5f79.scope: Deactivated successfully.
Mar  1 04:52:34 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v295: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Mar  1 04:52:34 np0005634532 python3.9[159805]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/ovn_metadata_haproxy_wrapper follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Mar  1 04:52:34 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:52:34 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:52:34 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:52:34.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:52:34 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:52:34 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:52:34 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:52:34.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:52:34 np0005634532 podman[159960]: 2026-03-01 09:52:34.959568235 +0000 UTC m=+0.055925807 container create 87a38cd303d5d3814c4866a7a31148e103dc1374b1c5b1286cb8eb88acb35b73 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_khayyam, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Mar  1 04:52:35 np0005634532 systemd[1]: Started libpod-conmon-87a38cd303d5d3814c4866a7a31148e103dc1374b1c5b1286cb8eb88acb35b73.scope.
Mar  1 04:52:35 np0005634532 podman[159960]: 2026-03-01 09:52:34.940929193 +0000 UTC m=+0.037286805 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Mar  1 04:52:35 np0005634532 systemd[1]: Started libcrun container.
Mar  1 04:52:35 np0005634532 podman[159960]: 2026-03-01 09:52:35.06478919 +0000 UTC m=+0.161146762 container init 87a38cd303d5d3814c4866a7a31148e103dc1374b1c5b1286cb8eb88acb35b73 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_khayyam, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Mar  1 04:52:35 np0005634532 podman[159960]: 2026-03-01 09:52:35.071029568 +0000 UTC m=+0.167387160 container start 87a38cd303d5d3814c4866a7a31148e103dc1374b1c5b1286cb8eb88acb35b73 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_khayyam, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Mar  1 04:52:35 np0005634532 podman[159960]: 2026-03-01 09:52:35.074807593 +0000 UTC m=+0.171165175 container attach 87a38cd303d5d3814c4866a7a31148e103dc1374b1c5b1286cb8eb88acb35b73 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_khayyam, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Mar  1 04:52:35 np0005634532 zen_khayyam[160010]: 167 167
Mar  1 04:52:35 np0005634532 systemd[1]: libpod-87a38cd303d5d3814c4866a7a31148e103dc1374b1c5b1286cb8eb88acb35b73.scope: Deactivated successfully.
Mar  1 04:52:35 np0005634532 podman[159960]: 2026-03-01 09:52:35.079942803 +0000 UTC m=+0.176300445 container died 87a38cd303d5d3814c4866a7a31148e103dc1374b1c5b1286cb8eb88acb35b73 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_khayyam, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Mar  1 04:52:35 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[155850]: 01/03/2026 09:52:35 : epoch 69a40c42 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f01c4002f50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:52:35 np0005634532 systemd[1]: var-lib-containers-storage-overlay-e3f530312a2e8b43137b90d6fc85966de1323b9ffd2306f8655effd2bfc5372d-merged.mount: Deactivated successfully.
Mar  1 04:52:35 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[155850]: 01/03/2026 09:52:35 : epoch 69a40c42 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f01d40025c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:52:35 np0005634532 podman[159960]: 2026-03-01 09:52:35.120820939 +0000 UTC m=+0.217178491 container remove 87a38cd303d5d3814c4866a7a31148e103dc1374b1c5b1286cb8eb88acb35b73 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_khayyam, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Mar  1 04:52:35 np0005634532 systemd[1]: libpod-conmon-87a38cd303d5d3814c4866a7a31148e103dc1374b1c5b1286cb8eb88acb35b73.scope: Deactivated successfully.
Mar  1 04:52:35 np0005634532 python3.9[160043]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/ovn_metadata_haproxy_wrapper mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1772358753.9296875-213-156775132136771/.source follow=False _original_basename=haproxy.j2 checksum=36eb690540aa0772ca6567179203c91771cb80db backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Mar  1 04:52:35 np0005634532 podman[160060]: 2026-03-01 09:52:35.285585631 +0000 UTC m=+0.053706211 container create a473b9f33f96aa3511386b42bc2fcc10ff034369d2b966b7439a70e0d46fdf19 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_faraday, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Mar  1 04:52:35 np0005634532 systemd[1]: Started libpod-conmon-a473b9f33f96aa3511386b42bc2fcc10ff034369d2b966b7439a70e0d46fdf19.scope.
Mar  1 04:52:35 np0005634532 podman[160060]: 2026-03-01 09:52:35.262698471 +0000 UTC m=+0.030819071 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Mar  1 04:52:35 np0005634532 systemd[1]: Started libcrun container.
Mar  1 04:52:35 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f62c4c9c585f50e22555db04ea581b77835d484c6f1c599f062ebcb5292bbf4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Mar  1 04:52:35 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f62c4c9c585f50e22555db04ea581b77835d484c6f1c599f062ebcb5292bbf4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Mar  1 04:52:35 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f62c4c9c585f50e22555db04ea581b77835d484c6f1c599f062ebcb5292bbf4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Mar  1 04:52:35 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f62c4c9c585f50e22555db04ea581b77835d484c6f1c599f062ebcb5292bbf4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Mar  1 04:52:35 np0005634532 podman[160060]: 2026-03-01 09:52:35.405429606 +0000 UTC m=+0.173550206 container init a473b9f33f96aa3511386b42bc2fcc10ff034369d2b966b7439a70e0d46fdf19 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_faraday, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Mar  1 04:52:35 np0005634532 podman[160060]: 2026-03-01 09:52:35.418843806 +0000 UTC m=+0.186964386 container start a473b9f33f96aa3511386b42bc2fcc10ff034369d2b966b7439a70e0d46fdf19 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_faraday, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Mar  1 04:52:35 np0005634532 podman[160060]: 2026-03-01 09:52:35.422840257 +0000 UTC m=+0.190960877 container attach a473b9f33f96aa3511386b42bc2fcc10ff034369d2b966b7439a70e0d46fdf19 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_faraday, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Mar  1 04:52:35 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[155850]: 01/03/2026 09:52:35 : epoch 69a40c42 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f01b8003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:52:35 np0005634532 trusting_faraday[160084]: {
Mar  1 04:52:35 np0005634532 trusting_faraday[160084]:    "0": [
Mar  1 04:52:35 np0005634532 trusting_faraday[160084]:        {
Mar  1 04:52:35 np0005634532 trusting_faraday[160084]:            "devices": [
Mar  1 04:52:35 np0005634532 trusting_faraday[160084]:                "/dev/loop3"
Mar  1 04:52:35 np0005634532 trusting_faraday[160084]:            ],
Mar  1 04:52:35 np0005634532 trusting_faraday[160084]:            "lv_name": "ceph_lv0",
Mar  1 04:52:35 np0005634532 trusting_faraday[160084]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Mar  1 04:52:35 np0005634532 trusting_faraday[160084]:            "lv_size": "21470642176",
Mar  1 04:52:35 np0005634532 trusting_faraday[160084]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=0RN1y3-WDwf-vmRn-3Uec-9cdU-WGcX-8Z6LEG,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=437b1e74-f995-5d64-af1d-257ce01d77ab,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=e5da778e-73b7-4ea1-8a91-750fe3f6aa68,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Mar  1 04:52:35 np0005634532 trusting_faraday[160084]:            "lv_uuid": "0RN1y3-WDwf-vmRn-3Uec-9cdU-WGcX-8Z6LEG",
Mar  1 04:52:35 np0005634532 trusting_faraday[160084]:            "name": "ceph_lv0",
Mar  1 04:52:35 np0005634532 trusting_faraday[160084]:            "path": "/dev/ceph_vg0/ceph_lv0",
Mar  1 04:52:35 np0005634532 trusting_faraday[160084]:            "tags": {
Mar  1 04:52:35 np0005634532 trusting_faraday[160084]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Mar  1 04:52:35 np0005634532 trusting_faraday[160084]:                "ceph.block_uuid": "0RN1y3-WDwf-vmRn-3Uec-9cdU-WGcX-8Z6LEG",
Mar  1 04:52:35 np0005634532 trusting_faraday[160084]:                "ceph.cephx_lockbox_secret": "",
Mar  1 04:52:35 np0005634532 trusting_faraday[160084]:                "ceph.cluster_fsid": "437b1e74-f995-5d64-af1d-257ce01d77ab",
Mar  1 04:52:35 np0005634532 trusting_faraday[160084]:                "ceph.cluster_name": "ceph",
Mar  1 04:52:35 np0005634532 trusting_faraday[160084]:                "ceph.crush_device_class": "",
Mar  1 04:52:35 np0005634532 trusting_faraday[160084]:                "ceph.encrypted": "0",
Mar  1 04:52:35 np0005634532 trusting_faraday[160084]:                "ceph.osd_fsid": "e5da778e-73b7-4ea1-8a91-750fe3f6aa68",
Mar  1 04:52:35 np0005634532 trusting_faraday[160084]:                "ceph.osd_id": "0",
Mar  1 04:52:35 np0005634532 trusting_faraday[160084]:                "ceph.osdspec_affinity": "default_drive_group",
Mar  1 04:52:35 np0005634532 trusting_faraday[160084]:                "ceph.type": "block",
Mar  1 04:52:35 np0005634532 trusting_faraday[160084]:                "ceph.vdo": "0",
Mar  1 04:52:35 np0005634532 trusting_faraday[160084]:                "ceph.with_tpm": "0"
Mar  1 04:52:35 np0005634532 trusting_faraday[160084]:            },
Mar  1 04:52:35 np0005634532 trusting_faraday[160084]:            "type": "block",
Mar  1 04:52:35 np0005634532 trusting_faraday[160084]:            "vg_name": "ceph_vg0"
Mar  1 04:52:35 np0005634532 trusting_faraday[160084]:        }
Mar  1 04:52:35 np0005634532 trusting_faraday[160084]:    ]
Mar  1 04:52:35 np0005634532 trusting_faraday[160084]: }
Mar  1 04:52:35 np0005634532 systemd[1]: libpod-a473b9f33f96aa3511386b42bc2fcc10ff034369d2b966b7439a70e0d46fdf19.scope: Deactivated successfully.
Mar  1 04:52:35 np0005634532 podman[160060]: 2026-03-01 09:52:35.800664995 +0000 UTC m=+0.568785595 container died a473b9f33f96aa3511386b42bc2fcc10ff034369d2b966b7439a70e0d46fdf19 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_faraday, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Mar  1 04:52:35 np0005634532 systemd[1]: var-lib-containers-storage-overlay-6f62c4c9c585f50e22555db04ea581b77835d484c6f1c599f062ebcb5292bbf4-merged.mount: Deactivated successfully.
Mar  1 04:52:35 np0005634532 podman[160060]: 2026-03-01 09:52:35.851924053 +0000 UTC m=+0.620044653 container remove a473b9f33f96aa3511386b42bc2fcc10ff034369d2b966b7439a70e0d46fdf19 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_faraday, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Mar  1 04:52:35 np0005634532 systemd[1]: libpod-conmon-a473b9f33f96aa3511386b42bc2fcc10ff034369d2b966b7439a70e0d46fdf19.scope: Deactivated successfully.
Mar  1 04:52:36 np0005634532 python3.9[160270]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/kill_scripts/haproxy-kill follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Mar  1 04:52:36 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 04:52:36 np0005634532 podman[160413]: 2026-03-01 09:52:36.458672959 +0000 UTC m=+0.047198726 container create d7559f7763c1672aaf5833285a05d2129f28f082ff6f29bf29a39d185a08eaae (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_poincare, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Mar  1 04:52:36 np0005634532 systemd[1]: Started libpod-conmon-d7559f7763c1672aaf5833285a05d2129f28f082ff6f29bf29a39d185a08eaae.scope.
Mar  1 04:52:36 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v296: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Mar  1 04:52:36 np0005634532 podman[160413]: 2026-03-01 09:52:36.43738798 +0000 UTC m=+0.025913797 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Mar  1 04:52:36 np0005634532 systemd[1]: Started libcrun container.
Mar  1 04:52:36 np0005634532 podman[160413]: 2026-03-01 09:52:36.551960472 +0000 UTC m=+0.140486239 container init d7559f7763c1672aaf5833285a05d2129f28f082ff6f29bf29a39d185a08eaae (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_poincare, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Mar  1 04:52:36 np0005634532 podman[160413]: 2026-03-01 09:52:36.558809395 +0000 UTC m=+0.147335132 container start d7559f7763c1672aaf5833285a05d2129f28f082ff6f29bf29a39d185a08eaae (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_poincare, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Mar  1 04:52:36 np0005634532 podman[160413]: 2026-03-01 09:52:36.562262622 +0000 UTC m=+0.150788349 container attach d7559f7763c1672aaf5833285a05d2129f28f082ff6f29bf29a39d185a08eaae (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_poincare, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Mar  1 04:52:36 np0005634532 epic_poincare[160476]: 167 167
Mar  1 04:52:36 np0005634532 systemd[1]: libpod-d7559f7763c1672aaf5833285a05d2129f28f082ff6f29bf29a39d185a08eaae.scope: Deactivated successfully.
Mar  1 04:52:36 np0005634532 podman[160413]: 2026-03-01 09:52:36.570691006 +0000 UTC m=+0.159216743 container died d7559f7763c1672aaf5833285a05d2129f28f082ff6f29bf29a39d185a08eaae (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_poincare, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Mar  1 04:52:36 np0005634532 systemd[1]: var-lib-containers-storage-overlay-9ace798b202dbc84cd20221e25dfcc623ce23f72652a878dab96a95c1e7ab206-merged.mount: Deactivated successfully.
Mar  1 04:52:36 np0005634532 podman[160413]: 2026-03-01 09:52:36.619103362 +0000 UTC m=+0.207629099 container remove d7559f7763c1672aaf5833285a05d2129f28f082ff6f29bf29a39d185a08eaae (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=epic_poincare, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Mar  1 04:52:36 np0005634532 systemd[1]: libpod-conmon-d7559f7763c1672aaf5833285a05d2129f28f082ff6f29bf29a39d185a08eaae.scope: Deactivated successfully.
Mar  1 04:52:36 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:52:36 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:52:36 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:52:36.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:52:36 np0005634532 python3.9[160475]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/kill_scripts/haproxy-kill mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1772358755.477173-258-105501913257541/.source follow=False _original_basename=kill-script.j2 checksum=2dfb5489f491f61b95691c3bf95fa1fe48ff3700 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Mar  1 04:52:36 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:52:36 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:52:36 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:52:36.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:52:36 np0005634532 podman[160513]: 2026-03-01 09:52:36.821047786 +0000 UTC m=+0.058553944 container create 204f114500128cfda67b38f0bb8345b0c04cc746c04723bc6d328c9b36faf05b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_hodgkin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Mar  1 04:52:36 np0005634532 systemd[1]: Started libpod-conmon-204f114500128cfda67b38f0bb8345b0c04cc746c04723bc6d328c9b36faf05b.scope.
Mar  1 04:52:36 np0005634532 podman[160513]: 2026-03-01 09:52:36.804983589 +0000 UTC m=+0.042489767 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Mar  1 04:52:36 np0005634532 systemd[1]: Started libcrun container.
Mar  1 04:52:36 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8f0f5a982b7d473272f8ea24b8cf9d3c25bd4e933cef31ba96c51425e2d9582/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Mar  1 04:52:36 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8f0f5a982b7d473272f8ea24b8cf9d3c25bd4e933cef31ba96c51425e2d9582/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Mar  1 04:52:36 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8f0f5a982b7d473272f8ea24b8cf9d3c25bd4e933cef31ba96c51425e2d9582/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Mar  1 04:52:36 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8f0f5a982b7d473272f8ea24b8cf9d3c25bd4e933cef31ba96c51425e2d9582/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Mar  1 04:52:36 np0005634532 podman[160513]: 2026-03-01 09:52:36.94362869 +0000 UTC m=+0.181134888 container init 204f114500128cfda67b38f0bb8345b0c04cc746c04723bc6d328c9b36faf05b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_hodgkin, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Mar  1 04:52:36 np0005634532 podman[160513]: 2026-03-01 09:52:36.953381737 +0000 UTC m=+0.190887905 container start 204f114500128cfda67b38f0bb8345b0c04cc746c04723bc6d328c9b36faf05b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_hodgkin, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Mar  1 04:52:36 np0005634532 podman[160513]: 2026-03-01 09:52:36.957599544 +0000 UTC m=+0.195105742 container attach 204f114500128cfda67b38f0bb8345b0c04cc746c04723bc6d328c9b36faf05b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_hodgkin, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Mar  1 04:52:37 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T09:52:37.007Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Mar  1 04:52:37 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: ::ffff:192.168.122.100 - - [01/Mar/2026:09:52:37] "GET /metrics HTTP/1.1" 200 48336 "" "Prometheus/2.51.0"
Mar  1 04:52:37 np0005634532 ceph-mgr[76134]: [prometheus INFO cherrypy.access.140604152982112] ::ffff:192.168.122.100 - - [01/Mar/2026:09:52:37] "GET /metrics HTTP/1.1" 200 48336 "" "Prometheus/2.51.0"
Mar  1 04:52:37 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[155850]: 01/03/2026 09:52:37 : epoch 69a40c42 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f01c0004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:52:37 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[155850]: 01/03/2026 09:52:37 : epoch 69a40c42 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f01c4004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:52:37 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[155850]: 01/03/2026 09:52:37 : epoch 69a40c42 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Mar  1 04:52:37 np0005634532 python3.9[160692]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Mar  1 04:52:37 np0005634532 lvm[160747]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Mar  1 04:52:37 np0005634532 lvm[160747]: VG ceph_vg0 finished
Mar  1 04:52:37 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[155850]: 01/03/2026 09:52:37 : epoch 69a40c42 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f01d40025c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:52:37 np0005634532 admiring_hodgkin[160542]: {}
Mar  1 04:52:37 np0005634532 systemd[1]: libpod-204f114500128cfda67b38f0bb8345b0c04cc746c04723bc6d328c9b36faf05b.scope: Deactivated successfully.
Mar  1 04:52:37 np0005634532 systemd[1]: libpod-204f114500128cfda67b38f0bb8345b0c04cc746c04723bc6d328c9b36faf05b.scope: Consumed 1.114s CPU time.
Mar  1 04:52:37 np0005634532 podman[160513]: 2026-03-01 09:52:37.794480386 +0000 UTC m=+1.031986584 container died 204f114500128cfda67b38f0bb8345b0c04cc746c04723bc6d328c9b36faf05b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_hodgkin, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Mar  1 04:52:37 np0005634532 systemd[1]: var-lib-containers-storage-overlay-a8f0f5a982b7d473272f8ea24b8cf9d3c25bd4e933cef31ba96c51425e2d9582-merged.mount: Deactivated successfully.
Mar  1 04:52:37 np0005634532 podman[160513]: 2026-03-01 09:52:37.847446158 +0000 UTC m=+1.084952316 container remove 204f114500128cfda67b38f0bb8345b0c04cc746c04723bc6d328c9b36faf05b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_hodgkin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Mar  1 04:52:37 np0005634532 systemd[1]: libpod-conmon-204f114500128cfda67b38f0bb8345b0c04cc746c04723bc6d328c9b36faf05b.scope: Deactivated successfully.
Mar  1 04:52:37 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Mar  1 04:52:37 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 04:52:37 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Mar  1 04:52:37 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 04:52:38 np0005634532 python3.9[160872]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Mar  1 04:52:38 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v297: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Mar  1 04:52:38 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:52:38 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 04:52:38 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:52:38.648 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 04:52:38 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:52:38 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:52:38 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:52:38.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:52:38 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 04:52:38 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 04:52:39 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[155850]: 01/03/2026 09:52:39 : epoch 69a40c42 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f01b8003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:52:39 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[155850]: 01/03/2026 09:52:39 : epoch 69a40c42 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f01c0004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:52:39 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[155850]: 01/03/2026 09:52:39 : epoch 69a40c42 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f01c0004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:52:40 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[155850]: 01/03/2026 09:52:40 : epoch 69a40c42 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Mar  1 04:52:40 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[155850]: 01/03/2026 09:52:40 : epoch 69a40c42 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Mar  1 04:52:40 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v298: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 597 B/s wr, 2 op/s
Mar  1 04:52:40 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:52:40 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:52:40 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:52:40.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:52:40 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:52:40 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000026s ======
Mar  1 04:52:40 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:52:40.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Mar  1 04:52:40 np0005634532 python3.9[161028]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Mar  1 04:52:41 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[155850]: 01/03/2026 09:52:41 : epoch 69a40c42 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f01d40025c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:52:41 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[155850]: 01/03/2026 09:52:41 : epoch 69a40c42 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f01b8003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:52:41 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 04:52:41 np0005634532 python3.9[161181]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-ovn-metadata-agent/01-rootwrap.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Mar  1 04:52:41 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[155850]: 01/03/2026 09:52:41 : epoch 69a40c42 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f01c0004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:52:42 np0005634532 python3.9[161303]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-ovn-metadata-agent/01-rootwrap.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1772358761.1508064-369-148084883498887/.source.conf follow=False _original_basename=rootwrap.conf.j2 checksum=11f2cfb4b7d97b2cef3c2c2d88089e6999cffe22 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Mar  1 04:52:42 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v299: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 596 B/s wr, 1 op/s
Mar  1 04:52:42 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:52:42 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:52:42 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:52:42.655 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:52:42 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:52:42 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 04:52:42 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:52:42.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 04:52:42 np0005634532 python3.9[161454]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Mar  1 04:52:43 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[155850]: 01/03/2026 09:52:43 : epoch 69a40c42 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f01c4004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:52:43 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[155850]: 01/03/2026 09:52:43 : epoch 69a40c42 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f01d40025c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:52:43 np0005634532 python3.9[161575]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1772358762.3019679-369-140613392926441/.source.conf follow=False _original_basename=neutron-ovn-metadata-agent.conf.j2 checksum=8bc979abbe81c2cf3993a225517a7e2483e20443 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Mar  1 04:52:43 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[155850]: 01/03/2026 09:52:43 : epoch 69a40c42 : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Mar  1 04:52:43 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[155850]: 01/03/2026 09:52:43 : epoch 69a40c42 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f01b8003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:52:44 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v300: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 938 B/s wr, 3 op/s
Mar  1 04:52:44 np0005634532 python3.9[161727]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-ovn-metadata-agent/10-neutron-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Mar  1 04:52:44 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:52:44 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:52:44 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:52:44.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:52:44 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:52:44 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 04:52:44 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:52:44.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 04:52:45 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[155850]: 01/03/2026 09:52:45 : epoch 69a40c42 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f01b8003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:52:45 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[155850]: 01/03/2026 09:52:45 : epoch 69a40c42 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f01c4004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:52:45 np0005634532 python3.9[161848]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-ovn-metadata-agent/10-neutron-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1772358764.1232138-501-33951565160461/.source.conf _original_basename=10-neutron-metadata.conf follow=False checksum=ca7d4d155f5b812fab1a3b70e34adb495d291b8d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Mar  1 04:52:45 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[155850]: 01/03/2026 09:52:45 : epoch 69a40c42 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f01d40025c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:52:45 np0005634532 ovn_controller[157082]: 2026-03-01T09:52:45Z|00025|memory|INFO|16256 kB peak resident set size after 30.0 seconds
Mar  1 04:52:45 np0005634532 ovn_controller[157082]: 2026-03-01T09:52:45Z|00026|memory|INFO|idl-cells-OVN_Southbound:273 idl-cells-Open_vSwitch:642 ofctrl_desired_flow_usage-KB:7 ofctrl_installed_flow_usage-KB:5 ofctrl_sb_flow_ref_usage-KB:3
Mar  1 04:52:46 np0005634532 podman[161973]: 2026-03-01 09:52:46.016052689 +0000 UTC m=+0.103339868 container health_status eba5e7c5bf1cb0694498c3b5d0f9b901dec36f2482ec40aef98fed1e15d9f18d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:bf1f44b6bfa655654f556ae3adca2582892e72550d916a5b0a8dbacb0e210bbe, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '08b9467e1b7e95537191fb2fa6825d176e65d7d6be048e9a0925db064969bbde-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:bf1f44b6bfa655654f556ae3adca2582892e72550d916a5b0a8dbacb0e210bbe', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, managed_by=edpm_ansible, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223)
Mar  1 04:52:46 np0005634532 python3.9[162010]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-ovn-metadata-agent/05-nova-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Mar  1 04:52:46 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 04:52:46 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v301: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 938 B/s wr, 3 op/s
Mar  1 04:52:46 np0005634532 python3.9[162148]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-ovn-metadata-agent/05-nova-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1772358765.6447754-501-202009961120932/.source.conf _original_basename=05-nova-metadata.conf follow=False checksum=a14d6b38898a379cd37fc0bf365d17f10859446f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Mar  1 04:52:46 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:52:46 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 04:52:46 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:52:46.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 04:52:46 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:52:46 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 04:52:46 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:52:46.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 04:52:47 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T09:52:47.010Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Mar  1 04:52:47 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T09:52:47.011Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Mar  1 04:52:47 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: ::ffff:192.168.122.100 - - [01/Mar/2026:09:52:47] "GET /metrics HTTP/1.1" 200 48336 "" "Prometheus/2.51.0"
Mar  1 04:52:47 np0005634532 ceph-mgr[76134]: [prometheus INFO cherrypy.access.140604152982112] ::ffff:192.168.122.100 - - [01/Mar/2026:09:52:47] "GET /metrics HTTP/1.1" 200 48336 "" "Prometheus/2.51.0"
Mar  1 04:52:47 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[155850]: 01/03/2026 09:52:47 : epoch 69a40c42 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f01b8003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:52:47 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[155850]: 01/03/2026 09:52:47 : epoch 69a40c42 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f01b8003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:52:47 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Mar  1 04:52:47 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Mar  1 04:52:47 np0005634532 python3.9[162301]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Mar  1 04:52:47 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Mar  1 04:52:47 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Mar  1 04:52:47 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Mar  1 04:52:47 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Mar  1 04:52:47 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Mar  1 04:52:47 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Mar  1 04:52:47 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[155850]: 01/03/2026 09:52:47 : epoch 69a40c42 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f01cc000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:52:48 np0005634532 python3.9[162457]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Mar  1 04:52:48 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v302: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 938 B/s wr, 3 op/s
Mar  1 04:52:48 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:52:48 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:52:48 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:52:48.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:52:48 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:52:48 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 04:52:48 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:52:48.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 04:52:48 np0005634532 python3.9[162613]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Mar  1 04:52:49 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[155850]: 01/03/2026 09:52:49 : epoch 69a40c42 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f01b0000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:52:49 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[155850]: 01/03/2026 09:52:49 : epoch 69a40c42 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f01c4004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:52:49 np0005634532 python3.9[162692]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Mar  1 04:52:49 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[155850]: 01/03/2026 09:52:49 : epoch 69a40c42 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f01b8003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:52:50 np0005634532 python3.9[162846]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Mar  1 04:52:50 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-haproxy-nfs-cephfs-compute-0-wdbjdw[99156]: [WARNING] 059/095250 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Mar  1 04:52:50 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v303: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 1023 B/s wr, 4 op/s
Mar  1 04:52:50 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:52:50 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000026s ======
Mar  1 04:52:50 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:52:50.666 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Mar  1 04:52:50 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:52:50 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000026s ======
Mar  1 04:52:50 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:52:50.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Mar  1 04:52:50 np0005634532 python3.9[162926]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Mar  1 04:52:51 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[155850]: 01/03/2026 09:52:51 : epoch 69a40c42 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f01cc001930 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:52:51 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[155850]: 01/03/2026 09:52:51 : epoch 69a40c42 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f01b00016a0 fd 38 proxy ignored for local
Mar  1 04:52:51 np0005634532 kernel: ganesha.nfsd[162197]: segfault at 50 ip 00007f0262fb932e sp 00007f01e8ff8210 error 4 in libntirpc.so.5.8[7f0262f9e000+2c000] likely on CPU 1 (core 0, socket 1)
Mar  1 04:52:51 np0005634532 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Mar  1 04:52:51 np0005634532 systemd[1]: Started Process Core Dump (PID 163026/UID 0).
Mar  1 04:52:51 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 04:52:51 np0005634532 python3.9[163083]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:52:52 np0005634532 python3.9[163262]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Mar  1 04:52:52 np0005634532 systemd-coredump[163029]: Process 155860 (ganesha.nfsd) of user 0 dumped core.
Mar  1 04:52:52 np0005634532 systemd-coredump[163029]: Stack trace of thread 56:
Mar  1 04:52:52 np0005634532 systemd-coredump[163029]: #0  0x00007f0262fb932e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
Mar  1 04:52:52 np0005634532 systemd-coredump[163029]: ELF object binary architecture: AMD x86-64
Mar  1 04:52:52 np0005634532 systemd[1]: systemd-coredump@4-163026-0.service: Deactivated successfully.
Mar  1 04:52:52 np0005634532 systemd[1]: systemd-coredump@4-163026-0.service: Consumed 1.077s CPU time.
Mar  1 04:52:52 np0005634532 podman[163344]: 2026-03-01 09:52:52.386709058 +0000 UTC m=+0.039913402 container died 01f02cbed471c0aaef3d106da3a0f820751a86804122aba9a7e5ca63b2a6be59 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Mar  1 04:52:52 np0005634532 systemd[1]: var-lib-containers-storage-overlay-90646d1a555101af477d779c00a7816a5ccf58df0c82fe6e5b2fc3fae8546d53-merged.mount: Deactivated successfully.
Mar  1 04:52:52 np0005634532 podman[163344]: 2026-03-01 09:52:52.437290709 +0000 UTC m=+0.090495023 container remove 01f02cbed471c0aaef3d106da3a0f820751a86804122aba9a7e5ca63b2a6be59 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Mar  1 04:52:52 np0005634532 systemd[1]: ceph-437b1e74-f995-5d64-af1d-257ce01d77ab@nfs.cephfs.2.0.compute-0.ljexyw.service: Main process exited, code=exited, status=139/n/a
Mar  1 04:52:52 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v304: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 426 B/s wr, 2 op/s
Mar  1 04:52:52 np0005634532 python3.9[163352]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:52:52 np0005634532 systemd[1]: ceph-437b1e74-f995-5d64-af1d-257ce01d77ab@nfs.cephfs.2.0.compute-0.ljexyw.service: Failed with result 'exit-code'.
Mar  1 04:52:52 np0005634532 systemd[1]: ceph-437b1e74-f995-5d64-af1d-257ce01d77ab@nfs.cephfs.2.0.compute-0.ljexyw.service: Consumed 1.323s CPU time.
Mar  1 04:52:52 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:52:52 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:52:52 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:52:52.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:52:52 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:52:52 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:52:52 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:52:52.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:52:53 np0005634532 python3.9[163542]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Mar  1 04:52:53 np0005634532 python3.9[163621]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:52:54 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v305: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 426 B/s wr, 2 op/s
Mar  1 04:52:54 np0005634532 python3.9[163776]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Mar  1 04:52:54 np0005634532 systemd[1]: Reloading.
Mar  1 04:52:54 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:52:54 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000026s ======
Mar  1 04:52:54 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:52:54.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Mar  1 04:52:54 np0005634532 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Mar  1 04:52:54 np0005634532 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Mar  1 04:52:54 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:52:54 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 04:52:54 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:52:54.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 04:52:55 np0005634532 python3.9[163974]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Mar  1 04:52:56 np0005634532 python3.9[164054]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:52:56 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 04:52:56 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v306: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 85 B/s wr, 0 op/s
Mar  1 04:52:56 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:52:56 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 04:52:56 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:52:56.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 04:52:56 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:52:56 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:52:56 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:52:56.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:52:56 np0005634532 python3.9[164208]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Mar  1 04:52:57 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T09:52:57.012Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Mar  1 04:52:57 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: ::ffff:192.168.122.100 - - [01/Mar/2026:09:52:57] "GET /metrics HTTP/1.1" 200 48335 "" "Prometheus/2.51.0"
Mar  1 04:52:57 np0005634532 ceph-mgr[76134]: [prometheus INFO cherrypy.access.140604152982112] ::ffff:192.168.122.100 - - [01/Mar/2026:09:52:57] "GET /metrics HTTP/1.1" 200 48335 "" "Prometheus/2.51.0"
Mar  1 04:52:57 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-haproxy-nfs-cephfs-compute-0-wdbjdw[99156]: [WARNING] 059/095257 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Mar  1 04:52:57 np0005634532 python3.9[164287]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:52:57 np0005634532 python3.9[164440]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Mar  1 04:52:57 np0005634532 systemd[1]: Reloading.
Mar  1 04:52:57 np0005634532 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Mar  1 04:52:57 np0005634532 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Mar  1 04:52:58 np0005634532 systemd[1]: Starting Create netns directory...
Mar  1 04:52:58 np0005634532 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Mar  1 04:52:58 np0005634532 systemd[1]: netns-placeholder.service: Deactivated successfully.
Mar  1 04:52:58 np0005634532 systemd[1]: Finished Create netns directory.
Mar  1 04:52:58 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v307: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Mar  1 04:52:58 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:52:58 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 04:52:58 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:52:58.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 04:52:58 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:52:58 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 04:52:58 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:52:58.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 04:52:59 np0005634532 python3.9[164643]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Mar  1 04:52:59 np0005634532 python3.9[164796]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_metadata_agent/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Mar  1 04:53:00 np0005634532 python3.9[164921]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_metadata_agent/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1772358779.277211-954-271265655349056/.source _original_basename=healthcheck follow=False checksum=898a5a1fcd473cf731177fc866e3bd7ebf20a131 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Mar  1 04:53:00 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v308: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Mar  1 04:53:00 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:53:00 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000026s ======
Mar  1 04:53:00 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:53:00.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Mar  1 04:53:00 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:53:00 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 04:53:00 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:53:00.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 04:53:01 np0005634532 python3.9[165075]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:53:01 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 04:53:01 np0005634532 python3.9[165228]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Mar  1 04:53:02 np0005634532 python3.9[165383]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_metadata_agent.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Mar  1 04:53:02 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Mar  1 04:53:02 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Mar  1 04:53:02 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v309: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Mar  1 04:53:02 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:53:02 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 04:53:02 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:53:02.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 04:53:02 np0005634532 systemd[1]: ceph-437b1e74-f995-5d64-af1d-257ce01d77ab@nfs.cephfs.2.0.compute-0.ljexyw.service: Scheduled restart job, restart counter is at 5.
Mar  1 04:53:02 np0005634532 systemd[1]: Stopped Ceph nfs.cephfs.2.0.compute-0.ljexyw for 437b1e74-f995-5d64-af1d-257ce01d77ab.
Mar  1 04:53:02 np0005634532 systemd[1]: ceph-437b1e74-f995-5d64-af1d-257ce01d77ab@nfs.cephfs.2.0.compute-0.ljexyw.service: Consumed 1.323s CPU time.
Mar  1 04:53:02 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:53:02 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:53:02 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:53:02.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:53:02 np0005634532 systemd[1]: Starting Ceph nfs.cephfs.2.0.compute-0.ljexyw for 437b1e74-f995-5d64-af1d-257ce01d77ab...
Mar  1 04:53:02 np0005634532 python3.9[165527]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_metadata_agent.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1772358781.9650655-1053-247056909055087/.source.json _original_basename=.5s8j1qia follow=False checksum=a908ef151ded3a33ae6c9ac8be72a35e5e33b9dc backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:53:02 np0005634532 podman[165557]: 2026-03-01 09:53:02.971183747 +0000 UTC m=+0.066774402 container create 1d2687769490e20df80e4faed0ec3c514a9fc5d3b8fffa390b2dea3d3c57befc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Mar  1 04:53:03 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/57bfcf1cb06b54a83fd11c083dafdcf58f2006052af79c932d200b985699df16/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Mar  1 04:53:03 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/57bfcf1cb06b54a83fd11c083dafdcf58f2006052af79c932d200b985699df16/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Mar  1 04:53:03 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/57bfcf1cb06b54a83fd11c083dafdcf58f2006052af79c932d200b985699df16/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Mar  1 04:53:03 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/57bfcf1cb06b54a83fd11c083dafdcf58f2006052af79c932d200b985699df16/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.2.0.compute-0.ljexyw-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Mar  1 04:53:03 np0005634532 podman[165557]: 2026-03-01 09:53:03.033392533 +0000 UTC m=+0.128982948 container init 1d2687769490e20df80e4faed0ec3c514a9fc5d3b8fffa390b2dea3d3c57befc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Mar  1 04:53:03 np0005634532 podman[165557]: 2026-03-01 09:53:02.942632954 +0000 UTC m=+0.038223439 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Mar  1 04:53:03 np0005634532 podman[165557]: 2026-03-01 09:53:03.046519235 +0000 UTC m=+0.142109630 container start 1d2687769490e20df80e4faed0ec3c514a9fc5d3b8fffa390b2dea3d3c57befc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Mar  1 04:53:03 np0005634532 bash[165557]: 1d2687769490e20df80e4faed0ec3c514a9fc5d3b8fffa390b2dea3d3c57befc
Mar  1 04:53:03 np0005634532 systemd[1]: Started Ceph nfs.cephfs.2.0.compute-0.ljexyw for 437b1e74-f995-5d64-af1d-257ce01d77ab.
Mar  1 04:53:03 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[165573]: 01/03/2026 09:53:03 : epoch 69a40c7f : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Mar  1 04:53:03 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[165573]: 01/03/2026 09:53:03 : epoch 69a40c7f : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Mar  1 04:53:03 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[165573]: 01/03/2026 09:53:03 : epoch 69a40c7f : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Mar  1 04:53:03 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[165573]: 01/03/2026 09:53:03 : epoch 69a40c7f : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Mar  1 04:53:03 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[165573]: 01/03/2026 09:53:03 : epoch 69a40c7f : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Mar  1 04:53:03 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[165573]: 01/03/2026 09:53:03 : epoch 69a40c7f : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Mar  1 04:53:03 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[165573]: 01/03/2026 09:53:03 : epoch 69a40c7f : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Mar  1 04:53:03 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[165573]: 01/03/2026 09:53:03 : epoch 69a40c7f : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Mar  1 04:53:03 np0005634532 python3.9[165764]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:53:04 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v310: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Mar  1 04:53:04 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:53:04 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:53:04 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:53:04.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:53:04 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:53:04 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000026s ======
Mar  1 04:53:04 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:53:04.721 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Mar  1 04:53:05 np0005634532 python3.9[166190]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_pattern=*.json debug=False
Mar  1 04:53:06 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 04:53:06 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v311: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Mar  1 04:53:06 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:53:06 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:53:06 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:53:06.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:53:06 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:53:06 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000026s ======
Mar  1 04:53:06 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:53:06.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Mar  1 04:53:07 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T09:53:07.015Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Mar  1 04:53:07 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: ::ffff:192.168.122.100 - - [01/Mar/2026:09:53:07] "GET /metrics HTTP/1.1" 200 48333 "" "Prometheus/2.51.0"
Mar  1 04:53:07 np0005634532 ceph-mgr[76134]: [prometheus INFO cherrypy.access.140604152982112] ::ffff:192.168.122.100 - - [01/Mar/2026:09:53:07] "GET /metrics HTTP/1.1" 200 48333 "" "Prometheus/2.51.0"
Mar  1 04:53:07 np0005634532 python3.9[166345]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Mar  1 04:53:08 np0005634532 python3[166500]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_id=ovn_metadata_agent config_overrides={} config_patterns=*.json containers=['ovn_metadata_agent'] log_base_path=/var/log/containers/stdouts debug=False
Mar  1 04:53:08 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v312: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 426 B/s wr, 1 op/s
Mar  1 04:53:08 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:53:08 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:53:08 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:53:08.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:53:08 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:53:08 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:53:08 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:53:08.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:53:09 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[165573]: 01/03/2026 09:53:09 : epoch 69a40c7f : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Mar  1 04:53:09 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[165573]: 01/03/2026 09:53:09 : epoch 69a40c7f : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Mar  1 04:53:09 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[165573]: 01/03/2026 09:53:09 : epoch 69a40c7f : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Mar  1 04:53:10 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v313: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 852 B/s rd, 426 B/s wr, 1 op/s
Mar  1 04:53:10 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:53:10 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:53:10 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:53:10.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:53:10 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:53:10 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:53:10 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:53:10.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:53:11 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 04:53:12 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v314: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 852 B/s rd, 426 B/s wr, 1 op/s
Mar  1 04:53:12 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:53:12 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 04:53:12 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:53:12.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 04:53:12 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:53:12 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:53:12 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:53:12.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:53:12 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-haproxy-nfs-cephfs-compute-0-wdbjdw[99156]: [WARNING] 059/095312 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Mar  1 04:53:14 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[165573]: 01/03/2026 09:53:13 : epoch 69a40c7f : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Mar  1 04:53:14 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[165573]: 01/03/2026 09:53:13 : epoch 69a40c7f : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Mar  1 04:53:14 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[165573]: 01/03/2026 09:53:13 : epoch 69a40c7f : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Mar  1 04:53:14 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[165573]: 01/03/2026 09:53:14 : epoch 69a40c7f : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Mar  1 04:53:14 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v315: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Mar  1 04:53:14 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:53:14 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:53:14 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:53:14.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:53:14 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:53:14 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000026s ======
Mar  1 04:53:14 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:53:14.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Mar  1 04:53:16 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 04:53:16 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v316: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 341 B/s wr, 1 op/s
Mar  1 04:53:16 np0005634532 podman[166513]: 2026-03-01 09:53:16.557832801 +0000 UTC m=+8.101889920 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a5baae9e21dffd42c66f546b0b62f95c6af9f409d05e01e7483de42d5dd2a382
Mar  1 04:53:16 np0005634532 podman[166648]: 2026-03-01 09:53:16.630460751 +0000 UTC m=+0.304084071 container health_status eba5e7c5bf1cb0694498c3b5d0f9b901dec36f2482ec40aef98fed1e15d9f18d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:bf1f44b6bfa655654f556ae3adca2582892e72550d916a5b0a8dbacb0e210bbe, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '08b9467e1b7e95537191fb2fa6825d176e65d7d6be048e9a0925db064969bbde-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:bf1f44b6bfa655654f556ae3adca2582892e72550d916a5b0a8dbacb0e210bbe', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Mar  1 04:53:16 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:53:16 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:53:16 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:53:16.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:53:16 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:53:16 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:53:16 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:53:16.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:53:17 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T09:53:17.016Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Mar  1 04:53:17 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: ::ffff:192.168.122.100 - - [01/Mar/2026:09:53:17] "GET /metrics HTTP/1.1" 200 48333 "" "Prometheus/2.51.0"
Mar  1 04:53:17 np0005634532 ceph-mgr[76134]: [prometheus INFO cherrypy.access.140604152982112] ::ffff:192.168.122.100 - - [01/Mar/2026:09:53:17] "GET /metrics HTTP/1.1" 200 48333 "" "Prometheus/2.51.0"
Mar  1 04:53:17 np0005634532 ceph-mgr[76134]: [balancer INFO root] Optimize plan auto_2026-03-01_09:53:17
Mar  1 04:53:17 np0005634532 ceph-mgr[76134]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Mar  1 04:53:17 np0005634532 ceph-mgr[76134]: [balancer INFO root] do_upmap
Mar  1 04:53:17 np0005634532 ceph-mgr[76134]: [balancer INFO root] pools ['.mgr', 'cephfs.cephfs.data', '.nfs', 'backups', 'default.rgw.meta', 'volumes', 'cephfs.cephfs.meta', 'default.rgw.control', 'default.rgw.log', 'images', 'vms', '.rgw.root']
Mar  1 04:53:17 np0005634532 ceph-mgr[76134]: [balancer INFO root] prepared 0/10 upmap changes
Mar  1 04:53:17 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Mar  1 04:53:17 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Mar  1 04:53:17 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Mar  1 04:53:17 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Mar  1 04:53:17 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Mar  1 04:53:17 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Mar  1 04:53:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] _maybe_adjust
Mar  1 04:53:17 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Mar  1 04:53:17 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Mar  1 04:53:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 04:53:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Mar  1 04:53:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 04:53:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Mar  1 04:53:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 04:53:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Mar  1 04:53:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 04:53:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Mar  1 04:53:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 04:53:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Mar  1 04:53:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 04:53:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Mar  1 04:53:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 04:53:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Mar  1 04:53:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 04:53:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Mar  1 04:53:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 04:53:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Mar  1 04:53:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 04:53:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Mar  1 04:53:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 04:53:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Mar  1 04:53:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 04:53:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Mar  1 04:53:17 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Mar  1 04:53:17 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: vms, start_after=
Mar  1 04:53:17 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: volumes, start_after=
Mar  1 04:53:17 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: backups, start_after=
Mar  1 04:53:17 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: images, start_after=
Mar  1 04:53:17 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[165573]: 01/03/2026 09:53:17 : epoch 69a40c7f : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Mar  1 04:53:17 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[165573]: 01/03/2026 09:53:17 : epoch 69a40c7f : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Mar  1 04:53:17 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[165573]: 01/03/2026 09:53:17 : epoch 69a40c7f : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Mar  1 04:53:17 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Mar  1 04:53:17 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: vms, start_after=
Mar  1 04:53:17 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: volumes, start_after=
Mar  1 04:53:17 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: backups, start_after=
Mar  1 04:53:17 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: images, start_after=
Mar  1 04:53:17 np0005634532 podman[166696]: 2026-03-01 09:53:17.751887229 +0000 UTC m=+0.060648237 container create 1df4dec302d8b40bce93714986cfc892519e8a31436399020d0034ae17ac64d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a5baae9e21dffd42c66f546b0b62f95c6af9f409d05e01e7483de42d5dd2a382, name=ovn_metadata_agent, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '08b9467e1b7e95537191fb2fa6825d176e65d7d6be048e9a0925db064969bbde-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a5baae9e21dffd42c66f546b0b62f95c6af9f409d05e01e7483de42d5dd2a382', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, tcib_managed=true, org.label-schema.license=GPLv2)
Mar  1 04:53:17 np0005634532 podman[166696]: 2026-03-01 09:53:17.723379967 +0000 UTC m=+0.032140985 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a5baae9e21dffd42c66f546b0b62f95c6af9f409d05e01e7483de42d5dd2a382
Mar  1 04:53:17 np0005634532 python3[166500]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_metadata_agent --cgroupns=host --conmon-pidfile /run/ovn_metadata_agent.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=08b9467e1b7e95537191fb2fa6825d176e65d7d6be048e9a0925db064969bbde-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d --healthcheck-command /openstack/healthcheck --label config_id=ovn_metadata_agent --label container_name=ovn_metadata_agent --label managed_by=edpm_ansible --label config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '08b9467e1b7e95537191fb2fa6825d176e65d7d6be048e9a0925db064969bbde-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a5baae9e21dffd42c66f546b0b62f95c6af9f409d05e01e7483de42d5dd2a382', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']} --log-driver journald --log-level info --network host --pid host --privileged=True --user root --volume /run/openvswitch:/run/openvswitch:z --volume /var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z --volume /run/netns:/run/netns:shared --volume /var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/neutron:/var/lib/neutron:shared,z --volume /var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro --volume /var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro --volume /var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a5baae9e21dffd42c66f546b0b62f95c6af9f409d05e01e7483de42d5dd2a382
Mar  1 04:53:18 np0005634532 python3.9[166888]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Mar  1 04:53:18 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v317: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 682 B/s wr, 2 op/s
Mar  1 04:53:18 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:53:18 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:53:18 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:53:18.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:53:18 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:53:18 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 04:53:18 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:53:18.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 04:53:19 np0005634532 python3.9[167043]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:53:19 np0005634532 python3.9[167120]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Mar  1 04:53:20 np0005634532 python3.9[167273]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1772358799.6542904-1287-214536165792674/source dest=/etc/systemd/system/edpm_ovn_metadata_agent.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:53:20 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v318: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 341 B/s wr, 1 op/s
Mar  1 04:53:20 np0005634532 python3.9[167351]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Mar  1 04:53:20 np0005634532 systemd[1]: Reloading.
Mar  1 04:53:20 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:53:20 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:53:20 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:53:20.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:53:20 np0005634532 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Mar  1 04:53:20 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:53:20 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:53:20 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:53:20.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:53:20 np0005634532 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Mar  1 04:53:21 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 04:53:21 np0005634532 python3.9[167471]: ansible-systemd Invoked with state=restarted name=edpm_ovn_metadata_agent.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Mar  1 04:53:21 np0005634532 systemd[1]: Reloading.
Mar  1 04:53:21 np0005634532 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Mar  1 04:53:21 np0005634532 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Mar  1 04:53:21 np0005634532 systemd[1]: Starting ovn_metadata_agent container...
Mar  1 04:53:21 np0005634532 systemd[1]: Started libcrun container.
Mar  1 04:53:21 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/54f856fbe62da9a9604d4c8800876f772b8ab72496482a775d6ca4ce420567b8/merged/etc/neutron.conf.d supports timestamps until 2038 (0x7fffffff)
Mar  1 04:53:21 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/54f856fbe62da9a9604d4c8800876f772b8ab72496482a775d6ca4ce420567b8/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Mar  1 04:53:22 np0005634532 systemd[1]: Started /usr/bin/podman healthcheck run 1df4dec302d8b40bce93714986cfc892519e8a31436399020d0034ae17ac64d5.
Mar  1 04:53:22 np0005634532 podman[167520]: 2026-03-01 09:53:22.009164464 +0000 UTC m=+0.109928162 container init 1df4dec302d8b40bce93714986cfc892519e8a31436399020d0034ae17ac64d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a5baae9e21dffd42c66f546b0b62f95c6af9f409d05e01e7483de42d5dd2a382, name=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.build-date=20260223, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '08b9467e1b7e95537191fb2fa6825d176e65d7d6be048e9a0925db064969bbde-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a5baae9e21dffd42c66f546b0b62f95c6af9f409d05e01e7483de42d5dd2a382', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Mar  1 04:53:22 np0005634532 ovn_metadata_agent[167536]: + sudo -E kolla_set_configs
Mar  1 04:53:22 np0005634532 podman[167520]: 2026-03-01 09:53:22.043252888 +0000 UTC m=+0.144016566 container start 1df4dec302d8b40bce93714986cfc892519e8a31436399020d0034ae17ac64d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a5baae9e21dffd42c66f546b0b62f95c6af9f409d05e01e7483de42d5dd2a382, name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '08b9467e1b7e95537191fb2fa6825d176e65d7d6be048e9a0925db064969bbde-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a5baae9e21dffd42c66f546b0b62f95c6af9f409d05e01e7483de42d5dd2a382', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Mar  1 04:53:22 np0005634532 edpm-start-podman-container[167520]: ovn_metadata_agent
Mar  1 04:53:22 np0005634532 edpm-start-podman-container[167519]: Creating additional drop-in dependency for "ovn_metadata_agent" (1df4dec302d8b40bce93714986cfc892519e8a31436399020d0034ae17ac64d5)
Mar  1 04:53:22 np0005634532 podman[167543]: 2026-03-01 09:53:22.099971512 +0000 UTC m=+0.047790434 container health_status 1df4dec302d8b40bce93714986cfc892519e8a31436399020d0034ae17ac64d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a5baae9e21dffd42c66f546b0b62f95c6af9f409d05e01e7483de42d5dd2a382, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '08b9467e1b7e95537191fb2fa6825d176e65d7d6be048e9a0925db064969bbde-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a5baae9e21dffd42c66f546b0b62f95c6af9f409d05e01e7483de42d5dd2a382', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2)
Mar  1 04:53:22 np0005634532 ovn_metadata_agent[167536]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Mar  1 04:53:22 np0005634532 ovn_metadata_agent[167536]: INFO:__main__:Validating config file
Mar  1 04:53:22 np0005634532 ovn_metadata_agent[167536]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Mar  1 04:53:22 np0005634532 ovn_metadata_agent[167536]: INFO:__main__:Copying service configuration files
Mar  1 04:53:22 np0005634532 ovn_metadata_agent[167536]: INFO:__main__:Deleting /etc/neutron/rootwrap.conf
Mar  1 04:53:22 np0005634532 ovn_metadata_agent[167536]: INFO:__main__:Copying /etc/neutron.conf.d/01-rootwrap.conf to /etc/neutron/rootwrap.conf
Mar  1 04:53:22 np0005634532 ovn_metadata_agent[167536]: INFO:__main__:Setting permission for /etc/neutron/rootwrap.conf
Mar  1 04:53:22 np0005634532 ovn_metadata_agent[167536]: INFO:__main__:Writing out command to execute
Mar  1 04:53:22 np0005634532 ovn_metadata_agent[167536]: INFO:__main__:Setting permission for /var/lib/neutron
Mar  1 04:53:22 np0005634532 ovn_metadata_agent[167536]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts
Mar  1 04:53:22 np0005634532 ovn_metadata_agent[167536]: INFO:__main__:Setting permission for /var/lib/neutron/ovn-metadata-proxy
Mar  1 04:53:22 np0005634532 ovn_metadata_agent[167536]: INFO:__main__:Setting permission for /var/lib/neutron/external
Mar  1 04:53:22 np0005634532 ovn_metadata_agent[167536]: INFO:__main__:Setting permission for /var/lib/neutron/ovn_metadata_haproxy_wrapper
Mar  1 04:53:22 np0005634532 ovn_metadata_agent[167536]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/haproxy-kill
Mar  1 04:53:22 np0005634532 ovn_metadata_agent[167536]: INFO:__main__:Setting permission for /var/lib/neutron/external/pids
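The INFO lines above are kolla's COPY_ALWAYS startup phase: load /var/lib/kolla/config_files/config.json, validate it, copy each declared file into place, then fix permissions. A minimal sketch of that loop, assuming the usual kolla config.json shape of {"command": ..., "config_files": [{"source": ..., "dest": ..., "owner": ..., "perm": ...}]}; the real kolla set_configs additionally handles globs, merge strategies, and error cases beyond this:

import json
import shutil
from pathlib import Path

def copy_service_config(path: str = "/var/lib/kolla/config_files/config.json") -> str:
    cfg = json.loads(Path(path).read_text())          # "Loading config file at ..."
    for item in cfg.get("config_files", []):
        dest = Path(item["dest"])
        dest.parent.mkdir(parents=True, exist_ok=True)
        if dest.exists():
            dest.unlink()                             # "Deleting /etc/neutron/rootwrap.conf"
        shutil.copy(item["source"], dest)             # "Copying ... to ..."
        dest.chmod(int(item.get("perm", "0600"), 8))  # "Setting permission for ..."
        if "owner" in item:
            shutil.chown(dest, user=item["owner"])
    return cfg["command"]                             # "Writing out command to execute"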
Mar  1 04:53:22 np0005634532 ovn_metadata_agent[167536]: ++ cat /run_command
Mar  1 04:53:22 np0005634532 systemd[1]: Reloading.
Mar  1 04:53:22 np0005634532 ovn_metadata_agent[167536]: + CMD=neutron-ovn-metadata-agent
Mar  1 04:53:22 np0005634532 ovn_metadata_agent[167536]: + ARGS=
Mar  1 04:53:22 np0005634532 ovn_metadata_agent[167536]: + sudo kolla_copy_cacerts
Mar  1 04:53:22 np0005634532 ovn_metadata_agent[167536]: + [[ ! -n '' ]]
Mar  1 04:53:22 np0005634532 ovn_metadata_agent[167536]: + . kolla_extend_start
Mar  1 04:53:22 np0005634532 ovn_metadata_agent[167536]: Running command: 'neutron-ovn-metadata-agent'
Mar  1 04:53:22 np0005634532 ovn_metadata_agent[167536]: + echo 'Running command: '\''neutron-ovn-metadata-agent'\'''
Mar  1 04:53:22 np0005634532 ovn_metadata_agent[167536]: + umask 0022
Mar  1 04:53:22 np0005634532 ovn_metadata_agent[167536]: + exec neutron-ovn-metadata-agent
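The xtrace lines above are the tail of the kolla entrypoint: read the generated command from /run_command, copy CA bundles into the trust store, source kolla_extend_start, set umask 0022, and finally exec neutron-ovn-metadata-agent so the agent replaces the entrypoint shell. Roughly the same final step in Python (shlex and os.execvp are stand-ins for the shell's word splitting and exec builtin):

import os
import shlex

def exec_run_command(path: str = "/run_command") -> None:
    argv = shlex.split(open(path).read().strip())  # CMD plus optional ARGS
    os.umask(0o022)                                # matches "+ umask 0022" above
    os.execvp(argv[0], argv)                       # never returns, like "exec neutron-ovn-metadata-agent"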
Mar  1 04:53:22 np0005634532 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Mar  1 04:53:22 np0005634532 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Mar  1 04:53:22 np0005634532 systemd[1]: Started ovn_metadata_agent container.
Mar  1 04:53:22 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v319: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 341 B/s wr, 1 op/s
Mar  1 04:53:22 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:53:22 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000024s ======
Mar  1 04:53:22 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:53:22.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Mar  1 04:53:22 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:53:22 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:53:22 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:53:22.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
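The radosgw lines above are the beast frontend's access log for two anonymous "HEAD / HTTP/1.0" probes (health checks from the monitor hosts, judging by the addresses). The layout is regular enough to parse; a small sketch, assuming the default beast access-log format shown above:

import re

BEAST = re.compile(
    r'beast: \S+ (?P<client>\S+) - (?P<user>\S+) '
    r'\[(?P<time>[^\]]+)\] "(?P<req>[^"]+)" '
    r'(?P<status>\d+) (?P<bytes>\d+).*latency=(?P<latency>[\d.]+)s'
)

line = ('beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous '
        '[01/Mar/2026:09:53:22.714 +0000] "HEAD / HTTP/1.0" '
        '200 0 - - - latency=0.001000024s')

m = BEAST.match(line)
if m:
    # prints: 192.168.122.102 HEAD / HTTP/1.0 200 0.001000024
    print(m["client"], m["req"], m["status"], m["latency"])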
Mar  1 04:53:23 np0005634532 python3.9[167783]: ansible-ansible.builtin.slurp Invoked with src=/var/lib/edpm-config/deployed_services.yaml
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.832 167541 INFO neutron.common.config [-] Logging enabled!#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.832 167541 INFO neutron.common.config [-] /usr/bin/neutron-ovn-metadata-agent version 22.2.2.dev44#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.832 167541 DEBUG neutron.common.config [-] command line: /usr/bin/neutron-ovn-metadata-agent setup_logging /usr/lib/python3.9/site-packages/neutron/common/config.py:123#033[00m
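The block that follows, from the asterisk banner to the end of this excerpt and beyond, is oslo.config's standard startup dump: because debug = True, the agent calls log_opt_values(), which logs every registered option section by section (AGENT, QUOTAS, nova, placement, ironic, the privsep_* groups, ...) and masks options registered with secret=True as **** — hence transport_url and metadata_proxy_shared_secret below. A minimal reproduction with oslo.config; the option names here are illustrative, not the agent's real set:

import logging

from oslo_config import cfg

logging.basicConfig(level=logging.DEBUG)
LOG = logging.getLogger("demo")

CONF = cfg.CONF
CONF.register_opts([
    cfg.IntOpt("agent_down_time", default=75),
    cfg.StrOpt("transport_url", secret=True),  # secret=True is why the dump prints ****
])

CONF([], project="demo")                 # parse an (empty) command line
CONF.log_opt_values(LOG, logging.DEBUG)  # banner, "Configuration options gathered from:", then each option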
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.833 167541 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.833 167541 DEBUG neutron.agent.ovn.metadata_agent [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.833 167541 DEBUG neutron.agent.ovn.metadata_agent [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.833 167541 DEBUG neutron.agent.ovn.metadata_agent [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.833 167541 DEBUG neutron.agent.ovn.metadata_agent [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.833 167541 DEBUG neutron.agent.ovn.metadata_agent [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.833 167541 DEBUG neutron.agent.ovn.metadata_agent [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.833 167541 DEBUG neutron.agent.ovn.metadata_agent [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.833 167541 DEBUG neutron.agent.ovn.metadata_agent [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.834 167541 DEBUG neutron.agent.ovn.metadata_agent [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.834 167541 DEBUG neutron.agent.ovn.metadata_agent [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.834 167541 DEBUG neutron.agent.ovn.metadata_agent [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.834 167541 DEBUG neutron.agent.ovn.metadata_agent [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.834 167541 DEBUG neutron.agent.ovn.metadata_agent [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.834 167541 DEBUG neutron.agent.ovn.metadata_agent [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.834 167541 DEBUG neutron.agent.ovn.metadata_agent [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.834 167541 DEBUG neutron.agent.ovn.metadata_agent [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.834 167541 DEBUG neutron.agent.ovn.metadata_agent [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.834 167541 DEBUG neutron.agent.ovn.metadata_agent [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.835 167541 DEBUG neutron.agent.ovn.metadata_agent [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.835 167541 DEBUG neutron.agent.ovn.metadata_agent [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.835 167541 DEBUG neutron.agent.ovn.metadata_agent [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.835 167541 DEBUG neutron.agent.ovn.metadata_agent [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.835 167541 DEBUG neutron.agent.ovn.metadata_agent [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.835 167541 DEBUG neutron.agent.ovn.metadata_agent [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.835 167541 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.835 167541 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.835 167541 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.835 167541 DEBUG neutron.agent.ovn.metadata_agent [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.836 167541 DEBUG neutron.agent.ovn.metadata_agent [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.836 167541 DEBUG neutron.agent.ovn.metadata_agent [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.836 167541 DEBUG neutron.agent.ovn.metadata_agent [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.836 167541 DEBUG neutron.agent.ovn.metadata_agent [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.836 167541 DEBUG neutron.agent.ovn.metadata_agent [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.836 167541 DEBUG neutron.agent.ovn.metadata_agent [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.836 167541 DEBUG neutron.agent.ovn.metadata_agent [-] host                           = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.836 167541 DEBUG neutron.agent.ovn.metadata_agent [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.836 167541 DEBUG neutron.agent.ovn.metadata_agent [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.837 167541 DEBUG neutron.agent.ovn.metadata_agent [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.837 167541 DEBUG neutron.agent.ovn.metadata_agent [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.837 167541 DEBUG neutron.agent.ovn.metadata_agent [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.837 167541 DEBUG neutron.agent.ovn.metadata_agent [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.837 167541 DEBUG neutron.agent.ovn.metadata_agent [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.837 167541 DEBUG neutron.agent.ovn.metadata_agent [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.837 167541 DEBUG neutron.agent.ovn.metadata_agent [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.837 167541 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.837 167541 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.837 167541 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.837 167541 DEBUG neutron.agent.ovn.metadata_agent [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.838 167541 DEBUG neutron.agent.ovn.metadata_agent [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.838 167541 DEBUG neutron.agent.ovn.metadata_agent [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.838 167541 DEBUG neutron.agent.ovn.metadata_agent [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.838 167541 DEBUG neutron.agent.ovn.metadata_agent [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.838 167541 DEBUG neutron.agent.ovn.metadata_agent [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.838 167541 DEBUG neutron.agent.ovn.metadata_agent [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.838 167541 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.838 167541 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.838 167541 DEBUG neutron.agent.ovn.metadata_agent [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.838 167541 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.839 167541 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.839 167541 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.839 167541 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.839 167541 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.839 167541 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.839 167541 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.839 167541 DEBUG neutron.agent.ovn.metadata_agent [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.839 167541 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.839 167541 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.839 167541 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.840 167541 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.840 167541 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.840 167541 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.840 167541 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.840 167541 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.840 167541 DEBUG neutron.agent.ovn.metadata_agent [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.840 167541 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.840 167541 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.840 167541 DEBUG neutron.agent.ovn.metadata_agent [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.840 167541 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.841 167541 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.841 167541 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.841 167541 DEBUG neutron.agent.ovn.metadata_agent [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.841 167541 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.841 167541 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.841 167541 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.841 167541 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.841 167541 DEBUG neutron.agent.ovn.metadata_agent [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.841 167541 DEBUG neutron.agent.ovn.metadata_agent [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.841 167541 DEBUG neutron.agent.ovn.metadata_agent [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.842 167541 DEBUG neutron.agent.ovn.metadata_agent [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.842 167541 DEBUG neutron.agent.ovn.metadata_agent [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.842 167541 DEBUG neutron.agent.ovn.metadata_agent [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.842 167541 DEBUG neutron.agent.ovn.metadata_agent [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.842 167541 DEBUG neutron.agent.ovn.metadata_agent [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.842 167541 DEBUG neutron.agent.ovn.metadata_agent [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.842 167541 DEBUG neutron.agent.ovn.metadata_agent [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.842 167541 DEBUG neutron.agent.ovn.metadata_agent [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.842 167541 DEBUG neutron.agent.ovn.metadata_agent [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.842 167541 DEBUG neutron.agent.ovn.metadata_agent [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.842 167541 DEBUG neutron.agent.ovn.metadata_agent [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.843 167541 DEBUG neutron.agent.ovn.metadata_agent [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.843 167541 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.843 167541 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.843 167541 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.843 167541 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.843 167541 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.843 167541 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.844 167541 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.844 167541 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.844 167541 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.844 167541 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.844 167541 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.844 167541 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.844 167541 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.844 167541 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.844 167541 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.844 167541 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.845 167541 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.845 167541 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.845 167541 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.845 167541 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.845 167541 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.845 167541 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.845 167541 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.845 167541 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.845 167541 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.846 167541 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.846 167541 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.846 167541 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.846 167541 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.846 167541 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.846 167541 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.846 167541 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.846 167541 DEBUG neutron.agent.ovn.metadata_agent [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.846 167541 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.846 167541 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.847 167541 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.847 167541 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.847 167541 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.847 167541 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.847 167541 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.847 167541 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.847 167541 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.847 167541 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.847 167541 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.847 167541 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.848 167541 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.848 167541 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.848 167541 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.848 167541 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.848 167541 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.848 167541 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.848 167541 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.848 167541 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.848 167541 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.848 167541 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.849 167541 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.849 167541 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.849 167541 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.849 167541 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.849 167541 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.849 167541 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.849 167541 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.849 167541 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.849 167541 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.850 167541 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.850 167541 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.850 167541 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.850 167541 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.850 167541 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.850 167541 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.850 167541 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.850 167541 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.850 167541 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.850 167541 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.851 167541 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.851 167541 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.851 167541 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.851 167541 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.851 167541 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.851 167541 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.851 167541 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.851 167541 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.851 167541 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.851 167541 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.852 167541 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.852 167541 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.852 167541 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.852 167541 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.852 167541 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.852 167541 DEBUG neutron.agent.ovn.metadata_agent [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.852 167541 DEBUG neutron.agent.ovn.metadata_agent [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.852 167541 DEBUG neutron.agent.ovn.metadata_agent [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.852 167541 DEBUG neutron.agent.ovn.metadata_agent [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.852 167541 DEBUG neutron.agent.ovn.metadata_agent [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.853 167541 DEBUG neutron.agent.ovn.metadata_agent [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.853 167541 DEBUG neutron.agent.ovn.metadata_agent [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.853 167541 DEBUG neutron.agent.ovn.metadata_agent [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.853 167541 DEBUG neutron.agent.ovn.metadata_agent [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.853 167541 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.853 167541 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.853 167541 DEBUG neutron.agent.ovn.metadata_agent [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.853 167541 DEBUG neutron.agent.ovn.metadata_agent [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.853 167541 DEBUG neutron.agent.ovn.metadata_agent [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.854 167541 DEBUG neutron.agent.ovn.metadata_agent [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.854 167541 DEBUG neutron.agent.ovn.metadata_agent [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.854 167541 DEBUG neutron.agent.ovn.metadata_agent [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.854 167541 DEBUG neutron.agent.ovn.metadata_agent [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.854 167541 DEBUG neutron.agent.ovn.metadata_agent [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.854 167541 DEBUG neutron.agent.ovn.metadata_agent [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.854 167541 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.854 167541 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.854 167541 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.854 167541 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.855 167541 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.855 167541 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.855 167541 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.855 167541 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.855 167541 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.855 167541 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.855 167541 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.855 167541 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.855 167541 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.855 167541 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.855 167541 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.856 167541 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.856 167541 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.856 167541 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.856 167541 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.856 167541 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.856 167541 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.856 167541 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.856 167541 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.856 167541 DEBUG neutron.agent.ovn.metadata_agent [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.857 167541 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.857 167541 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.857 167541 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.857 167541 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.857 167541 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.857 167541 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.857 167541 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.857 167541 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.857 167541 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.857 167541 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.858 167541 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.858 167541 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.858 167541 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.858 167541 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.858 167541 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.858 167541 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.858 167541 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.858 167541 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.859 167541 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.859 167541 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.859 167541 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.859 167541 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.859 167541 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.859 167541 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.859 167541 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.859 167541 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.860 167541 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.860 167541 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.860 167541 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.860 167541 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.860 167541 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.860 167541 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.860 167541 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.860 167541 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.860 167541 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.860 167541 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.861 167541 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.861 167541 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.861 167541 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.861 167541 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.861 167541 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.861 167541 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.861 167541 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.861 167541 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.861 167541 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.862 167541 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.862 167541 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.862 167541 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.862 167541 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.862 167541 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.862 167541 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.862 167541 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.862 167541 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.862 167541 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.862 167541 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.863 167541 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.863 167541 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.863 167541 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.863 167541 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.863 167541 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.863 167541 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.863 167541 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.863 167541 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.863 167541 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.863 167541 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.864 167541 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.864 167541 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
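
The block ending at the row of asterisks above is oslo.config's standard option dump: with debug enabled, the agent calls ConfigOpts.log_opt_values() at startup, which prints every registered option grouped by section (nova.*, ovn.*, oslo_messaging_rabbit.*, ...) and masks options declared secret, which is why transport_url is rendered as ****. A minimal sketch of the same mechanism, with one hypothetical registration standing in for the agent's full option set:

    # Sketch of the dump mechanism above; the single registered option is
    # illustrative (the value matches the one seen in the dump).
    import logging
    from oslo_config import cfg

    logging.basicConfig(level=logging.DEBUG)
    LOG = logging.getLogger(__name__)

    CONF = cfg.CONF
    CONF.register_opts(
        [cfg.IntOpt('ovsdb_probe_interval', default=60000)],
        group='ovn')
    CONF([])  # parse an empty command line
    # Emits one "section.option = value" line per registered option;
    # opts created with secret=True are printed as ****.
    CONF.log_opt_values(LOG, logging.DEBUG)
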
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.872 167541 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.872 167541 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.872 167541 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.873 167541 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connecting...#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.873 167541 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connected#033[00m
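
Here the agent opens the local Open vSwitch database (ovs.ovsdb_connection = tcp:127.0.0.1:6640 in the dump above) through ovsdbapp, which auto-creates the in-memory Bridge.name/Port.name/Interface.name indices logged just before the connect. A rough equivalent, assuming the same endpoint and the OVS.ovsdb_timeout value from the dump:

    # Sketch: open the local OVS DB the way the agent does (endpoint and
    # timeout taken from the config dump above).
    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server('tcp:127.0.0.1:6640', 'Open_vSwitch')
    conn = connection.Connection(idl=idl, timeout=10)
    ovs = impl_idl.OvsdbIdl(conn)
    # e.g. confirm the integration bridge referenced later in this log exists
    print(ovs.br_exists('br-int').execute(check_error=True))
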
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.885 167541 DEBUG neutron.agent.ovn.metadata.agent [-] Loaded chassis name 90b7dc66-b984-4d8b-9541-ddde79c5f544 (UUID: 90b7dc66-b984-4d8b-9541-ddde79c5f544) and ovn bridge br-int. _load_config /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:309#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.905 167541 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.906 167541 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.906 167541 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.906 167541 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Chassis_Private.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.909 167541 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.915 167541 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.921 167541 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched CREATE: ChassisPrivateCreateEvent(events=('create',), table='Chassis_Private', conditions=(('name', '=', '90b7dc66-b984-4d8b-9541-ddde79c5f544'),), old_conditions=None), priority=20 to row=Chassis_Private(chassis=[<ovs.db.idl.Row object at 0x7f611def4670>], external_ids={}, name=90b7dc66-b984-4d8b-9541-ddde79c5f544, nb_cfg_timestamp=1772358744040, nb_cfg=1) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
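
The "Matched CREATE: ChassisPrivateCreateEvent" line is ovsdbapp's row-event machinery firing: the agent registered a watcher for create operations on Chassis_Private rows whose name equals its own chassis UUID, and the existing row matched immediately. A condensed sketch of that pattern (the run() body is hypothetical, not neutron's actual handler):

    # Sketch of the event matched above; registration is typically
    # idl.notify_handler.watch_event(ChassisPrivateCreateEvent(name)).
    from ovsdbapp.backend.ovs_idl.event import RowEvent

    class ChassisPrivateCreateEvent(RowEvent):
        def __init__(self, chassis_name):
            super().__init__(
                events=(self.ROW_CREATE,),                  # ('create',)
                table='Chassis_Private',
                conditions=(('name', '=', chassis_name),))
            self.event_name = self.__class__.__name__

        def run(self, event, row, old):
            # Invoked when a matching Chassis_Private row appears in the SB DB.
            print('chassis registered:', row.name)
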
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.922 167541 DEBUG neutron_lib.callbacks.manager [-] Subscribe: <bound method MetadataProxyHandler.post_fork_initialize of <neutron.agent.ovn.metadata.server.MetadataProxyHandler object at 0x7f611def4d90>> process after_init 55550000, False subscribe /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:52#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.923 167541 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.923 167541 DEBUG oslo_concurrency.lockutils [-] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.923 167541 DEBUG oslo_concurrency.lockutils [-] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
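
The three "singleton_lock" lines are oslo.concurrency's acquire/acquired/release DEBUG trace around a named in-process lock; the whole critical section here took under a millisecond. The pattern that produces them:

    # The Acquiring/Acquired/Releasing lines above are the DEBUG trace of this.
    from oslo_concurrency import lockutils

    with lockutils.lock('singleton_lock'):
        pass  # critical section
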
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.924 167541 INFO oslo_service.service [-] Starting 1 workers#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.929 167541 DEBUG oslo_service.service [-] Started child 167809 _start_child /usr/lib/python3.9/site-packages/oslo_service/service.py:575#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.932 167541 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.namespace_cmd', '--privsep_sock_path', '/tmp/tmpx_1ooqui/privsep.sock']#033[00m
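
"Running privsep helper" marks the agent forking its privilege-separation daemon through sudo and neutron-rootwrap; the daemon (pid 167914, logged further down) reports it runs as uid/gid 0/0 holding only CAP_SYS_ADMIN, and subsequent privileged calls come back as "privsep: reply[...]" messages. The general shape of such a context, as a sketch with hypothetical names rather than neutron's actual neutron.privileged definitions:

    # Sketch of an oslo.privsep context; prefix and entrypoint are hypothetical.
    from oslo_privsep import capabilities, priv_context

    namespace_cmd = priv_context.PrivContext(
        'example',                               # hypothetical prefix
        cfg_section='privsep',
        pypath=__name__ + '.namespace_cmd',
        capabilities=[capabilities.CAP_SYS_ADMIN])

    @namespace_cmd.entrypoint
    def create_netns(name):
        # Body executes inside the privsep daemon, not the agent process.
        ...
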
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.934 167809 DEBUG neutron_lib.callbacks.manager [-] Publish callbacks ['neutron.agent.ovn.metadata.server.MetadataProxyHandler.post_fork_initialize-4173650'] for process (None), after_init _notify_loop /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:184#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.963 167809 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.964 167809 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.964 167809 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.967 167809 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.974 167809 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected#033[00m
Mar  1 04:53:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:23.983 167809 INFO eventlet.wsgi.server [-] (167809) wsgi starting up on http:/var/lib/neutron/metadata_proxy#033[00m
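
The worker binds its WSGI server to the unix socket /var/lib/neutron/metadata_proxy rather than a TCP port, which is why eventlet's banner prints the odd-looking "http:/var/lib/neutron/metadata_proxy". It can be probed with raw HTTP over AF_UNIX on the same host; without the OVN-injected headers a real instance request would send, expect an error status rather than metadata:

    # Probe the metadata socket named in the banner above (run on this host).
    import socket

    s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    s.connect('/var/lib/neutron/metadata_proxy')
    s.sendall(b'GET /latest/meta-data/ HTTP/1.0\r\nHost: metadata\r\n\r\n')
    print(s.recv(4096).decode(errors='replace'))
    s.close()
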
Mar  1 04:53:24 np0005634532 kernel: capability: warning: `privsep-helper' uses deprecated v2 capabilities in a way that may be insecure
Mar  1 04:53:24 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[165573]: 01/03/2026 09:53:24 : epoch 69a40c7f : compute-0 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Mar  1 04:53:24 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[165573]: 01/03/2026 09:53:24 : epoch 69a40c7f : compute-0 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Mar  1 04:53:24 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[165573]: 01/03/2026 09:53:24 : epoch 69a40c7f : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Mar  1 04:53:24 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[165573]: 01/03/2026 09:53:24 : epoch 69a40c7f : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Mar  1 04:53:24 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[165573]: 01/03/2026 09:53:24 : epoch 69a40c7f : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Mar  1 04:53:24 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[165573]: 01/03/2026 09:53:24 : epoch 69a40c7f : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Mar  1 04:53:24 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[165573]: 01/03/2026 09:53:24 : epoch 69a40c7f : compute-0 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Mar  1 04:53:24 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[165573]: 01/03/2026 09:53:24 : epoch 69a40c7f : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Mar  1 04:53:24 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[165573]: 01/03/2026 09:53:24 : epoch 69a40c7f : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Mar  1 04:53:24 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[165573]: 01/03/2026 09:53:24 : epoch 69a40c7f : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Mar  1 04:53:24 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[165573]: 01/03/2026 09:53:24 : epoch 69a40c7f : compute-0 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Mar  1 04:53:24 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[165573]: 01/03/2026 09:53:24 : epoch 69a40c7f : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Mar  1 04:53:24 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[165573]: 01/03/2026 09:53:24 : epoch 69a40c7f : compute-0 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Mar  1 04:53:24 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[165573]: 01/03/2026 09:53:24 : epoch 69a40c7f : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Mar  1 04:53:24 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[165573]: 01/03/2026 09:53:24 : epoch 69a40c7f : compute-0 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Mar  1 04:53:24 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[165573]: 01/03/2026 09:53:24 : epoch 69a40c7f : compute-0 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Mar  1 04:53:24 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[165573]: 01/03/2026 09:53:24 : epoch 69a40c7f : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Mar  1 04:53:24 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[165573]: 01/03/2026 09:53:24 : epoch 69a40c7f : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Mar  1 04:53:24 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[165573]: 01/03/2026 09:53:24 : epoch 69a40c7f : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Mar  1 04:53:24 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[165573]: 01/03/2026 09:53:24 : epoch 69a40c7f : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Mar  1 04:53:24 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[165573]: 01/03/2026 09:53:24 : epoch 69a40c7f : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Mar  1 04:53:24 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[165573]: 01/03/2026 09:53:24 : epoch 69a40c7f : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Mar  1 04:53:24 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[165573]: 01/03/2026 09:53:24 : epoch 69a40c7f : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Mar  1 04:53:24 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[165573]: 01/03/2026 09:53:24 : epoch 69a40c7f : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Mar  1 04:53:24 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[165573]: 01/03/2026 09:53:24 : epoch 69a40c7f : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Mar  1 04:53:24 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[165573]: 01/03/2026 09:53:24 : epoch 69a40c7f : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Mar  1 04:53:24 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[165573]: 01/03/2026 09:53:24 : epoch 69a40c7f : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Mar  1 04:53:24 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[165573]: 01/03/2026 09:53:24 : epoch 69a40c7f : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
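
The ganesha startup above is degraded but functional: the DBus admin interface fails because the container has no /run/dbus/system_bus_socket, kerberized callbacks fail for lack of a usable entry in /etc/krb5.keytab, and no EXPORT blocks are present yet (cephadm-managed clusters usually supply them later via RADOS-stored config, consistent with the unrecognized RADOS_URLS block) — yet the server still initializes and enters its 90-second NFSv4 grace period. A trivial pre-flight check for the two missing prerequisites the log names:

    # Check the two prerequisites the ganesha log complains about
    # (both paths taken verbatim from the log lines above).
    import os

    for path in ('/run/dbus/system_bus_socket', '/etc/krb5.keytab'):
        state = 'present' if os.path.exists(path) else 'MISSING'
        print(f'{path}: {state}')
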
Mar  1 04:53:24 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v320: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 767 B/s wr, 2 op/s
Mar  1 04:53:24 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:24.580 167541 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap#033[00m
Mar  1 04:53:24 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:24.581 167541 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpx_1ooqui/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362#033[00m
Mar  1 04:53:24 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:24.446 167914 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Mar  1 04:53:24 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:24.450 167914 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Mar  1 04:53:24 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:24.452 167914 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none#033[00m
Mar  1 04:53:24 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:24.452 167914 INFO oslo.privsep.daemon [-] privsep daemon running as pid 167914#033[00m
Mar  1 04:53:24 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:24.583 167914 DEBUG oslo.privsep.daemon [-] privsep: reply[7242216f-a526-4841-961b-901183dcdd07]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Mar  1 04:53:24 np0005634532 python3.9[167956]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/deployed_services.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Mar  1 04:53:24 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:53:24 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 04:53:24 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:53:24.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 04:53:24 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:53:24 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:53:24 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:53:24.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
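
The paired "starting new request"/"req done" lines with anonymous "HEAD / HTTP/1.0" from 192.168.122.100 and .102 look like load-balancer health probes against radosgw; each probe also produces one beast access line with its latency. The probe is reproducible with a plain HEAD request; the port below is an assumption (beast's default 8080), since the log does not record it:

    # Reproduce the anonymous health probe seen above; run on the RGW node.
    # Port 8080 is assumed (beast default) -- the log omits the listen port.
    import http.client

    conn = http.client.HTTPConnection('127.0.0.1', 8080, timeout=5)
    conn.request('HEAD', '/')
    print(conn.getresponse().status)  # 200 for a healthy gateway
    conn.close()
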
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.106 167914 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.107 167914 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.107 167914 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Mar  1 04:53:25 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[165573]: 01/03/2026 09:53:25 : epoch 69a40c7f : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa650000df0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:53:25 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[165573]: 01/03/2026 09:53:25 : epoch 69a40c7f : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa648001c00 fd 42 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:53:25 np0005634532 python3.9[168086]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/deployed_services.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1772358804.23773-1422-117820805181067/.source.yaml _original_basename=.hxge1jyb follow=False checksum=6266192c3d50a6d55d5e57f79016b2557e4b4ddf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.611 167914 DEBUG oslo.privsep.daemon [-] privsep: reply[18cd78d3-00a3-420f-a87b-2ada0d047fac]: (4, []) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.613 167541 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbAddCommand(_result=None, table=Chassis_Private, record=90b7dc66-b984-4d8b-9541-ddde79c5f544, column=external_ids, values=({'neutron:ovn-metadata-id': '5b1a1828-0b03-549f-beac-6818a0fd7e8a'},)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.619 167541 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=90b7dc66-b984-4d8b-9541-ddde79c5f544, col_values=(('external_ids', {'neutron:ovn-bridge': 'br-int'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
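
These two single-command transactions register the metadata agent in the OVN southbound DB: a db_add merging neutron:ovn-metadata-id into the Chassis_Private row's external_ids, then a db_set writing neutron:ovn-bridge. A sketch of the equivalent calls through ovsdbapp, reusing the SB endpoint, TLS material, and timeout from the ovn.* config dump earlier in this log:

    # Equivalent of the two Chassis_Private transactions above.
    from ovs.stream import Stream
    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.ovn_southbound import impl_idl

    # TLS files as listed under ovn.ovn_sb_* in the config dump.
    Stream.ssl_set_private_key_file('/etc/pki/tls/private/ovndb.key')
    Stream.ssl_set_certificate_file('/etc/pki/tls/certs/ovndb.crt')
    Stream.ssl_set_ca_cert_file('/etc/pki/tls/certs/ovndbca.crt')

    idl = connection.OvsdbIdl.from_server(
        'ssl:ovsdbserver-sb.openstack.svc:6642', 'OVN_Southbound')
    sb = impl_idl.OvnSbApiIdlImpl(connection.Connection(idl=idl, timeout=180))

    chassis = '90b7dc66-b984-4d8b-9541-ddde79c5f544'
    sb.db_add('Chassis_Private', chassis, 'external_ids',
              {'neutron:ovn-metadata-id': '5b1a1828-0b03-549f-beac-6818a0fd7e8a'}
              ).execute(check_error=True)
    sb.db_set('Chassis_Private', chassis,
              ('external_ids', {'neutron:ovn-bridge': 'br-int'})
              ).execute(check_error=True)
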
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.625 167541 DEBUG oslo_service.service [-] Full set of CONF: wait /usr/lib/python3.9/site-packages/oslo_service/service.py:649#033[00m
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.625 167541 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.626 167541 DEBUG oslo_service.service [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.626 167541 DEBUG oslo_service.service [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.626 167541 DEBUG oslo_service.service [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.626 167541 DEBUG oslo_service.service [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.626 167541 DEBUG oslo_service.service [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.626 167541 DEBUG oslo_service.service [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.627 167541 DEBUG oslo_service.service [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.627 167541 DEBUG oslo_service.service [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.627 167541 DEBUG oslo_service.service [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.627 167541 DEBUG oslo_service.service [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.627 167541 DEBUG oslo_service.service [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.627 167541 DEBUG oslo_service.service [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.627 167541 DEBUG oslo_service.service [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.628 167541 DEBUG oslo_service.service [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.628 167541 DEBUG oslo_service.service [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.628 167541 DEBUG oslo_service.service [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.628 167541 DEBUG oslo_service.service [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.629 167541 DEBUG oslo_service.service [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.629 167541 DEBUG oslo_service.service [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.629 167541 DEBUG oslo_service.service [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.629 167541 DEBUG oslo_service.service [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.629 167541 DEBUG oslo_service.service [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.629 167541 DEBUG oslo_service.service [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.630 167541 DEBUG oslo_service.service [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.630 167541 DEBUG oslo_service.service [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.630 167541 DEBUG oslo_service.service [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.630 167541 DEBUG oslo_service.service [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.631 167541 DEBUG oslo_service.service [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.631 167541 DEBUG oslo_service.service [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.631 167541 DEBUG oslo_service.service [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.631 167541 DEBUG oslo_service.service [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.631 167541 DEBUG oslo_service.service [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.631 167541 DEBUG oslo_service.service [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.632 167541 DEBUG oslo_service.service [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.632 167541 DEBUG oslo_service.service [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.632 167541 DEBUG oslo_service.service [-] host                           = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.632 167541 DEBUG oslo_service.service [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.632 167541 DEBUG oslo_service.service [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.633 167541 DEBUG oslo_service.service [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.633 167541 DEBUG oslo_service.service [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.633 167541 DEBUG oslo_service.service [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.633 167541 DEBUG oslo_service.service [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.633 167541 DEBUG oslo_service.service [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.634 167541 DEBUG oslo_service.service [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.634 167541 DEBUG oslo_service.service [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.634 167541 DEBUG oslo_service.service [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.634 167541 DEBUG oslo_service.service [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.634 167541 DEBUG oslo_service.service [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.634 167541 DEBUG oslo_service.service [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.634 167541 DEBUG oslo_service.service [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.635 167541 DEBUG oslo_service.service [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.635 167541 DEBUG oslo_service.service [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.635 167541 DEBUG oslo_service.service [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.635 167541 DEBUG oslo_service.service [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.635 167541 DEBUG oslo_service.service [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.635 167541 DEBUG oslo_service.service [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.636 167541 DEBUG oslo_service.service [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.636 167541 DEBUG oslo_service.service [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.636 167541 DEBUG oslo_service.service [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.636 167541 DEBUG oslo_service.service [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.636 167541 DEBUG oslo_service.service [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.636 167541 DEBUG oslo_service.service [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.636 167541 DEBUG oslo_service.service [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.637 167541 DEBUG oslo_service.service [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.637 167541 DEBUG oslo_service.service [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.637 167541 DEBUG oslo_service.service [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.637 167541 DEBUG oslo_service.service [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.637 167541 DEBUG oslo_service.service [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.637 167541 DEBUG oslo_service.service [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.638 167541 DEBUG oslo_service.service [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.638 167541 DEBUG oslo_service.service [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.638 167541 DEBUG oslo_service.service [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.638 167541 DEBUG oslo_service.service [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.638 167541 DEBUG oslo_service.service [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.638 167541 DEBUG oslo_service.service [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.639 167541 DEBUG oslo_service.service [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.639 167541 DEBUG oslo_service.service [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.639 167541 DEBUG oslo_service.service [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.639 167541 DEBUG oslo_service.service [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.639 167541 DEBUG oslo_service.service [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.639 167541 DEBUG oslo_service.service [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.639 167541 DEBUG oslo_service.service [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.640 167541 DEBUG oslo_service.service [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.640 167541 DEBUG oslo_service.service [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.640 167541 DEBUG oslo_service.service [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.640 167541 DEBUG oslo_service.service [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.640 167541 DEBUG oslo_service.service [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.640 167541 DEBUG oslo_service.service [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.641 167541 DEBUG oslo_service.service [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.641 167541 DEBUG oslo_service.service [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.641 167541 DEBUG oslo_service.service [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.641 167541 DEBUG oslo_service.service [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.641 167541 DEBUG oslo_service.service [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.642 167541 DEBUG oslo_service.service [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.642 167541 DEBUG oslo_service.service [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.642 167541 DEBUG oslo_service.service [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.642 167541 DEBUG oslo_service.service [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.642 167541 DEBUG oslo_service.service [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.642 167541 DEBUG oslo_service.service [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.642 167541 DEBUG oslo_service.service [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.643 167541 DEBUG oslo_service.service [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.643 167541 DEBUG oslo_service.service [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.643 167541 DEBUG oslo_service.service [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.643 167541 DEBUG oslo_service.service [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.643 167541 DEBUG oslo_service.service [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.643 167541 DEBUG oslo_service.service [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.643 167541 DEBUG oslo_service.service [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.644 167541 DEBUG oslo_service.service [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.644 167541 DEBUG oslo_service.service [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.644 167541 DEBUG oslo_service.service [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.644 167541 DEBUG oslo_service.service [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.644 167541 DEBUG oslo_service.service [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.644 167541 DEBUG oslo_service.service [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.644 167541 DEBUG oslo_service.service [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.645 167541 DEBUG oslo_service.service [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.645 167541 DEBUG oslo_service.service [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.645 167541 DEBUG oslo_service.service [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.645 167541 DEBUG oslo_service.service [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.645 167541 DEBUG oslo_service.service [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.645 167541 DEBUG oslo_service.service [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.645 167541 DEBUG oslo_service.service [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.646 167541 DEBUG oslo_service.service [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.646 167541 DEBUG oslo_service.service [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.646 167541 DEBUG oslo_service.service [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.646 167541 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.646 167541 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.646 167541 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.646 167541 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.647 167541 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.647 167541 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.647 167541 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.647 167541 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.647 167541 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.647 167541 DEBUG oslo_service.service [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.648 167541 DEBUG oslo_service.service [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.648 167541 DEBUG oslo_service.service [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.648 167541 DEBUG oslo_service.service [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.648 167541 DEBUG oslo_service.service [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.648 167541 DEBUG oslo_service.service [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.648 167541 DEBUG oslo_service.service [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.648 167541 DEBUG oslo_service.service [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.649 167541 DEBUG oslo_service.service [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.649 167541 DEBUG oslo_service.service [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.649 167541 DEBUG oslo_service.service [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.649 167541 DEBUG oslo_service.service [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.649 167541 DEBUG oslo_service.service [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.649 167541 DEBUG oslo_service.service [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.649 167541 DEBUG oslo_service.service [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.649 167541 DEBUG oslo_service.service [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.650 167541 DEBUG oslo_service.service [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.650 167541 DEBUG oslo_service.service [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.650 167541 DEBUG oslo_service.service [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.650 167541 DEBUG oslo_service.service [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.650 167541 DEBUG oslo_service.service [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.650 167541 DEBUG oslo_service.service [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.650 167541 DEBUG oslo_service.service [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.650 167541 DEBUG oslo_service.service [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.651 167541 DEBUG oslo_service.service [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.651 167541 DEBUG oslo_service.service [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.651 167541 DEBUG oslo_service.service [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.651 167541 DEBUG oslo_service.service [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.651 167541 DEBUG oslo_service.service [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.651 167541 DEBUG oslo_service.service [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.652 167541 DEBUG oslo_service.service [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.652 167541 DEBUG oslo_service.service [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.652 167541 DEBUG oslo_service.service [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.652 167541 DEBUG oslo_service.service [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.652 167541 DEBUG oslo_service.service [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.652 167541 DEBUG oslo_service.service [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.652 167541 DEBUG oslo_service.service [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.653 167541 DEBUG oslo_service.service [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.653 167541 DEBUG oslo_service.service [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.653 167541 DEBUG oslo_service.service [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.653 167541 DEBUG oslo_service.service [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.653 167541 DEBUG oslo_service.service [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.653 167541 DEBUG oslo_service.service [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.653 167541 DEBUG oslo_service.service [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.654 167541 DEBUG oslo_service.service [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.654 167541 DEBUG oslo_service.service [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.654 167541 DEBUG oslo_service.service [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.654 167541 DEBUG oslo_service.service [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.654 167541 DEBUG oslo_service.service [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.654 167541 DEBUG oslo_service.service [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.654 167541 DEBUG oslo_service.service [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.655 167541 DEBUG oslo_service.service [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.655 167541 DEBUG oslo_service.service [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.655 167541 DEBUG oslo_service.service [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.655 167541 DEBUG oslo_service.service [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.655 167541 DEBUG oslo_service.service [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.655 167541 DEBUG oslo_service.service [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.655 167541 DEBUG oslo_service.service [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.656 167541 DEBUG oslo_service.service [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.656 167541 DEBUG oslo_service.service [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.656 167541 DEBUG oslo_service.service [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.656 167541 DEBUG oslo_service.service [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.656 167541 DEBUG oslo_service.service [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.656 167541 DEBUG oslo_service.service [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.656 167541 DEBUG oslo_service.service [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.657 167541 DEBUG oslo_service.service [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.657 167541 DEBUG oslo_service.service [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.657 167541 DEBUG oslo_service.service [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.657 167541 DEBUG oslo_service.service [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.657 167541 DEBUG oslo_service.service [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.657 167541 DEBUG oslo_service.service [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.657 167541 DEBUG oslo_service.service [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.657 167541 DEBUG oslo_service.service [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.658 167541 DEBUG oslo_service.service [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.659 167541 DEBUG oslo_service.service [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.659 167541 DEBUG oslo_service.service [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.659 167541 DEBUG oslo_service.service [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.659 167541 DEBUG oslo_service.service [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.660 167541 DEBUG oslo_service.service [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.660 167541 DEBUG oslo_service.service [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.660 167541 DEBUG oslo_service.service [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.660 167541 DEBUG oslo_service.service [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.660 167541 DEBUG oslo_service.service [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.660 167541 DEBUG oslo_service.service [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.660 167541 DEBUG oslo_service.service [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.660 167541 DEBUG oslo_service.service [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.661 167541 DEBUG oslo_service.service [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.661 167541 DEBUG oslo_service.service [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.661 167541 DEBUG oslo_service.service [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.661 167541 DEBUG oslo_service.service [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.661 167541 DEBUG oslo_service.service [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.661 167541 DEBUG oslo_service.service [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.661 167541 DEBUG oslo_service.service [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.662 167541 DEBUG oslo_service.service [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.662 167541 DEBUG oslo_service.service [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.662 167541 DEBUG oslo_service.service [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.662 167541 DEBUG oslo_service.service [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.662 167541 DEBUG oslo_service.service [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.662 167541 DEBUG oslo_service.service [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.662 167541 DEBUG oslo_service.service [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.662 167541 DEBUG oslo_service.service [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.663 167541 DEBUG oslo_service.service [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.663 167541 DEBUG oslo_service.service [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.663 167541 DEBUG oslo_service.service [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.663 167541 DEBUG oslo_service.service [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.663 167541 DEBUG oslo_service.service [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.663 167541 DEBUG oslo_service.service [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.664 167541 DEBUG oslo_service.service [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.664 167541 DEBUG oslo_service.service [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.664 167541 DEBUG oslo_service.service [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.664 167541 DEBUG oslo_service.service [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.664 167541 DEBUG oslo_service.service [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.664 167541 DEBUG oslo_service.service [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.664 167541 DEBUG oslo_service.service [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.665 167541 DEBUG oslo_service.service [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.665 167541 DEBUG oslo_service.service [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.665 167541 DEBUG oslo_service.service [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.665 167541 DEBUG oslo_service.service [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.665 167541 DEBUG oslo_service.service [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.665 167541 DEBUG oslo_service.service [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.665 167541 DEBUG oslo_service.service [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.666 167541 DEBUG oslo_service.service [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.666 167541 DEBUG oslo_service.service [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.666 167541 DEBUG oslo_service.service [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.666 167541 DEBUG oslo_service.service [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.666 167541 DEBUG oslo_service.service [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.666 167541 DEBUG oslo_service.service [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.666 167541 DEBUG oslo_service.service [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.666 167541 DEBUG oslo_service.service [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.667 167541 DEBUG oslo_service.service [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.667 167541 DEBUG oslo_service.service [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.667 167541 DEBUG oslo_service.service [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.667 167541 DEBUG oslo_service.service [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.667 167541 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.667 167541 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.668 167541 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.668 167541 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.668 167541 DEBUG oslo_service.service [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.668 167541 DEBUG oslo_service.service [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.668 167541 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.668 167541 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.669 167541 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.669 167541 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.669 167541 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.669 167541 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.669 167541 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.669 167541 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.669 167541 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.670 167541 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.670 167541 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.670 167541 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.670 167541 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.670 167541 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.670 167541 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.670 167541 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.671 167541 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.671 167541 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.671 167541 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.671 167541 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.671 167541 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.671 167541 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.671 167541 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.672 167541 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.672 167541 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.672 167541 DEBUG oslo_service.service [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.672 167541 DEBUG oslo_service.service [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.672 167541 DEBUG oslo_service.service [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.672 167541 DEBUG oslo_service.service [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 04:53:25 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:53:25.672 167541 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
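[Note] The block above is oslo.config's standard startup dump: at DEBUG level the agent calls log_opt_values() (the cfg.py:2609 cited on every line), which prints each registered option as "<group>.<option> = <value>", masks secret options such as transport_url as "****", and closes with the row of asterisks from cfg.py:2613. A minimal sketch of the same mechanism, assuming oslo.config is installed; the registered options are illustrative stand-ins mirroring the [ovs] section above:

    import logging
    from oslo_config import cfg

    LOG = logging.getLogger(__name__)
    CONF = cfg.CONF

    # Illustrative options only; names and defaults mirror the [ovs]
    # section seen in the agent dump above.
    CONF.register_opts(
        [cfg.StrOpt('ovsdb_connection', default='tcp:127.0.0.1:6640'),
         cfg.IntOpt('ovsdb_connection_timeout', default=180)],
        group='ovs')

    logging.basicConfig(level=logging.DEBUG)
    CONF([], project='demo')            # parse an empty command line
    # Emits one "ovs.<option> = <value>" DEBUG line per option, in the
    # same format as the agent log; secret opts would print as ****.
    CONF.log_opt_values(LOG, logging.DEBUG)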
Mar  1 04:53:25 np0005634532 systemd[1]: session-52.scope: Deactivated successfully.
Mar  1 04:53:25 np0005634532 systemd[1]: session-52.scope: Consumed 53.365s CPU time.
Mar  1 04:53:25 np0005634532 systemd-logind[832]: Session 52 logged out. Waiting for processes to exit.
Mar  1 04:53:25 np0005634532 systemd-logind[832]: Removed session 52.
Mar  1 04:53:25 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[165573]: 01/03/2026 09:53:25 : epoch 69a40c7f : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa648001c00 fd 42 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:53:26 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 04:53:26 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v321: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 1.6 KiB/s rd, 767 B/s wr, 2 op/s
Mar  1 04:53:26 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:53:26 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 04:53:26 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:53:26.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 04:53:26 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:53:26 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 04:53:26 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:53:26.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
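[Note] The paired "starting new request" / "req done" / beast access-log lines recur every two seconds from 192.168.122.100 and .102, always an anonymous "HEAD /": the signature of load-balancer health probes rather than client traffic. A probe like this is reproducible in a few lines of Python; the port is an assumption, since the gateway's listen address does not appear in these lines:

    import http.client

    # Port 8080 is assumed; the real one is in the rgw frontends setting.
    conn = http.client.HTTPConnection('192.168.122.102', 8080, timeout=2)
    conn.request('HEAD', '/')
    resp = conn.getresponse()
    print(resp.status)   # 200 expected, matching http_status=200 above
    conn.close()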
Mar  1 04:53:27 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T09:53:27.016Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Mar  1 04:53:27 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: ::ffff:192.168.122.100 - - [01/Mar/2026:09:53:27] "GET /metrics HTTP/1.1" 200 48335 "" "Prometheus/2.51.0"
Mar  1 04:53:27 np0005634532 ceph-mgr[76134]: [prometheus INFO cherrypy.access.140604152982112] ::ffff:192.168.122.100 - - [01/Mar/2026:09:53:27] "GET /metrics HTTP/1.1" 200 48335 "" "Prometheus/2.51.0"
Mar  1 04:53:27 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-haproxy-nfs-cephfs-compute-0-wdbjdw[99156]: [WARNING] 059/095327 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Mar  1 04:53:27 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[165573]: 01/03/2026 09:53:27 : epoch 69a40c7f : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa648001c00 fd 42 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:53:27 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[165573]: 01/03/2026 09:53:27 : epoch 69a40c7f : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa650001bd0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:53:27 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[165573]: 01/03/2026 09:53:27 : epoch 69a40c7f : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Mar  1 04:53:27 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[165573]: 01/03/2026 09:53:27 : epoch 69a40c7f : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Mar  1 04:53:27 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[165573]: 01/03/2026 09:53:27 : epoch 69a40c7f : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa62c000d00 fd 42 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:53:28 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v322: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 3.1 KiB/s rd, 1.2 KiB/s wr, 4 op/s
Mar  1 04:53:28 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:53:28 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 04:53:28 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:53:28.723 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 04:53:28 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:53:28 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 04:53:28 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:53:28.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 04:53:29 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[165573]: 01/03/2026 09:53:29 : epoch 69a40c7f : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa624000d00 fd 42 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:53:29 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[165573]: 01/03/2026 09:53:29 : epoch 69a40c7f : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa648001c00 fd 42 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:53:29 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[165573]: 01/03/2026 09:53:29 : epoch 69a40c7f : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa650001bd0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:53:30 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v323: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 1023 B/s wr, 3 op/s
Mar  1 04:53:30 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[165573]: 01/03/2026 09:53:30 : epoch 69a40c7f : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Mar  1 04:53:30 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:53:30 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 04:53:30 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:53:30.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 04:53:30 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:53:30 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:53:30 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:53:30.744 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:53:31 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[165573]: 01/03/2026 09:53:31 : epoch 69a40c7f : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fa62c001820 fd 42 proxy ignored for local
Mar  1 04:53:31 np0005634532 kernel: ganesha.nfsd[168089]: segfault at 50 ip 00007fa6d2c4f32e sp 00007fa633ffe210 error 4 in libntirpc.so.5.8[7fa6d2c34000+2c000] likely on CPU 1 (core 0, socket 1)
Mar  1 04:53:31 np0005634532 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Mar  1 04:53:31 np0005634532 systemd[1]: Started Process Core Dump (PID 168125/UID 0).
Mar  1 04:53:31 np0005634532 systemd-logind[832]: New session 53 of user zuul.
Mar  1 04:53:31 np0005634532 systemd[1]: Started Session 53 of User zuul.
Mar  1 04:53:31 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 04:53:32 np0005634532 systemd-coredump[168127]: Process 165598 (ganesha.nfsd) of user 0 dumped core.#012#012Stack trace of thread 54:#012#0  0x00007fa6d2c4f32e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)#012#1  0x0000000000000000 n/a (n/a + 0x0)#012#2  0x00007fa6d2c59900 n/a (/usr/lib64/libntirpc.so.5.8 + 0x2c900)#012ELF object binary architecture: AMD x86-64
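[Note] The kernel segfault line gives both the faulting instruction pointer and the mapping of the crashing library, "libntirpc.so.5.8[7fa6d2c34000+2c000]", so the crash offset inside the mapped text segment follows by subtraction; the recorded dump can be pulled up with coredumpctl info 165598. The "+ 0x2232e" in the stack trace is likely relative to the module's ELF load base rather than the executable segment, which would explain the difference from the value below. A quick check using only numbers from the two lines above:

    # Values copied from the kernel segfault line.
    ip = 0x00007fa6d2c4f32e       # faulting instruction pointer
    base = 0x7fa6d2c34000         # start of the mapped text segment
    size = 0x2c000                # length of that mapping

    off = ip - base
    assert 0 <= off < size        # the fault lies inside libntirpc's text
    print(hex(off))               # 0x1b32e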
Mar  1 04:53:32 np0005634532 python3.9[168304]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Mar  1 04:53:32 np0005634532 systemd[1]: systemd-coredump@5-168125-0.service: Deactivated successfully.
Mar  1 04:53:32 np0005634532 podman[168314]: 2026-03-01 09:53:32.274038068 +0000 UTC m=+0.021554484 container died 1d2687769490e20df80e4faed0ec3c514a9fc5d3b8fffa390b2dea3d3c57befc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325)
Mar  1 04:53:32 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Mar  1 04:53:32 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Mar  1 04:53:32 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v324: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 1023 B/s wr, 3 op/s
Mar  1 04:53:32 np0005634532 systemd[1]: var-lib-containers-storage-overlay-57bfcf1cb06b54a83fd11c083dafdcf58f2006052af79c932d200b985699df16-merged.mount: Deactivated successfully.
Mar  1 04:53:32 np0005634532 podman[168314]: 2026-03-01 09:53:32.621035177 +0000 UTC m=+0.368551553 container remove 1d2687769490e20df80e4faed0ec3c514a9fc5d3b8fffa390b2dea3d3c57befc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Mar  1 04:53:32 np0005634532 systemd[1]: ceph-437b1e74-f995-5d64-af1d-257ce01d77ab@nfs.cephfs.2.0.compute-0.ljexyw.service: Main process exited, code=exited, status=139/n/a
Mar  1 04:53:32 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:53:32 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:53:32 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:53:32.729 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:53:32 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:53:32 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:53:32 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:53:32.746 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:53:32 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-haproxy-nfs-cephfs-compute-0-wdbjdw[99156]: [WARNING] 059/095332 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Mar  1 04:53:32 np0005634532 systemd[1]: ceph-437b1e74-f995-5d64-af1d-257ce01d77ab@nfs.cephfs.2.0.compute-0.ljexyw.service: Failed with result 'exit-code'.
Mar  1 04:53:32 np0005634532 systemd[1]: ceph-437b1e74-f995-5d64-af1d-257ce01d77ab@nfs.cephfs.2.0.compute-0.ljexyw.service: Consumed 1.120s CPU time.
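[Note] The "status=139/n/a" in the unit failure above is the shell-style encoding of death by signal: 128 plus the signal number, so 139 means signal 11, SIGSEGV, consistent with the ganesha.nfsd segfault moments earlier. Decoded:

    import signal

    status = 139                          # from "status=139/n/a" above
    sig = status - 128                    # 128 + N encodes death by signal N
    print(sig, signal.Signals(sig).name)  # 11 SIGSEGV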
Mar  1 04:53:33 np0005634532 python3.9[168510]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --filter name=^nova_virtlogd$ --format \{\{.Names\}\} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Mar  1 04:53:33 np0005634532 radosgw[91037]: INFO: RGWReshardLock::lock found lock on reshard.0000000000 to be held by another RGW process; skipping for now
Mar  1 04:53:33 np0005634532 radosgw[91037]: INFO: RGWReshardLock::lock found lock on reshard.0000000002 to be held by another RGW process; skipping for now
Mar  1 04:53:33 np0005634532 radosgw[91037]: INFO: RGWReshardLock::lock found lock on reshard.0000000003 to be held by another RGW process; skipping for now
Mar  1 04:53:33 np0005634532 radosgw[91037]: INFO: RGWReshardLock::lock found lock on reshard.0000000005 to be held by another RGW process; skipping for now
Mar  1 04:53:33 np0005634532 radosgw[91037]: INFO: RGWReshardLock::lock found lock on reshard.0000000006 to be held by another RGW process; skipping for now
Mar  1 04:53:33 np0005634532 radosgw[91037]: INFO: RGWReshardLock::lock found lock on reshard.0000000008 to be held by another RGW process; skipping for now
Mar  1 04:53:33 np0005634532 radosgw[91037]: INFO: RGWReshardLock::lock found lock on reshard.0000000009 to be held by another RGW process; skipping for now
Mar  1 04:53:33 np0005634532 radosgw[91037]: INFO: RGWReshardLock::lock found lock on reshard.0000000011 to be held by another RGW process; skipping for now
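[Note] The RGWReshardLock INFO lines are benign contention: every RGW instance scans the shared bucket-reshard queue, and a queue shard whose lock is already held by a peer gateway is skipped until the next pass. The queue those locks protect can be inspected from any host with RGW admin credentials; a sketch assuming the radosgw-admin CLI is available:

    import subprocess

    # Lists pending bucket-reshard entries; an empty list means the
    # locked shards above held no outstanding work anyway.
    subprocess.run(['radosgw-admin', 'reshard', 'list'], check=True)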
Mar  1 04:53:34 np0005634532 python3.9[168678]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Mar  1 04:53:34 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v325: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 4.4 KiB/s rd, 1023 B/s wr, 7 op/s
Mar  1 04:53:34 np0005634532 systemd[1]: Reloading.
Mar  1 04:53:34 np0005634532 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Mar  1 04:53:34 np0005634532 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
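[Note] The "Reloading." line and the two generator notices are the direct effect of the preceding Ansible systemd_service task with daemon_reload=True: a daemon reload re-runs all systemd generators, and the SysV and rc-local generators re-report the legacy 'network' initscript and the non-executable rc.local every time. As a sketch, the step boils down to:

    import subprocess

    # Equivalent of the Ansible daemon_reload=True step logged above;
    # re-runs all generators, producing the notices that follow it.
    subprocess.run(['systemctl', 'daemon-reload'], check=True)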
Mar  1 04:53:34 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:53:34 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:53:34 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:53:34.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:53:34 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:53:34 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:53:34 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:53:34.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:53:35 np0005634532 python3.9[168871]: ansible-ansible.builtin.service_facts Invoked
Mar  1 04:53:35 np0005634532 network[168888]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Mar  1 04:53:35 np0005634532 network[168889]: 'network-scripts' will be removed from distribution in near future.
Mar  1 04:53:35 np0005634532 network[168890]: It is advised to switch to 'NetworkManager' instead for network management.
Mar  1 04:53:36 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 04:53:36 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v326: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 3.8 KiB/s rd, 597 B/s wr, 6 op/s
Mar  1 04:53:36 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:53:36 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:53:36 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:53:36.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:53:36 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:53:36 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 04:53:36 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:53:36.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 04:53:37 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T09:53:37.017Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Mar  1 04:53:37 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: ::ffff:192.168.122.100 - - [01/Mar/2026:09:53:37] "GET /metrics HTTP/1.1" 200 48338 "" "Prometheus/2.51.0"
Mar  1 04:53:37 np0005634532 ceph-mgr[76134]: [prometheus INFO cherrypy.access.140604152982112] ::ffff:192.168.122.100 - - [01/Mar/2026:09:53:37] "GET /metrics HTTP/1.1" 200 48338 "" "Prometheus/2.51.0"
Mar  1 04:53:37 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-haproxy-nfs-cephfs-compute-0-wdbjdw[99156]: [WARNING] 059/095337 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
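[Note] haproxy's "Layer4 check" is simply a TCP connect to the backend: at 09:53:27 it succeeded and marked nfs.cephfs.2 UP, and here, with the ganesha daemon dead, the connect is refused and the server goes DOWN. A minimal equivalent probe; the backend address and NFS port are assumptions, since the haproxy configuration is not part of this log:

    import socket

    # Address and port are assumed; the real backend is in haproxy.cfg.
    s = socket.socket()
    s.settimeout(1)
    try:
        s.connect(('192.168.122.100', 2049))
        print('UP')                  # Layer4 check passed
    except OSError as exc:
        print('DOWN:', exc)          # e.g. Connection refused, as logged
    finally:
        s.close()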
Mar  1 04:53:38 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v327: 353 pgs: 353 active+clean; 458 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s rd, 597 B/s wr, 45 op/s
Mar  1 04:53:38 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:53:38 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 04:53:38 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:53:38.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 04:53:38 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:53:38 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:53:38 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:53:38.754 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:53:39 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Mar  1 04:53:39 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Mar  1 04:53:39 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Mar  1 04:53:39 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Mar  1 04:53:39 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Mar  1 04:53:39 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 04:53:39 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Mar  1 04:53:39 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 04:53:39 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Mar  1 04:53:39 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Mar  1 04:53:39 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Mar  1 04:53:39 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Mar  1 04:53:39 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Mar  1 04:53:39 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Mar  1 04:53:39 np0005634532 auditd[719]: Audit daemon rotating log files
Mar  1 04:53:39 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Mar  1 04:53:39 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 04:53:39 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 04:53:39 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
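[Note] This burst of handle_command/audit lines is cephadm's reconciliation loop in the mgr re-reading cluster state after the NFS daemon failure: regenerating a minimal ceph.conf, fetching keyrings, and persisting its service specs into config-keys. The same mon commands can be issued by hand with the ceph CLI; a sketch assuming an admin keyring on the host:

    import subprocess

    # Same mon command the mgr dispatches above.
    out = subprocess.run(
        ['ceph', 'config', 'generate-minimal-conf'],
        check=True, capture_output=True, text=True)
    print(out.stdout)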
Mar  1 04:53:39 np0005634532 podman[169279]: 2026-03-01 09:53:39.700694189 +0000 UTC m=+0.052492010 container create 680fe0b9ae27a760bc424937f6c45b9de3158beea12c41eb5df2834de6fbb64b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_boyd, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, OSD_FLAVOR=default)
Mar  1 04:53:39 np0005634532 systemd[1]: Started libpod-conmon-680fe0b9ae27a760bc424937f6c45b9de3158beea12c41eb5df2834de6fbb64b.scope.
Mar  1 04:53:39 np0005634532 podman[169279]: 2026-03-01 09:53:39.678279085 +0000 UTC m=+0.030076916 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Mar  1 04:53:39 np0005634532 systemd[1]: Started libcrun container.
Mar  1 04:53:39 np0005634532 podman[169279]: 2026-03-01 09:53:39.792480031 +0000 UTC m=+0.144277882 container init 680fe0b9ae27a760bc424937f6c45b9de3158beea12c41eb5df2834de6fbb64b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_boyd, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Mar  1 04:53:39 np0005634532 podman[169279]: 2026-03-01 09:53:39.800827608 +0000 UTC m=+0.152625429 container start 680fe0b9ae27a760bc424937f6c45b9de3158beea12c41eb5df2834de6fbb64b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_boyd, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Mar  1 04:53:39 np0005634532 podman[169279]: 2026-03-01 09:53:39.805041242 +0000 UTC m=+0.156839063 container attach 680fe0b9ae27a760bc424937f6c45b9de3158beea12c41eb5df2834de6fbb64b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_boyd, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Mar  1 04:53:39 np0005634532 hopeful_boyd[169335]: 167 167
Mar  1 04:53:39 np0005634532 systemd[1]: libpod-680fe0b9ae27a760bc424937f6c45b9de3158beea12c41eb5df2834de6fbb64b.scope: Deactivated successfully.
Mar  1 04:53:39 np0005634532 podman[169279]: 2026-03-01 09:53:39.811791939 +0000 UTC m=+0.163589760 container died 680fe0b9ae27a760bc424937f6c45b9de3158beea12c41eb5df2834de6fbb64b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_boyd, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Mar  1 04:53:39 np0005634532 systemd[1]: var-lib-containers-storage-overlay-64fcda9e20d90db15106f064395f647c75eb950020ad43e6ef67acee041581d5-merged.mount: Deactivated successfully.
Mar  1 04:53:39 np0005634532 podman[169279]: 2026-03-01 09:53:39.854384544 +0000 UTC m=+0.206182335 container remove 680fe0b9ae27a760bc424937f6c45b9de3158beea12c41eb5df2834de6fbb64b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_boyd, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Mar  1 04:53:39 np0005634532 systemd[1]: libpod-conmon-680fe0b9ae27a760bc424937f6c45b9de3158beea12c41eb5df2834de6fbb64b.scope: Deactivated successfully.
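[Note] The create, init, start, attach, died, remove sequence for the randomly named "hopeful_boyd" container is cephadm running a one-shot helper inside the ceph image; the whole lifecycle fits in about a quarter of a second. Such short-lived containers never appear in podman ps, but their event trail can be replayed; a sketch assuming podman is available on the host:

    import subprocess

    # Print recent container lifecycle events and exit (no live stream).
    subprocess.run(
        ['podman', 'events', '--since', '10m', '--stream=false',
         '--filter', 'event=create', '--filter', 'event=remove'],
        check=True)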
Mar  1 04:53:40 np0005634532 podman[169374]: 2026-03-01 09:53:40.010471597 +0000 UTC m=+0.057761770 container create 3f4c6913fe127bd424ea3831fb9d5830d581b61a208016a2ac6dfc73c8be9e3a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_proskuriakova, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Mar  1 04:53:40 np0005634532 systemd[1]: Started libpod-conmon-3f4c6913fe127bd424ea3831fb9d5830d581b61a208016a2ac6dfc73c8be9e3a.scope.
Mar  1 04:53:40 np0005634532 python3.9[169351]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_libvirt.target state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Mar  1 04:53:40 np0005634532 podman[169374]: 2026-03-01 09:53:39.987298534 +0000 UTC m=+0.034588697 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Mar  1 04:53:40 np0005634532 systemd[1]: Started libcrun container.
Mar  1 04:53:40 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1b6547ea510a7b66ea2a5e0ecf1d514258a512acc29f1cbc0bcb8ba75c390c3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Mar  1 04:53:40 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1b6547ea510a7b66ea2a5e0ecf1d514258a512acc29f1cbc0bcb8ba75c390c3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Mar  1 04:53:40 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1b6547ea510a7b66ea2a5e0ecf1d514258a512acc29f1cbc0bcb8ba75c390c3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Mar  1 04:53:40 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1b6547ea510a7b66ea2a5e0ecf1d514258a512acc29f1cbc0bcb8ba75c390c3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Mar  1 04:53:40 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c1b6547ea510a7b66ea2a5e0ecf1d514258a512acc29f1cbc0bcb8ba75c390c3/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
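[Note] The xfs remount notices are informational: this filesystem stores 32-bit inode timestamps, and 0x7fffffff is the largest 32-bit signed time_t, i.e. the year-2038 limit the kernel is warning about. The cutoff date is easy to confirm:

    from datetime import datetime, timezone

    # 0x7fffffff = 2147483647, the largest 32-bit signed time_t.
    print(datetime.fromtimestamp(0x7fffffff, tz=timezone.utc))
    # 2038-01-19 03:14:07+00:00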
Mar  1 04:53:40 np0005634532 podman[169374]: 2026-03-01 09:53:40.126152261 +0000 UTC m=+0.173442424 container init 3f4c6913fe127bd424ea3831fb9d5830d581b61a208016a2ac6dfc73c8be9e3a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_proskuriakova, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Mar  1 04:53:40 np0005634532 podman[169374]: 2026-03-01 09:53:40.133499093 +0000 UTC m=+0.180789226 container start 3f4c6913fe127bd424ea3831fb9d5830d581b61a208016a2ac6dfc73c8be9e3a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_proskuriakova, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True)
Mar  1 04:53:40 np0005634532 podman[169374]: 2026-03-01 09:53:40.137081031 +0000 UTC m=+0.184371164 container attach 3f4c6913fe127bd424ea3831fb9d5830d581b61a208016a2ac6dfc73c8be9e3a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_proskuriakova, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Mar  1 04:53:40 np0005634532 objective_proskuriakova[169390]: --> passed data devices: 0 physical, 1 LVM
Mar  1 04:53:40 np0005634532 objective_proskuriakova[169390]: --> All data devices are unavailable
Mar  1 04:53:40 np0005634532 systemd[1]: libpod-3f4c6913fe127bd424ea3831fb9d5830d581b61a208016a2ac6dfc73c8be9e3a.scope: Deactivated successfully.
Mar  1 04:53:40 np0005634532 podman[169374]: 2026-03-01 09:53:40.49252377 +0000 UTC m=+0.539813903 container died 3f4c6913fe127bd424ea3831fb9d5830d581b61a208016a2ac6dfc73c8be9e3a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_proskuriakova, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Mar  1 04:53:40 np0005634532 systemd[1]: var-lib-containers-storage-overlay-c1b6547ea510a7b66ea2a5e0ecf1d514258a512acc29f1cbc0bcb8ba75c390c3-merged.mount: Deactivated successfully.
Mar  1 04:53:40 np0005634532 podman[169374]: 2026-03-01 09:53:40.52889477 +0000 UTC m=+0.576184903 container remove 3f4c6913fe127bd424ea3831fb9d5830d581b61a208016a2ac6dfc73c8be9e3a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_proskuriakova, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Mar  1 04:53:40 np0005634532 systemd[1]: libpod-conmon-3f4c6913fe127bd424ea3831fb9d5830d581b61a208016a2ac6dfc73c8be9e3a.scope: Deactivated successfully.
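[Note] This throwaway container, like the similarly short-lived helpers that follow (silly_haslett, lucid_bardeen), is cephadm probing OSD storage with ceph-volume: here it reports that the single LVM data device is already consumed ("All data devices are unavailable"), and the lucid_bardeen run further below dumps the existing OSD logical-volume metadata as JSON. The same report can be requested directly; a sketch assuming cephadm is installed on the host:

    import subprocess

    # Runs ceph-volume inside the ceph container, like the helpers above,
    # and prints the same JSON inventory of OSD logical volumes.
    subprocess.run(
        ['cephadm', 'ceph-volume', '--', 'lvm', 'list', '--format', 'json'],
        check=True)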
Mar  1 04:53:40 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v328: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 85 KiB/s rd, 85 B/s wr, 142 op/s
Mar  1 04:53:40 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:53:40 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:53:40 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:53:40.742 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:53:40 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:53:40 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:53:40 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:53:40.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:53:40 np0005634532 python3.9[169570]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtlogd_wrapper.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Mar  1 04:53:41 np0005634532 podman[169739]: 2026-03-01 09:53:41.043611721 +0000 UTC m=+0.038265298 container create 80fdfaa59f454e5932d0a3eed71424855a64e60d9687ed4f80409df418425e0c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_haslett, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid)
Mar  1 04:53:41 np0005634532 systemd[1]: Started libpod-conmon-80fdfaa59f454e5932d0a3eed71424855a64e60d9687ed4f80409df418425e0c.scope.
Mar  1 04:53:41 np0005634532 systemd[1]: Started libcrun container.
Mar  1 04:53:41 np0005634532 podman[169739]: 2026-03-01 09:53:41.023402271 +0000 UTC m=+0.018055868 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Mar  1 04:53:41 np0005634532 podman[169739]: 2026-03-01 09:53:41.127152819 +0000 UTC m=+0.121806416 container init 80fdfaa59f454e5932d0a3eed71424855a64e60d9687ed4f80409df418425e0c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_haslett, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Mar  1 04:53:41 np0005634532 podman[169739]: 2026-03-01 09:53:41.133512506 +0000 UTC m=+0.128166083 container start 80fdfaa59f454e5932d0a3eed71424855a64e60d9687ed4f80409df418425e0c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_haslett, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Mar  1 04:53:41 np0005634532 podman[169739]: 2026-03-01 09:53:41.136842499 +0000 UTC m=+0.131496076 container attach 80fdfaa59f454e5932d0a3eed71424855a64e60d9687ed4f80409df418425e0c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_haslett, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Mar  1 04:53:41 np0005634532 silly_haslett[169779]: 167 167
Mar  1 04:53:41 np0005634532 systemd[1]: libpod-80fdfaa59f454e5932d0a3eed71424855a64e60d9687ed4f80409df418425e0c.scope: Deactivated successfully.
Mar  1 04:53:41 np0005634532 podman[169739]: 2026-03-01 09:53:41.139099574 +0000 UTC m=+0.133753161 container died 80fdfaa59f454e5932d0a3eed71424855a64e60d9687ed4f80409df418425e0c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_haslett, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default)
Mar  1 04:53:41 np0005634532 systemd[1]: var-lib-containers-storage-overlay-a5a056e967a26912603d5e53dd53b818612d2593dd37ff8d5954a7bc50314e29-merged.mount: Deactivated successfully.
Mar  1 04:53:41 np0005634532 podman[169739]: 2026-03-01 09:53:41.175151417 +0000 UTC m=+0.169805004 container remove 80fdfaa59f454e5932d0a3eed71424855a64e60d9687ed4f80409df418425e0c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_haslett, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Mar  1 04:53:41 np0005634532 systemd[1]: libpod-conmon-80fdfaa59f454e5932d0a3eed71424855a64e60d9687ed4f80409df418425e0c.scope: Deactivated successfully.
Mar  1 04:53:41 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 04:53:41 np0005634532 podman[169860]: 2026-03-01 09:53:41.321616632 +0000 UTC m=+0.051671770 container create f7b290867d6dc9a44a117026482d2d704ba319310bc1ed33bfd02d0312582dfe (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_bardeen, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Mar  1 04:53:41 np0005634532 systemd[1]: Started libpod-conmon-f7b290867d6dc9a44a117026482d2d704ba319310bc1ed33bfd02d0312582dfe.scope.
Mar  1 04:53:41 np0005634532 systemd[1]: Started libcrun container.
Mar  1 04:53:41 np0005634532 podman[169860]: 2026-03-01 09:53:41.300269914 +0000 UTC m=+0.030325072 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Mar  1 04:53:41 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ea949fc68914a10318c104969da5dd2e8d69a0d5f30e91ab56dd13889624007/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Mar  1 04:53:41 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ea949fc68914a10318c104969da5dd2e8d69a0d5f30e91ab56dd13889624007/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Mar  1 04:53:41 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ea949fc68914a10318c104969da5dd2e8d69a0d5f30e91ab56dd13889624007/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Mar  1 04:53:41 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ea949fc68914a10318c104969da5dd2e8d69a0d5f30e91ab56dd13889624007/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Mar  1 04:53:41 np0005634532 podman[169860]: 2026-03-01 09:53:41.417822393 +0000 UTC m=+0.147877551 container init f7b290867d6dc9a44a117026482d2d704ba319310bc1ed33bfd02d0312582dfe (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_bardeen, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Mar  1 04:53:41 np0005634532 podman[169860]: 2026-03-01 09:53:41.423902843 +0000 UTC m=+0.153957971 container start f7b290867d6dc9a44a117026482d2d704ba319310bc1ed33bfd02d0312582dfe (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_bardeen, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default)
Mar  1 04:53:41 np0005634532 podman[169860]: 2026-03-01 09:53:41.442821221 +0000 UTC m=+0.172876379 container attach f7b290867d6dc9a44a117026482d2d704ba319310bc1ed33bfd02d0312582dfe (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_bardeen, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Mar  1 04:53:41 np0005634532 python3.9[169854]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtnodedevd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Mar  1 04:53:41 np0005634532 lucid_bardeen[169878]: {
Mar  1 04:53:41 np0005634532 lucid_bardeen[169878]:     "0": [
Mar  1 04:53:41 np0005634532 lucid_bardeen[169878]:         {
Mar  1 04:53:41 np0005634532 lucid_bardeen[169878]:             "devices": [
Mar  1 04:53:41 np0005634532 lucid_bardeen[169878]:                 "/dev/loop3"
Mar  1 04:53:41 np0005634532 lucid_bardeen[169878]:             ],
Mar  1 04:53:41 np0005634532 lucid_bardeen[169878]:             "lv_name": "ceph_lv0",
Mar  1 04:53:41 np0005634532 lucid_bardeen[169878]:             "lv_path": "/dev/ceph_vg0/ceph_lv0",
Mar  1 04:53:41 np0005634532 lucid_bardeen[169878]:             "lv_size": "21470642176",
Mar  1 04:53:41 np0005634532 lucid_bardeen[169878]:             "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=0RN1y3-WDwf-vmRn-3Uec-9cdU-WGcX-8Z6LEG,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=437b1e74-f995-5d64-af1d-257ce01d77ab,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=e5da778e-73b7-4ea1-8a91-750fe3f6aa68,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Mar  1 04:53:41 np0005634532 lucid_bardeen[169878]:             "lv_uuid": "0RN1y3-WDwf-vmRn-3Uec-9cdU-WGcX-8Z6LEG",
Mar  1 04:53:41 np0005634532 lucid_bardeen[169878]:             "name": "ceph_lv0",
Mar  1 04:53:41 np0005634532 lucid_bardeen[169878]:             "path": "/dev/ceph_vg0/ceph_lv0",
Mar  1 04:53:41 np0005634532 lucid_bardeen[169878]:             "tags": {
Mar  1 04:53:41 np0005634532 lucid_bardeen[169878]:                 "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Mar  1 04:53:41 np0005634532 lucid_bardeen[169878]:                 "ceph.block_uuid": "0RN1y3-WDwf-vmRn-3Uec-9cdU-WGcX-8Z6LEG",
Mar  1 04:53:41 np0005634532 lucid_bardeen[169878]:                 "ceph.cephx_lockbox_secret": "",
Mar  1 04:53:41 np0005634532 lucid_bardeen[169878]:                 "ceph.cluster_fsid": "437b1e74-f995-5d64-af1d-257ce01d77ab",
Mar  1 04:53:41 np0005634532 lucid_bardeen[169878]:                 "ceph.cluster_name": "ceph",
Mar  1 04:53:41 np0005634532 lucid_bardeen[169878]:                 "ceph.crush_device_class": "",
Mar  1 04:53:41 np0005634532 lucid_bardeen[169878]:                 "ceph.encrypted": "0",
Mar  1 04:53:41 np0005634532 lucid_bardeen[169878]:                 "ceph.osd_fsid": "e5da778e-73b7-4ea1-8a91-750fe3f6aa68",
Mar  1 04:53:41 np0005634532 lucid_bardeen[169878]:                 "ceph.osd_id": "0",
Mar  1 04:53:41 np0005634532 lucid_bardeen[169878]:                 "ceph.osdspec_affinity": "default_drive_group",
Mar  1 04:53:41 np0005634532 lucid_bardeen[169878]:                 "ceph.type": "block",
Mar  1 04:53:41 np0005634532 lucid_bardeen[169878]:                 "ceph.vdo": "0",
Mar  1 04:53:41 np0005634532 lucid_bardeen[169878]:                 "ceph.with_tpm": "0"
Mar  1 04:53:41 np0005634532 lucid_bardeen[169878]:             },
Mar  1 04:53:41 np0005634532 lucid_bardeen[169878]:             "type": "block",
Mar  1 04:53:41 np0005634532 lucid_bardeen[169878]:             "vg_name": "ceph_vg0"
Mar  1 04:53:41 np0005634532 lucid_bardeen[169878]:         }
Mar  1 04:53:41 np0005634532 lucid_bardeen[169878]:     ]
Mar  1 04:53:41 np0005634532 lucid_bardeen[169878]: }
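[annotation] The one-shot lucid_bardeen container printed an LVM inventory for a single OSD: osd.0 on /dev/ceph_vg0/ceph_lv0, backed by /dev/loop3. The command run inside the container is not logged, but the output matches what `ceph-volume lvm list --format json` produces. A minimal sketch, assuming that output is captured to a file named lvm.json (filename is hypothetical), pulling out the fields cephadm keys on:

    # Hypothetical: list OSD id, LV path and osd_fsid per reported LV.
    jq -r 'to_entries[] | .key as $osd | .value[]
           | "osd.\($osd) \(.lv_path) fsid=\(.tags["ceph.osd_fsid"])"' lvm.json
    # -> osd.0 /dev/ceph_vg0/ceph_lv0 fsid=e5da778e-73b7-4ea1-8a91-750fe3f6aa68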
Mar  1 04:53:41 np0005634532 systemd[1]: libpod-f7b290867d6dc9a44a117026482d2d704ba319310bc1ed33bfd02d0312582dfe.scope: Deactivated successfully.
Mar  1 04:53:41 np0005634532 podman[169860]: 2026-03-01 09:53:41.70737479 +0000 UTC m=+0.437429928 container died f7b290867d6dc9a44a117026482d2d704ba319310bc1ed33bfd02d0312582dfe (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_bardeen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Mar  1 04:53:41 np0005634532 systemd[1]: var-lib-containers-storage-overlay-9ea949fc68914a10318c104969da5dd2e8d69a0d5f30e91ab56dd13889624007-merged.mount: Deactivated successfully.
Mar  1 04:53:41 np0005634532 podman[169860]: 2026-03-01 09:53:41.752730443 +0000 UTC m=+0.482785581 container remove f7b290867d6dc9a44a117026482d2d704ba319310bc1ed33bfd02d0312582dfe (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=lucid_bardeen, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Mar  1 04:53:41 np0005634532 systemd[1]: libpod-conmon-f7b290867d6dc9a44a117026482d2d704ba319310bc1ed33bfd02d0312582dfe.scope: Deactivated successfully.
Mar  1 04:53:42 np0005634532 python3.9[170101]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtproxyd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Mar  1 04:53:42 np0005634532 podman[170146]: 2026-03-01 09:53:42.225084935 +0000 UTC m=+0.035866349 container create f5f43aa7cd6b9e08694238493640c665d11398740976ca22b9ee0a9c0951d6b7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_dhawan, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Mar  1 04:53:42 np0005634532 systemd[1]: Started libpod-conmon-f5f43aa7cd6b9e08694238493640c665d11398740976ca22b9ee0a9c0951d6b7.scope.
Mar  1 04:53:42 np0005634532 systemd[1]: Started libcrun container.
Mar  1 04:53:42 np0005634532 podman[170146]: 2026-03-01 09:53:42.209175951 +0000 UTC m=+0.019957375 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Mar  1 04:53:42 np0005634532 podman[170146]: 2026-03-01 09:53:42.30487077 +0000 UTC m=+0.115652174 container init f5f43aa7cd6b9e08694238493640c665d11398740976ca22b9ee0a9c0951d6b7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_dhawan, ceph=True, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Mar  1 04:53:42 np0005634532 podman[170146]: 2026-03-01 09:53:42.309950856 +0000 UTC m=+0.120732260 container start f5f43aa7cd6b9e08694238493640c665d11398740976ca22b9ee0a9c0951d6b7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_dhawan, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1)
Mar  1 04:53:42 np0005634532 peaceful_dhawan[170180]: 167 167
Mar  1 04:53:42 np0005634532 systemd[1]: libpod-f5f43aa7cd6b9e08694238493640c665d11398740976ca22b9ee0a9c0951d6b7.scope: Deactivated successfully.
Mar  1 04:53:42 np0005634532 podman[170146]: 2026-03-01 09:53:42.313551605 +0000 UTC m=+0.124333009 container attach f5f43aa7cd6b9e08694238493640c665d11398740976ca22b9ee0a9c0951d6b7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_dhawan, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Mar  1 04:53:42 np0005634532 podman[170146]: 2026-03-01 09:53:42.315432091 +0000 UTC m=+0.126213495 container died f5f43aa7cd6b9e08694238493640c665d11398740976ca22b9ee0a9c0951d6b7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_dhawan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Mar  1 04:53:42 np0005634532 systemd[1]: var-lib-containers-storage-overlay-8c2bcae95d6de9337a1397b7ea5f2ddbe0ca0441e99ae319b770070e149f5138-merged.mount: Deactivated successfully.
Mar  1 04:53:42 np0005634532 podman[170146]: 2026-03-01 09:53:42.355858332 +0000 UTC m=+0.166639786 container remove f5f43aa7cd6b9e08694238493640c665d11398740976ca22b9ee0a9c0951d6b7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_dhawan, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Mar  1 04:53:42 np0005634532 systemd[1]: libpod-conmon-f5f43aa7cd6b9e08694238493640c665d11398740976ca22b9ee0a9c0951d6b7.scope: Deactivated successfully.
Mar  1 04:53:42 np0005634532 podman[170286]: 2026-03-01 09:53:42.481266406 +0000 UTC m=+0.040392631 container create 18f8a403f922234ef457a70445bc207f062e87a50e6798dc20013a4941a308c8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_ishizaka, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Mar  1 04:53:42 np0005634532 systemd[1]: Started libpod-conmon-18f8a403f922234ef457a70445bc207f062e87a50e6798dc20013a4941a308c8.scope.
Mar  1 04:53:42 np0005634532 systemd[1]: Started libcrun container.
Mar  1 04:53:42 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v329: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 85 KiB/s rd, 0 B/s wr, 142 op/s
Mar  1 04:53:42 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48f7fb89dbbf30a788f381a9ac2d5f647460de24405243fc3a860ed9f3473ef0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Mar  1 04:53:42 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48f7fb89dbbf30a788f381a9ac2d5f647460de24405243fc3a860ed9f3473ef0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Mar  1 04:53:42 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48f7fb89dbbf30a788f381a9ac2d5f647460de24405243fc3a860ed9f3473ef0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Mar  1 04:53:42 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/48f7fb89dbbf30a788f381a9ac2d5f647460de24405243fc3a860ed9f3473ef0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Mar  1 04:53:42 np0005634532 podman[170286]: 2026-03-01 09:53:42.462890561 +0000 UTC m=+0.022016806 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Mar  1 04:53:42 np0005634532 podman[170286]: 2026-03-01 09:53:42.559581705 +0000 UTC m=+0.118707950 container init 18f8a403f922234ef457a70445bc207f062e87a50e6798dc20013a4941a308c8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_ishizaka, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True)
Mar  1 04:53:42 np0005634532 podman[170286]: 2026-03-01 09:53:42.568222239 +0000 UTC m=+0.127348464 container start 18f8a403f922234ef457a70445bc207f062e87a50e6798dc20013a4941a308c8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_ishizaka, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Mar  1 04:53:42 np0005634532 podman[170286]: 2026-03-01 09:53:42.571668104 +0000 UTC m=+0.130794349 container attach 18f8a403f922234ef457a70445bc207f062e87a50e6798dc20013a4941a308c8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_ishizaka, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Mar  1 04:53:42 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:53:42 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:53:42 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:53:42.744 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:53:42 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:53:42 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000024s ======
Mar  1 04:53:42 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:53:42.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
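[annotation] The paired radosgw triplets (starting new request / req done / beast access line) recur every two seconds from 192.168.122.100 and .102: anonymous HEAD / probes, i.e. external health checks rather than client traffic. A rough by-hand equivalent, with the caveat that the log does not record the RGW listen port, so 8080 below is a placeholder:

    # Connectivity probe only; prints the HTTP status of an anonymous HEAD /.
    curl -s -o /dev/null -w '%{http_code}\n' --head http://192.168.122.102:8080/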
Mar  1 04:53:42 np0005634532 python3.9[170361]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtqemud.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Mar  1 04:53:42 np0005634532 systemd[1]: ceph-437b1e74-f995-5d64-af1d-257ce01d77ab@nfs.cephfs.2.0.compute-0.ljexyw.service: Scheduled restart job, restart counter is at 6.
Mar  1 04:53:42 np0005634532 systemd[1]: Stopped Ceph nfs.cephfs.2.0.compute-0.ljexyw for 437b1e74-f995-5d64-af1d-257ce01d77ab.
Mar  1 04:53:42 np0005634532 systemd[1]: ceph-437b1e74-f995-5d64-af1d-257ce01d77ab@nfs.cephfs.2.0.compute-0.ljexyw.service: Consumed 1.120s CPU time.
Mar  1 04:53:42 np0005634532 systemd[1]: Starting Ceph nfs.cephfs.2.0.compute-0.ljexyw for 437b1e74-f995-5d64-af1d-257ce01d77ab...
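[annotation] "restart counter is at 6" means systemd's Restart= logic has now scheduled this ganesha unit's sixth restart this boot; the export warning at the end of this section suggests why it keeps cycling. One way to confirm the counter and state from the shell (unit name copied from the log):

    systemctl show 'ceph-437b1e74-f995-5d64-af1d-257ce01d77ab@nfs.cephfs.2.0.compute-0.ljexyw.service' \
        -p NRestarts -p ActiveState -p SubState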
Mar  1 04:53:43 np0005634532 lvm[170522]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Mar  1 04:53:43 np0005634532 lvm[170522]: VG ceph_vg0 finished
Mar  1 04:53:43 np0005634532 podman[170523]: 2026-03-01 09:53:43.090174519 +0000 UTC m=+0.034534806 container create 2763790c977ef6a803af28f5fd3d9ad157ba86f5334d7933623a1e3de6a9433b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Mar  1 04:53:43 np0005634532 vigilant_ishizaka[170326]: {}
Mar  1 04:53:43 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0d72e5b884de398cf35c05c1130f85712df884ecccb01c9f7574fc3bf53d688/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Mar  1 04:53:43 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0d72e5b884de398cf35c05c1130f85712df884ecccb01c9f7574fc3bf53d688/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Mar  1 04:53:43 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0d72e5b884de398cf35c05c1130f85712df884ecccb01c9f7574fc3bf53d688/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Mar  1 04:53:43 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0d72e5b884de398cf35c05c1130f85712df884ecccb01c9f7574fc3bf53d688/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.2.0.compute-0.ljexyw-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Mar  1 04:53:43 np0005634532 podman[170523]: 2026-03-01 09:53:43.131810079 +0000 UTC m=+0.076170386 container init 2763790c977ef6a803af28f5fd3d9ad157ba86f5334d7933623a1e3de6a9433b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Mar  1 04:53:43 np0005634532 systemd[1]: libpod-18f8a403f922234ef457a70445bc207f062e87a50e6798dc20013a4941a308c8.scope: Deactivated successfully.
Mar  1 04:53:43 np0005634532 podman[170286]: 2026-03-01 09:53:43.137133311 +0000 UTC m=+0.696259536 container died 18f8a403f922234ef457a70445bc207f062e87a50e6798dc20013a4941a308c8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_ishizaka, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Mar  1 04:53:43 np0005634532 podman[170523]: 2026-03-01 09:53:43.138355271 +0000 UTC m=+0.082715538 container start 2763790c977ef6a803af28f5fd3d9ad157ba86f5334d7933623a1e3de6a9433b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Mar  1 04:53:43 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[170587]: 01/03/2026 09:53:43 : epoch 69a40ca7 : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Mar  1 04:53:43 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[170587]: 01/03/2026 09:53:43 : epoch 69a40ca7 : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Mar  1 04:53:43 np0005634532 bash[170523]: 2763790c977ef6a803af28f5fd3d9ad157ba86f5334d7933623a1e3de6a9433b
Mar  1 04:53:43 np0005634532 podman[170523]: 2026-03-01 09:53:43.074259395 +0000 UTC m=+0.018619692 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Mar  1 04:53:43 np0005634532 systemd[1]: Started Ceph nfs.cephfs.2.0.compute-0.ljexyw for 437b1e74-f995-5d64-af1d-257ce01d77ab.
Mar  1 04:53:43 np0005634532 systemd[1]: var-lib-containers-storage-overlay-48f7fb89dbbf30a788f381a9ac2d5f647460de24405243fc3a860ed9f3473ef0-merged.mount: Deactivated successfully.
Mar  1 04:53:43 np0005634532 podman[170286]: 2026-03-01 09:53:43.185417546 +0000 UTC m=+0.744543771 container remove 18f8a403f922234ef457a70445bc207f062e87a50e6798dc20013a4941a308c8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_ishizaka, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Mar  1 04:53:43 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[170587]: 01/03/2026 09:53:43 : epoch 69a40ca7 : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Mar  1 04:53:43 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[170587]: 01/03/2026 09:53:43 : epoch 69a40ca7 : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Mar  1 04:53:43 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[170587]: 01/03/2026 09:53:43 : epoch 69a40ca7 : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Mar  1 04:53:43 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[170587]: 01/03/2026 09:53:43 : epoch 69a40ca7 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Mar  1 04:53:43 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[170587]: 01/03/2026 09:53:43 : epoch 69a40ca7 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Mar  1 04:53:43 np0005634532 systemd[1]: libpod-conmon-18f8a403f922234ef457a70445bc207f062e87a50e6798dc20013a4941a308c8.scope: Deactivated successfully.
Mar  1 04:53:43 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Mar  1 04:53:43 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[170587]: 01/03/2026 09:53:43 : epoch 69a40ca7 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Mar  1 04:53:43 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 04:53:43 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Mar  1 04:53:43 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 04:53:43 np0005634532 python3.9[170698]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtsecretd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Mar  1 04:53:44 np0005634532 python3.9[170878]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtstoraged.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
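[annotation] The ansible-ansible.builtin.systemd_service lines with enabled=False state=stopped are the adoption play retiring the TripleO nova-libvirt side services (virtlogd wrapper, virtnodedevd, virtproxyd, virtqemud, virtsecretd, virtstoraged). Per unit, the task is roughly equivalent to:

    # state=stopped + enabled=False in one step; daemon_reload=False in the
    # log, so no systemd reload happens at this point.
    systemctl disable --now tripleo_nova_virtstoraged.service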
Mar  1 04:53:44 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 04:53:44 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 04:53:44 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v330: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 86 KiB/s rd, 85 B/s wr, 142 op/s
Mar  1 04:53:44 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:53:44 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:53:44 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:53:44.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:53:44 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:53:44 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 04:53:44 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:53:44.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 04:53:46 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 04:53:46 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v331: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 83 KiB/s rd, 85 B/s wr, 138 op/s
Mar  1 04:53:46 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:53:46 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:53:46 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:53:46.750 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:53:46 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:53:46 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 04:53:46 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:53:46.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 04:53:46 np0005634532 python3.9[171037]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
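[annotation] With the services stopped, the play deletes their unit files from /usr/lib/systemd/system and then /etc/systemd/system. ansible.builtin.file with state=absent is an idempotent delete, roughly:

    # Remove if present; succeed quietly if already gone.
    rm -f /usr/lib/systemd/system/tripleo_nova_libvirt.target

The daemon-reload that makes systemd forget the deleted units follows at 04:53:57 below.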
Mar  1 04:53:46 np0005634532 podman[171038]: 2026-03-01 09:53:46.949608921 +0000 UTC m=+0.096818838 container health_status eba5e7c5bf1cb0694498c3b5d0f9b901dec36f2482ec40aef98fed1e15d9f18d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:bf1f44b6bfa655654f556ae3adca2582892e72550d916a5b0a8dbacb0e210bbe, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '08b9467e1b7e95537191fb2fa6825d176e65d7d6be048e9a0925db064969bbde-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:bf1f44b6bfa655654f556ae3adca2582892e72550d916a5b0a8dbacb0e210bbe', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Mar  1 04:53:47 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T09:53:47.019Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
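[annotation] Alertmanager cannot POST to the Ceph dashboard webhook receivers on compute-1 and compute-2 before its context deadline. A quick reachability probe of one failing receiver (URL taken verbatim from the message; an empty POST is not a valid alert payload, this only tests whether the endpoint answers at all):

    curl -m 5 -s -o /dev/null -w '%{http_code}\n' \
        -X POST http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver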
Mar  1 04:53:47 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: ::ffff:192.168.122.100 - - [01/Mar/2026:09:53:47] "GET /metrics HTTP/1.1" 200 48338 "" "Prometheus/2.51.0"
Mar  1 04:53:47 np0005634532 ceph-mgr[76134]: [prometheus INFO cherrypy.access.140604152982112] ::ffff:192.168.122.100 - - [01/Mar/2026:09:53:47] "GET /metrics HTTP/1.1" 200 48338 "" "Prometheus/2.51.0"
Mar  1 04:53:47 np0005634532 python3.9[171216]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:53:47 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Mar  1 04:53:47 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Mar  1 04:53:47 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Mar  1 04:53:47 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Mar  1 04:53:47 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Mar  1 04:53:47 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Mar  1 04:53:47 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Mar  1 04:53:47 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Mar  1 04:53:47 np0005634532 python3.9[171370]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:53:48 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-haproxy-nfs-cephfs-compute-0-wdbjdw[99156]: [WARNING] 059/095348 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Mar  1 04:53:48 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v332: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 84 KiB/s rd, 426 B/s wr, 139 op/s
Mar  1 04:53:48 np0005634532 python3.9[171524]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:53:48 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:53:48 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 04:53:48 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:53:48.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 04:53:48 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:53:48 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:53:48 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:53:48.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:53:49 np0005634532 python3.9[171677]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:53:49 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[170587]: 01/03/2026 09:53:49 : epoch 69a40ca7 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Mar  1 04:53:49 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[170587]: 01/03/2026 09:53:49 : epoch 69a40ca7 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Mar  1 04:53:49 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[170587]: 01/03/2026 09:53:49 : epoch 69a40ca7 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Mar  1 04:53:49 np0005634532 python3.9[171830]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:53:50 np0005634532 python3.9[171985]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:53:50 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v333: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 60 KiB/s rd, 426 B/s wr, 100 op/s
Mar  1 04:53:50 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:53:50 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:53:50 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:53:50.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:53:50 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:53:50 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000024s ======
Mar  1 04:53:50 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:53:50.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Mar  1 04:53:51 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 04:53:51 np0005634532 python3.9[172138]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:53:51 np0005634532 python3.9[172291]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:53:52 np0005634532 podman[172442]: 2026-03-01 09:53:52.203659003 +0000 UTC m=+0.052878860 container health_status 1df4dec302d8b40bce93714986cfc892519e8a31436399020d0034ae17ac64d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a5baae9e21dffd42c66f546b0b62f95c6af9f409d05e01e7483de42d5dd2a382, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '08b9467e1b7e95537191fb2fa6825d176e65d7d6be048e9a0925db064969bbde-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a5baae9e21dffd42c66f546b0b62f95c6af9f409d05e01e7483de42d5dd2a382', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260223, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Mar  1 04:53:52 np0005634532 python3.9[172481]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:53:52 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v334: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 852 B/s rd, 426 B/s wr, 1 op/s
Mar  1 04:53:52 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:53:52 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 04:53:52 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:53:52.760 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 04:53:52 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:53:52 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 04:53:52 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:53:52.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 04:53:52 np0005634532 python3.9[172644]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:53:53 np0005634532 python3.9[172797]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:53:53 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[170587]: 01/03/2026 09:53:53 : epoch 69a40ca7 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Mar  1 04:53:53 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[170587]: 01/03/2026 09:53:53 : epoch 69a40ca7 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Mar  1 04:53:53 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[170587]: 01/03/2026 09:53:53 : epoch 69a40ca7 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Mar  1 04:53:54 np0005634532 python3.9[172951]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:53:54 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v335: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Mar  1 04:53:54 np0005634532 python3.9[173105]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:53:54 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:53:54 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:53:54 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:53:54.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:53:54 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:53:54 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:53:54 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:53:54.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:53:55 np0005634532 python3.9[173258]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then#012  systemctl disable --now certmonger.service#012  test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service#012fi#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
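[annotation] #012 is the syslog escape for a newline; decoded, the shell fragment ansible ran is:

    if systemctl is-active certmonger.service; then
      systemctl disable --now certmonger.service
      test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
    fi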
Mar  1 04:53:56 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 04:53:56 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v336: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 938 B/s rd, 341 B/s wr, 1 op/s
Mar  1 04:53:56 np0005634532 python3.9[173412]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Mar  1 04:53:56 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:53:56 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:53:56 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:53:56.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:53:56 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:53:56 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:53:56 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:53:56.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:53:57 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T09:53:57.020Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Mar  1 04:53:57 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: ::ffff:192.168.122.100 - - [01/Mar/2026:09:53:57] "GET /metrics HTTP/1.1" 200 48349 "" "Prometheus/2.51.0"
Mar  1 04:53:57 np0005634532 ceph-mgr[76134]: [prometheus INFO cherrypy.access.140604152982112] ::ffff:192.168.122.100 - - [01/Mar/2026:09:53:57] "GET /metrics HTTP/1.1" 200 48349 "" "Prometheus/2.51.0"
Mar  1 04:53:57 np0005634532 python3.9[173565]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Mar  1 04:53:57 np0005634532 systemd[1]: Reloading.
Mar  1 04:53:57 np0005634532 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Mar  1 04:53:57 np0005634532 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
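The systemd_service call above requests only a daemon reload (daemon_reload=True with no unit named); that is what produces the "Reloading." line and re-runs the unit generators, hence the recurring sysv-generator and rc-local-generator notices that follow every reload on this host. A sketch of the task (name assumed):

    - name: Reload systemd after removing tripleo unit files  # name assumed
      ansible.builtin.systemd_service:
        daemon_reload: true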
Mar  1 04:53:58 np0005634532 python3.9[173763]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_libvirt.target _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Mar  1 04:53:58 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v337: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Mar  1 04:53:58 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:53:58 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee1c45d0 =====
Mar  1 04:53:58 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 04:53:58 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:53:58.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 04:53:58 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee1c45d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 04:53:58 np0005634532 radosgw[91037]: beast: 0x7f87ee1c45d0: 192.168.122.102 - anonymous [01/Mar/2026:09:53:58.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 04:53:59 np0005634532 python3.9[173917]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtlogd_wrapper.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Mar  1 04:53:59 np0005634532 python3.9[174071]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtnodedevd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Mar  1 04:54:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[170587]: 01/03/2026 09:53:59 : epoch 69a40ca7 : compute-0 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Mar  1 04:54:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[170587]: 01/03/2026 09:53:59 : epoch 69a40ca7 : compute-0 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Mar  1 04:54:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[170587]: 01/03/2026 09:53:59 : epoch 69a40ca7 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Mar  1 04:54:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[170587]: 01/03/2026 09:53:59 : epoch 69a40ca7 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Mar  1 04:54:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[170587]: 01/03/2026 09:53:59 : epoch 69a40ca7 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Mar  1 04:54:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[170587]: 01/03/2026 09:53:59 : epoch 69a40ca7 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Mar  1 04:54:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[170587]: 01/03/2026 09:53:59 : epoch 69a40ca7 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Mar  1 04:54:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[170587]: 01/03/2026 09:53:59 : epoch 69a40ca7 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Mar  1 04:54:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[170587]: 01/03/2026 09:53:59 : epoch 69a40ca7 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Mar  1 04:54:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[170587]: 01/03/2026 09:53:59 : epoch 69a40ca7 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Mar  1 04:54:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[170587]: 01/03/2026 09:54:00 : epoch 69a40ca7 : compute-0 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Mar  1 04:54:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[170587]: 01/03/2026 09:54:00 : epoch 69a40ca7 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Mar  1 04:54:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[170587]: 01/03/2026 09:54:00 : epoch 69a40ca7 : compute-0 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Mar  1 04:54:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[170587]: 01/03/2026 09:54:00 : epoch 69a40ca7 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Mar  1 04:54:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[170587]: 01/03/2026 09:54:00 : epoch 69a40ca7 : compute-0 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Mar  1 04:54:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[170587]: 01/03/2026 09:54:00 : epoch 69a40ca7 : compute-0 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Mar  1 04:54:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[170587]: 01/03/2026 09:54:00 : epoch 69a40ca7 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Mar  1 04:54:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[170587]: 01/03/2026 09:54:00 : epoch 69a40ca7 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Mar  1 04:54:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[170587]: 01/03/2026 09:54:00 : epoch 69a40ca7 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Mar  1 04:54:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[170587]: 01/03/2026 09:54:00 : epoch 69a40ca7 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Mar  1 04:54:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[170587]: 01/03/2026 09:54:00 : epoch 69a40ca7 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Mar  1 04:54:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[170587]: 01/03/2026 09:54:00 : epoch 69a40ca7 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Mar  1 04:54:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[170587]: 01/03/2026 09:54:00 : epoch 69a40ca7 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Mar  1 04:54:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[170587]: 01/03/2026 09:54:00 : epoch 69a40ca7 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Mar  1 04:54:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[170587]: 01/03/2026 09:54:00 : epoch 69a40ca7 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Mar  1 04:54:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[170587]: 01/03/2026 09:54:00 : epoch 69a40ca7 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Mar  1 04:54:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[170587]: 01/03/2026 09:54:00 : epoch 69a40ca7 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Mar  1 04:54:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[170587]: 01/03/2026 09:54:00 : epoch 69a40ca7 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Mar  1 04:54:00 np0005634532 python3.9[174226]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtproxyd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Mar  1 04:54:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[170587]: 01/03/2026 09:54:00 : epoch 69a40ca7 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1110000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:54:00 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v338: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 597 B/s wr, 2 op/s
Mar  1 04:54:00 np0005634532 python3.9[174393]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtqemud.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Mar  1 04:54:00 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:54:00 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 04:54:00 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:54:00.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 04:54:00 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee1c45d0 =====
Mar  1 04:54:00 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee1c45d0 op status=0 http_status=200 latency=0.001000024s ======
Mar  1 04:54:00 np0005634532 radosgw[91037]: beast: 0x7f87ee1c45d0: 192.168.122.102 - anonymous [01/Mar/2026:09:54:00.775 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Mar  1 04:54:01 np0005634532 python3.9[174552]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtsecretd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Mar  1 04:54:01 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[170587]: 01/03/2026 09:54:01 : epoch 69a40ca7 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1104001ee0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:54:01 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[170587]: 01/03/2026 09:54:01 : epoch 69a40ca7 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f10ec000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:54:01 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 04:54:01 np0005634532 python3.9[174706]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtstoraged.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
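Between 04:53:58 and 04:54:01 the play issues seven systemctl reset-failed commands that differ only in the unit name, the signature of a looped task. A plausible reconstruction (the loop form and task name are assumptions; the unit list is copied verbatim from the logged commands):

    - name: Clear failed state of removed tripleo nova units  # name assumed
      ansible.builtin.command: /usr/bin/systemctl reset-failed {{ item }}
      loop:
        - tripleo_nova_libvirt.target
        - tripleo_nova_virtlogd_wrapper.service
        - tripleo_nova_virtnodedevd.service
        - tripleo_nova_virtproxyd.service
        - tripleo_nova_virtqemud.service
        - tripleo_nova_virtsecretd.service
        - tripleo_nova_virtstoraged.service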
Mar  1 04:54:02 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[170587]: 01/03/2026 09:54:02 : epoch 69a40ca7 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f10e0000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:54:02 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Mar  1 04:54:02 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Mar  1 04:54:02 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v339: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 596 B/s wr, 2 op/s
Mar  1 04:54:02 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:54:02 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:54:02 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:54:02.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:54:02 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:54:02 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 04:54:02 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:54:02.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 04:54:03 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[170587]: 01/03/2026 09:54:03 : epoch 69a40ca7 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Mar  1 04:54:03 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[170587]: 01/03/2026 09:54:03 : epoch 69a40ca7 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Mar  1 04:54:03 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-haproxy-nfs-cephfs-compute-0-wdbjdw[99156]: [WARNING] 059/095403 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Mar  1 04:54:03 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[170587]: 01/03/2026 09:54:03 : epoch 69a40ca7 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f10e8000fa0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:54:03 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[170587]: 01/03/2026 09:54:03 : epoch 69a40ca7 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11040029e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:54:03 np0005634532 python3.9[174862]: ansible-ansible.builtin.getent Invoked with database=passwd key=libvirt fail_key=True service=None split=None
Mar  1 04:54:04 np0005634532 python3.9[175017]: ansible-ansible.builtin.group Invoked with gid=42473 name=libvirt state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Mar  1 04:54:04 np0005634532 rsyslogd[1019]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Mar  1 04:54:04 np0005634532 rsyslogd[1019]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Mar  1 04:54:04 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[170587]: 01/03/2026 09:54:04 : epoch 69a40ca7 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f10ec0016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:54:04 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v340: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 3.7 KiB/s rd, 1.3 KiB/s wr, 5 op/s
Mar  1 04:54:04 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:54:04 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:54:04 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:54:04.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:54:04 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:54:04 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:54:04 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:54:04.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:54:04 np0005634532 python3.9[175178]: ansible-ansible.builtin.user Invoked with comment=libvirt user group=libvirt groups=[''] name=libvirt shell=/sbin/nologin state=present uid=42473 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
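The getent, group, and user events at 04:54:03 to 04:54:04 pin the libvirt account to fixed IDs. A sketch of the three tasks (names assumed; IDs and settings taken from the logged arguments; note the log shows groups=[''], an empty supplementary-group entry that clears extra group membership):

    - name: Look up libvirt user  # task names assumed
      ansible.builtin.getent:
        database: passwd
        key: libvirt

    - name: Ensure libvirt group with fixed gid
      ansible.builtin.group:
        name: libvirt
        gid: 42473
        state: present

    - name: Ensure libvirt user with fixed uid
      ansible.builtin.user:
        name: libvirt
        uid: 42473
        group: libvirt
        groups: ['']          # as logged: clears supplementary groups
        comment: libvirt user
        shell: /sbin/nologin
        state: present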
Mar  1 04:54:05 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[170587]: 01/03/2026 09:54:05 : epoch 69a40ca7 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f10e00016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:54:05 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[170587]: 01/03/2026 09:54:05 : epoch 69a40ca7 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f10e8001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:54:05 np0005634532 python3.9[175339]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Mar  1 04:54:06 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[170587]: 01/03/2026 09:54:06 : epoch 69a40ca7 : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Mar  1 04:54:06 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 04:54:06 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[170587]: 01/03/2026 09:54:06 : epoch 69a40ca7 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11040029e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:54:06 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v341: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 3.2 KiB/s rd, 1.3 KiB/s wr, 4 op/s
Mar  1 04:54:06 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:54:06 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:54:06 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:54:06.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:54:06 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:54:06 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:54:06 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:54:06.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:54:06 np0005634532 python3.9[175426]: ansible-ansible.legacy.dnf Invoked with name=['libvirt ', 'libvirt-admin ', 'libvirt-client ', 'libvirt-daemon ', 'qemu-kvm', 'qemu-img', 'libguestfs', 'libseccomp', 'swtpm', 'swtpm-tools', 'edk2-ovmf', 'ceph-common', 'cyrus-sasl-scram'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
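The dnf call above (preceded at 04:54:05 by an ansible.legacy.setup probe gathering only ansible_pkg_mgr) installs the virtualization and ceph client stack. Note that the first four logged package names carry trailing spaces ('libvirt ', 'libvirt-admin ', 'libvirt-client ', 'libvirt-daemon '), an untidy quirk in the playbook's list; they are written clean in this sketch (task name assumed):

    - name: Install libvirt, qemu and ceph client packages  # name assumed
      ansible.builtin.dnf:
        state: present
        name:
          - libvirt
          - libvirt-admin
          - libvirt-client
          - libvirt-daemon
          - qemu-kvm
          - qemu-img
          - libguestfs
          - libseccomp
          - swtpm
          - swtpm-tools
          - edk2-ovmf
          - ceph-common
          - cyrus-sasl-scram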
Mar  1 04:54:07 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T09:54:07.021Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Mar  1 04:54:07 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T09:54:07.022Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Mar  1 04:54:07 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: ::ffff:192.168.122.100 - - [01/Mar/2026:09:54:07] "GET /metrics HTTP/1.1" 200 48347 "" "Prometheus/2.51.0"
Mar  1 04:54:07 np0005634532 ceph-mgr[76134]: [prometheus INFO cherrypy.access.140604152982112] ::ffff:192.168.122.100 - - [01/Mar/2026:09:54:07] "GET /metrics HTTP/1.1" 200 48347 "" "Prometheus/2.51.0"
Mar  1 04:54:07 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[170587]: 01/03/2026 09:54:07 : epoch 69a40ca7 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11040029e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:54:07 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[170587]: 01/03/2026 09:54:07 : epoch 69a40ca7 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f10e00016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:54:08 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[170587]: 01/03/2026 09:54:08 : epoch 69a40ca7 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f10e8001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:54:08 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-haproxy-nfs-cephfs-compute-0-wdbjdw[99156]: [WARNING] 059/095408 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Mar  1 04:54:08 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v342: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 3.3 KiB/s rd, 1.3 KiB/s wr, 5 op/s
Mar  1 04:54:08 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:54:08 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:54:08 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:54:08.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:54:08 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:54:08 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:54:08 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:54:08.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:54:09 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[170587]: 01/03/2026 09:54:09 : epoch 69a40ca7 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f10ec001fc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:54:09 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[170587]: 01/03/2026 09:54:09 : epoch 69a40ca7 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11040029e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:54:10 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[170587]: 01/03/2026 09:54:10 : epoch 69a40ca7 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f10e00016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:54:10 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v343: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 767 B/s wr, 3 op/s
Mar  1 04:54:10 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:54:10 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:54:10 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:54:10.785 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:54:10 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:54:10 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:54:10 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:54:10.789 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:54:11 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[170587]: 01/03/2026 09:54:11 : epoch 69a40ca7 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f10e8001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:54:11 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[170587]: 01/03/2026 09:54:11 : epoch 69a40ca7 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f10ec001fc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:54:11 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 04:54:12 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[170587]: 01/03/2026 09:54:12 : epoch 69a40ca7 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11040029e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:54:12 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v344: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 767 B/s wr, 3 op/s
Mar  1 04:54:12 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:54:12 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:54:12 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:54:12.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:54:12 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:54:12 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:54:12 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:54:12.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:54:13 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[170587]: 01/03/2026 09:54:13 : epoch 69a40ca7 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f10e0002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:54:13 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[170587]: 01/03/2026 09:54:13 : epoch 69a40ca7 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f10e8002f50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:54:14 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[170587]: 01/03/2026 09:54:14 : epoch 69a40ca7 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f10ec001fc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:54:14 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v345: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 767 B/s wr, 3 op/s
Mar  1 04:54:14 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:54:14 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 04:54:14 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:54:14.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 04:54:14 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:54:14 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 04:54:14 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:54:14.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 04:54:15 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[170587]: 01/03/2026 09:54:15 : epoch 69a40ca7 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11040029e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:54:15 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[170587]: 01/03/2026 09:54:15 : epoch 69a40ca7 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f10e0002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:54:16 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 04:54:16 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[170587]: 01/03/2026 09:54:16 : epoch 69a40ca7 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f10e8002f50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:54:16 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v346: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 B/s wr, 0 op/s
Mar  1 04:54:16 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:54:16 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:54:16 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:54:16.791 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:54:16 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:54:16 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:54:16 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:54:16.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:54:17 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T09:54:17.023Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Mar  1 04:54:17 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: ::ffff:192.168.122.100 - - [01/Mar/2026:09:54:17] "GET /metrics HTTP/1.1" 200 48347 "" "Prometheus/2.51.0"
Mar  1 04:54:17 np0005634532 ceph-mgr[76134]: [prometheus INFO cherrypy.access.140604152982112] ::ffff:192.168.122.100 - - [01/Mar/2026:09:54:17] "GET /metrics HTTP/1.1" 200 48347 "" "Prometheus/2.51.0"
Mar  1 04:54:17 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[170587]: 01/03/2026 09:54:17 : epoch 69a40ca7 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f10ec0032f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:54:17 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[170587]: 01/03/2026 09:54:17 : epoch 69a40ca7 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11040029e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:54:17 np0005634532 podman[175647]: 2026-03-01 09:54:17.438831494 +0000 UTC m=+0.122362050 container health_status eba5e7c5bf1cb0694498c3b5d0f9b901dec36f2482ec40aef98fed1e15d9f18d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:bf1f44b6bfa655654f556ae3adca2582892e72550d916a5b0a8dbacb0e210bbe, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '08b9467e1b7e95537191fb2fa6825d176e65d7d6be048e9a0925db064969bbde-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:bf1f44b6bfa655654f556ae3adca2582892e72550d916a5b0a8dbacb0e210bbe', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image)
Mar  1 04:54:17 np0005634532 ceph-mgr[76134]: [balancer INFO root] Optimize plan auto_2026-03-01_09:54:17
Mar  1 04:54:17 np0005634532 ceph-mgr[76134]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Mar  1 04:54:17 np0005634532 ceph-mgr[76134]: [balancer INFO root] do_upmap
Mar  1 04:54:17 np0005634532 ceph-mgr[76134]: [balancer INFO root] pools ['cephfs.cephfs.data', '.nfs', 'images', '.rgw.root', 'default.rgw.log', 'volumes', 'vms', 'default.rgw.control', 'backups', 'cephfs.cephfs.meta', '.mgr', 'default.rgw.meta']
Mar  1 04:54:17 np0005634532 ceph-mgr[76134]: [balancer INFO root] prepared 0/10 upmap changes
Mar  1 04:54:17 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Mar  1 04:54:17 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Mar  1 04:54:17 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Mar  1 04:54:17 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Mar  1 04:54:17 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Mar  1 04:54:17 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Mar  1 04:54:17 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Mar  1 04:54:17 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Mar  1 04:54:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] _maybe_adjust
Mar  1 04:54:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 04:54:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Mar  1 04:54:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 04:54:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Mar  1 04:54:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 04:54:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Mar  1 04:54:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 04:54:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Mar  1 04:54:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 04:54:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Mar  1 04:54:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 04:54:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Mar  1 04:54:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 04:54:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Mar  1 04:54:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 04:54:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Mar  1 04:54:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 04:54:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Mar  1 04:54:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 04:54:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Mar  1 04:54:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 04:54:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Mar  1 04:54:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 04:54:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
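The pg_autoscaler figures above are mutually consistent: each logged pg target equals usage_ratio × bias × 300, where 300 is the cluster-wide PG budget (plausibly mon_target_pg_per_osd = 100 across 3 OSDs; that split is an assumption, but the factor 300 follows from the logged numbers themselves). Worked from the values above:

    .mgr:               7.185749983720779e-06 × 1.0 × 300 = 0.0021557249951  -> quantized to 1
    cephfs.cephfs.meta: 5.087256625643029e-07 × 4.0 × 300 = 0.0006104707951  -> quantized to 16
    default.rgw.meta:   1.2718141564107572e-07 × 4.0 × 300 = 0.0001526176988 -> quantized to 32

With every pool far below one PG of usage, the autoscaler proposes no changes, matching the balancer's "prepared 0/10 upmap changes" a few lines earlier.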
Mar  1 04:54:17 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Mar  1 04:54:17 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: vms, start_after=
Mar  1 04:54:17 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: volumes, start_after=
Mar  1 04:54:17 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: backups, start_after=
Mar  1 04:54:17 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: images, start_after=
Mar  1 04:54:17 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Mar  1 04:54:17 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: vms, start_after=
Mar  1 04:54:17 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: volumes, start_after=
Mar  1 04:54:17 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: backups, start_after=
Mar  1 04:54:17 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: images, start_after=
Mar  1 04:54:18 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[170587]: 01/03/2026 09:54:18 : epoch 69a40ca7 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f10e0002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:54:18 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v347: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Mar  1 04:54:18 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:54:18 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 04:54:18 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:54:18.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 04:54:18 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:54:18 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:54:18 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:54:18.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:54:19 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[170587]: 01/03/2026 09:54:19 : epoch 69a40ca7 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f10e8002f50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:54:19 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[170587]: 01/03/2026 09:54:19 : epoch 69a40ca7 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f10ec0032f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:54:20 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[170587]: 01/03/2026 09:54:20 : epoch 69a40ca7 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11040029e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:54:20 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v348: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Mar  1 04:54:20 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:54:20 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 04:54:20 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:54:20.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 04:54:20 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:54:20 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 04:54:20 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:54:20.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 04:54:21 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[170587]: 01/03/2026 09:54:21 : epoch 69a40ca7 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f10e0003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:54:21 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[170587]: 01/03/2026 09:54:21 : epoch 69a40ca7 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f10e8004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:54:21 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 04:54:22 np0005634532 podman[175685]: 2026-03-01 09:54:22.353760283 +0000 UTC m=+0.048943443 container health_status 1df4dec302d8b40bce93714986cfc892519e8a31436399020d0034ae17ac64d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a5baae9e21dffd42c66f546b0b62f95c6af9f409d05e01e7483de42d5dd2a382, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20260223, container_name=ovn_metadata_agent, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '08b9467e1b7e95537191fb2fa6825d176e65d7d6be048e9a0925db064969bbde-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a5baae9e21dffd42c66f546b0b62f95c6af9f409d05e01e7483de42d5dd2a382', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2)
Mar  1 04:54:22 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[170587]: 01/03/2026 09:54:22 : epoch 69a40ca7 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f10ec004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:54:22 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v349: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Mar  1 04:54:22 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:54:22 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 04:54:22 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:54:22.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 04:54:22 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:54:22 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:54:22 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:54:22.807 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:54:23 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[170587]: 01/03/2026 09:54:23 : epoch 69a40ca7 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11040029e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:54:23 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[170587]: 01/03/2026 09:54:23 : epoch 69a40ca7 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f10e0003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:54:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:54:23.866 167541 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Mar  1 04:54:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:54:23.867 167541 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Mar  1 04:54:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:54:23.867 167541 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Mar  1 04:54:24 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[170587]: 01/03/2026 09:54:24 : epoch 69a40ca7 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f10e8004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:54:24 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v350: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Mar  1 04:54:24 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:54:24 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 04:54:24 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:54:24.799 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 04:54:24 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:54:24 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:54:24 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:54:24.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:54:25 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[170587]: 01/03/2026 09:54:25 : epoch 69a40ca7 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f10ec004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:54:25 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[170587]: 01/03/2026 09:54:25 : epoch 69a40ca7 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11040029e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:54:26 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 04:54:26 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[170587]: 01/03/2026 09:54:26 : epoch 69a40ca7 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f10e0003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:54:26 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v351: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Mar  1 04:54:26 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:54:26 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 04:54:26 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:54:26.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 04:54:26 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:54:26 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:54:26 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:54:26.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:54:27 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T09:54:27.024Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Mar  1 04:54:27 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T09:54:27.025Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Mar  1 04:54:27 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: ::ffff:192.168.122.100 - - [01/Mar/2026:09:54:27] "GET /metrics HTTP/1.1" 200 48343 "" "Prometheus/2.51.0"
Mar  1 04:54:27 np0005634532 ceph-mgr[76134]: [prometheus INFO cherrypy.access.140604152982112] ::ffff:192.168.122.100 - - [01/Mar/2026:09:54:27] "GET /metrics HTTP/1.1" 200 48343 "" "Prometheus/2.51.0"
Mar  1 04:54:27 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[170587]: 01/03/2026 09:54:27 : epoch 69a40ca7 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f10e8004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:54:27 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[170587]: 01/03/2026 09:54:27 : epoch 69a40ca7 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f10ec004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:54:28 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[170587]: 01/03/2026 09:54:28 : epoch 69a40ca7 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11040029e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:54:28 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-haproxy-nfs-cephfs-compute-0-wdbjdw[99156]: [WARNING] 059/095428 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Mar  1 04:54:28 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v352: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 425 B/s rd, 0 op/s
Mar  1 04:54:28 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:54:28 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 04:54:28 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:54:28.803 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 04:54:28 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:54:28 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000026s ======
Mar  1 04:54:28 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:54:28.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Mar  1 04:54:29 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[170587]: 01/03/2026 09:54:29 : epoch 69a40ca7 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f10e0003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:54:29 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[170587]: 01/03/2026 09:54:29 : epoch 69a40ca7 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f10e8004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:54:30 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[170587]: 01/03/2026 09:54:30 : epoch 69a40ca7 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f10ec004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:54:30 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v353: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Mar  1 04:54:30 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:54:30 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 04:54:30 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:54:30.805 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 04:54:30 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:54:30 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 04:54:30 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:54:30.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 04:54:31 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[170587]: 01/03/2026 09:54:31 : epoch 69a40ca7 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f11040029e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:54:31 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[170587]: 01/03/2026 09:54:31 : epoch 69a40ca7 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f10e0003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:54:31 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 04:54:32 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Mar  1 04:54:32 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Mar  1 04:54:32 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[170587]: 01/03/2026 09:54:32 : epoch 69a40ca7 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f10e8004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:54:32 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v354: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Mar  1 04:54:32 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:54:32 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:54:32 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:54:32.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:54:32 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:54:32 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:54:32 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:54:32.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:54:33 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[170587]: 01/03/2026 09:54:33 : epoch 69a40ca7 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f10ec004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:54:33 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[170587]: 01/03/2026 09:54:33 : epoch 69a40ca7 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f10e4000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:54:34 np0005634532 kernel: SELinux:  Converting 2785 SID table entries...
Mar  1 04:54:34 np0005634532 kernel: SELinux:  policy capability network_peer_controls=1
Mar  1 04:54:34 np0005634532 kernel: SELinux:  policy capability open_perms=1
Mar  1 04:54:34 np0005634532 kernel: SELinux:  policy capability extended_socket_class=1
Mar  1 04:54:34 np0005634532 kernel: SELinux:  policy capability always_check_network=0
Mar  1 04:54:34 np0005634532 kernel: SELinux:  policy capability cgroup_seclabel=1
Mar  1 04:54:34 np0005634532 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Mar  1 04:54:34 np0005634532 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Mar  1 04:54:34 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[170587]: 01/03/2026 09:54:34 : epoch 69a40ca7 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f10e0003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:54:34 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v355: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Mar  1 04:54:34 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:54:34 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 04:54:34 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:54:34.809 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 04:54:34 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:54:34 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:54:34 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:54:34.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:54:35 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[170587]: 01/03/2026 09:54:35 : epoch 69a40ca7 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f10e0003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:54:35 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[170587]: 01/03/2026 09:54:35 : epoch 69a40ca7 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f10ec004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:54:36 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 04:54:36 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[170587]: 01/03/2026 09:54:36 : epoch 69a40ca7 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f10e40016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:54:36 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v356: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Mar  1 04:54:36 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:54:36 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 04:54:36 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:54:36.811 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 04:54:36 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:54:36 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:54:36 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:54:36.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:54:37 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T09:54:37.026Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Mar  1 04:54:37 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: ::ffff:192.168.122.100 - - [01/Mar/2026:09:54:37] "GET /metrics HTTP/1.1" 200 48343 "" "Prometheus/2.51.0"
Mar  1 04:54:37 np0005634532 ceph-mgr[76134]: [prometheus INFO cherrypy.access.140604152982112] ::ffff:192.168.122.100 - - [01/Mar/2026:09:54:37] "GET /metrics HTTP/1.1" 200 48343 "" "Prometheus/2.51.0"
Mar  1 04:54:37 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[170587]: 01/03/2026 09:54:37 : epoch 69a40ca7 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f10e0003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:54:37 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[170587]: 01/03/2026 09:54:37 : epoch 69a40ca7 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Mar  1 04:54:37 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[170587]: 01/03/2026 09:54:37 : epoch 69a40ca7 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f10e0003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:54:38 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[170587]: 01/03/2026 09:54:38 : epoch 69a40ca7 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f10ec004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:54:38 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v357: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 596 B/s wr, 2 op/s
Mar  1 04:54:38 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:54:38 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 04:54:38 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:54:38.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 04:54:38 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:54:38 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:54:38 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:54:38.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:54:39 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[170587]: 01/03/2026 09:54:39 : epoch 69a40ca7 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f10e40016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:54:39 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[170587]: 01/03/2026 09:54:39 : epoch 69a40ca7 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f10e0003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:54:40 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[170587]: 01/03/2026 09:54:40 : epoch 69a40ca7 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Mar  1 04:54:40 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[170587]: 01/03/2026 09:54:40 : epoch 69a40ca7 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Mar  1 04:54:40 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[170587]: 01/03/2026 09:54:40 : epoch 69a40ca7 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Mar  1 04:54:40 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[170587]: 01/03/2026 09:54:40 : epoch 69a40ca7 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f10e0003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:54:40 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v358: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 596 B/s wr, 1 op/s
Mar  1 04:54:40 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:54:40 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:54:40 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:54:40.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:54:40 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:54:40 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 04:54:40 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:54:40.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 04:54:41 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[170587]: 01/03/2026 09:54:41 : epoch 69a40ca7 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f10ec004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:54:41 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[170587]: 01/03/2026 09:54:41 : epoch 69a40ca7 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f10e40016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:54:41 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 04:54:42 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[170587]: 01/03/2026 09:54:42 : epoch 69a40ca7 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f10e0003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:54:42 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v359: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 596 B/s wr, 1 op/s
Mar  1 04:54:42 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:54:42 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:54:42 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:54:42.818 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:54:42 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:54:42 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 04:54:42 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:54:42.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 04:54:43 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[170587]: 01/03/2026 09:54:43 : epoch 69a40ca7 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f10e0003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:54:43 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[170587]: 01/03/2026 09:54:43 : epoch 69a40ca7 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f10ec004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:54:43 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[170587]: 01/03/2026 09:54:43 : epoch 69a40ca7 : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Mar  1 04:54:43 np0005634532 dbus-broker-launch[823]: avc:  op=load_policy lsm=selinux seqno=10 res=1
Mar  1 04:54:44 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Mar  1 04:54:44 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Mar  1 04:54:44 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Mar  1 04:54:44 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Mar  1 04:54:44 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Mar  1 04:54:44 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 04:54:44 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Mar  1 04:54:44 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[170587]: 01/03/2026 09:54:44 : epoch 69a40ca7 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f10e4002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:54:44 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v360: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.9 KiB/s rd, 1023 B/s wr, 4 op/s
Mar  1 04:54:44 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 04:54:44 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Mar  1 04:54:44 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Mar  1 04:54:44 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Mar  1 04:54:44 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Mar  1 04:54:44 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Mar  1 04:54:44 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Mar  1 04:54:44 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:54:44 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:54:44 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:54:44.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:54:44 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:54:44 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:54:44 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:54:44.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:54:44 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Mar  1 04:54:44 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 04:54:44 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 04:54:44 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Mar  1 04:54:44 np0005634532 kernel: SELinux:  Converting 2785 SID table entries...
Mar  1 04:54:44 np0005634532 kernel: SELinux:  policy capability network_peer_controls=1
Mar  1 04:54:44 np0005634532 kernel: SELinux:  policy capability open_perms=1
Mar  1 04:54:44 np0005634532 kernel: SELinux:  policy capability extended_socket_class=1
Mar  1 04:54:44 np0005634532 kernel: SELinux:  policy capability always_check_network=0
Mar  1 04:54:44 np0005634532 kernel: SELinux:  policy capability cgroup_seclabel=1
Mar  1 04:54:44 np0005634532 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Mar  1 04:54:44 np0005634532 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Mar  1 04:54:45 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[170587]: 01/03/2026 09:54:45 : epoch 69a40ca7 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f10e0003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:54:45 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[170587]: 01/03/2026 09:54:45 : epoch 69a40ca7 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f10e8004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:54:45 np0005634532 podman[175952]: 2026-03-01 09:54:45.402452775 +0000 UTC m=+0.065652160 container create 47c2507984c7ec9862a8e7ae377e9c6d8b8a37801696b5c69cd59caad8fc1aa0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_buck, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Mar  1 04:54:45 np0005634532 dbus-broker-launch[823]: avc:  op=load_policy lsm=selinux seqno=11 res=1
Mar  1 04:54:45 np0005634532 systemd[1]: Started libpod-conmon-47c2507984c7ec9862a8e7ae377e9c6d8b8a37801696b5c69cd59caad8fc1aa0.scope.
Mar  1 04:54:45 np0005634532 podman[175952]: 2026-03-01 09:54:45.37836871 +0000 UTC m=+0.041568105 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Mar  1 04:54:45 np0005634532 systemd[1]: Started libcrun container.
Mar  1 04:54:45 np0005634532 podman[175952]: 2026-03-01 09:54:45.511872192 +0000 UTC m=+0.175071567 container init 47c2507984c7ec9862a8e7ae377e9c6d8b8a37801696b5c69cd59caad8fc1aa0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_buck, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Mar  1 04:54:45 np0005634532 podman[175952]: 2026-03-01 09:54:45.521502304 +0000 UTC m=+0.184701669 container start 47c2507984c7ec9862a8e7ae377e9c6d8b8a37801696b5c69cd59caad8fc1aa0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_buck, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Mar  1 04:54:45 np0005634532 podman[175952]: 2026-03-01 09:54:45.526174541 +0000 UTC m=+0.189373916 container attach 47c2507984c7ec9862a8e7ae377e9c6d8b8a37801696b5c69cd59caad8fc1aa0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_buck, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Mar  1 04:54:45 np0005634532 inspiring_buck[175968]: 167 167
Mar  1 04:54:45 np0005634532 systemd[1]: libpod-47c2507984c7ec9862a8e7ae377e9c6d8b8a37801696b5c69cd59caad8fc1aa0.scope: Deactivated successfully.
Mar  1 04:54:45 np0005634532 podman[175952]: 2026-03-01 09:54:45.530602463 +0000 UTC m=+0.193801858 container died 47c2507984c7ec9862a8e7ae377e9c6d8b8a37801696b5c69cd59caad8fc1aa0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_buck, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Mar  1 04:54:45 np0005634532 systemd[1]: var-lib-containers-storage-overlay-dec1fea60de33a7299964c30c2ec624111a627dfd0bcae1de2a80d0466fdbf4e-merged.mount: Deactivated successfully.
Mar  1 04:54:45 np0005634532 podman[175952]: 2026-03-01 09:54:45.58824701 +0000 UTC m=+0.251446365 container remove 47c2507984c7ec9862a8e7ae377e9c6d8b8a37801696b5c69cd59caad8fc1aa0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=inspiring_buck, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Mar  1 04:54:45 np0005634532 systemd[1]: libpod-conmon-47c2507984c7ec9862a8e7ae377e9c6d8b8a37801696b5c69cd59caad8fc1aa0.scope: Deactivated successfully.
Mar  1 04:54:45 np0005634532 podman[175992]: 2026-03-01 09:54:45.769613734 +0000 UTC m=+0.063182517 container create 08c43c063625c014abd6e10b3b82e0a2fd421e63fe50fd129b0377b95c47b76f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_hamilton, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Mar  1 04:54:45 np0005634532 systemd[1]: Started libpod-conmon-08c43c063625c014abd6e10b3b82e0a2fd421e63fe50fd129b0377b95c47b76f.scope.
Mar  1 04:54:45 np0005634532 podman[175992]: 2026-03-01 09:54:45.746912624 +0000 UTC m=+0.040481427 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Mar  1 04:54:45 np0005634532 systemd[1]: Started libcrun container.
Mar  1 04:54:45 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5cd2d4f34a808121ef9beae66cc9d0f7917ebb3952433f7a01f70e7b12441a10/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Mar  1 04:54:45 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5cd2d4f34a808121ef9beae66cc9d0f7917ebb3952433f7a01f70e7b12441a10/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Mar  1 04:54:45 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5cd2d4f34a808121ef9beae66cc9d0f7917ebb3952433f7a01f70e7b12441a10/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Mar  1 04:54:45 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5cd2d4f34a808121ef9beae66cc9d0f7917ebb3952433f7a01f70e7b12441a10/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Mar  1 04:54:45 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5cd2d4f34a808121ef9beae66cc9d0f7917ebb3952433f7a01f70e7b12441a10/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Mar  1 04:54:45 np0005634532 podman[175992]: 2026-03-01 09:54:45.87976959 +0000 UTC m=+0.173338383 container init 08c43c063625c014abd6e10b3b82e0a2fd421e63fe50fd129b0377b95c47b76f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_hamilton, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, OSD_FLAVOR=default)
Mar  1 04:54:45 np0005634532 podman[175992]: 2026-03-01 09:54:45.889519504 +0000 UTC m=+0.183088277 container start 08c43c063625c014abd6e10b3b82e0a2fd421e63fe50fd129b0377b95c47b76f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_hamilton, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Mar  1 04:54:45 np0005634532 podman[175992]: 2026-03-01 09:54:45.897754521 +0000 UTC m=+0.191323324 container attach 08c43c063625c014abd6e10b3b82e0a2fd421e63fe50fd129b0377b95c47b76f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_hamilton, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Mar  1 04:54:46 np0005634532 determined_hamilton[176011]: --> passed data devices: 0 physical, 1 LVM
Mar  1 04:54:46 np0005634532 determined_hamilton[176011]: --> All data devices are unavailable
Mar  1 04:54:46 np0005634532 systemd[1]: libpod-08c43c063625c014abd6e10b3b82e0a2fd421e63fe50fd129b0377b95c47b76f.scope: Deactivated successfully.
Mar  1 04:54:46 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 04:54:46 np0005634532 podman[176027]: 2026-03-01 09:54:46.283479167 +0000 UTC m=+0.038906208 container died 08c43c063625c014abd6e10b3b82e0a2fd421e63fe50fd129b0377b95c47b76f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_hamilton, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Mar  1 04:54:46 np0005634532 systemd[1]: var-lib-containers-storage-overlay-5cd2d4f34a808121ef9beae66cc9d0f7917ebb3952433f7a01f70e7b12441a10-merged.mount: Deactivated successfully.
Mar  1 04:54:46 np0005634532 podman[176027]: 2026-03-01 09:54:46.339177416 +0000 UTC m=+0.094604407 container remove 08c43c063625c014abd6e10b3b82e0a2fd421e63fe50fd129b0377b95c47b76f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_hamilton, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Mar  1 04:54:46 np0005634532 systemd[1]: libpod-conmon-08c43c063625c014abd6e10b3b82e0a2fd421e63fe50fd129b0377b95c47b76f.scope: Deactivated successfully.
Mar  1 04:54:46 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[170587]: 01/03/2026 09:54:46 : epoch 69a40ca7 : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f10ec004000 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:54:46 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v361: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.9 KiB/s rd, 1023 B/s wr, 4 op/s
Mar  1 04:54:46 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:54:46 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:54:46 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:54:46.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:54:46 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:54:46 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 04:54:46 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:54:46.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 04:54:46 np0005634532 podman[176133]: 2026-03-01 09:54:46.94743878 +0000 UTC m=+0.062618393 container create 98755ae3b111e564df2e98d1ee6d02e8c53eddbd312439ece3585ed4ce3e1026 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_germain, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Mar  1 04:54:46 np0005634532 systemd[1]: Started libpod-conmon-98755ae3b111e564df2e98d1ee6d02e8c53eddbd312439ece3585ed4ce3e1026.scope.
Mar  1 04:54:47 np0005634532 podman[176133]: 2026-03-01 09:54:46.918722069 +0000 UTC m=+0.033901702 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Mar  1 04:54:47 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T09:54:47.026Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Mar  1 04:54:47 np0005634532 systemd[1]: Started libcrun container.
Mar  1 04:54:47 np0005634532 podman[176133]: 2026-03-01 09:54:47.050617021 +0000 UTC m=+0.165796704 container init 98755ae3b111e564df2e98d1ee6d02e8c53eddbd312439ece3585ed4ce3e1026 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_germain, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Mar  1 04:54:47 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: ::ffff:192.168.122.100 - - [01/Mar/2026:09:54:47] "GET /metrics HTTP/1.1" 200 48343 "" "Prometheus/2.51.0"
Mar  1 04:54:47 np0005634532 ceph-mgr[76134]: [prometheus INFO cherrypy.access.140604152982112] ::ffff:192.168.122.100 - - [01/Mar/2026:09:54:47] "GET /metrics HTTP/1.1" 200 48343 "" "Prometheus/2.51.0"
Mar  1 04:54:47 np0005634532 podman[176133]: 2026-03-01 09:54:47.057686948 +0000 UTC m=+0.172866551 container start 98755ae3b111e564df2e98d1ee6d02e8c53eddbd312439ece3585ed4ce3e1026 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_germain, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Mar  1 04:54:47 np0005634532 podman[176133]: 2026-03-01 09:54:47.061265358 +0000 UTC m=+0.176445051 container attach 98755ae3b111e564df2e98d1ee6d02e8c53eddbd312439ece3585ed4ce3e1026 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_germain, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Mar  1 04:54:47 np0005634532 objective_germain[176150]: 167 167
Mar  1 04:54:47 np0005634532 systemd[1]: libpod-98755ae3b111e564df2e98d1ee6d02e8c53eddbd312439ece3585ed4ce3e1026.scope: Deactivated successfully.
Mar  1 04:54:47 np0005634532 podman[176133]: 2026-03-01 09:54:47.064294804 +0000 UTC m=+0.179474407 container died 98755ae3b111e564df2e98d1ee6d02e8c53eddbd312439ece3585ed4ce3e1026 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_germain, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Mar  1 04:54:47 np0005634532 systemd[1]: var-lib-containers-storage-overlay-fbe8757b81d1993a55e51257793950db7c57fc5879ff222ec32b66a30d44e1a4-merged.mount: Deactivated successfully.
Mar  1 04:54:47 np0005634532 podman[176133]: 2026-03-01 09:54:47.105632992 +0000 UTC m=+0.220812595 container remove 98755ae3b111e564df2e98d1ee6d02e8c53eddbd312439ece3585ed4ce3e1026 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_germain, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Mar  1 04:54:47 np0005634532 systemd[1]: libpod-conmon-98755ae3b111e564df2e98d1ee6d02e8c53eddbd312439ece3585ed4ce3e1026.scope: Deactivated successfully.
Mar  1 04:54:47 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[170587]: 01/03/2026 09:54:47 : epoch 69a40ca7 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f10e4002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:54:47 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[170587]: 01/03/2026 09:54:47 : epoch 69a40ca7 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f10e0003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:54:47 np0005634532 podman[176173]: 2026-03-01 09:54:47.285608472 +0000 UTC m=+0.064695366 container create ef71c87d450f0eb56e9235408ecd5aa4fdc9938a794657277ad468f9ed125c70 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_euler, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Mar  1 04:54:47 np0005634532 systemd[1]: Started libpod-conmon-ef71c87d450f0eb56e9235408ecd5aa4fdc9938a794657277ad468f9ed125c70.scope.
Mar  1 04:54:47 np0005634532 podman[176173]: 2026-03-01 09:54:47.258520392 +0000 UTC m=+0.037607246 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Mar  1 04:54:47 np0005634532 systemd[1]: Started libcrun container.
Mar  1 04:54:47 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d7d1bf98830f7142eebb50ed9b67a53e7bf94801713c583891491556aaafc31/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Mar  1 04:54:47 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d7d1bf98830f7142eebb50ed9b67a53e7bf94801713c583891491556aaafc31/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Mar  1 04:54:47 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d7d1bf98830f7142eebb50ed9b67a53e7bf94801713c583891491556aaafc31/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Mar  1 04:54:47 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d7d1bf98830f7142eebb50ed9b67a53e7bf94801713c583891491556aaafc31/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Mar  1 04:54:47 np0005634532 podman[176173]: 2026-03-01 09:54:47.402402135 +0000 UTC m=+0.181489009 container init ef71c87d450f0eb56e9235408ecd5aa4fdc9938a794657277ad468f9ed125c70 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_euler, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Mar  1 04:54:47 np0005634532 podman[176173]: 2026-03-01 09:54:47.412922549 +0000 UTC m=+0.192009433 container start ef71c87d450f0eb56e9235408ecd5aa4fdc9938a794657277ad468f9ed125c70 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_euler, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Mar  1 04:54:47 np0005634532 podman[176173]: 2026-03-01 09:54:47.435538937 +0000 UTC m=+0.214625801 container attach ef71c87d450f0eb56e9235408ecd5aa4fdc9938a794657277ad468f9ed125c70 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_euler, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Mar  1 04:54:47 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Mar  1 04:54:47 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Mar  1 04:54:47 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Mar  1 04:54:47 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Mar  1 04:54:47 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Mar  1 04:54:47 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Mar  1 04:54:47 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Mar  1 04:54:47 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Mar  1 04:54:47 np0005634532 zen_euler[176189]: {
Mar  1 04:54:47 np0005634532 zen_euler[176189]:    "0": [
Mar  1 04:54:47 np0005634532 zen_euler[176189]:        {
Mar  1 04:54:47 np0005634532 zen_euler[176189]:            "devices": [
Mar  1 04:54:47 np0005634532 zen_euler[176189]:                "/dev/loop3"
Mar  1 04:54:47 np0005634532 zen_euler[176189]:            ],
Mar  1 04:54:47 np0005634532 zen_euler[176189]:            "lv_name": "ceph_lv0",
Mar  1 04:54:47 np0005634532 zen_euler[176189]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Mar  1 04:54:47 np0005634532 zen_euler[176189]:            "lv_size": "21470642176",
Mar  1 04:54:47 np0005634532 zen_euler[176189]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=0RN1y3-WDwf-vmRn-3Uec-9cdU-WGcX-8Z6LEG,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=437b1e74-f995-5d64-af1d-257ce01d77ab,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=e5da778e-73b7-4ea1-8a91-750fe3f6aa68,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Mar  1 04:54:47 np0005634532 zen_euler[176189]:            "lv_uuid": "0RN1y3-WDwf-vmRn-3Uec-9cdU-WGcX-8Z6LEG",
Mar  1 04:54:47 np0005634532 zen_euler[176189]:            "name": "ceph_lv0",
Mar  1 04:54:47 np0005634532 zen_euler[176189]:            "path": "/dev/ceph_vg0/ceph_lv0",
Mar  1 04:54:47 np0005634532 zen_euler[176189]:            "tags": {
Mar  1 04:54:47 np0005634532 zen_euler[176189]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Mar  1 04:54:47 np0005634532 zen_euler[176189]:                "ceph.block_uuid": "0RN1y3-WDwf-vmRn-3Uec-9cdU-WGcX-8Z6LEG",
Mar  1 04:54:47 np0005634532 zen_euler[176189]:                "ceph.cephx_lockbox_secret": "",
Mar  1 04:54:47 np0005634532 zen_euler[176189]:                "ceph.cluster_fsid": "437b1e74-f995-5d64-af1d-257ce01d77ab",
Mar  1 04:54:47 np0005634532 zen_euler[176189]:                "ceph.cluster_name": "ceph",
Mar  1 04:54:47 np0005634532 zen_euler[176189]:                "ceph.crush_device_class": "",
Mar  1 04:54:47 np0005634532 zen_euler[176189]:                "ceph.encrypted": "0",
Mar  1 04:54:47 np0005634532 zen_euler[176189]:                "ceph.osd_fsid": "e5da778e-73b7-4ea1-8a91-750fe3f6aa68",
Mar  1 04:54:47 np0005634532 zen_euler[176189]:                "ceph.osd_id": "0",
Mar  1 04:54:47 np0005634532 zen_euler[176189]:                "ceph.osdspec_affinity": "default_drive_group",
Mar  1 04:54:47 np0005634532 zen_euler[176189]:                "ceph.type": "block",
Mar  1 04:54:47 np0005634532 zen_euler[176189]:                "ceph.vdo": "0",
Mar  1 04:54:47 np0005634532 zen_euler[176189]:                "ceph.with_tpm": "0"
Mar  1 04:54:47 np0005634532 zen_euler[176189]:            },
Mar  1 04:54:47 np0005634532 zen_euler[176189]:            "type": "block",
Mar  1 04:54:47 np0005634532 zen_euler[176189]:            "vg_name": "ceph_vg0"
Mar  1 04:54:47 np0005634532 zen_euler[176189]:        }
Mar  1 04:54:47 np0005634532 zen_euler[176189]:    ]
Mar  1 04:54:47 np0005634532 zen_euler[176189]: }
Mar  1 04:54:47 np0005634532 systemd[1]: libpod-ef71c87d450f0eb56e9235408ecd5aa4fdc9938a794657277ad468f9ed125c70.scope: Deactivated successfully.
Mar  1 04:54:47 np0005634532 podman[176173]: 2026-03-01 09:54:47.719831206 +0000 UTC m=+0.498918060 container died ef71c87d450f0eb56e9235408ecd5aa4fdc9938a794657277ad468f9ed125c70 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_euler, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Mar  1 04:54:47 np0005634532 systemd[1]: var-lib-containers-storage-overlay-6d7d1bf98830f7142eebb50ed9b67a53e7bf94801713c583891491556aaafc31-merged.mount: Deactivated successfully.
Mar  1 04:54:47 np0005634532 podman[176173]: 2026-03-01 09:54:47.775693468 +0000 UTC m=+0.554780322 container remove ef71c87d450f0eb56e9235408ecd5aa4fdc9938a794657277ad468f9ed125c70 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_euler, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Mar  1 04:54:47 np0005634532 systemd[1]: libpod-conmon-ef71c87d450f0eb56e9235408ecd5aa4fdc9938a794657277ad468f9ed125c70.scope: Deactivated successfully.
Mar  1 04:54:47 np0005634532 podman[176200]: 2026-03-01 09:54:47.897424445 +0000 UTC m=+0.139945825 container health_status eba5e7c5bf1cb0694498c3b5d0f9b901dec36f2482ec40aef98fed1e15d9f18d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:bf1f44b6bfa655654f556ae3adca2582892e72550d916a5b0a8dbacb0e210bbe, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '08b9467e1b7e95537191fb2fa6825d176e65d7d6be048e9a0925db064969bbde-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:bf1f44b6bfa655654f556ae3adca2582892e72550d916a5b0a8dbacb0e210bbe', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.43.0)
Mar  1 04:54:48 np0005634532 podman[176334]: 2026-03-01 09:54:48.385552913 +0000 UTC m=+0.050340635 container create 089cbb43f079360854e6256ed4ef0e8d5b2a12e8769b3fcdd4c5863ef3a87830 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_wescoff, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Mar  1 04:54:48 np0005634532 systemd[1]: Started libpod-conmon-089cbb43f079360854e6256ed4ef0e8d5b2a12e8769b3fcdd4c5863ef3a87830.scope.
Mar  1 04:54:48 np0005634532 podman[176334]: 2026-03-01 09:54:48.359366415 +0000 UTC m=+0.024154107 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Mar  1 04:54:48 np0005634532 systemd[1]: Started libcrun container.
Mar  1 04:54:48 np0005634532 podman[176334]: 2026-03-01 09:54:48.481750478 +0000 UTC m=+0.146538210 container init 089cbb43f079360854e6256ed4ef0e8d5b2a12e8769b3fcdd4c5863ef3a87830 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_wescoff, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Mar  1 04:54:48 np0005634532 podman[176334]: 2026-03-01 09:54:48.492296533 +0000 UTC m=+0.157084215 container start 089cbb43f079360854e6256ed4ef0e8d5b2a12e8769b3fcdd4c5863ef3a87830 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_wescoff, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Mar  1 04:54:48 np0005634532 great_wescoff[176350]: 167 167
Mar  1 04:54:48 np0005634532 systemd[1]: libpod-089cbb43f079360854e6256ed4ef0e8d5b2a12e8769b3fcdd4c5863ef3a87830.scope: Deactivated successfully.
Mar  1 04:54:48 np0005634532 podman[176334]: 2026-03-01 09:54:48.501239598 +0000 UTC m=+0.166027330 container attach 089cbb43f079360854e6256ed4ef0e8d5b2a12e8769b3fcdd4c5863ef3a87830 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_wescoff, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid)
Mar  1 04:54:48 np0005634532 podman[176334]: 2026-03-01 09:54:48.501668438 +0000 UTC m=+0.166456110 container died 089cbb43f079360854e6256ed4ef0e8d5b2a12e8769b3fcdd4c5863ef3a87830 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_wescoff, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Mar  1 04:54:48 np0005634532 systemd[1]: var-lib-containers-storage-overlay-6675765699af4199606d0f73c14cb7eb1961600bd7ca529a3cbb3bed4cb42019-merged.mount: Deactivated successfully.
Mar  1 04:54:48 np0005634532 podman[176334]: 2026-03-01 09:54:48.550500315 +0000 UTC m=+0.215287987 container remove 089cbb43f079360854e6256ed4ef0e8d5b2a12e8769b3fcdd4c5863ef3a87830 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_wescoff, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default)
Mar  1 04:54:48 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[170587]: 01/03/2026 09:54:48 : epoch 69a40ca7 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f10e8004050 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:54:48 np0005634532 systemd[1]: libpod-conmon-089cbb43f079360854e6256ed4ef0e8d5b2a12e8769b3fcdd4c5863ef3a87830.scope: Deactivated successfully.
Mar  1 04:54:48 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-haproxy-nfs-cephfs-compute-0-wdbjdw[99156]: [WARNING] 059/095448 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Mar  1 04:54:48 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v362: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 3.0 KiB/s rd, 1023 B/s wr, 4 op/s
Mar  1 04:54:48 np0005634532 podman[176374]: 2026-03-01 09:54:48.69606896 +0000 UTC m=+0.063082695 container create f62f241abcba4dc1482d6048a89ba7851f35aa2abc5f28dd421c0ba3ccbff084 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_hertz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Mar  1 04:54:48 np0005634532 systemd[1]: Started libpod-conmon-f62f241abcba4dc1482d6048a89ba7851f35aa2abc5f28dd421c0ba3ccbff084.scope.
Mar  1 04:54:48 np0005634532 systemd[1]: Started libcrun container.
Mar  1 04:54:48 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d643188b5d8808a3f26ca958c7870d5ebbc8217a850225433c4063e2ed46b883/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Mar  1 04:54:48 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d643188b5d8808a3f26ca958c7870d5ebbc8217a850225433c4063e2ed46b883/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Mar  1 04:54:48 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d643188b5d8808a3f26ca958c7870d5ebbc8217a850225433c4063e2ed46b883/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Mar  1 04:54:48 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d643188b5d8808a3f26ca958c7870d5ebbc8217a850225433c4063e2ed46b883/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Mar  1 04:54:48 np0005634532 podman[176374]: 2026-03-01 09:54:48.672701943 +0000 UTC m=+0.039715718 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Mar  1 04:54:48 np0005634532 podman[176374]: 2026-03-01 09:54:48.780686325 +0000 UTC m=+0.147700090 container init f62f241abcba4dc1482d6048a89ba7851f35aa2abc5f28dd421c0ba3ccbff084 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_hertz, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Mar  1 04:54:48 np0005634532 podman[176374]: 2026-03-01 09:54:48.78607698 +0000 UTC m=+0.153090725 container start f62f241abcba4dc1482d6048a89ba7851f35aa2abc5f28dd421c0ba3ccbff084 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_hertz, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Mar  1 04:54:48 np0005634532 podman[176374]: 2026-03-01 09:54:48.789768703 +0000 UTC m=+0.156782458 container attach f62f241abcba4dc1482d6048a89ba7851f35aa2abc5f28dd421c0ba3ccbff084 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_hertz, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Mar  1 04:54:48 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:54:48 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 04:54:48 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:54:48.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 04:54:48 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:54:48 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:54:48 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:54:48.848 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:54:49 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[170587]: 01/03/2026 09:54:49 : epoch 69a40ca7 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f10e8004050 fd 38 proxy ignored for local
Mar  1 04:54:49 np0005634532 kernel: ganesha.nfsd[175749]: segfault at 50 ip 00007f1193bb032e sp 00007f10f77fd210 error 4 in libntirpc.so.5.8[7f1193b95000+2c000] likely on CPU 2 (core 0, socket 2)
Mar  1 04:54:49 np0005634532 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Mar  1 04:54:49 np0005634532 systemd[1]: Started Process Core Dump (PID 176437/UID 0).
Mar  1 04:54:49 np0005634532 lvm[176467]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Mar  1 04:54:49 np0005634532 lvm[176467]: VG ceph_vg0 finished
Mar  1 04:54:49 np0005634532 jolly_hertz[176391]: {}
Mar  1 04:54:49 np0005634532 systemd[1]: libpod-f62f241abcba4dc1482d6048a89ba7851f35aa2abc5f28dd421c0ba3ccbff084.scope: Deactivated successfully.
Mar  1 04:54:49 np0005634532 systemd[1]: libpod-f62f241abcba4dc1482d6048a89ba7851f35aa2abc5f28dd421c0ba3ccbff084.scope: Consumed 1.095s CPU time.
Mar  1 04:54:49 np0005634532 podman[176374]: 2026-03-01 09:54:49.877490486 +0000 UTC m=+1.244504221 container died f62f241abcba4dc1482d6048a89ba7851f35aa2abc5f28dd421c0ba3ccbff084 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_hertz, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Mar  1 04:54:49 np0005634532 systemd[1]: var-lib-containers-storage-overlay-d643188b5d8808a3f26ca958c7870d5ebbc8217a850225433c4063e2ed46b883-merged.mount: Deactivated successfully.
Mar  1 04:54:49 np0005634532 podman[176374]: 2026-03-01 09:54:49.950471128 +0000 UTC m=+1.317484873 container remove f62f241abcba4dc1482d6048a89ba7851f35aa2abc5f28dd421c0ba3ccbff084 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_hertz, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Mar  1 04:54:49 np0005634532 systemd[1]: libpod-conmon-f62f241abcba4dc1482d6048a89ba7851f35aa2abc5f28dd421c0ba3ccbff084.scope: Deactivated successfully.
Mar  1 04:54:50 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Mar  1 04:54:50 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 04:54:50 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Mar  1 04:54:50 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 04:54:50 np0005634532 systemd-coredump[176441]: Process 170597 (ganesha.nfsd) of user 0 dumped core.
Mar  1 04:54:50 np0005634532 systemd-coredump[176441]: Stack trace of thread 58:
Mar  1 04:54:50 np0005634532 systemd-coredump[176441]: #0  0x00007f1193bb032e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
Mar  1 04:54:50 np0005634532 systemd-coredump[176441]: ELF object binary architecture: AMD x86-64
Mar  1 04:54:50 np0005634532 systemd[1]: systemd-coredump@6-176437-0.service: Deactivated successfully.
Mar  1 04:54:50 np0005634532 systemd[1]: systemd-coredump@6-176437-0.service: Consumed 1.077s CPU time.
Mar  1 04:54:50 np0005634532 podman[176514]: 2026-03-01 09:54:50.50286684 +0000 UTC m=+0.049055093 container died 2763790c977ef6a803af28f5fd3d9ad157ba86f5334d7933623a1e3de6a9433b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Mar  1 04:54:50 np0005634532 systemd[1]: var-lib-containers-storage-overlay-d0d72e5b884de398cf35c05c1130f85712df884ecccb01c9f7574fc3bf53d688-merged.mount: Deactivated successfully.
Mar  1 04:54:50 np0005634532 podman[176514]: 2026-03-01 09:54:50.558290361 +0000 UTC m=+0.104478624 container remove 2763790c977ef6a803af28f5fd3d9ad157ba86f5334d7933623a1e3de6a9433b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Mar  1 04:54:50 np0005634532 systemd[1]: ceph-437b1e74-f995-5d64-af1d-257ce01d77ab@nfs.cephfs.2.0.compute-0.ljexyw.service: Main process exited, code=exited, status=139/n/a
Mar  1 04:54:50 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v363: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 426 B/s wr, 2 op/s
Mar  1 04:54:50 np0005634532 systemd[1]: ceph-437b1e74-f995-5d64-af1d-257ce01d77ab@nfs.cephfs.2.0.compute-0.ljexyw.service: Failed with result 'exit-code'.
Mar  1 04:54:50 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:54:50 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:54:50 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:54:50.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:54:50 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:54:50 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 04:54:50 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:54:50.850 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 04:54:51 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 04:54:51 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 04:54:51 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 04:54:52 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v364: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 426 B/s wr, 2 op/s
Mar  1 04:54:52 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:54:52 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:54:52 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:54:52.827 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:54:52 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:54:52 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 04:54:52 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:54:52.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 04:54:53 np0005634532 podman[176587]: 2026-03-01 09:54:53.411255381 +0000 UTC m=+0.087914668 container health_status 1df4dec302d8b40bce93714986cfc892519e8a31436399020d0034ae17ac64d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a5baae9e21dffd42c66f546b0b62f95c6af9f409d05e01e7483de42d5dd2a382, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '08b9467e1b7e95537191fb2fa6825d176e65d7d6be048e9a0925db064969bbde-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a5baae9e21dffd42c66f546b0b62f95c6af9f409d05e01e7483de42d5dd2a382', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_metadata_agent, org.label-schema.build-date=20260223)
Mar  1 04:54:54 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v365: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 426 B/s wr, 2 op/s
Mar  1 04:54:54 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:54:54 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:54:54 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:54:54.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:54:54 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:54:54 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 04:54:54 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:54:54.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 04:54:55 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-haproxy-nfs-cephfs-compute-0-wdbjdw[99156]: [WARNING] 059/095455 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Mar  1 04:54:56 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 04:54:56 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v366: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 B/s wr, 0 op/s
Mar  1 04:54:56 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:54:56 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:54:56 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:54:56.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:54:56 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:54:56 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:54:56 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:54:56.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:54:57 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T09:54:57.028Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Mar  1 04:54:57 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: ::ffff:192.168.122.100 - - [01/Mar/2026:09:54:57] "GET /metrics HTTP/1.1" 200 48346 "" "Prometheus/2.51.0"
Mar  1 04:54:57 np0005634532 ceph-mgr[76134]: [prometheus INFO cherrypy.access.140604152982112] ::ffff:192.168.122.100 - - [01/Mar/2026:09:54:57] "GET /metrics HTTP/1.1" 200 48346 "" "Prometheus/2.51.0"
Mar  1 04:54:58 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v367: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Mar  1 04:54:58 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:54:58 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:54:58 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:54:58.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:54:58 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:54:58 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 04:54:58 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:54:58.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 04:55:00 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v368: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Mar  1 04:55:00 np0005634532 systemd[1]: ceph-437b1e74-f995-5d64-af1d-257ce01d77ab@nfs.cephfs.2.0.compute-0.ljexyw.service: Scheduled restart job, restart counter is at 7.
Mar  1 04:55:00 np0005634532 systemd[1]: Stopped Ceph nfs.cephfs.2.0.compute-0.ljexyw for 437b1e74-f995-5d64-af1d-257ce01d77ab.
Mar  1 04:55:00 np0005634532 systemd[1]: Starting Ceph nfs.cephfs.2.0.compute-0.ljexyw for 437b1e74-f995-5d64-af1d-257ce01d77ab...
Mar  1 04:55:00 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:55:00 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:55:00 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:55:00.836 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:55:00 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:55:00 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 04:55:00 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:55:00.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 04:55:00 np0005634532 podman[179734]: 2026-03-01 09:55:00.983091576 +0000 UTC m=+0.040719563 container create c0100ab5e15258ec9a278d8139cc929e814efbf0e504cb472430246eb14d8f5d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid)
Mar  1 04:55:01 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78707e483ddb6dfd19f44f06dbb158ca080746a935f33bcdb868c16b2fcbcdcc/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Mar  1 04:55:01 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78707e483ddb6dfd19f44f06dbb158ca080746a935f33bcdb868c16b2fcbcdcc/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Mar  1 04:55:01 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78707e483ddb6dfd19f44f06dbb158ca080746a935f33bcdb868c16b2fcbcdcc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Mar  1 04:55:01 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78707e483ddb6dfd19f44f06dbb158ca080746a935f33bcdb868c16b2fcbcdcc/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.2.0.compute-0.ljexyw-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Mar  1 04:55:01 np0005634532 podman[179734]: 2026-03-01 09:55:01.0310465 +0000 UTC m=+0.088674507 container init c0100ab5e15258ec9a278d8139cc929e814efbf0e504cb472430246eb14d8f5d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Mar  1 04:55:01 np0005634532 podman[179734]: 2026-03-01 09:55:01.035747508 +0000 UTC m=+0.093375505 container start c0100ab5e15258ec9a278d8139cc929e814efbf0e504cb472430246eb14d8f5d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Mar  1 04:55:01 np0005634532 bash[179734]: c0100ab5e15258ec9a278d8139cc929e814efbf0e504cb472430246eb14d8f5d
Mar  1 04:55:01 np0005634532 podman[179734]: 2026-03-01 09:55:00.965850823 +0000 UTC m=+0.023478830 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Mar  1 04:55:01 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[179826]: 01/03/2026 09:55:01 : epoch 69a40cf5 : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Mar  1 04:55:01 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[179826]: 01/03/2026 09:55:01 : epoch 69a40cf5 : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Mar  1 04:55:01 np0005634532 systemd[1]: Started Ceph nfs.cephfs.2.0.compute-0.ljexyw for 437b1e74-f995-5d64-af1d-257ce01d77ab.
Mar  1 04:55:01 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[179826]: 01/03/2026 09:55:01 : epoch 69a40cf5 : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Mar  1 04:55:01 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[179826]: 01/03/2026 09:55:01 : epoch 69a40cf5 : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Mar  1 04:55:01 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[179826]: 01/03/2026 09:55:01 : epoch 69a40cf5 : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Mar  1 04:55:01 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[179826]: 01/03/2026 09:55:01 : epoch 69a40cf5 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Mar  1 04:55:01 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[179826]: 01/03/2026 09:55:01 : epoch 69a40cf5 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Mar  1 04:55:01 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[179826]: 01/03/2026 09:55:01 : epoch 69a40cf5 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Mar  1 04:55:01 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 04:55:02 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Mar  1 04:55:02 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Mar  1 04:55:02 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v369: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Mar  1 04:55:02 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:55:02 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 04:55:02 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:55:02.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 04:55:02 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:55:02 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000026s ======
Mar  1 04:55:02 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:55:02.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Mar  1 04:55:04 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v370: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Mar  1 04:55:04 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:55:04 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 04:55:04 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:55:04.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 04:55:04 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:55:04 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 04:55:04 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:55:04.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 04:55:06 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 04:55:06 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v371: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Mar  1 04:55:06 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:55:06 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000026s ======
Mar  1 04:55:06 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:55:06.842 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Mar  1 04:55:06 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:55:06 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 04:55:06 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:55:06.875 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 04:55:07 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T09:55:07.029Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Mar  1 04:55:07 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: ::ffff:192.168.122.100 - - [01/Mar/2026:09:55:07] "GET /metrics HTTP/1.1" 200 48343 "" "Prometheus/2.51.0"
Mar  1 04:55:07 np0005634532 ceph-mgr[76134]: [prometheus INFO cherrypy.access.140604152982112] ::ffff:192.168.122.100 - - [01/Mar/2026:09:55:07] "GET /metrics HTTP/1.1" 200 48343 "" "Prometheus/2.51.0"
Mar  1 04:55:07 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[179826]: 01/03/2026 09:55:07 : epoch 69a40cf5 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Mar  1 04:55:07 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[179826]: 01/03/2026 09:55:07 : epoch 69a40cf5 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Mar  1 04:55:08 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v372: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Mar  1 04:55:08 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:55:08 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 04:55:08 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:55:08.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 04:55:08 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:55:08 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 04:55:08 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:55:08.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 04:55:10 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v373: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Mar  1 04:55:10 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:55:10 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:55:10 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:55:10.846 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:55:10 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:55:10 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 04:55:10 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:55:10.881 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 04:55:11 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 04:55:12 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v374: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Mar  1 04:55:12 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:55:12 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 04:55:12 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:55:12.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 04:55:12 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:55:12 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:55:12 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:55:12.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:55:13 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[179826]: 01/03/2026 09:55:13 : epoch 69a40cf5 : compute-0 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Mar  1 04:55:13 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[179826]: 01/03/2026 09:55:13 : epoch 69a40cf5 : compute-0 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Mar  1 04:55:13 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[179826]: 01/03/2026 09:55:13 : epoch 69a40cf5 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Mar  1 04:55:13 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[179826]: 01/03/2026 09:55:13 : epoch 69a40cf5 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Mar  1 04:55:13 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[179826]: 01/03/2026 09:55:13 : epoch 69a40cf5 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Mar  1 04:55:13 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[179826]: 01/03/2026 09:55:13 : epoch 69a40cf5 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Mar  1 04:55:13 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[179826]: 01/03/2026 09:55:13 : epoch 69a40cf5 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Mar  1 04:55:13 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[179826]: 01/03/2026 09:55:13 : epoch 69a40cf5 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Mar  1 04:55:13 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[179826]: 01/03/2026 09:55:13 : epoch 69a40cf5 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Mar  1 04:55:13 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[179826]: 01/03/2026 09:55:13 : epoch 69a40cf5 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Mar  1 04:55:13 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[179826]: 01/03/2026 09:55:13 : epoch 69a40cf5 : compute-0 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Mar  1 04:55:13 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[179826]: 01/03/2026 09:55:13 : epoch 69a40cf5 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Mar  1 04:55:13 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[179826]: 01/03/2026 09:55:13 : epoch 69a40cf5 : compute-0 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Mar  1 04:55:13 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[179826]: 01/03/2026 09:55:13 : epoch 69a40cf5 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Mar  1 04:55:13 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[179826]: 01/03/2026 09:55:13 : epoch 69a40cf5 : compute-0 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Mar  1 04:55:13 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[179826]: 01/03/2026 09:55:13 : epoch 69a40cf5 : compute-0 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Mar  1 04:55:13 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[179826]: 01/03/2026 09:55:13 : epoch 69a40cf5 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Mar  1 04:55:13 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[179826]: 01/03/2026 09:55:13 : epoch 69a40cf5 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Mar  1 04:55:13 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[179826]: 01/03/2026 09:55:13 : epoch 69a40cf5 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Mar  1 04:55:13 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[179826]: 01/03/2026 09:55:13 : epoch 69a40cf5 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Mar  1 04:55:13 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[179826]: 01/03/2026 09:55:13 : epoch 69a40cf5 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Mar  1 04:55:13 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[179826]: 01/03/2026 09:55:13 : epoch 69a40cf5 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Mar  1 04:55:13 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[179826]: 01/03/2026 09:55:13 : epoch 69a40cf5 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Mar  1 04:55:13 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[179826]: 01/03/2026 09:55:13 : epoch 69a40cf5 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Mar  1 04:55:13 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[179826]: 01/03/2026 09:55:13 : epoch 69a40cf5 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Mar  1 04:55:13 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[179826]: 01/03/2026 09:55:13 : epoch 69a40cf5 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Mar  1 04:55:13 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[179826]: 01/03/2026 09:55:13 : epoch 69a40cf5 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Mar  1 04:55:13 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[179826]: 01/03/2026 09:55:13 : epoch 69a40cf5 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1dd0000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:55:13 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[179826]: 01/03/2026 09:55:13 : epoch 69a40cf5 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1dc4001970 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:55:14 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[179826]: 01/03/2026 09:55:14 : epoch 69a40cf5 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1dc8001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:55:14 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v375: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 1023 B/s wr, 3 op/s
Mar  1 04:55:14 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:55:14 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 04:55:14 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:55:14.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 04:55:14 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:55:14 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:55:14 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:55:14.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:55:15 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-haproxy-nfs-cephfs-compute-0-wdbjdw[99156]: [WARNING] 059/095515 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Mar  1 04:55:15 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[179826]: 01/03/2026 09:55:15 : epoch 69a40cf5 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1db4000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:55:15 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[179826]: 01/03/2026 09:55:15 : epoch 69a40cf5 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1dd0001bd0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:55:16 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 04:55:16 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[179826]: 01/03/2026 09:55:16 : epoch 69a40cf5 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1dc4002290 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:55:16 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v376: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 938 B/s wr, 3 op/s
Mar  1 04:55:16 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:55:16 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:55:16 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:55:16.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:55:16 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:55:16 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:55:16 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:55:16.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:55:17 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T09:55:17.030Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Mar  1 04:55:17 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T09:55:17.031Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Mar  1 04:55:17 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: ::ffff:192.168.122.100 - - [01/Mar/2026:09:55:17] "GET /metrics HTTP/1.1" 200 48343 "" "Prometheus/2.51.0"
Mar  1 04:55:17 np0005634532 ceph-mgr[76134]: [prometheus INFO cherrypy.access.140604152982112] ::ffff:192.168.122.100 - - [01/Mar/2026:09:55:17] "GET /metrics HTTP/1.1" 200 48343 "" "Prometheus/2.51.0"
Mar  1 04:55:17 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[179826]: 01/03/2026 09:55:17 : epoch 69a40cf5 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1dc8001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:55:17 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[179826]: 01/03/2026 09:55:17 : epoch 69a40cf5 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1db40016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:55:17 np0005634532 ceph-mgr[76134]: [balancer INFO root] Optimize plan auto_2026-03-01_09:55:17
Mar  1 04:55:17 np0005634532 ceph-mgr[76134]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Mar  1 04:55:17 np0005634532 ceph-mgr[76134]: [balancer INFO root] do_upmap
Mar  1 04:55:17 np0005634532 ceph-mgr[76134]: [balancer INFO root] pools ['volumes', 'default.rgw.control', 'cephfs.cephfs.meta', 'images', 'cephfs.cephfs.data', 'backups', 'default.rgw.log', 'default.rgw.meta', '.mgr', 'vms', '.rgw.root', '.nfs']
Mar  1 04:55:17 np0005634532 ceph-mgr[76134]: [balancer INFO root] prepared 0/10 upmap changes
Mar  1 04:55:17 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Mar  1 04:55:17 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Mar  1 04:55:17 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Mar  1 04:55:17 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Mar  1 04:55:17 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Mar  1 04:55:17 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Mar  1 04:55:17 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Mar  1 04:55:17 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Mar  1 04:55:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] _maybe_adjust
Mar  1 04:55:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 04:55:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Mar  1 04:55:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 04:55:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Mar  1 04:55:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 04:55:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Mar  1 04:55:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 04:55:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Mar  1 04:55:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 04:55:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Mar  1 04:55:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 04:55:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Mar  1 04:55:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 04:55:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Mar  1 04:55:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 04:55:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Mar  1 04:55:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 04:55:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Mar  1 04:55:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 04:55:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Mar  1 04:55:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 04:55:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Mar  1 04:55:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 04:55:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Mar  1 04:55:17 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Mar  1 04:55:17 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: vms, start_after=
Mar  1 04:55:17 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: volumes, start_after=
Mar  1 04:55:17 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: backups, start_after=
Mar  1 04:55:17 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: images, start_after=
Mar  1 04:55:17 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Mar  1 04:55:17 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: vms, start_after=
Mar  1 04:55:17 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: volumes, start_after=
Mar  1 04:55:17 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: backups, start_after=
Mar  1 04:55:17 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: images, start_after=
Mar  1 04:55:18 np0005634532 podman[193263]: 2026-03-01 09:55:18.425355384 +0000 UTC m=+0.110258260 container health_status eba5e7c5bf1cb0694498c3b5d0f9b901dec36f2482ec40aef98fed1e15d9f18d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:bf1f44b6bfa655654f556ae3adca2582892e72550d916a5b0a8dbacb0e210bbe, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '08b9467e1b7e95537191fb2fa6825d176e65d7d6be048e9a0925db064969bbde-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:bf1f44b6bfa655654f556ae3adca2582892e72550d916a5b0a8dbacb0e210bbe', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image)
Mar  1 04:55:18 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[179826]: 01/03/2026 09:55:18 : epoch 69a40cf5 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1dd0001bd0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:55:18 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v377: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 938 B/s wr, 3 op/s
Mar  1 04:55:18 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:55:18 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:55:18 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:55:18.852 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:55:18 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:55:18 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 04:55:18 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:55:18.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 04:55:19 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[179826]: 01/03/2026 09:55:19 : epoch 69a40cf5 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1dc4002290 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:55:19 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[179826]: 01/03/2026 09:55:19 : epoch 69a40cf5 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1dc8002910 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:55:19 np0005634532 ceph-mgr[76134]: [devicehealth INFO root] Check health
Mar  1 04:55:20 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[179826]: 01/03/2026 09:55:20 : epoch 69a40cf5 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1db40016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:55:20 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v378: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Mar  1 04:55:20 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:55:20 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:55:20 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:55:20.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:55:20 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:55:20 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:55:20 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:55:20.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:55:21 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[179826]: 01/03/2026 09:55:21 : epoch 69a40cf5 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1dd00089d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:55:21 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 04:55:21 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[179826]: 01/03/2026 09:55:21 : epoch 69a40cf5 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1dc4002290 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:55:22 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[179826]: 01/03/2026 09:55:22 : epoch 69a40cf5 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1dc8002910 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:55:22 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v379: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Mar  1 04:55:22 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:55:22 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 04:55:22 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:55:22.856 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 04:55:22 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:55:22 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:55:22 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:55:22.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:55:23 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[179826]: 01/03/2026 09:55:23 : epoch 69a40cf5 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1db40016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:55:23 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[179826]: 01/03/2026 09:55:23 : epoch 69a40cf5 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1dd00089d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:55:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:55:23.867 167541 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Mar  1 04:55:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:55:23.867 167541 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Mar  1 04:55:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:55:23.867 167541 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Mar  1 04:55:24 np0005634532 podman[193714]: 2026-03-01 09:55:24.381535869 +0000 UTC m=+0.061131806 container health_status 1df4dec302d8b40bce93714986cfc892519e8a31436399020d0034ae17ac64d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a5baae9e21dffd42c66f546b0b62f95c6af9f409d05e01e7483de42d5dd2a382, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '08b9467e1b7e95537191fb2fa6825d176e65d7d6be048e9a0925db064969bbde-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a5baae9e21dffd42c66f546b0b62f95c6af9f409d05e01e7483de42d5dd2a382', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.43.0)
Mar  1 04:55:24 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[179826]: 01/03/2026 09:55:24 : epoch 69a40cf5 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1dc4002290 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:55:24 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v380: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Mar  1 04:55:24 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:55:24 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 04:55:24 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:55:24.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 04:55:24 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:55:24 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:55:24 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:55:24.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:55:25 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[179826]: 01/03/2026 09:55:25 : epoch 69a40cf5 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1dc8002910 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:55:25 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[179826]: 01/03/2026 09:55:25 : epoch 69a40cf5 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1db4002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:55:26 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 04:55:26 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[179826]: 01/03/2026 09:55:26 : epoch 69a40cf5 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1dd00096e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:55:26 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v381: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Mar  1 04:55:26 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:55:26 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000026s ======
Mar  1 04:55:26 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:55:26.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000026s
Mar  1 04:55:26 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:55:26 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 04:55:26 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:55:26.907 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 04:55:27 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T09:55:27.032Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Mar  1 04:55:27 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: ::ffff:192.168.122.100 - - [01/Mar/2026:09:55:27] "GET /metrics HTTP/1.1" 200 48345 "" "Prometheus/2.51.0"
Mar  1 04:55:27 np0005634532 ceph-mgr[76134]: [prometheus INFO cherrypy.access.140604152982112] ::ffff:192.168.122.100 - - [01/Mar/2026:09:55:27] "GET /metrics HTTP/1.1" 200 48345 "" "Prometheus/2.51.0"
Mar  1 04:55:27 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[179826]: 01/03/2026 09:55:27 : epoch 69a40cf5 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1dc4002290 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:55:27 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[179826]: 01/03/2026 09:55:27 : epoch 69a40cf5 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1dc8002910 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:55:28 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[179826]: 01/03/2026 09:55:28 : epoch 69a40cf5 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1db4002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:55:28 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v382: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Mar  1 04:55:28 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:55:28 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000024s ======
Mar  1 04:55:28 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:55:28.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Mar  1 04:55:28 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:55:28 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000024s ======
Mar  1 04:55:28 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:55:28.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Mar  1 04:55:29 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[179826]: 01/03/2026 09:55:29 : epoch 69a40cf5 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1dd00096e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:55:29 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[179826]: 01/03/2026 09:55:29 : epoch 69a40cf5 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1dc4002290 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:55:30 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[179826]: 01/03/2026 09:55:30 : epoch 69a40cf5 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1dc8002910 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:55:30 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v383: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Mar  1 04:55:30 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:55:30 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:55:30 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:55:30.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:55:30 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:55:30 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000024s ======
Mar  1 04:55:30 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:55:30.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Mar  1 04:55:30 np0005634532 kernel: SELinux:  Converting 2786 SID table entries...
Mar  1 04:55:31 np0005634532 kernel: SELinux:  policy capability network_peer_controls=1
Mar  1 04:55:31 np0005634532 kernel: SELinux:  policy capability open_perms=1
Mar  1 04:55:31 np0005634532 kernel: SELinux:  policy capability extended_socket_class=1
Mar  1 04:55:31 np0005634532 kernel: SELinux:  policy capability always_check_network=0
Mar  1 04:55:31 np0005634532 kernel: SELinux:  policy capability cgroup_seclabel=1
Mar  1 04:55:31 np0005634532 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Mar  1 04:55:31 np0005634532 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Mar  1 04:55:31 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[179826]: 01/03/2026 09:55:31 : epoch 69a40cf5 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1db4002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:55:31 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 04:55:31 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[179826]: 01/03/2026 09:55:31 : epoch 69a40cf5 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1dd00096e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:55:31 np0005634532 dbus-broker-launch[822]: Noticed file-system modification, trigger reload.
Mar  1 04:55:31 np0005634532 dbus-broker-launch[823]: avc:  op=load_policy lsm=selinux seqno=12 res=1
Mar  1 04:55:31 np0005634532 dbus-broker-launch[822]: Noticed file-system modification, trigger reload.
Mar  1 04:55:32 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Mar  1 04:55:32 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Mar  1 04:55:32 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[179826]: 01/03/2026 09:55:32 : epoch 69a40cf5 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1dc4002290 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:55:32 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v384: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Mar  1 04:55:32 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-haproxy-nfs-cephfs-compute-0-wdbjdw[99156]: [WARNING] 059/095532 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Mar  1 04:55:32 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:55:32 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000024s ======
Mar  1 04:55:32 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:55:32.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Mar  1 04:55:32 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:55:32 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:55:32 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:55:32.918 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:55:33 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[179826]: 01/03/2026 09:55:33 : epoch 69a40cf5 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1dc8002910 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:55:33 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[179826]: 01/03/2026 09:55:33 : epoch 69a40cf5 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1db4003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:55:34 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[179826]: 01/03/2026 09:55:34 : epoch 69a40cf5 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1dd000a3f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:55:34 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v385: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Mar  1 04:55:34 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:55:34 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 04:55:34 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:55:34.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 04:55:34 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:55:34 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:55:34 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:55:34.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:55:35 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[179826]: 01/03/2026 09:55:35 : epoch 69a40cf5 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1dc4002290 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:55:35 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[179826]: 01/03/2026 09:55:35 : epoch 69a40cf5 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1dc8002910 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:55:36 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 04:55:36 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[179826]: 01/03/2026 09:55:36 : epoch 69a40cf5 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1db4003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:55:36 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v386: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Mar  1 04:55:36 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:55:36 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:55:36 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:55:36.871 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:55:36 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:55:36 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:55:36 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:55:36.924 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:55:37 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T09:55:37.034Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Mar  1 04:55:37 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T09:55:37.034Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Mar  1 04:55:37 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T09:55:37.034Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Mar  1 04:55:37 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: ::ffff:192.168.122.100 - - [01/Mar/2026:09:55:37] "GET /metrics HTTP/1.1" 200 48343 "" "Prometheus/2.51.0"
Mar  1 04:55:37 np0005634532 ceph-mgr[76134]: [prometheus INFO cherrypy.access.140604152982112] ::ffff:192.168.122.100 - - [01/Mar/2026:09:55:37] "GET /metrics HTTP/1.1" 200 48343 "" "Prometheus/2.51.0"
Mar  1 04:55:37 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[179826]: 01/03/2026 09:55:37 : epoch 69a40cf5 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1dd000a3f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:55:37 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[179826]: 01/03/2026 09:55:37 : epoch 69a40cf5 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1dc4002290 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:55:38 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[106394]: logger=cleanup t=2026-03-01T09:55:38.567901161Z level=info msg="Completed cleanup jobs" duration=8.568032ms
Mar  1 04:55:38 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[179826]: 01/03/2026 09:55:38 : epoch 69a40cf5 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1dc8002910 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:55:38 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v387: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Mar  1 04:55:38 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[106394]: logger=plugins.update.checker t=2026-03-01T09:55:38.660711226Z level=info msg="Update check succeeded" duration=44.785677ms
Mar  1 04:55:38 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[106394]: logger=grafana.update.checker t=2026-03-01T09:55:38.672374645Z level=info msg="Update check succeeded" duration=43.351542ms
Mar  1 04:55:38 np0005634532 systemd[1]: Stopping OpenSSH server daemon...
Mar  1 04:55:38 np0005634532 systemd[1]: sshd.service: Deactivated successfully.
Mar  1 04:55:38 np0005634532 systemd[1]: Stopped OpenSSH server daemon.
Mar  1 04:55:38 np0005634532 systemd[1]: sshd.service: Consumed 9.983s CPU time, read 32.0K from disk, written 0B to disk.
Mar  1 04:55:38 np0005634532 systemd[1]: Stopped target sshd-keygen.target.
Mar  1 04:55:38 np0005634532 systemd[1]: Stopping sshd-keygen.target...
Mar  1 04:55:38 np0005634532 systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Mar  1 04:55:38 np0005634532 systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Mar  1 04:55:38 np0005634532 systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Mar  1 04:55:38 np0005634532 systemd[1]: Reached target sshd-keygen.target.
Mar  1 04:55:38 np0005634532 systemd[1]: Starting OpenSSH server daemon...
Mar  1 04:55:38 np0005634532 systemd[1]: Started OpenSSH server daemon.
Mar  1 04:55:38 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:55:38 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:55:38 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:55:38.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:55:38 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:55:38 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 04:55:38 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:55:38.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 04:55:39 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[179826]: 01/03/2026 09:55:39 : epoch 69a40cf5 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1db4003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:55:39 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[179826]: 01/03/2026 09:55:39 : epoch 69a40cf5 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1dd000a3f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:55:40 np0005634532 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Mar  1 04:55:40 np0005634532 systemd[1]: Starting man-db-cache-update.service...
Mar  1 04:55:40 np0005634532 systemd[1]: Reloading.
Mar  1 04:55:40 np0005634532 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Mar  1 04:55:40 np0005634532 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Mar  1 04:55:40 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[179826]: 01/03/2026 09:55:40 : epoch 69a40cf5 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1dd000a3f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:55:40 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v388: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Mar  1 04:55:40 np0005634532 systemd[1]: Queuing reload/restart jobs for marked units…
Mar  1 04:55:40 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:55:40 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 04:55:40 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:55:40.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 04:55:40 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:55:40 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:55:40 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:55:40.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:55:41 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[179826]: 01/03/2026 09:55:41 : epoch 69a40cf5 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1dc4002290 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:55:41 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[179826]: 01/03/2026 09:55:41 : epoch 69a40cf5 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1da8000b60 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:55:41 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
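Annotation: _set_new_cache_sizes is the monitor's cache autotuner dividing its budget between incremental-map, full-map, and RocksDB allocations. The figures in the line above can be sanity-checked directly:

    cache_size = 1020054731   # ~0.95 GiB total budget
    inc_alloc  = 348127232    # 332 MiB for incremental osdmaps
    full_alloc = 348127232    # 332 MiB for full osdmaps
    kv_alloc   = 318767104    # 304 MiB for the key-value cache

    # The three carve-outs stay within the total budget.
    assert inc_alloc + full_alloc + kv_alloc <= cache_size
    print((inc_alloc + full_alloc + kv_alloc) / cache_size)  # ~0.995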
Mar  1 04:55:42 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[179826]: 01/03/2026 09:55:42 : epoch 69a40cf5 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Mar  1 04:55:42 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[179826]: 01/03/2026 09:55:42 : epoch 69a40cf5 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1dc8002910 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:55:42 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v389: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Mar  1 04:55:42 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:55:42 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 04:55:42 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:55:42.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 04:55:42 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:55:42 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 04:55:42 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:55:42.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 04:55:43 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[179826]: 01/03/2026 09:55:43 : epoch 69a40cf5 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1dd000a3f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:55:43 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[179826]: 01/03/2026 09:55:43 : epoch 69a40cf5 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1dc4002290 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:55:44 np0005634532 python3.9[199296]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
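Annotation: this and the following ansible-ansible.builtin.systemd invocations stop and mask the monolithic libvirtd service and its sockets; a few seconds later the modular virtlogd/virtnodedevd/virtproxyd/virtqemud/virtsecretd units are enabled in their place. A sketch of what those module parameters amount to, expressed as plain systemctl calls (a stand-in, not the module's actual implementation):

    import subprocess

    def stop_and_mask(unit: str) -> None:
        # state=stopped, enabled=False, masked=True from the invocations above
        subprocess.run(["systemctl", "stop", unit], check=False)
        subprocess.run(["systemctl", "disable", unit], check=False)
        subprocess.run(["systemctl", "mask", unit], check=False)

    for unit in ["libvirtd", "libvirtd-tcp.socket", "libvirtd-tls.socket"]:
        stop_and_mask(unit)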
Mar  1 04:55:44 np0005634532 systemd[1]: Reloading.
Mar  1 04:55:44 np0005634532 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Mar  1 04:55:44 np0005634532 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Mar  1 04:55:44 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[179826]: 01/03/2026 09:55:44 : epoch 69a40cf5 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1da80016a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:55:44 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v390: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Mar  1 04:55:44 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:55:44 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 04:55:44 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:55:44.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 04:55:44 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:55:44 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:55:44 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:55:44.937 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:55:45 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[179826]: 01/03/2026 09:55:45 : epoch 69a40cf5 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1dc8002910 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:55:45 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[179826]: 01/03/2026 09:55:45 : epoch 69a40cf5 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1dc8002910 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:55:45 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[179826]: 01/03/2026 09:55:45 : epoch 69a40cf5 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Mar  1 04:55:45 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[179826]: 01/03/2026 09:55:45 : epoch 69a40cf5 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Mar  1 04:55:45 np0005634532 python3.9[200660]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Mar  1 04:55:45 np0005634532 systemd[1]: Reloading.
Mar  1 04:55:45 np0005634532 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Mar  1 04:55:45 np0005634532 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Mar  1 04:55:46 np0005634532 python3.9[201995]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tls.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Mar  1 04:55:46 np0005634532 systemd[1]: Reloading.
Mar  1 04:55:46 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 04:55:46 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[179826]: 01/03/2026 09:55:46 : epoch 69a40cf5 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1dc4002290 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:55:46 np0005634532 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Mar  1 04:55:46 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v391: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 597 B/s wr, 1 op/s
Mar  1 04:55:46 np0005634532 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Mar  1 04:55:46 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:55:46 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:55:46 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:55:46.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:55:46 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:55:46 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:55:46 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:55:46.941 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:55:47 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T09:55:47.035Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
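Annotation: Alertmanager abandons delivery to the ceph-dashboard webhook receivers on compute-1 and compute-2 after two retries because each POST hits its deadline ("context deadline exceeded"), i.e. the endpoints are unreachable or not answering in time. A minimal reachability probe for one of the logged receiver URLs; the timeout value is an assumption:

    import urllib.request

    url = "http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver"
    try:
        # Alertmanager POSTs alert JSON; an empty JSON body suffices to test reachability.
        urllib.request.urlopen(url, data=b"{}", timeout=5)
    except Exception as exc:  # here: a timeout, matching the dispatcher error above
        print("webhook unreachable:", exc)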
Mar  1 04:55:47 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: ::ffff:192.168.122.100 - - [01/Mar/2026:09:55:47] "GET /metrics HTTP/1.1" 200 48343 "" "Prometheus/2.51.0"
Mar  1 04:55:47 np0005634532 ceph-mgr[76134]: [prometheus INFO cherrypy.access.140604152982112] ::ffff:192.168.122.100 - - [01/Mar/2026:09:55:47] "GET /metrics HTTP/1.1" 200 48343 "" "Prometheus/2.51.0"
Mar  1 04:55:47 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[179826]: 01/03/2026 09:55:47 : epoch 69a40cf5 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1dac000b60 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:55:47 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[179826]: 01/03/2026 09:55:47 : epoch 69a40cf5 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1dd000a3f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:55:47 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Mar  1 04:55:47 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
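Annotation: the handle_command/audit pair above is the monitor-side trace of the mgr periodically querying the OSD blocklist. The same query from the CLI, parsed as JSON (assumes a working admin keyring on the host):

    import json, subprocess

    out = subprocess.run(
        ["ceph", "osd", "blocklist", "ls", "--format", "json"],
        check=True, capture_output=True, text=True,
    ).stdout
    print(json.loads(out))  # list of blocklisted client addresses, [] when empty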
Mar  1 04:55:47 np0005634532 python3.9[203330]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=virtproxyd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Mar  1 04:55:47 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Mar  1 04:55:47 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Mar  1 04:55:47 np0005634532 systemd[1]: Reloading.
Mar  1 04:55:47 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Mar  1 04:55:47 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Mar  1 04:55:47 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Mar  1 04:55:47 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Mar  1 04:55:47 np0005634532 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Mar  1 04:55:47 np0005634532 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Mar  1 04:55:48 np0005634532 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Mar  1 04:55:48 np0005634532 systemd[1]: Finished man-db-cache-update.service.
Mar  1 04:55:48 np0005634532 systemd[1]: man-db-cache-update.service: Consumed 8.908s CPU time.
Mar  1 04:55:48 np0005634532 systemd[1]: run-r03d7d0c3225e48d781817f322e835f8d.service: Deactivated successfully.
Mar  1 04:55:48 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[179826]: 01/03/2026 09:55:48 : epoch 69a40cf5 : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
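Annotation: taken together, the reaper lines trace a complete NFS grace cycle: IN GRACE with a 90 s window at 09:55:42, client reclaim state reloaded from the backend at 09:55:45, and, with no clients left to wait for (clid count(0)), grace lifted early at 09:55:48. A sketch that extracts this timeline from a saved copy of the log; the file name is an assumption:

    import re

    events = []
    with open("messages.log") as fh:  # hypothetical path to this log file
        for line in fh:
            m = re.search(
                r"(\d\d:\d\d:\d\d) : .*?(NOT IN GRACE|IN GRACE|reclaim complete\(\d+\))",
                line,
            )
            if m:
                events.append(m.groups())
    print(events)  # [('09:55:42', 'IN GRACE'), ..., ('09:55:48', 'NOT IN GRACE')]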
Mar  1 04:55:48 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[179826]: 01/03/2026 09:55:48 : epoch 69a40cf5 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1dc8002910 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:55:48 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v392: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Mar  1 04:55:48 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:55:48 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:55:48 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:55:48.881 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:55:48 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:55:48 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:55:48 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:55:48.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:55:49 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[179826]: 01/03/2026 09:55:49 : epoch 69a40cf5 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1dc4002290 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:55:49 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[179826]: 01/03/2026 09:55:49 : epoch 69a40cf5 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1dac0016a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:55:49 np0005634532 podman[204287]: 2026-03-01 09:55:49.395113457 +0000 UTC m=+0.077478657 container health_status eba5e7c5bf1cb0694498c3b5d0f9b901dec36f2482ec40aef98fed1e15d9f18d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:bf1f44b6bfa655654f556ae3adca2582892e72550d916a5b0a8dbacb0e210bbe, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '08b9467e1b7e95537191fb2fa6825d176e65d7d6be048e9a0925db064969bbde-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:bf1f44b6bfa655654f556ae3adca2582892e72550d916a5b0a8dbacb0e210bbe', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, org.label-schema.build-date=20260223)
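Annotation: this podman health_status event is the periodic healthcheck for ovn_controller; per the embedded config_data, podman executes the configured test (/openstack/healthcheck, bind-mounted from /var/lib/openstack/healthchecks/ovn_controller) inside the container and records healthy with a failing streak of 0. The same check can be run by hand, using the container name from the log:

    import subprocess

    # Exit code 0 means healthy; podman records it as a health_status event
    # like the one above.
    rc = subprocess.run(["podman", "healthcheck", "run", "ovn_controller"]).returncode
    print("healthy" if rc == 0 else "unhealthy")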
Mar  1 04:55:49 np0005634532 python3.9[204366]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Mar  1 04:55:49 np0005634532 systemd[1]: Reloading.
Mar  1 04:55:49 np0005634532 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Mar  1 04:55:49 np0005634532 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Mar  1 04:55:50 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[179826]: 01/03/2026 09:55:50 : epoch 69a40cf5 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1dd000a3f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:55:50 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v393: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 938 B/s wr, 2 op/s
Mar  1 04:55:50 np0005634532 python3.9[204589]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Mar  1 04:55:50 np0005634532 systemd[1]: Reloading.
Mar  1 04:55:50 np0005634532 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Mar  1 04:55:50 np0005634532 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Mar  1 04:55:50 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:55:50 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:55:50 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:55:50.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:55:50 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Mar  1 04:55:50 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
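Annotation: cephadm's memory autotuner clears the per-host osd_memory_target override whenever it recalculates; here it does so for compute-0, and for compute-1 and compute-2 a few seconds below. The equivalent CLI call, plus a check of the value that remains in effect afterwards (assumes an admin keyring):

    import subprocess

    # "config rm" is idempotent; once the host-level override is gone,
    # "config get" reports the fallback value for the osd class.
    subprocess.run(["ceph", "config", "rm", "osd/host:compute-0", "osd_memory_target"],
                   check=True)
    out = subprocess.run(["ceph", "config", "get", "osd", "osd_memory_target"],
                         capture_output=True, text=True, check=True).stdout
    print(out.strip())  # effective value after the override is removed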
Mar  1 04:55:50 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:55:50 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:55:50 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:55:50.947 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:55:51 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[179826]: 01/03/2026 09:55:51 : epoch 69a40cf5 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1dac0016a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:55:51 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[179826]: 01/03/2026 09:55:51 : epoch 69a40cf5 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1dc8002910 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:55:51 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 04:55:51 np0005634532 python3.9[204845]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Mar  1 04:55:51 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Mar  1 04:55:51 np0005634532 systemd[1]: Reloading.
Mar  1 04:55:51 np0005634532 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Mar  1 04:55:51 np0005634532 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Mar  1 04:55:52 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[179826]: 01/03/2026 09:55:52 : epoch 69a40cf5 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1dc4002290 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:55:52 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v394: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 938 B/s wr, 2 op/s
Mar  1 04:55:52 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-haproxy-nfs-cephfs-compute-0-wdbjdw[99156]: [WARNING] 059/095552 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Mar  1 04:55:52 np0005634532 python3.9[205070]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Mar  1 04:55:52 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:55:52 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:55:52 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:55:52.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:55:52 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:55:52 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:55:52 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:55:52.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:55:52 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Mar  1 04:55:52 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 04:55:52 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Mar  1 04:55:52 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 04:55:53 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Mar  1 04:55:53 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 04:55:53 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Mar  1 04:55:53 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[179826]: 01/03/2026 09:55:53 : epoch 69a40cf5 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1dd000a3f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:55:53 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 04:55:53 np0005634532 python3.9[205226]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Mar  1 04:55:53 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[179826]: 01/03/2026 09:55:53 : epoch 69a40cf5 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1dac0016a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:55:53 np0005634532 systemd[1]: Reloading.
Mar  1 04:55:53 np0005634532 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Mar  1 04:55:53 np0005634532 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Mar  1 04:55:53 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0)
Mar  1 04:55:53 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Mar  1 04:55:53 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0)
Mar  1 04:55:53 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Mar  1 04:55:53 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Mar  1 04:55:53 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Mar  1 04:55:53 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Mar  1 04:55:53 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Mar  1 04:55:53 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Mar  1 04:55:53 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 04:55:53 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Mar  1 04:55:53 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 04:55:53 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Mar  1 04:55:53 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Mar  1 04:55:53 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Mar  1 04:55:53 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Mar  1 04:55:53 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Mar  1 04:55:53 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Mar  1 04:55:53 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 04:55:53 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 04:55:53 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 04:55:53 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 04:55:53 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Mar  1 04:55:53 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Mar  1 04:55:53 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Mar  1 04:55:53 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 04:55:53 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 04:55:53 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Mar  1 04:55:54 np0005634532 podman[205388]: 2026-03-01 09:55:54.21303322 +0000 UTC m=+0.035134420 container create acc15f5fcc711a01d05cbde3766790d7e25a311d07a59fc12464a7f0c9de16d0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_proskuriakova, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Mar  1 04:55:54 np0005634532 systemd[1]: Started libpod-conmon-acc15f5fcc711a01d05cbde3766790d7e25a311d07a59fc12464a7f0c9de16d0.scope.
Mar  1 04:55:54 np0005634532 systemd[1]: Started libcrun container.
Mar  1 04:55:54 np0005634532 podman[205388]: 2026-03-01 09:55:54.287304618 +0000 UTC m=+0.109405868 container init acc15f5fcc711a01d05cbde3766790d7e25a311d07a59fc12464a7f0c9de16d0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_proskuriakova, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Mar  1 04:55:54 np0005634532 podman[205388]: 2026-03-01 09:55:54.196757308 +0000 UTC m=+0.018858528 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Mar  1 04:55:54 np0005634532 podman[205388]: 2026-03-01 09:55:54.292944977 +0000 UTC m=+0.115046177 container start acc15f5fcc711a01d05cbde3766790d7e25a311d07a59fc12464a7f0c9de16d0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_proskuriakova, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Mar  1 04:55:54 np0005634532 podman[205388]: 2026-03-01 09:55:54.296592687 +0000 UTC m=+0.118693937 container attach acc15f5fcc711a01d05cbde3766790d7e25a311d07a59fc12464a7f0c9de16d0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_proskuriakova, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Mar  1 04:55:54 np0005634532 romantic_proskuriakova[205404]: 167 167
Mar  1 04:55:54 np0005634532 systemd[1]: libpod-acc15f5fcc711a01d05cbde3766790d7e25a311d07a59fc12464a7f0c9de16d0.scope: Deactivated successfully.
Mar  1 04:55:54 np0005634532 conmon[205404]: conmon acc15f5fcc711a01d05c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-acc15f5fcc711a01d05cbde3766790d7e25a311d07a59fc12464a7f0c9de16d0.scope/container/memory.events
Mar  1 04:55:54 np0005634532 podman[205388]: 2026-03-01 09:55:54.298849613 +0000 UTC m=+0.120950813 container died acc15f5fcc711a01d05cbde3766790d7e25a311d07a59fc12464a7f0c9de16d0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_proskuriakova, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Mar  1 04:55:54 np0005634532 systemd[1]: var-lib-containers-storage-overlay-76ebb909e40203d5db43451a18405d067e065874eb4fcc10a6841ef458d736f3-merged.mount: Deactivated successfully.
Mar  1 04:55:54 np0005634532 podman[205388]: 2026-03-01 09:55:54.340102144 +0000 UTC m=+0.162203344 container remove acc15f5fcc711a01d05cbde3766790d7e25a311d07a59fc12464a7f0c9de16d0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_proskuriakova, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325)
Mar  1 04:55:54 np0005634532 systemd[1]: libpod-conmon-acc15f5fcc711a01d05cbde3766790d7e25a311d07a59fc12464a7f0c9de16d0.scope: Deactivated successfully.
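Annotation: the create→init→start→attach→died→remove burst above (and the two like it that follow) is cephadm running a one-shot probe in a throwaway ceph container; the "167 167" the container prints looks like the ceph uid/gid pair cephadm reads from the image. A roughly equivalent one-shot run; the image digest is from the log, but the probed command is an assumption:

    import subprocess

    image = ("quay.io/ceph/ceph@sha256:"
             "7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec")
    # --rm reproduces the immediate "died"/"remove" events once the command exits.
    subprocess.run(
        ["podman", "run", "--rm", image, "stat", "-c", "%u %g", "/var/lib/ceph"],
        check=True,
    )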
Mar  1 04:55:54 np0005634532 podman[205431]: 2026-03-01 09:55:54.499440745 +0000 UTC m=+0.053655508 container create 9ce5cf55a9b22e2853e973e5b5f9c32bdc53b021cd3d2bd2590879f357489191 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_ishizaka, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Mar  1 04:55:54 np0005634532 systemd[1]: Started libpod-conmon-9ce5cf55a9b22e2853e973e5b5f9c32bdc53b021cd3d2bd2590879f357489191.scope.
Mar  1 04:55:54 np0005634532 systemd[1]: Started libcrun container.
Mar  1 04:55:54 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0523cf9290f9b4dd6a2a0ace3c3053e9c5e59dade8bfa5ec7458baa5153e2816/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Mar  1 04:55:54 np0005634532 podman[205431]: 2026-03-01 09:55:54.472597081 +0000 UTC m=+0.026811924 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Mar  1 04:55:54 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0523cf9290f9b4dd6a2a0ace3c3053e9c5e59dade8bfa5ec7458baa5153e2816/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Mar  1 04:55:54 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0523cf9290f9b4dd6a2a0ace3c3053e9c5e59dade8bfa5ec7458baa5153e2816/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Mar  1 04:55:54 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0523cf9290f9b4dd6a2a0ace3c3053e9c5e59dade8bfa5ec7458baa5153e2816/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Mar  1 04:55:54 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0523cf9290f9b4dd6a2a0ace3c3053e9c5e59dade8bfa5ec7458baa5153e2816/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Mar  1 04:55:54 np0005634532 podman[205431]: 2026-03-01 09:55:54.581198687 +0000 UTC m=+0.135413480 container init 9ce5cf55a9b22e2853e973e5b5f9c32bdc53b021cd3d2bd2590879f357489191 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_ishizaka, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325)
Mar  1 04:55:54 np0005634532 podman[205431]: 2026-03-01 09:55:54.592335333 +0000 UTC m=+0.146550086 container start 9ce5cf55a9b22e2853e973e5b5f9c32bdc53b021cd3d2bd2590879f357489191 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_ishizaka, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Mar  1 04:55:54 np0005634532 podman[205431]: 2026-03-01 09:55:54.595968933 +0000 UTC m=+0.150183736 container attach 9ce5cf55a9b22e2853e973e5b5f9c32bdc53b021cd3d2bd2590879f357489191 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_ishizaka, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Mar  1 04:55:54 np0005634532 podman[205445]: 2026-03-01 09:55:54.618721465 +0000 UTC m=+0.072746300 container health_status 1df4dec302d8b40bce93714986cfc892519e8a31436399020d0034ae17ac64d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a5baae9e21dffd42c66f546b0b62f95c6af9f409d05e01e7483de42d5dd2a382, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '08b9467e1b7e95537191fb2fa6825d176e65d7d6be048e9a0925db064969bbde-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a5baae9e21dffd42c66f546b0b62f95c6af9f409d05e01e7483de42d5dd2a382', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS)
Mar  1 04:55:54 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[179826]: 01/03/2026 09:55:54 : epoch 69a40cf5 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1dc8002910 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:55:54 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v395: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 938 B/s wr, 2 op/s
Mar  1 04:55:54 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:55:54 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:55:54 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:55:54.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:55:54 np0005634532 hardcore_ishizaka[205448]: --> passed data devices: 0 physical, 1 LVM
Mar  1 04:55:54 np0005634532 hardcore_ishizaka[205448]: --> All data devices are unavailable
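Annotation: hardcore_ishizaka is a ceph-volume run reporting that the only candidate data device is an LVM volume it rejects as unavailable (typically because it is already prepared or in use), so no new OSDs will be created here. A sketch pulling the same device inventory to see the rejection reasons; assumes cephadm is installed on the host:

    import json, subprocess

    out = subprocess.run(
        ["cephadm", "ceph-volume", "--", "inventory", "--format", "json"],
        check=True, capture_output=True, text=True,
    ).stdout
    for dev in json.loads(out):
        # Each entry reports whether ceph-volume considers the device usable.
        print(dev["path"], dev["available"], dev.get("rejected_reasons", []))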
Mar  1 04:55:54 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:55:54 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 04:55:54 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:55:54.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 04:55:54 np0005634532 systemd[1]: libpod-9ce5cf55a9b22e2853e973e5b5f9c32bdc53b021cd3d2bd2590879f357489191.scope: Deactivated successfully.
Mar  1 04:55:54 np0005634532 podman[205431]: 2026-03-01 09:55:54.969731598 +0000 UTC m=+0.523946351 container died 9ce5cf55a9b22e2853e973e5b5f9c32bdc53b021cd3d2bd2590879f357489191 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_ishizaka, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Mar  1 04:55:54 np0005634532 systemd[1]: var-lib-containers-storage-overlay-0523cf9290f9b4dd6a2a0ace3c3053e9c5e59dade8bfa5ec7458baa5153e2816-merged.mount: Deactivated successfully.
Mar  1 04:55:55 np0005634532 podman[205431]: 2026-03-01 09:55:55.015383027 +0000 UTC m=+0.569597780 container remove 9ce5cf55a9b22e2853e973e5b5f9c32bdc53b021cd3d2bd2590879f357489191 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hardcore_ishizaka, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Mar  1 04:55:55 np0005634532 systemd[1]: libpod-conmon-9ce5cf55a9b22e2853e973e5b5f9c32bdc53b021cd3d2bd2590879f357489191.scope: Deactivated successfully.
Mar  1 04:55:55 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[179826]: 01/03/2026 09:55:55 : epoch 69a40cf5 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1dc4003f60 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:55:55 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[179826]: 01/03/2026 09:55:55 : epoch 69a40cf5 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1dd000a3f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:55:55 np0005634532 podman[205584]: 2026-03-01 09:55:55.520165863 +0000 UTC m=+0.039047197 container create b20e699c3e5d6a7672d6b2a6410a438c7a24097099536ac9b82977c90204b615 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_grothendieck, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Mar  1 04:55:55 np0005634532 systemd[1]: Started libpod-conmon-b20e699c3e5d6a7672d6b2a6410a438c7a24097099536ac9b82977c90204b615.scope.
Mar  1 04:55:55 np0005634532 systemd[1]: Started libcrun container.
Mar  1 04:55:55 np0005634532 podman[205584]: 2026-03-01 09:55:55.586095114 +0000 UTC m=+0.104976468 container init b20e699c3e5d6a7672d6b2a6410a438c7a24097099536ac9b82977c90204b615 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_grothendieck, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Mar  1 04:55:55 np0005634532 podman[205584]: 2026-03-01 09:55:55.591516768 +0000 UTC m=+0.110398122 container start b20e699c3e5d6a7672d6b2a6410a438c7a24097099536ac9b82977c90204b615 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_grothendieck, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Mar  1 04:55:55 np0005634532 condescending_grothendieck[205607]: 167 167
Mar  1 04:55:55 np0005634532 systemd[1]: libpod-b20e699c3e5d6a7672d6b2a6410a438c7a24097099536ac9b82977c90204b615.scope: Deactivated successfully.
Mar  1 04:55:55 np0005634532 podman[205584]: 2026-03-01 09:55:55.595113147 +0000 UTC m=+0.113994501 container attach b20e699c3e5d6a7672d6b2a6410a438c7a24097099536ac9b82977c90204b615 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_grothendieck, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=squid)
Mar  1 04:55:55 np0005634532 podman[205584]: 2026-03-01 09:55:55.595588829 +0000 UTC m=+0.114470163 container died b20e699c3e5d6a7672d6b2a6410a438c7a24097099536ac9b82977c90204b615 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_grothendieck, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Mar  1 04:55:55 np0005634532 podman[205584]: 2026-03-01 09:55:55.506851304 +0000 UTC m=+0.025732668 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Mar  1 04:55:55 np0005634532 systemd[1]: var-lib-containers-storage-overlay-28f7db7445af3e7cb2c56cd28858fb7b66872b26a5f8f7b77a37b820eda40efd-merged.mount: Deactivated successfully.
Mar  1 04:55:55 np0005634532 podman[205584]: 2026-03-01 09:55:55.631067067 +0000 UTC m=+0.149948401 container remove b20e699c3e5d6a7672d6b2a6410a438c7a24097099536ac9b82977c90204b615 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_grothendieck, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Mar  1 04:55:55 np0005634532 systemd[1]: libpod-conmon-b20e699c3e5d6a7672d6b2a6410a438c7a24097099536ac9b82977c90204b615.scope: Deactivated successfully.
Mar  1 04:55:55 np0005634532 podman[205705]: 2026-03-01 09:55:55.754134701 +0000 UTC m=+0.045934678 container create c03e29d028cee04ada5c076eebf7a2dbbd8ed84b188405fb46e2709ecad80ef3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_buck, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Mar  1 04:55:55 np0005634532 systemd[1]: Started libpod-conmon-c03e29d028cee04ada5c076eebf7a2dbbd8ed84b188405fb46e2709ecad80ef3.scope.
Mar  1 04:55:55 np0005634532 systemd[1]: Started libcrun container.
Mar  1 04:55:55 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09e5354725716018d5c9a5b7ccdbbfe94f7327dd807e5eb17fbb05e83cc4c3ff/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Mar  1 04:55:55 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09e5354725716018d5c9a5b7ccdbbfe94f7327dd807e5eb17fbb05e83cc4c3ff/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Mar  1 04:55:55 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09e5354725716018d5c9a5b7ccdbbfe94f7327dd807e5eb17fbb05e83cc4c3ff/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Mar  1 04:55:55 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09e5354725716018d5c9a5b7ccdbbfe94f7327dd807e5eb17fbb05e83cc4c3ff/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Mar  1 04:55:55 np0005634532 podman[205705]: 2026-03-01 09:55:55.732895475 +0000 UTC m=+0.024695482 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Mar  1 04:55:55 np0005634532 podman[205705]: 2026-03-01 09:55:55.835985925 +0000 UTC m=+0.127785912 container init c03e29d028cee04ada5c076eebf7a2dbbd8ed84b188405fb46e2709ecad80ef3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_buck, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0)
Mar  1 04:55:55 np0005634532 podman[205705]: 2026-03-01 09:55:55.841142373 +0000 UTC m=+0.132942340 container start c03e29d028cee04ada5c076eebf7a2dbbd8ed84b188405fb46e2709ecad80ef3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_buck, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid)
Mar  1 04:55:55 np0005634532 podman[205705]: 2026-03-01 09:55:55.844638849 +0000 UTC m=+0.136438826 container attach c03e29d028cee04ada5c076eebf7a2dbbd8ed84b188405fb46e2709ecad80ef3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_buck, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Mar  1 04:55:56 np0005634532 python3.9[205769]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-tls.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Mar  1 04:55:56 np0005634532 zen_buck[205770]: {
Mar  1 04:55:56 np0005634532 zen_buck[205770]:    "0": [
Mar  1 04:55:56 np0005634532 zen_buck[205770]:        {
Mar  1 04:55:56 np0005634532 zen_buck[205770]:            "devices": [
Mar  1 04:55:56 np0005634532 zen_buck[205770]:                "/dev/loop3"
Mar  1 04:55:56 np0005634532 zen_buck[205770]:            ],
Mar  1 04:55:56 np0005634532 zen_buck[205770]:            "lv_name": "ceph_lv0",
Mar  1 04:55:56 np0005634532 zen_buck[205770]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Mar  1 04:55:56 np0005634532 zen_buck[205770]:            "lv_size": "21470642176",
Mar  1 04:55:56 np0005634532 zen_buck[205770]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=0RN1y3-WDwf-vmRn-3Uec-9cdU-WGcX-8Z6LEG,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=437b1e74-f995-5d64-af1d-257ce01d77ab,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=e5da778e-73b7-4ea1-8a91-750fe3f6aa68,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Mar  1 04:55:56 np0005634532 zen_buck[205770]:            "lv_uuid": "0RN1y3-WDwf-vmRn-3Uec-9cdU-WGcX-8Z6LEG",
Mar  1 04:55:56 np0005634532 zen_buck[205770]:            "name": "ceph_lv0",
Mar  1 04:55:56 np0005634532 zen_buck[205770]:            "path": "/dev/ceph_vg0/ceph_lv0",
Mar  1 04:55:56 np0005634532 zen_buck[205770]:            "tags": {
Mar  1 04:55:56 np0005634532 zen_buck[205770]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Mar  1 04:55:56 np0005634532 zen_buck[205770]:                "ceph.block_uuid": "0RN1y3-WDwf-vmRn-3Uec-9cdU-WGcX-8Z6LEG",
Mar  1 04:55:56 np0005634532 zen_buck[205770]:                "ceph.cephx_lockbox_secret": "",
Mar  1 04:55:56 np0005634532 zen_buck[205770]:                "ceph.cluster_fsid": "437b1e74-f995-5d64-af1d-257ce01d77ab",
Mar  1 04:55:56 np0005634532 zen_buck[205770]:                "ceph.cluster_name": "ceph",
Mar  1 04:55:56 np0005634532 zen_buck[205770]:                "ceph.crush_device_class": "",
Mar  1 04:55:56 np0005634532 zen_buck[205770]:                "ceph.encrypted": "0",
Mar  1 04:55:56 np0005634532 zen_buck[205770]:                "ceph.osd_fsid": "e5da778e-73b7-4ea1-8a91-750fe3f6aa68",
Mar  1 04:55:56 np0005634532 zen_buck[205770]:                "ceph.osd_id": "0",
Mar  1 04:55:56 np0005634532 zen_buck[205770]:                "ceph.osdspec_affinity": "default_drive_group",
Mar  1 04:55:56 np0005634532 zen_buck[205770]:                "ceph.type": "block",
Mar  1 04:55:56 np0005634532 zen_buck[205770]:                "ceph.vdo": "0",
Mar  1 04:55:56 np0005634532 zen_buck[205770]:                "ceph.with_tpm": "0"
Mar  1 04:55:56 np0005634532 zen_buck[205770]:            },
Mar  1 04:55:56 np0005634532 zen_buck[205770]:            "type": "block",
Mar  1 04:55:56 np0005634532 zen_buck[205770]:            "vg_name": "ceph_vg0"
Mar  1 04:55:56 np0005634532 zen_buck[205770]:        }
Mar  1 04:55:56 np0005634532 zen_buck[205770]:    ]
Mar  1 04:55:56 np0005634532 zen_buck[205770]: }
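The JSON block printed by the zen_buck container above matches the shape of a "ceph-volume lvm list --format json" report for OSD 0 (top-level OSD id mapped to a list of logical volumes, each carrying its ceph.* tags). A minimal sketch, assuming that command and that output shape (all key names are taken from the block logged above; nothing else is from this log):

#!/usr/bin/env python3
# Sketch: parse a `ceph-volume lvm list --format json` style report and print
# one line per OSD logical volume. Assumes ceph-volume is installed and the
# output shape matches the JSON captured in the log above.
import json
import subprocess

def list_osd_devices():
    out = subprocess.run(
        ["ceph-volume", "lvm", "list", "--format", "json"],
        capture_output=True, text=True, check=True,
    ).stdout
    report = json.loads(out)
    # Top level maps OSD id -> list of LVs; each LV has lv_path, devices,
    # and a "tags" dict with the ceph.* metadata seen in the log.
    for osd_id, lvs in report.items():
        for lv in lvs:
            tags = lv.get("tags", {})
            print(osd_id, lv.get("lv_path"), tags.get("ceph.osd_fsid"),
                  lv.get("devices"))

if __name__ == "__main__":
    list_osd_devices()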
Mar  1 04:55:56 np0005634532 systemd[1]: libpod-c03e29d028cee04ada5c076eebf7a2dbbd8ed84b188405fb46e2709ecad80ef3.scope: Deactivated successfully.
Mar  1 04:55:56 np0005634532 podman[205705]: 2026-03-01 09:55:56.109589553 +0000 UTC m=+0.401389540 container died c03e29d028cee04ada5c076eebf7a2dbbd8ed84b188405fb46e2709ecad80ef3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_buck, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Mar  1 04:55:56 np0005634532 systemd[1]: Reloading.
Mar  1 04:55:56 np0005634532 podman[205705]: 2026-03-01 09:55:56.154635877 +0000 UTC m=+0.446435834 container remove c03e29d028cee04ada5c076eebf7a2dbbd8ed84b188405fb46e2709ecad80ef3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_buck, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Mar  1 04:55:56 np0005634532 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Mar  1 04:55:56 np0005634532 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Mar  1 04:55:56 np0005634532 systemd[1]: var-lib-containers-storage-overlay-09e5354725716018d5c9a5b7ccdbbfe94f7327dd807e5eb17fbb05e83cc4c3ff-merged.mount: Deactivated successfully.
Mar  1 04:55:56 np0005634532 systemd[1]: libpod-conmon-c03e29d028cee04ada5c076eebf7a2dbbd8ed84b188405fb46e2709ecad80ef3.scope: Deactivated successfully.
Mar  1 04:55:56 np0005634532 systemd[1]: Listening on libvirt proxy daemon socket.
Mar  1 04:55:56 np0005634532 systemd[1]: Listening on libvirt proxy daemon TLS IP socket.
Mar  1 04:55:56 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 04:55:56 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[179826]: 01/03/2026 09:55:56 : epoch 69a40cf5 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1dac0016a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:55:56 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v396: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Mar  1 04:55:56 np0005634532 podman[205957]: 2026-03-01 09:55:56.792145357 +0000 UTC m=+0.045301122 container create 1bb6b3347f92ae2af51f5be9013b9bdbedc52a84589bab3a87a51ca4f5cab4b1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_moore, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Mar  1 04:55:56 np0005634532 systemd[1]: Started libpod-conmon-1bb6b3347f92ae2af51f5be9013b9bdbedc52a84589bab3a87a51ca4f5cab4b1.scope.
Mar  1 04:55:56 np0005634532 systemd[1]: Started libcrun container.
Mar  1 04:55:56 np0005634532 podman[205957]: 2026-03-01 09:55:56.862726093 +0000 UTC m=+0.115881858 container init 1bb6b3347f92ae2af51f5be9013b9bdbedc52a84589bab3a87a51ca4f5cab4b1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_moore, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Mar  1 04:55:56 np0005634532 podman[205957]: 2026-03-01 09:55:56.869331936 +0000 UTC m=+0.122487721 container start 1bb6b3347f92ae2af51f5be9013b9bdbedc52a84589bab3a87a51ca4f5cab4b1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_moore, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/)
Mar  1 04:55:56 np0005634532 podman[205957]: 2026-03-01 09:55:56.873774256 +0000 UTC m=+0.126930021 container attach 1bb6b3347f92ae2af51f5be9013b9bdbedc52a84589bab3a87a51ca4f5cab4b1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_moore, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Mar  1 04:55:56 np0005634532 podman[205957]: 2026-03-01 09:55:56.778166461 +0000 UTC m=+0.031322246 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Mar  1 04:55:56 np0005634532 reverent_moore[206023]: 167 167
Mar  1 04:55:56 np0005634532 systemd[1]: libpod-1bb6b3347f92ae2af51f5be9013b9bdbedc52a84589bab3a87a51ca4f5cab4b1.scope: Deactivated successfully.
Mar  1 04:55:56 np0005634532 podman[205957]: 2026-03-01 09:55:56.874948615 +0000 UTC m=+0.128104380 container died 1bb6b3347f92ae2af51f5be9013b9bdbedc52a84589bab3a87a51ca4f5cab4b1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_moore, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Mar  1 04:55:56 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:55:56 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 04:55:56 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:55:56.887 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 04:55:56 np0005634532 systemd[1]: var-lib-containers-storage-overlay-6ff79c80d4e4fab0cd02f2e820f458a57dd5132219259a3b93b87089b126097f-merged.mount: Deactivated successfully.
Mar  1 04:55:56 np0005634532 podman[205957]: 2026-03-01 09:55:56.913223642 +0000 UTC m=+0.166379407 container remove 1bb6b3347f92ae2af51f5be9013b9bdbedc52a84589bab3a87a51ca4f5cab4b1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_moore, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Mar  1 04:55:56 np0005634532 systemd[1]: libpod-conmon-1bb6b3347f92ae2af51f5be9013b9bdbedc52a84589bab3a87a51ca4f5cab4b1.scope: Deactivated successfully.
Mar  1 04:55:56 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:55:56 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 04:55:56 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:55:56.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 04:55:57 np0005634532 podman[206123]: 2026-03-01 09:55:57.0336167 +0000 UTC m=+0.036544455 container create abe4ff7337386ef487a27ecd8d247dc6b674a26ddb6afa03218084b17e570556 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_driscoll, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Mar  1 04:55:57 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T09:55:57.039Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Mar  1 04:55:57 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: ::ffff:192.168.122.100 - - [01/Mar/2026:09:55:57] "GET /metrics HTTP/1.1" 200 48349 "" "Prometheus/2.51.0"
Mar  1 04:55:57 np0005634532 ceph-mgr[76134]: [prometheus INFO cherrypy.access.140604152982112] ::ffff:192.168.122.100 - - [01/Mar/2026:09:55:57] "GET /metrics HTTP/1.1" 200 48349 "" "Prometheus/2.51.0"
Mar  1 04:55:57 np0005634532 systemd[1]: Started libpod-conmon-abe4ff7337386ef487a27ecd8d247dc6b674a26ddb6afa03218084b17e570556.scope.
Mar  1 04:55:57 np0005634532 systemd[1]: Started libcrun container.
Mar  1 04:55:57 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/435cff1c280db973d3a5fb81f4031d69c4dd96a6c61c36eced2ac477c725989d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Mar  1 04:55:57 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/435cff1c280db973d3a5fb81f4031d69c4dd96a6c61c36eced2ac477c725989d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Mar  1 04:55:57 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/435cff1c280db973d3a5fb81f4031d69c4dd96a6c61c36eced2ac477c725989d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Mar  1 04:55:57 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/435cff1c280db973d3a5fb81f4031d69c4dd96a6c61c36eced2ac477c725989d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Mar  1 04:55:57 np0005634532 podman[206123]: 2026-03-01 09:55:57.01869528 +0000 UTC m=+0.021623055 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Mar  1 04:55:57 np0005634532 podman[206123]: 2026-03-01 09:55:57.118454758 +0000 UTC m=+0.121382543 container init abe4ff7337386ef487a27ecd8d247dc6b674a26ddb6afa03218084b17e570556 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_driscoll, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Mar  1 04:55:57 np0005634532 podman[206123]: 2026-03-01 09:55:57.123443131 +0000 UTC m=+0.126370886 container start abe4ff7337386ef487a27ecd8d247dc6b674a26ddb6afa03218084b17e570556 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_driscoll, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Mar  1 04:55:57 np0005634532 podman[206123]: 2026-03-01 09:55:57.127343508 +0000 UTC m=+0.130271263 container attach abe4ff7337386ef487a27ecd8d247dc6b674a26ddb6afa03218084b17e570556 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_driscoll, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Mar  1 04:55:57 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[179826]: 01/03/2026 09:55:57 : epoch 69a40cf5 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1dc8002910 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:55:57 np0005634532 python3.9[206121]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Mar  1 04:55:57 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[179826]: 01/03/2026 09:55:57 : epoch 69a40cf5 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1dc8002910 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:55:57 np0005634532 lvm[206354]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Mar  1 04:55:57 np0005634532 lvm[206354]: VG ceph_vg0 finished
Mar  1 04:55:57 np0005634532 intelligent_driscoll[206139]: {}
Mar  1 04:55:57 np0005634532 systemd[1]: libpod-abe4ff7337386ef487a27ecd8d247dc6b674a26ddb6afa03218084b17e570556.scope: Deactivated successfully.
Mar  1 04:55:57 np0005634532 podman[206123]: 2026-03-01 09:55:57.743675842 +0000 UTC m=+0.746603647 container died abe4ff7337386ef487a27ecd8d247dc6b674a26ddb6afa03218084b17e570556 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_driscoll, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Mar  1 04:55:57 np0005634532 systemd[1]: var-lib-containers-storage-overlay-435cff1c280db973d3a5fb81f4031d69c4dd96a6c61c36eced2ac477c725989d-merged.mount: Deactivated successfully.
Mar  1 04:55:57 np0005634532 podman[206123]: 2026-03-01 09:55:57.783495117 +0000 UTC m=+0.786422872 container remove abe4ff7337386ef487a27ecd8d247dc6b674a26ddb6afa03218084b17e570556 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_driscoll, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Mar  1 04:55:57 np0005634532 systemd[1]: libpod-conmon-abe4ff7337386ef487a27ecd8d247dc6b674a26ddb6afa03218084b17e570556.scope: Deactivated successfully.
Mar  1 04:55:57 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Mar  1 04:55:57 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 04:55:57 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Mar  1 04:55:57 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 04:55:57 np0005634532 python3.9[206370]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Mar  1 04:55:58 np0005634532 python3.9[206565]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Mar  1 04:55:58 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[179826]: 01/03/2026 09:55:58 : epoch 69a40cf5 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1dd000a3f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:55:58 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v397: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Mar  1 04:55:58 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 04:55:58 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 04:55:58 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:55:58 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:55:58 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:55:58.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:55:58 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:55:58 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 04:55:58 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:55:58.960 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 04:55:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[179826]: 01/03/2026 09:55:59 : epoch 69a40cf5 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1dac002f00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:55:59 np0005634532 python3.9[206721]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Mar  1 04:55:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[179826]: 01/03/2026 09:55:59 : epoch 69a40cf5 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1dc8002910 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:55:59 np0005634532 python3.9[206877]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Mar  1 04:56:00 np0005634532 python3.9[207035]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Mar  1 04:56:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[179826]: 01/03/2026 09:56:00 : epoch 69a40cf5 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1dc4003f60 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:56:00 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v398: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Mar  1 04:56:00 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:56:00 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 04:56:00 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:56:00.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 04:56:00 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:56:00 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:56:00 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:56:00.964 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
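The recurring anonymous "HEAD / HTTP/1.0" 200 entries in the beast access log above, arriving from 192.168.122.100 and 192.168.122.102 roughly every two seconds, look like periodic load-balancer health probes against radosgw. A minimal sketch of that kind of probe; RGW_HOST and RGW_PORT are placeholders (the log records only the probing clients, not the gateway's listening address):

#!/usr/bin/env python3
# Sketch: issue a HEAD / probe like the ones recorded in the beast access log.
# RGW_HOST and RGW_PORT are assumptions, not values from the log.
import http.client

RGW_HOST = "rgw.example.com"  # placeholder endpoint
RGW_PORT = 8080               # placeholder; the listening port is not logged

conn = http.client.HTTPConnection(RGW_HOST, RGW_PORT, timeout=5)
conn.request("HEAD", "/")
resp = conn.getresponse()
print(resp.status)  # 200 indicates the gateway answered the probe
conn.close()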
Mar  1 04:56:01 np0005634532 python3.9[207191]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Mar  1 04:56:01 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[179826]: 01/03/2026 09:56:01 : epoch 69a40cf5 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1dd000a3f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:56:01 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[179826]: 01/03/2026 09:56:01 : epoch 69a40cf5 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1dac002f00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:56:01 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 04:56:01 np0005634532 python3.9[207347]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Mar  1 04:56:02 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Mar  1 04:56:02 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Mar  1 04:56:02 np0005634532 python3.9[207505]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Mar  1 04:56:02 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[179826]: 01/03/2026 09:56:02 : epoch 69a40cf5 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1dc8002910 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:56:02 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v399: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Mar  1 04:56:02 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:56:02 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 04:56:02 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:56:02.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 04:56:02 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:56:02 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:56:02 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:56:02.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:56:03 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[179826]: 01/03/2026 09:56:03 : epoch 69a40cf5 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1dc4003f60 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:56:03 np0005634532 python3.9[207661]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Mar  1 04:56:03 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[179826]: 01/03/2026 09:56:03 : epoch 69a40cf5 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1dd000a3f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:56:04 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[179826]: 01/03/2026 09:56:04 : epoch 69a40cf5 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1dac003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:56:04 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v400: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Mar  1 04:56:04 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:56:04 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000024s ======
Mar  1 04:56:04 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:56:04.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Mar  1 04:56:04 np0005634532 python3.9[207821]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Mar  1 04:56:04 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-haproxy-nfs-cephfs-compute-0-wdbjdw[99156]: [WARNING] 059/095604 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Mar  1 04:56:04 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:56:04 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:56:04 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:56:04.968 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:56:05 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[179826]: 01/03/2026 09:56:05 : epoch 69a40cf5 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1dc8002910 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:56:05 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[179826]: 01/03/2026 09:56:05 : epoch 69a40cf5 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1dc4003f60 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:56:05 np0005634532 python3.9[207977]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Mar  1 04:56:06 np0005634532 python3.9[208134]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Mar  1 04:56:06 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 04:56:06 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[179826]: 01/03/2026 09:56:06 : epoch 69a40cf5 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1dd000a3f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:56:06 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v401: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Mar  1 04:56:06 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:56:06 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.002000049s ======
Mar  1 04:56:06 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:56:06.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000049s
Mar  1 04:56:06 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:56:06 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 04:56:06 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:56:06.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 04:56:07 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T09:56:07.041Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Mar  1 04:56:07 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: ::ffff:192.168.122.100 - - [01/Mar/2026:09:56:07] "GET /metrics HTTP/1.1" 200 48340 "" "Prometheus/2.51.0"
Mar  1 04:56:07 np0005634532 ceph-mgr[76134]: [prometheus INFO cherrypy.access.140604152982112] ::ffff:192.168.122.100 - - [01/Mar/2026:09:56:07] "GET /metrics HTTP/1.1" 200 48340 "" "Prometheus/2.51.0"
Mar  1 04:56:07 np0005634532 python3.9[208291]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Mar  1 04:56:07 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[179826]: 01/03/2026 09:56:07 : epoch 69a40cf5 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1dd000a3f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:56:07 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[179826]: 01/03/2026 09:56:07 : epoch 69a40cf5 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1dc8002910 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:56:08 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v402: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Mar  1 04:56:08 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[179826]: 01/03/2026 09:56:08 : epoch 69a40cf5 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1dc8002910 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:56:08 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:56:08 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:56:08 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:56:08.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:56:08 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:56:08 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:56:08 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:56:08.974 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:56:09 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[179826]: 01/03/2026 09:56:09 : epoch 69a40cf5 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1dac003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:56:09 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[179826]: 01/03/2026 09:56:09 : epoch 69a40cf5 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1dd000a3f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:56:10 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v403: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Mar  1 04:56:10 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[179826]: 01/03/2026 09:56:10 : epoch 69a40cf5 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1dc4003f60 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:56:10 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:56:10 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 04:56:10 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:56:10.901 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 04:56:10 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:56:10 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 04:56:10 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:56:10.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 04:56:11 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[179826]: 01/03/2026 09:56:11 : epoch 69a40cf5 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1dc8002910 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:56:11 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[179826]: 01/03/2026 09:56:11 : epoch 69a40cf5 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1dac003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:56:11 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 04:56:12 np0005634532 python3.9[208452]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/etc/tmpfiles.d/ setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Mar  1 04:56:12 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v404: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Mar  1 04:56:12 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[179826]: 01/03/2026 09:56:12 : epoch 69a40cf5 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1dd000a3f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:56:12 np0005634532 python3.9[208631]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Mar  1 04:56:12 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[179826]: 01/03/2026 09:56:12 : epoch 69a40cf5 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Mar  1 04:56:12 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:56:12 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:56:12 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:56:12.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:56:12 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:56:12 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 04:56:12 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:56:12.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 04:56:13 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[179826]: 01/03/2026 09:56:13 : epoch 69a40cf5 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1dc4003f60 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:56:13 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[179826]: 01/03/2026 09:56:13 : epoch 69a40cf5 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1dc8002910 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:56:13 np0005634532 python3.9[208784]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Mar  1 04:56:13 np0005634532 python3.9[208937]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt/private setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Mar  1 04:56:14 np0005634532 python3.9[209092]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/CA setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Mar  1 04:56:14 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v405: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 597 B/s wr, 2 op/s
Mar  1 04:56:14 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[179826]: 01/03/2026 09:56:14 : epoch 69a40cf5 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1dac003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:56:14 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:56:14 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:56:14 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:56:14.905 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:56:14 np0005634532 python3.9[209245]: ansible-ansible.builtin.file Invoked with group=qemu owner=root path=/etc/pki/qemu setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Mar  1 04:56:14 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:56:14 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:56:14 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:56:14.983 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:56:15 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[179826]: 01/03/2026 09:56:15 : epoch 69a40cf5 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1dd000a3f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:56:15 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[179826]: 01/03/2026 09:56:15 : epoch 69a40cf5 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1dc4003f60 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:56:15 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[179826]: 01/03/2026 09:56:15 : epoch 69a40cf5 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Mar  1 04:56:15 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[179826]: 01/03/2026 09:56:15 : epoch 69a40cf5 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Mar  1 04:56:15 np0005634532 python3.9[209395]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Mar  1 04:56:16 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 04:56:16 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v406: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 597 B/s wr, 1 op/s
Mar  1 04:56:16 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[179826]: 01/03/2026 09:56:16 : epoch 69a40cf5 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1dc8002910 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:56:16 np0005634532 python3.9[209552]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtlogd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Mar  1 04:56:16 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:56:16 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000024s ======
Mar  1 04:56:16 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:56:16.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Mar  1 04:56:16 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:56:16 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000024s ======
Mar  1 04:56:16 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:56:16.985 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Mar  1 04:56:17 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T09:56:17.042Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Mar  1 04:56:17 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: ::ffff:192.168.122.100 - - [01/Mar/2026:09:56:17] "GET /metrics HTTP/1.1" 200 48340 "" "Prometheus/2.51.0"
Mar  1 04:56:17 np0005634532 ceph-mgr[76134]: [prometheus INFO cherrypy.access.140604152982112] ::ffff:192.168.122.100 - - [01/Mar/2026:09:56:17] "GET /metrics HTTP/1.1" 200 48340 "" "Prometheus/2.51.0"
Mar  1 04:56:17 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[179826]: 01/03/2026 09:56:17 : epoch 69a40cf5 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1dc8002910 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:56:17 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[179826]: 01/03/2026 09:56:17 : epoch 69a40cf5 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1da8000fc0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:56:17 np0005634532 ceph-mgr[76134]: [balancer INFO root] Optimize plan auto_2026-03-01_09:56:17
Mar  1 04:56:17 np0005634532 ceph-mgr[76134]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Mar  1 04:56:17 np0005634532 ceph-mgr[76134]: [balancer INFO root] do_upmap
Mar  1 04:56:17 np0005634532 ceph-mgr[76134]: [balancer INFO root] pools ['vms', 'default.rgw.meta', '.nfs', 'images', '.mgr', 'default.rgw.control', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'backups', 'volumes', '.rgw.root', 'default.rgw.log']
Mar  1 04:56:17 np0005634532 ceph-mgr[76134]: [balancer INFO root] prepared 0/10 upmap changes
Mar  1 04:56:17 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Mar  1 04:56:17 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
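mgr.compute-0 polls the monitor with an "osd blocklist ls" mon_command about every fifteen seconds (this audit dispatch repeats at 04:56:32 below). The same query from the CLI, as a hedged Python wrapper — the command prefix and JSON format string are exactly what the audit line shows being dispatched:

    import json
    import subprocess

    # Same query the mgr dispatches: {"prefix": "osd blocklist ls", "format": "json"}
    out = subprocess.run(
        ["ceph", "osd", "blocklist", "ls", "--format", "json"],
        capture_output=True, text=True, check=True,
    ).stdout
    print(json.loads(out) or "no blocklisted clients")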
Mar  1 04:56:17 np0005634532 python3.9[209678]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtlogd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1772358976.2183847-1641-10626454695445/.source.conf follow=False _original_basename=virtlogd.conf checksum=d7a72ae92c2c205983b029473e05a6aa4c58ec24 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:56:17 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Mar  1 04:56:17 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Mar  1 04:56:17 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Mar  1 04:56:17 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Mar  1 04:56:17 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Mar  1 04:56:17 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Mar  1 04:56:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] _maybe_adjust
Mar  1 04:56:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 04:56:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Mar  1 04:56:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 04:56:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Mar  1 04:56:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 04:56:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Mar  1 04:56:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 04:56:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Mar  1 04:56:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 04:56:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Mar  1 04:56:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 04:56:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Mar  1 04:56:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 04:56:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Mar  1 04:56:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 04:56:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Mar  1 04:56:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 04:56:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Mar  1 04:56:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 04:56:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Mar  1 04:56:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 04:56:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Mar  1 04:56:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 04:56:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
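The pg_autoscaler lines above fit a simple relation: each pool's raw pg target is its usage ratio times its bias times a constant that works out to 300 on this cluster ('.mgr': 7.185749983720779e-06 x 1.0 x 300 = 0.0021557249951162337; 'cephfs.cephfs.meta': 5.087256625643029e-07 x 4.0 x 300 = 0.0006104707950771635, both matching the log). A small Python check of that inference; reading 300 as target-PGs-per-OSD times the OSD count is a plausible interpretation but an assumption, and the final "quantized" value also depends on the pool's current pg_num and minimums, which this sketch does not model:

    # Relation inferred from the pg_autoscaler lines: pg_target = usage_ratio * bias * K,
    # where K = 300 reproduces the logged targets on this cluster.
    def pg_target(usage_ratio: float, bias: float, k: float = 300.0) -> float:
        return usage_ratio * bias * k

    # Two pools checked against the log:
    assert abs(pg_target(7.185749983720779e-06, 1.0) - 0.0021557249951162337) < 1e-12
    assert abs(pg_target(5.087256625643029e-07, 4.0) - 0.0006104707950771635) < 1e-12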
Mar  1 04:56:17 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Mar  1 04:56:17 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: vms, start_after=
Mar  1 04:56:17 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: volumes, start_after=
Mar  1 04:56:17 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: backups, start_after=
Mar  1 04:56:17 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: images, start_after=
Mar  1 04:56:17 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Mar  1 04:56:17 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: vms, start_after=
Mar  1 04:56:17 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: volumes, start_after=
Mar  1 04:56:17 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: backups, start_after=
Mar  1 04:56:17 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: images, start_after=
Mar  1 04:56:18 np0005634532 python3.9[209832]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtnodedevd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Mar  1 04:56:18 np0005634532 python3.9[209959]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtnodedevd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1772358977.6428633-1641-75639111809844/.source.conf follow=False _original_basename=virtnodedevd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:56:18 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v407: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Mar  1 04:56:18 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[179826]: 01/03/2026 09:56:18 : epoch 69a40cf5 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1dc4003f60 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:56:18 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[179826]: 01/03/2026 09:56:18 : epoch 69a40cf5 : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Mar  1 04:56:18 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:56:18 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 04:56:18 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:56:18.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 04:56:18 np0005634532 python3.9[210112]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtproxyd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Mar  1 04:56:18 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:56:18 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:56:18 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:56:18.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:56:19 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[179826]: 01/03/2026 09:56:19 : epoch 69a40cf5 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1dc4003f60 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:56:19 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[179826]: 01/03/2026 09:56:19 : epoch 69a40cf5 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1dc8002910 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:56:19 np0005634532 python3.9[210238]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtproxyd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1772358978.5756307-1641-120069304981662/.source.conf follow=False _original_basename=virtproxyd.conf checksum=28bc484b7c9988e03de49d4fcc0a088ea975f716 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:56:19 np0005634532 podman[210239]: 2026-03-01 09:56:19.507942413 +0000 UTC m=+0.069559682 container health_status eba5e7c5bf1cb0694498c3b5d0f9b901dec36f2482ec40aef98fed1e15d9f18d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:bf1f44b6bfa655654f556ae3adca2582892e72550d916a5b0a8dbacb0e210bbe, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '08b9467e1b7e95537191fb2fa6825d176e65d7d6be048e9a0925db064969bbde-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:bf1f44b6bfa655654f556ae3adca2582892e72550d916a5b0a8dbacb0e210bbe', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller)
Mar  1 04:56:19 np0005634532 python3.9[210418]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtqemud.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Mar  1 04:56:20 np0005634532 python3.9[210546]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtqemud.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1772358979.5393965-1641-133107888905868/.source.conf follow=False _original_basename=virtqemud.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:56:20 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v408: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Mar  1 04:56:20 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[179826]: 01/03/2026 09:56:20 : epoch 69a40cf5 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1da8001a90 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:56:20 np0005634532 python3.9[210699]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/qemu.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Mar  1 04:56:20 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:56:20 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:56:20 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:56:20.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:56:20 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:56:20 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 04:56:20 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:56:20.991 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 04:56:21 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[179826]: 01/03/2026 09:56:21 : epoch 69a40cf5 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1dac003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:56:21 np0005634532 python3.9[210825]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/qemu.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1772358980.4682484-1641-233395695546201/.source.conf follow=False _original_basename=qemu.conf.j2 checksum=c44de21af13c90603565570f09ff60c6a41ed8df backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:56:21 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[179826]: 01/03/2026 09:56:21 : epoch 69a40cf5 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1dac003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:56:21 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 04:56:21 np0005634532 python3.9[210980]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtsecretd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Mar  1 04:56:22 np0005634532 python3.9[211107]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtsecretd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1772358981.3893857-1641-181710827134546/.source.conf follow=False _original_basename=virtsecretd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:56:22 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v409: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 938 B/s wr, 3 op/s
Mar  1 04:56:22 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[179826]: 01/03/2026 09:56:22 : epoch 69a40cf5 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1dc4003f60 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:56:22 np0005634532 python3.9[211261]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/auth.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Mar  1 04:56:22 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:56:22 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000024s ======
Mar  1 04:56:22 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:56:22.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Mar  1 04:56:22 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:56:22 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:56:22 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:56:22.995 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:56:23 np0005634532 python3.9[211385]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/auth.conf group=libvirt mode=0600 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1772358982.325189-1641-117669030474638/.source.conf follow=False _original_basename=auth.conf checksum=a94cd818c374cec2c8425b70d2e0e2f41b743ae4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:56:23 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[179826]: 01/03/2026 09:56:23 : epoch 69a40cf5 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1dc8002910 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:56:23 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[179826]: 01/03/2026 09:56:23 : epoch 69a40cf5 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1da8001c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:56:23 np0005634532 python3.9[211538]: ansible-ansible.legacy.stat Invoked with path=/etc/sasl2/libvirt.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Mar  1 04:56:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:56:23.869 167541 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Mar  1 04:56:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:56:23.869 167541 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Mar  1 04:56:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:56:23.870 167541 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Mar  1 04:56:24 np0005634532 python3.9[211668]: ansible-ansible.legacy.copy Invoked with dest=/etc/sasl2/libvirt.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1772358983.2384474-1641-68138952048112/.source.conf follow=False _original_basename=sasl_libvirt.conf checksum=652e4d404bf79253d06956b8e9847c9364979d4a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:56:24 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v410: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 1023 B/s wr, 3 op/s
Mar  1 04:56:24 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[179826]: 01/03/2026 09:56:24 : epoch 69a40cf5 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1da8001c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:56:24 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:56:24 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 04:56:24 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:56:24.913 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 04:56:24 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-haproxy-nfs-cephfs-compute-0-wdbjdw[99156]: [WARNING] 059/095624 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Mar  1 04:56:24 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:56:24 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:56:25 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:56:24.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:56:25 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[179826]: 01/03/2026 09:56:25 : epoch 69a40cf5 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1dc4003f60 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:56:25 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[179826]: 01/03/2026 09:56:25 : epoch 69a40cf5 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1dc8002910 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:56:25 np0005634532 podman[211693]: 2026-03-01 09:56:25.373991032 +0000 UTC m=+0.069168292 container health_status 1df4dec302d8b40bce93714986cfc892519e8a31436399020d0034ae17ac64d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a5baae9e21dffd42c66f546b0b62f95c6af9f409d05e01e7483de42d5dd2a382, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '08b9467e1b7e95537191fb2fa6825d176e65d7d6be048e9a0925db064969bbde-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a5baae9e21dffd42c66f546b0b62f95c6af9f409d05e01e7483de42d5dd2a382', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Mar  1 04:56:26 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 04:56:26 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v411: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Mar  1 04:56:26 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[179826]: 01/03/2026 09:56:26 : epoch 69a40cf5 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1da8001c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:56:26 np0005634532 python3.9[211843]: ansible-ansible.legacy.command Invoked with cmd=saslpasswd2 -f /etc/libvirt/passwd.db -p -a libvirt -u openstack migration stdin=12345678 _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None
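The ansible-ansible.legacy.command task above seeds libvirt's SASL database for live migration: saslpasswd2 adds user 'migration' in realm 'openstack' to /etc/libvirt/passwd.db, reading the password from stdin (-p). A hedged Python equivalent of exactly that invocation — the argument list and stdin value, including the trailing newline implied by stdin_add_newline=True, are taken from the log:

    import subprocess

    # Mirrors the logged task:
    #   saslpasswd2 -f /etc/libvirt/passwd.db -p -a libvirt -u openstack migration
    password = "12345678"  # the log records stdin=12345678 for this task
    subprocess.run(
        ["saslpasswd2", "-f", "/etc/libvirt/passwd.db", "-p",
         "-a", "libvirt", "-u", "openstack", "migration"],
        input=password + "\n",  # stdin_add_newline=True
        text=True,
        check=True,
    )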
Mar  1 04:56:26 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:56:26 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:56:26 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:56:26.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:56:27 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:56:27 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 04:56:27 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:56:27.001 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 04:56:27 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T09:56:27.043Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Mar  1 04:56:27 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T09:56:27.044Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Mar  1 04:56:27 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T09:56:27.044Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
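The alertmanager dispatcher keeps failing to deliver the ceph-dashboard webhook: every POST to http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver and its compute-2 twin dials out and times out. A quick connectivity probe of that receiver from Python — the URL is copied verbatim from the log, while the minimal JSON body is an assumption, since the log does not show what alertmanager posts:

    import json
    import urllib.request

    # Probe the webhook receiver alertmanager cannot reach; URL taken from the log.
    url = "http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver"
    req = urllib.request.Request(
        url,
        data=json.dumps({"alerts": []}).encode(),  # placeholder body, not the real payload
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    try:
        with urllib.request.urlopen(req, timeout=5) as resp:
            print(resp.status)
    except OSError as exc:  # the dial timeouts logged above surface here
        print(f"receiver unreachable: {exc}")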
Mar  1 04:56:27 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: ::ffff:192.168.122.100 - - [01/Mar/2026:09:56:27] "GET /metrics HTTP/1.1" 200 48346 "" "Prometheus/2.51.0"
Mar  1 04:56:27 np0005634532 ceph-mgr[76134]: [prometheus INFO cherrypy.access.140604152982112] ::ffff:192.168.122.100 - - [01/Mar/2026:09:56:27] "GET /metrics HTTP/1.1" 200 48346 "" "Prometheus/2.51.0"
Mar  1 04:56:27 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[179826]: 01/03/2026 09:56:27 : epoch 69a40cf5 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1dac003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:56:27 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[179826]: 01/03/2026 09:56:27 : epoch 69a40cf5 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1dac003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:56:27 np0005634532 python3.9[211997]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:56:27 np0005634532 python3.9[212151]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:56:28 np0005634532 python3.9[212305]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:56:28 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v412: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Mar  1 04:56:28 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[179826]: 01/03/2026 09:56:28 : epoch 69a40cf5 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1dc8002910 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:56:28 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:56:28 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 04:56:28 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:56:28.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 04:56:29 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:56:29 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 04:56:29 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:56:29.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 04:56:29 np0005634532 python3.9[212458]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:56:29 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[179826]: 01/03/2026 09:56:29 : epoch 69a40cf5 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1da8002d30 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:56:29 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[179826]: 01/03/2026 09:56:29 : epoch 69a40cf5 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1dc4003f60 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:56:29 np0005634532 python3.9[212611]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:56:30 np0005634532 python3.9[212766]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:56:30 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v413: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 85 B/s wr, 0 op/s
Mar  1 04:56:30 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[179826]: 01/03/2026 09:56:30 : epoch 69a40cf5 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1dac003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:56:30 np0005634532 python3.9[212919]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:56:30 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:56:30 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:56:30 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:56:30.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:56:31 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:56:31 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:56:31 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:56:31.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:56:31 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[179826]: 01/03/2026 09:56:31 : epoch 69a40cf5 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1dc8002910 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:56:31 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[179826]: 01/03/2026 09:56:31 : epoch 69a40cf5 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1da8002d30 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:56:31 np0005634532 python3.9[213072]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:56:31 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 04:56:32 np0005634532 python3.9[213226]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:56:32 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Mar  1 04:56:32 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Mar  1 04:56:32 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v414: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 85 B/s wr, 0 op/s
Mar  1 04:56:32 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[179826]: 01/03/2026 09:56:32 : epoch 69a40cf5 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1dc4003f60 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:56:32 np0005634532 python3.9[213380]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:56:32 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:56:32 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 04:56:32 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:56:32.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 04:56:33 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:56:33 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 04:56:33 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:56:33.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 04:56:33 np0005634532 python3.9[213558]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:56:33 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[179826]: 01/03/2026 09:56:33 : epoch 69a40cf5 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1dac003c10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:56:33 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[179826]: 01/03/2026 09:56:33 : epoch 69a40cf5 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1dc8004580 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:56:34 np0005634532 python3.9[213712]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:56:34 np0005634532 python3.9[213866]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:56:34 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v415: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Mar  1 04:56:34 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[179826]: 01/03/2026 09:56:34 : epoch 69a40cf5 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1da8002d30 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:56:34 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:56:34 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000024s ======
Mar  1 04:56:34 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:56:34.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Mar  1 04:56:35 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:56:35 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:56:35 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:56:35.013 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:56:35 np0005634532 python3.9[214019]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:56:35 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[179826]: 01/03/2026 09:56:35 : epoch 69a40cf5 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1dc4003f60 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:56:35 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[179826]: 01/03/2026 09:56:35 : epoch 69a40cf5 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1dac003db0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:56:35 np0005634532 python3.9[214173]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Mar  1 04:56:36 np0005634532 python3.9[214298]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1772358995.5531216-2304-249300707009089/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:56:36 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 04:56:36 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v416: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Mar  1 04:56:36 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[179826]: 01/03/2026 09:56:36 : epoch 69a40cf5 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1dc8004580 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:56:36 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:56:36 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:56:36 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:56:36.926 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:56:37 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:56:37 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 04:56:37 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:56:37.015 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 04:56:37 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T09:56:37.045Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Mar  1 04:56:37 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: ::ffff:192.168.122.100 - - [01/Mar/2026:09:56:37] "GET /metrics HTTP/1.1" 200 48342 "" "Prometheus/2.51.0"
Mar  1 04:56:37 np0005634532 ceph-mgr[76134]: [prometheus INFO cherrypy.access.140604152982112] ::ffff:192.168.122.100 - - [01/Mar/2026:09:56:37] "GET /metrics HTTP/1.1" 200 48342 "" "Prometheus/2.51.0"
Mar  1 04:56:37 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[179826]: 01/03/2026 09:56:37 : epoch 69a40cf5 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1da8003dd0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:56:37 np0005634532 python3.9[214451]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Mar  1 04:56:37 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[179826]: 01/03/2026 09:56:37 : epoch 69a40cf5 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1da8003dd0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:56:37 np0005634532 python3.9[214575]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1772358996.6003346-2304-254572275233938/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:56:38 np0005634532 python3.9[214729]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Mar  1 04:56:38 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v417: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Mar  1 04:56:38 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[179826]: 01/03/2026 09:56:38 : epoch 69a40cf5 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1dac003dd0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:56:38 np0005634532 python3.9[214854]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1772358997.8872683-2304-151608222832383/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:56:38 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:56:38 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:56:38 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:56:38.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:56:39 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:56:39 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000024s ======
Mar  1 04:56:39 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:56:39.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Mar  1 04:56:39 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[179826]: 01/03/2026 09:56:39 : epoch 69a40cf5 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1dc8004580 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:56:39 np0005634532 python3.9[215007]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Mar  1 04:56:39 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[179826]: 01/03/2026 09:56:39 : epoch 69a40cf5 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1dc4003f60 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:56:39 np0005634532 python3.9[215131]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1772358998.8920314-2304-22411561667297/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:56:40 np0005634532 python3.9[215286]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Mar  1 04:56:40 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v418: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Mar  1 04:56:40 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[179826]: 01/03/2026 09:56:40 : epoch 69a40cf5 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1da8003dd0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:56:40 np0005634532 python3.9[215410]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1772358999.890664-2304-196693429955578/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:56:40 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:56:40 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:56:40 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:56:40.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:56:41 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:56:41 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000024s ======
Mar  1 04:56:41 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:56:41.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Mar  1 04:56:41 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[179826]: 01/03/2026 09:56:41 : epoch 69a40cf5 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1dac003df0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:56:41 np0005634532 python3.9[215563]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Mar  1 04:56:41 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[179826]: 01/03/2026 09:56:41 : epoch 69a40cf5 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1dc8004580 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:56:41 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 04:56:41 np0005634532 python3.9[215687]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1772359000.9151597-2304-191886964379917/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:56:42 np0005634532 python3.9[215842]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Mar  1 04:56:42 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v419: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Mar  1 04:56:42 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[179826]: 01/03/2026 09:56:42 : epoch 69a40cf5 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1dc4003f60 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:56:42 np0005634532 python3.9[215966]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1772359001.9751284-2304-192521458121875/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:56:42 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:56:42 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:56:42 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:56:42.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:56:43 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:56:43 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000024s ======
Mar  1 04:56:43 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:56:43.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Mar  1 04:56:43 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[179826]: 01/03/2026 09:56:43 : epoch 69a40cf5 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1da8003dd0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:56:43 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[179826]: 01/03/2026 09:56:43 : epoch 69a40cf5 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1dac003e10 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:56:43 np0005634532 python3.9[216119]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Mar  1 04:56:43 np0005634532 python3.9[216243]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1772359003.0103226-2304-251840289015235/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:56:44 np0005634532 python3.9[216398]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Mar  1 04:56:44 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v420: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Mar  1 04:56:44 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[179826]: 01/03/2026 09:56:44 : epoch 69a40cf5 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1dc8004580 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:56:44 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:56:44 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:56:44 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:56:44.931 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:56:45 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:56:45 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:56:45 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:56:45.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:56:45 np0005634532 python3.9[216522]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1772359004.0191114-2304-169718082090538/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:56:45 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[179826]: 01/03/2026 09:56:45 : epoch 69a40cf5 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1dc4003f60 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:56:45 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[179826]: 01/03/2026 09:56:45 : epoch 69a40cf5 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1da8003dd0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:56:45 np0005634532 python3.9[216675]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Mar  1 04:56:46 np0005634532 python3.9[216800]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1772359005.2342644-2304-83142648980661/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:56:46 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 04:56:46 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v421: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Mar  1 04:56:46 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[179826]: 01/03/2026 09:56:46 : epoch 69a40cf5 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1dac003e30 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:56:46 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:56:46 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000024s ======
Mar  1 04:56:46 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:56:46.933 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Mar  1 04:56:46 np0005634532 python3.9[216954]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Mar  1 04:56:47 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:56:47 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000024s ======
Mar  1 04:56:47 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:56:47.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Mar  1 04:56:47 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T09:56:47.046Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Mar  1 04:56:47 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T09:56:47.046Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Mar  1 04:56:47 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T09:56:47.046Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Mar  1 04:56:47 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: ::ffff:192.168.122.100 - - [01/Mar/2026:09:56:47] "GET /metrics HTTP/1.1" 200 48342 "" "Prometheus/2.51.0"
Mar  1 04:56:47 np0005634532 ceph-mgr[76134]: [prometheus INFO cherrypy.access.140604152982112] ::ffff:192.168.122.100 - - [01/Mar/2026:09:56:47] "GET /metrics HTTP/1.1" 200 48342 "" "Prometheus/2.51.0"
Mar  1 04:56:47 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[179826]: 01/03/2026 09:56:47 : epoch 69a40cf5 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1dc8004580 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:56:47 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[179826]: 01/03/2026 09:56:47 : epoch 69a40cf5 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1dc4003f60 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:56:47 np0005634532 python3.9[217078]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1772359006.5321343-2304-110112039614025/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:56:47 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Mar  1 04:56:47 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Mar  1 04:56:47 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Mar  1 04:56:47 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Mar  1 04:56:47 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Mar  1 04:56:47 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Mar  1 04:56:47 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Mar  1 04:56:47 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Mar  1 04:56:47 np0005634532 python3.9[217231]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Mar  1 04:56:48 np0005634532 python3.9[217357]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1772359007.590376-2304-248210703685810/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:56:48 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v422: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Mar  1 04:56:48 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[179826]: 01/03/2026 09:56:48 : epoch 69a40cf5 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1da8003dd0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:56:48 np0005634532 python3.9[217512]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Mar  1 04:56:48 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:56:48 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:56:48 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:56:48.935 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:56:49 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:56:49 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:56:49 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:56:49.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:56:49 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[179826]: 01/03/2026 09:56:49 : epoch 69a40cf5 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1dac003e50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:56:49 np0005634532 python3.9[217636]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1772359008.5366585-2304-88945005498945/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:56:49 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[179826]: 01/03/2026 09:56:49 : epoch 69a40cf5 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1dac003e50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:56:49 np0005634532 podman[217760]: 2026-03-01 09:56:49.78087135 +0000 UTC m=+0.107335354 container health_status eba5e7c5bf1cb0694498c3b5d0f9b901dec36f2482ec40aef98fed1e15d9f18d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:bf1f44b6bfa655654f556ae3adca2582892e72550d916a5b0a8dbacb0e210bbe, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '08b9467e1b7e95537191fb2fa6825d176e65d7d6be048e9a0925db064969bbde-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:bf1f44b6bfa655654f556ae3adca2582892e72550d916a5b0a8dbacb0e210bbe', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller)
Mar  1 04:56:49 np0005634532 python3.9[217806]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Mar  1 04:56:50 np0005634532 python3.9[217942]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1772359009.4926088-2304-48010778445723/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:56:50 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v423: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Mar  1 04:56:50 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[179826]: 01/03/2026 09:56:50 : epoch 69a40cf5 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1dc4003f60 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:56:50 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:56:50 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000024s ======
Mar  1 04:56:50 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:56:50.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Mar  1 04:56:51 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:56:51 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000024s ======
Mar  1 04:56:51 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:56:51.037 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Mar  1 04:56:51 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[179826]: 01/03/2026 09:56:51 : epoch 69a40cf5 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1da8003dd0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:56:51 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[179826]: 01/03/2026 09:56:51 : epoch 69a40cf5 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1dd0001320 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:56:51 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 04:56:52 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v424: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Mar  1 04:56:52 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[179826]: 01/03/2026 09:56:52 : epoch 69a40cf5 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1dac003e50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:56:52 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:56:52 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 04:56:52 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:56:52.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 04:56:53 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:56:53 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:56:53 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:56:53.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:56:53 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[179826]: 01/03/2026 09:56:53 : epoch 69a40cf5 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1dac003e50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:56:53 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[179826]: 01/03/2026 09:56:53 : epoch 69a40cf5 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1da8003dd0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:56:53 np0005634532 python3.9[218119]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail#012ls -lRZ /run/libvirt | grep -E ':container_\S+_t'#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Mar  1 04:56:54 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v425: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Mar  1 04:56:54 np0005634532 python3.9[218277]: ansible-ansible.posix.seboolean Invoked with name=os_enable_vtpm persistent=True state=True ignore_selinux_state=False
Mar  1 04:56:54 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[179826]: 01/03/2026 09:56:54 : epoch 69a40cf5 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1dd0001320 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:56:54 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:56:54 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 04:56:54 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:56:54.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 04:56:55 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:56:55 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:56:55 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:56:55.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:56:55 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[179826]: 01/03/2026 09:56:55 : epoch 69a40cf5 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1dd0001320 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:56:55 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[179826]: 01/03/2026 09:56:55 : epoch 69a40cf5 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1dac003e50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:56:55 np0005634532 dbus-broker-launch[823]: avc:  op=load_policy lsm=selinux seqno=13 res=1
Mar  1 04:56:55 np0005634532 podman[218283]: 2026-03-01 09:56:55.927598376 +0000 UTC m=+0.046475578 container health_status 1df4dec302d8b40bce93714986cfc892519e8a31436399020d0034ae17ac64d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a5baae9e21dffd42c66f546b0b62f95c6af9f409d05e01e7483de42d5dd2a382, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.build-date=20260223, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '08b9467e1b7e95537191fb2fa6825d176e65d7d6be048e9a0925db064969bbde-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a5baae9e21dffd42c66f546b0b62f95c6af9f409d05e01e7483de42d5dd2a382', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Mar  1 04:56:56 np0005634532 python3.9[218455]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/servercert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:56:56 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 04:56:56 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v426: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Mar  1 04:56:56 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[179826]: 01/03/2026 09:56:56 : epoch 69a40cf5 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1da8003dd0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:56:56 np0005634532 python3.9[218608]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/serverkey.pem group=root mode=0600 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:56:56 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:56:56 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 04:56:56 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:56:56.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 04:56:57 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T09:56:57.048Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Mar  1 04:56:57 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:56:57 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000024s ======
Mar  1 04:56:57 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:56:57.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Mar  1 04:56:57 np0005634532 ceph-mgr[76134]: [prometheus INFO cherrypy.access.140604152982112] ::ffff:192.168.122.100 - - [01/Mar/2026:09:56:57] "GET /metrics HTTP/1.1" 200 48344 "" "Prometheus/2.51.0"
Mar  1 04:56:57 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: ::ffff:192.168.122.100 - - [01/Mar/2026:09:56:57] "GET /metrics HTTP/1.1" 200 48344 "" "Prometheus/2.51.0"
Mar  1 04:56:57 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[179826]: 01/03/2026 09:56:57 : epoch 69a40cf5 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1dd0001320 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:56:57 np0005634532 python3.9[218761]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/clientcert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:56:57 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[179826]: 01/03/2026 09:56:57 : epoch 69a40cf5 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1dc4003f60 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:56:57 np0005634532 python3.9[218914]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/clientkey.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:56:58 np0005634532 python3.9[219115]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/CA/cacert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:56:58 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v427: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Mar  1 04:56:58 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[179826]: 01/03/2026 09:56:58 : epoch 69a40cf5 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1dac003e50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:56:58 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Mar  1 04:56:58 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Mar  1 04:56:58 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Mar  1 04:56:58 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Mar  1 04:56:58 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Mar  1 04:56:58 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 04:56:58 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Mar  1 04:56:58 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 04:56:58 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Mar  1 04:56:58 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Mar  1 04:56:58 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Mar  1 04:56:58 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Mar  1 04:56:58 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Mar  1 04:56:58 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Mar  1 04:56:58 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:56:58 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:56:58 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:56:58.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:56:59 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:56:59 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:56:59 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:56:59.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:56:59 np0005634532 podman[219341]: 2026-03-01 09:56:59.19396713 +0000 UTC m=+0.040310709 container create e6ff579c0b1fed0dfb2b0d6fe41e991fff7f96b80a1144926c3672ac577e88d9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_hoover, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Mar  1 04:56:59 np0005634532 systemd[1]: Started libpod-conmon-e6ff579c0b1fed0dfb2b0d6fe41e991fff7f96b80a1144926c3672ac577e88d9.scope.
Mar  1 04:56:59 np0005634532 systemd[1]: Started libcrun container.
Mar  1 04:56:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[179826]: 01/03/2026 09:56:59 : epoch 69a40cf5 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1dac003e50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:56:59 np0005634532 podman[219341]: 2026-03-01 09:56:59.257336157 +0000 UTC m=+0.103679756 container init e6ff579c0b1fed0dfb2b0d6fe41e991fff7f96b80a1144926c3672ac577e88d9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_hoover, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Mar  1 04:56:59 np0005634532 podman[219341]: 2026-03-01 09:56:59.263072456 +0000 UTC m=+0.109416035 container start e6ff579c0b1fed0dfb2b0d6fe41e991fff7f96b80a1144926c3672ac577e88d9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_hoover, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Mar  1 04:56:59 np0005634532 podman[219341]: 2026-03-01 09:56:59.266431317 +0000 UTC m=+0.112774916 container attach e6ff579c0b1fed0dfb2b0d6fe41e991fff7f96b80a1144926c3672ac577e88d9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_hoover, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Mar  1 04:56:59 np0005634532 laughing_hoover[219391]: 167 167
Mar  1 04:56:59 np0005634532 systemd[1]: libpod-e6ff579c0b1fed0dfb2b0d6fe41e991fff7f96b80a1144926c3672ac577e88d9.scope: Deactivated successfully.
Mar  1 04:56:59 np0005634532 podman[219341]: 2026-03-01 09:56:59.267635246 +0000 UTC m=+0.113978825 container died e6ff579c0b1fed0dfb2b0d6fe41e991fff7f96b80a1144926c3672ac577e88d9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_hoover, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Mar  1 04:56:59 np0005634532 podman[219341]: 2026-03-01 09:56:59.175561413 +0000 UTC m=+0.021905012 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Mar  1 04:56:59 np0005634532 systemd[1]: var-lib-containers-storage-overlay-7ac7f7918d0c52298d2d3375e5b34fb9c85dc4b7b4513fb7089e00dabfddd615-merged.mount: Deactivated successfully.
Mar  1 04:56:59 np0005634532 podman[219341]: 2026-03-01 09:56:59.30778076 +0000 UTC m=+0.154124339 container remove e6ff579c0b1fed0dfb2b0d6fe41e991fff7f96b80a1144926c3672ac577e88d9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=laughing_hoover, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Mar  1 04:56:59 np0005634532 systemd[1]: libpod-conmon-e6ff579c0b1fed0dfb2b0d6fe41e991fff7f96b80a1144926c3672ac577e88d9.scope: Deactivated successfully.
Mar  1 04:56:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[179826]: 01/03/2026 09:56:59 : epoch 69a40cf5 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1dd00098c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:56:59 np0005634532 podman[219435]: 2026-03-01 09:56:59.424313336 +0000 UTC m=+0.042672866 container create a41e961a721667d77961c3d4a1389ad91948298ecc2ae0f50e038f9578ea6480 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_bose, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Mar  1 04:56:59 np0005634532 python3.9[219415]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:56:59 np0005634532 systemd[1]: Started libpod-conmon-a41e961a721667d77961c3d4a1389ad91948298ecc2ae0f50e038f9578ea6480.scope.
Mar  1 04:56:59 np0005634532 systemd[1]: Started libcrun container.
Mar  1 04:56:59 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90c32b7394184a099ba970b818905022ab60d9c3af374038d58820b72d7058f2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Mar  1 04:56:59 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90c32b7394184a099ba970b818905022ab60d9c3af374038d58820b72d7058f2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Mar  1 04:56:59 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90c32b7394184a099ba970b818905022ab60d9c3af374038d58820b72d7058f2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Mar  1 04:56:59 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90c32b7394184a099ba970b818905022ab60d9c3af374038d58820b72d7058f2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Mar  1 04:56:59 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90c32b7394184a099ba970b818905022ab60d9c3af374038d58820b72d7058f2/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Mar  1 04:56:59 np0005634532 podman[219435]: 2026-03-01 09:56:59.402779984 +0000 UTC m=+0.021139544 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Mar  1 04:56:59 np0005634532 podman[219435]: 2026-03-01 09:56:59.504637174 +0000 UTC m=+0.122996724 container init a41e961a721667d77961c3d4a1389ad91948298ecc2ae0f50e038f9578ea6480 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_bose, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Mar  1 04:56:59 np0005634532 podman[219435]: 2026-03-01 09:56:59.511653374 +0000 UTC m=+0.130012904 container start a41e961a721667d77961c3d4a1389ad91948298ecc2ae0f50e038f9578ea6480 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_bose, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Mar  1 04:56:59 np0005634532 podman[219435]: 2026-03-01 09:56:59.51850695 +0000 UTC m=+0.136866480 container attach a41e961a721667d77961c3d4a1389ad91948298ecc2ae0f50e038f9578ea6480 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_bose, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Mar  1 04:56:59 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Mar  1 04:56:59 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 04:56:59 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 04:56:59 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Mar  1 04:56:59 np0005634532 happy_bose[219451]: --> passed data devices: 0 physical, 1 LVM
Mar  1 04:56:59 np0005634532 happy_bose[219451]: --> All data devices are unavailable
Mar  1 04:56:59 np0005634532 systemd[1]: libpod-a41e961a721667d77961c3d4a1389ad91948298ecc2ae0f50e038f9578ea6480.scope: Deactivated successfully.
Mar  1 04:56:59 np0005634532 podman[219435]: 2026-03-01 09:56:59.797135158 +0000 UTC m=+0.415494688 container died a41e961a721667d77961c3d4a1389ad91948298ecc2ae0f50e038f9578ea6480 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_bose, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Mar  1 04:56:59 np0005634532 systemd[1]: var-lib-containers-storage-overlay-90c32b7394184a099ba970b818905022ab60d9c3af374038d58820b72d7058f2-merged.mount: Deactivated successfully.
Mar  1 04:56:59 np0005634532 podman[219435]: 2026-03-01 09:56:59.8306261 +0000 UTC m=+0.448985630 container remove a41e961a721667d77961c3d4a1389ad91948298ecc2ae0f50e038f9578ea6480 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=happy_bose, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Mar  1 04:56:59 np0005634532 systemd[1]: libpod-conmon-a41e961a721667d77961c3d4a1389ad91948298ecc2ae0f50e038f9578ea6480.scope: Deactivated successfully.
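The happy_bose records above trace the event sequence podman logs for a short-lived one-shot container: image pull, container init, start, attach, died, the overlay unmount, and remove, with the libpod and conmon scopes deactivating around it; the container's own output (a ceph-volume device scan) reports its single LVM data device as already consumed. A minimal sketch for grouping such events by container ID, assuming podman is on PATH and that its JSON events carry Type/ID/Status fields as in podman 4.x; the five-minute window is illustrative.

# Sketch: reconstruct one-shot container lifecycles (like happy_bose
# above) from recent podman events. --stream=false exits after
# printing; --format json emits one JSON object per line.
import json
import subprocess
from collections import defaultdict

def recent_lifecycles(since="5m"):
    out = subprocess.run(
        ["podman", "events", "--since", since, "--stream=false",
         "--format", "json"],
        capture_output=True, text=True, check=True,
    ).stdout
    by_id = defaultdict(list)
    for line in out.splitlines():
        ev = json.loads(line)
        if ev.get("Type") == "container":   # skip image pull events
            by_id[ev["ID"]].append(ev.get("Status"))
    return dict(by_id)

if __name__ == "__main__":
    for cid, statuses in recent_lifecycles().items():
        print(cid[:12], "->", " ".join(filter(None, statuses)))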
Mar  1 04:57:00 np0005634532 python3.9[219652]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
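The ansible-ansible.legacy.copy invocation above copies a file already present on the node (remote_src=True) into /etc/pki/qemu with root:qemu ownership and mode 0640. A stand-alone sketch of the same effect, assuming it runs as root; the install_key helper name is illustrative, not part of the play.

# Sketch: what the copy task above effects on the node. Must run as
# root to chown the result to root:qemu.
import grp
import os
import pwd
import shutil

def install_key(src, dest, owner="root", group="qemu", mode=0o640):
    shutil.copyfile(src, dest)   # remote_src=True: a node-local copy
    os.chown(dest, pwd.getpwnam(owner).pw_uid, grp.getgrnam(group).gr_gid)
    os.chmod(dest, mode)

install_key("/var/lib/openstack/certs/libvirt/default/tls.key",
            "/etc/pki/qemu/server-key.pem")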
Mar  1 04:57:00 np0005634532 podman[219798]: 2026-03-01 09:57:00.307168647 +0000 UTC m=+0.037948742 container create aba5bfa820037f5186d03dac10053b8ee923b112d2aed860d958413d62fa387f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_galois, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Mar  1 04:57:00 np0005634532 systemd[1]: Started libpod-conmon-aba5bfa820037f5186d03dac10053b8ee923b112d2aed860d958413d62fa387f.scope.
Mar  1 04:57:00 np0005634532 systemd[1]: Started libcrun container.
Mar  1 04:57:00 np0005634532 podman[219798]: 2026-03-01 09:57:00.380216348 +0000 UTC m=+0.110996463 container init aba5bfa820037f5186d03dac10053b8ee923b112d2aed860d958413d62fa387f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_galois, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Mar  1 04:57:00 np0005634532 podman[219798]: 2026-03-01 09:57:00.289427226 +0000 UTC m=+0.020207341 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Mar  1 04:57:00 np0005634532 podman[219798]: 2026-03-01 09:57:00.388694804 +0000 UTC m=+0.119474899 container start aba5bfa820037f5186d03dac10053b8ee923b112d2aed860d958413d62fa387f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_galois, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Mar  1 04:57:00 np0005634532 podman[219798]: 2026-03-01 09:57:00.39225155 +0000 UTC m=+0.123031645 container attach aba5bfa820037f5186d03dac10053b8ee923b112d2aed860d958413d62fa387f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_galois, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Mar  1 04:57:00 np0005634532 jolly_galois[219858]: 167 167
Mar  1 04:57:00 np0005634532 systemd[1]: libpod-aba5bfa820037f5186d03dac10053b8ee923b112d2aed860d958413d62fa387f.scope: Deactivated successfully.
Mar  1 04:57:00 np0005634532 podman[219798]: 2026-03-01 09:57:00.395811866 +0000 UTC m=+0.126591981 container died aba5bfa820037f5186d03dac10053b8ee923b112d2aed860d958413d62fa387f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_galois, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Mar  1 04:57:00 np0005634532 systemd[1]: var-lib-containers-storage-overlay-0816526f715adf3db9014a4a5e308c487c6af7ad87852d86e5e9a875fe90d2a2-merged.mount: Deactivated successfully.
Mar  1 04:57:00 np0005634532 podman[219798]: 2026-03-01 09:57:00.437374304 +0000 UTC m=+0.168154389 container remove aba5bfa820037f5186d03dac10053b8ee923b112d2aed860d958413d62fa387f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_galois, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Mar  1 04:57:00 np0005634532 systemd[1]: libpod-conmon-aba5bfa820037f5186d03dac10053b8ee923b112d2aed860d958413d62fa387f.scope: Deactivated successfully.
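The single line of output from jolly_galois above, "167 167", is consistent with a uid/gid probe of the ceph image (the ceph user and group are numeric id 167 in these images), run before files are created on the host. The stat command below is an assumed way to produce that output, not something the log itself confirms; the image digest is the one pulled above.

# Sketch: ask the ceph image which uid/gid owns /var/lib/ceph, one way
# a "167 167" line like the above could be produced.
import subprocess

IMAGE = ("quay.io/ceph/ceph@sha256:"
         "7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec")

out = subprocess.run(
    ["podman", "run", "--rm", IMAGE, "stat", "-c", "%u %g", "/var/lib/ceph"],
    capture_output=True, text=True, check=True,
).stdout.strip()
uid, gid = map(int, out.split())
print(uid, gid)   # expected: 167 167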
Mar  1 04:57:00 np0005634532 python3.9[219910]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:57:00 np0005634532 podman[219918]: 2026-03-01 09:57:00.610132474 +0000 UTC m=+0.083294451 container create 202bbdea48aad579555121094562219be1bb9273e4fcabf3d233f0fcc9e980b9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_banach, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Mar  1 04:57:00 np0005634532 podman[219918]: 2026-03-01 09:57:00.590478098 +0000 UTC m=+0.063640085 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Mar  1 04:57:00 np0005634532 systemd[1]: Started libpod-conmon-202bbdea48aad579555121094562219be1bb9273e4fcabf3d233f0fcc9e980b9.scope.
Mar  1 04:57:00 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v428: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Mar  1 04:57:00 np0005634532 systemd[1]: Started libcrun container.
Mar  1 04:57:00 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d596d7bbeaa943503fca2d905befc2842a81e152dcaf724787313c15891f10d6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Mar  1 04:57:00 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d596d7bbeaa943503fca2d905befc2842a81e152dcaf724787313c15891f10d6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Mar  1 04:57:00 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d596d7bbeaa943503fca2d905befc2842a81e152dcaf724787313c15891f10d6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Mar  1 04:57:00 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d596d7bbeaa943503fca2d905befc2842a81e152dcaf724787313c15891f10d6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Mar  1 04:57:00 np0005634532 podman[219918]: 2026-03-01 09:57:00.682892699 +0000 UTC m=+0.156054656 container init 202bbdea48aad579555121094562219be1bb9273e4fcabf3d233f0fcc9e980b9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_banach, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Mar  1 04:57:00 np0005634532 podman[219918]: 2026-03-01 09:57:00.690694278 +0000 UTC m=+0.163856235 container start 202bbdea48aad579555121094562219be1bb9273e4fcabf3d233f0fcc9e980b9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_banach, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Mar  1 04:57:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[179826]: 01/03/2026 09:57:00 : epoch 69a40cf5 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1dc4003f60 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:57:00 np0005634532 podman[219918]: 2026-03-01 09:57:00.695112165 +0000 UTC m=+0.168274132 container attach 202bbdea48aad579555121094562219be1bb9273e4fcabf3d233f0fcc9e980b9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_banach, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid)
Mar  1 04:57:00 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:57:00 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:57:00 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:57:00.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
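The beast lines are radosgw's access log; the anonymous "HEAD / HTTP/1.0" requests arriving every couple of seconds from 192.168.122.100 and 192.168.122.102 look like load-balancer health checks. A small sketch that pulls the fields out of one such line; the regex is fitted to the format shown here, an assumption rather than an official grammar.

# Sketch: parse a beast access-log line of the shape seen above.
import re

LINE = ('beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous '
        '[01/Mar/2026:09:57:00.946 +0000] "HEAD / HTTP/1.0" 200 0 '
        '- - - latency=0.000000000s')

PAT = re.compile(
    r'beast: \S+: (?P<client>\S+) - (?P<user>\S+) '
    r'\[(?P<ts>[^\]]+)\] "(?P<req>[^"]+)" (?P<status>\d+) (?P<bytes>\d+) '
    r'.* latency=(?P<latency>[\d.]+)s'
)

m = PAT.match(LINE)
print(m.group("client"), m.group("req"), m.group("status"),
      m.group("latency"))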
Mar  1 04:57:00 np0005634532 naughty_banach[219943]: {
Mar  1 04:57:00 np0005634532 naughty_banach[219943]:    "0": [
Mar  1 04:57:00 np0005634532 naughty_banach[219943]:        {
Mar  1 04:57:00 np0005634532 naughty_banach[219943]:            "devices": [
Mar  1 04:57:00 np0005634532 naughty_banach[219943]:                "/dev/loop3"
Mar  1 04:57:00 np0005634532 naughty_banach[219943]:            ],
Mar  1 04:57:00 np0005634532 naughty_banach[219943]:            "lv_name": "ceph_lv0",
Mar  1 04:57:00 np0005634532 naughty_banach[219943]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Mar  1 04:57:00 np0005634532 naughty_banach[219943]:            "lv_size": "21470642176",
Mar  1 04:57:00 np0005634532 naughty_banach[219943]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=0RN1y3-WDwf-vmRn-3Uec-9cdU-WGcX-8Z6LEG,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=437b1e74-f995-5d64-af1d-257ce01d77ab,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=e5da778e-73b7-4ea1-8a91-750fe3f6aa68,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Mar  1 04:57:00 np0005634532 naughty_banach[219943]:            "lv_uuid": "0RN1y3-WDwf-vmRn-3Uec-9cdU-WGcX-8Z6LEG",
Mar  1 04:57:00 np0005634532 naughty_banach[219943]:            "name": "ceph_lv0",
Mar  1 04:57:00 np0005634532 naughty_banach[219943]:            "path": "/dev/ceph_vg0/ceph_lv0",
Mar  1 04:57:00 np0005634532 naughty_banach[219943]:            "tags": {
Mar  1 04:57:00 np0005634532 naughty_banach[219943]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Mar  1 04:57:00 np0005634532 naughty_banach[219943]:                "ceph.block_uuid": "0RN1y3-WDwf-vmRn-3Uec-9cdU-WGcX-8Z6LEG",
Mar  1 04:57:00 np0005634532 naughty_banach[219943]:                "ceph.cephx_lockbox_secret": "",
Mar  1 04:57:00 np0005634532 naughty_banach[219943]:                "ceph.cluster_fsid": "437b1e74-f995-5d64-af1d-257ce01d77ab",
Mar  1 04:57:00 np0005634532 naughty_banach[219943]:                "ceph.cluster_name": "ceph",
Mar  1 04:57:00 np0005634532 naughty_banach[219943]:                "ceph.crush_device_class": "",
Mar  1 04:57:00 np0005634532 naughty_banach[219943]:                "ceph.encrypted": "0",
Mar  1 04:57:00 np0005634532 naughty_banach[219943]:                "ceph.osd_fsid": "e5da778e-73b7-4ea1-8a91-750fe3f6aa68",
Mar  1 04:57:00 np0005634532 naughty_banach[219943]:                "ceph.osd_id": "0",
Mar  1 04:57:00 np0005634532 naughty_banach[219943]:                "ceph.osdspec_affinity": "default_drive_group",
Mar  1 04:57:00 np0005634532 naughty_banach[219943]:                "ceph.type": "block",
Mar  1 04:57:00 np0005634532 naughty_banach[219943]:                "ceph.vdo": "0",
Mar  1 04:57:00 np0005634532 naughty_banach[219943]:                "ceph.with_tpm": "0"
Mar  1 04:57:00 np0005634532 naughty_banach[219943]:            },
Mar  1 04:57:00 np0005634532 naughty_banach[219943]:            "type": "block",
Mar  1 04:57:00 np0005634532 naughty_banach[219943]:            "vg_name": "ceph_vg0"
Mar  1 04:57:00 np0005634532 naughty_banach[219943]:        }
Mar  1 04:57:00 np0005634532 naughty_banach[219943]:    ]
Mar  1 04:57:00 np0005634532 naughty_banach[219943]: }
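The JSON printed by naughty_banach has the shape of ceph-volume lvm list output in JSON format: a map from OSD id to the logical volumes backing it, with the ceph.* LV tags repeated in parsed form under "tags". A short sketch that extracts the essentials, assuming the output has been captured to a string; the raw variable here holds a trimmed copy of the block above.

# Sketch: summarize ceph-volume lvm list JSON like the block above.
import json

raw = """{
  "0": [
    {"devices": ["/dev/loop3"],
     "lv_path": "/dev/ceph_vg0/ceph_lv0",
     "lv_size": "21470642176",
     "tags": {"ceph.osd_id": "0",
              "ceph.osd_fsid": "e5da778e-73b7-4ea1-8a91-750fe3f6aa68",
              "ceph.type": "block"},
     "vg_name": "ceph_vg0"}
  ]
}"""

for osd_id, lvs in json.loads(raw).items():
    for lv in lvs:
        size_gib = int(lv["lv_size"]) / 2**30
        print(f"osd.{osd_id}: {lv['lv_path']} "
              f"({size_gib:.1f} GiB on {','.join(lv['devices'])}, "
              f"type={lv['tags']['ceph.type']})")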
Mar  1 04:57:01 np0005634532 systemd[1]: libpod-202bbdea48aad579555121094562219be1bb9273e4fcabf3d233f0fcc9e980b9.scope: Deactivated successfully.
Mar  1 04:57:01 np0005634532 podman[219918]: 2026-03-01 09:57:01.035716875 +0000 UTC m=+0.508878842 container died 202bbdea48aad579555121094562219be1bb9273e4fcabf3d233f0fcc9e980b9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_banach, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Mar  1 04:57:01 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:57:01 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:57:01 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:57:01.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:57:01 np0005634532 systemd[1]: var-lib-containers-storage-overlay-d596d7bbeaa943503fca2d905befc2842a81e152dcaf724787313c15891f10d6-merged.mount: Deactivated successfully.
Mar  1 04:57:01 np0005634532 podman[219918]: 2026-03-01 09:57:01.210986676 +0000 UTC m=+0.684148673 container remove 202bbdea48aad579555121094562219be1bb9273e4fcabf3d233f0fcc9e980b9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_banach, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Mar  1 04:57:01 np0005634532 systemd[1]: libpod-conmon-202bbdea48aad579555121094562219be1bb9273e4fcabf3d233f0fcc9e980b9.scope: Deactivated successfully.
Mar  1 04:57:01 np0005634532 python3.9[220096]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:57:01 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[179826]: 01/03/2026 09:57:01 : epoch 69a40cf5 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1da8003dd0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:57:01 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[179826]: 01/03/2026 09:57:01 : epoch 69a40cf5 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1dac003e50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:57:01 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 04:57:01 np0005634532 python3.9[220322]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/ca-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:57:01 np0005634532 podman[220360]: 2026-03-01 09:57:01.766411646 +0000 UTC m=+0.037149182 container create 120c2d137864e4734ffdd32e78d16cc927a3467ccdc4c4ff07a7c397f0515335 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_elion, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Mar  1 04:57:01 np0005634532 systemd[1]: Started libpod-conmon-120c2d137864e4734ffdd32e78d16cc927a3467ccdc4c4ff07a7c397f0515335.scope.
Mar  1 04:57:01 np0005634532 systemd[1]: Started libcrun container.
Mar  1 04:57:01 np0005634532 podman[220360]: 2026-03-01 09:57:01.748637445 +0000 UTC m=+0.019375001 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Mar  1 04:57:01 np0005634532 podman[220360]: 2026-03-01 09:57:01.846062556 +0000 UTC m=+0.116800092 container init 120c2d137864e4734ffdd32e78d16cc927a3467ccdc4c4ff07a7c397f0515335 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_elion, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Mar  1 04:57:01 np0005634532 podman[220360]: 2026-03-01 09:57:01.851745014 +0000 UTC m=+0.122482550 container start 120c2d137864e4734ffdd32e78d16cc927a3467ccdc4c4ff07a7c397f0515335 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_elion, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default)
Mar  1 04:57:01 np0005634532 systemd[1]: libpod-120c2d137864e4734ffdd32e78d16cc927a3467ccdc4c4ff07a7c397f0515335.scope: Deactivated successfully.
Mar  1 04:57:01 np0005634532 podman[220360]: 2026-03-01 09:57:01.8552886 +0000 UTC m=+0.126026146 container attach 120c2d137864e4734ffdd32e78d16cc927a3467ccdc4c4ff07a7c397f0515335 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_elion, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Mar  1 04:57:01 np0005634532 friendly_elion[220385]: 167 167
Mar  1 04:57:01 np0005634532 conmon[220385]: conmon 120c2d137864e4734ffd <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-120c2d137864e4734ffdd32e78d16cc927a3467ccdc4c4ff07a7c397f0515335.scope/container/memory.events
Mar  1 04:57:01 np0005634532 podman[220360]: 2026-03-01 09:57:01.856772516 +0000 UTC m=+0.127510052 container died 120c2d137864e4734ffdd32e78d16cc927a3467ccdc4c4ff07a7c397f0515335 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_elion, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Mar  1 04:57:01 np0005634532 systemd[1]: var-lib-containers-storage-overlay-349bae6293568bc157846b5269ec118ae1bd2acb95a10ee1aaa1c096cf966891-merged.mount: Deactivated successfully.
Mar  1 04:57:01 np0005634532 podman[220360]: 2026-03-01 09:57:01.888717121 +0000 UTC m=+0.159454657 container remove 120c2d137864e4734ffdd32e78d16cc927a3467ccdc4c4ff07a7c397f0515335 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_elion, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Mar  1 04:57:01 np0005634532 systemd[1]: libpod-conmon-120c2d137864e4734ffdd32e78d16cc927a3467ccdc4c4ff07a7c397f0515335.scope: Deactivated successfully.
Mar  1 04:57:02 np0005634532 podman[220424]: 2026-03-01 09:57:02.017042303 +0000 UTC m=+0.041455706 container create 7023329fa1463e846ccf7863d37664b897722108a24db08865507b4a71b776f9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_engelbart, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Mar  1 04:57:02 np0005634532 systemd[1]: Started libpod-conmon-7023329fa1463e846ccf7863d37664b897722108a24db08865507b4a71b776f9.scope.
Mar  1 04:57:02 np0005634532 systemd[1]: Started libcrun container.
Mar  1 04:57:02 np0005634532 podman[220424]: 2026-03-01 09:57:01.999220421 +0000 UTC m=+0.023633854 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Mar  1 04:57:02 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/443661cba1c3523c4ac3bb5209338fa5f535b8b7f57a7c4e816d530fc130693a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Mar  1 04:57:02 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/443661cba1c3523c4ac3bb5209338fa5f535b8b7f57a7c4e816d530fc130693a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Mar  1 04:57:02 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/443661cba1c3523c4ac3bb5209338fa5f535b8b7f57a7c4e816d530fc130693a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Mar  1 04:57:02 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/443661cba1c3523c4ac3bb5209338fa5f535b8b7f57a7c4e816d530fc130693a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Mar  1 04:57:02 np0005634532 podman[220424]: 2026-03-01 09:57:02.111020772 +0000 UTC m=+0.135434205 container init 7023329fa1463e846ccf7863d37664b897722108a24db08865507b4a71b776f9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_engelbart, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.license=GPLv2)
Mar  1 04:57:02 np0005634532 podman[220424]: 2026-03-01 09:57:02.117822987 +0000 UTC m=+0.142236370 container start 7023329fa1463e846ccf7863d37664b897722108a24db08865507b4a71b776f9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_engelbart, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Mar  1 04:57:02 np0005634532 podman[220424]: 2026-03-01 09:57:02.122098421 +0000 UTC m=+0.146511914 container attach 7023329fa1463e846ccf7863d37664b897722108a24db08865507b4a71b776f9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_engelbart, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Mar  1 04:57:02 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Mar  1 04:57:02 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
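Here the mon dispatches an "osd blocklist ls" issued by the mgr. The same query can be made from a shell on any node with admin credentials; a sketch using the ceph CLI's JSON output, where the addr/until field names are an assumption about the returned shape rather than something the log shows.

# Sketch: run the same mon command the mgr dispatched above and parse it.
import json
import subprocess

out = subprocess.run(
    ["ceph", "osd", "blocklist", "ls", "--format", "json"],
    capture_output=True, text=True, check=True,
).stdout
entries = json.loads(out) if out.strip() else []
for e in entries:
    print(e.get("addr"), "until", e.get("until"))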
Mar  1 04:57:02 np0005634532 python3.9[220585]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtlogd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Mar  1 04:57:02 np0005634532 systemd[1]: Reloading.
Mar  1 04:57:02 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v429: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Mar  1 04:57:02 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[179826]: 01/03/2026 09:57:02 : epoch 69a40cf5 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1dd00098c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:57:02 np0005634532 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Mar  1 04:57:02 np0005634532 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Mar  1 04:57:02 np0005634532 naughty_engelbart[220440]: {}
Mar  1 04:57:02 np0005634532 podman[220424]: 2026-03-01 09:57:02.804162562 +0000 UTC m=+0.828575955 container died 7023329fa1463e846ccf7863d37664b897722108a24db08865507b4a71b776f9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_engelbart, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Mar  1 04:57:02 np0005634532 systemd[1]: libpod-7023329fa1463e846ccf7863d37664b897722108a24db08865507b4a71b776f9.scope: Deactivated successfully.
Mar  1 04:57:02 np0005634532 systemd[1]: var-lib-containers-storage-overlay-443661cba1c3523c4ac3bb5209338fa5f535b8b7f57a7c4e816d530fc130693a-merged.mount: Deactivated successfully.
Mar  1 04:57:02 np0005634532 podman[220424]: 2026-03-01 09:57:02.941267407 +0000 UTC m=+0.965680800 container remove 7023329fa1463e846ccf7863d37664b897722108a24db08865507b4a71b776f9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_engelbart, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, ceph=True, org.label-schema.license=GPLv2)
Mar  1 04:57:02 np0005634532 lvm[220705]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Mar  1 04:57:02 np0005634532 lvm[220705]: VG ceph_vg0 finished
Mar  1 04:57:02 np0005634532 systemd[1]: Starting libvirt logging daemon socket...
Mar  1 04:57:02 np0005634532 systemd[1]: libpod-conmon-7023329fa1463e846ccf7863d37664b897722108a24db08865507b4a71b776f9.scope: Deactivated successfully.
Mar  1 04:57:02 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:57:02 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:57:02 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:57:02.948 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:57:02 np0005634532 systemd[1]: Listening on libvirt logging daemon socket.
Mar  1 04:57:02 np0005634532 systemd[1]: Starting libvirt logging daemon admin socket...
Mar  1 04:57:02 np0005634532 systemd[1]: Listening on libvirt logging daemon admin socket.
Mar  1 04:57:02 np0005634532 systemd[1]: Starting libvirt logging daemon...
Mar  1 04:57:02 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Mar  1 04:57:03 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 04:57:03 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Mar  1 04:57:03 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 04:57:03 np0005634532 systemd[1]: Started libvirt logging daemon.
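The virtlogd unit above, like virtnodedevd, virtproxyd, virtqemud, and virtsecretd below, follows the systemd socket-activation pattern: the .socket units reach "Listening" before the daemon itself starts, so clients can connect while the service is still coming up. A minimal sketch of the receiving side of that protocol, assuming the script is launched by a .socket unit; per the sd_listen_fds convention, activated sockets start at file descriptor 3 and LISTEN_PID/LISTEN_FDS are set in the environment.

# Sketch: accept a systemd-activated stream socket (the protocol used
# by units like virtlogd.socket). Activated FDs begin at 3.
import os
import socket

SD_LISTEN_FDS_START = 3

def activated_sockets():
    if os.environ.get("LISTEN_PID") != str(os.getpid()):
        return []                       # not socket-activated
    n = int(os.environ.get("LISTEN_FDS", "0"))
    return [socket.socket(fileno=SD_LISTEN_FDS_START + i) for i in range(n)]

for sock in activated_sockets():
    conn, peer = sock.accept()
    conn.sendall(b"hello from an activated service\n")
    conn.close()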
Mar  1 04:57:03 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:57:03 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:57:03 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:57:03.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:57:03 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[179826]: 01/03/2026 09:57:03 : epoch 69a40cf5 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1dc4003f60 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:57:03 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[179826]: 01/03/2026 09:57:03 : epoch 69a40cf5 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1da8003dd0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:57:03 np0005634532 python3.9[220887]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtnodedevd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Mar  1 04:57:03 np0005634532 systemd[1]: Reloading.
Mar  1 04:57:04 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 04:57:04 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 04:57:04 np0005634532 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Mar  1 04:57:04 np0005634532 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Mar  1 04:57:04 np0005634532 systemd[1]: Starting SETroubleshoot daemon for processing new SELinux denial logs...
Mar  1 04:57:04 np0005634532 systemd[1]: Starting libvirt nodedev daemon socket...
Mar  1 04:57:04 np0005634532 systemd[1]: Listening on libvirt nodedev daemon socket.
Mar  1 04:57:04 np0005634532 systemd[1]: Starting libvirt nodedev daemon admin socket...
Mar  1 04:57:04 np0005634532 systemd[1]: Starting libvirt nodedev daemon read-only socket...
Mar  1 04:57:04 np0005634532 systemd[1]: Listening on libvirt nodedev daemon admin socket.
Mar  1 04:57:04 np0005634532 systemd[1]: Listening on libvirt nodedev daemon read-only socket.
Mar  1 04:57:04 np0005634532 systemd[1]: Starting libvirt nodedev daemon...
Mar  1 04:57:04 np0005634532 systemd[1]: Started libvirt nodedev daemon.
Mar  1 04:57:04 np0005634532 systemd[1]: Started SETroubleshoot daemon for processing new SELinux denial logs.
Mar  1 04:57:04 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v430: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Mar  1 04:57:04 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[179826]: 01/03/2026 09:57:04 : epoch 69a40cf5 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1dac003e50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:57:04 np0005634532 systemd[1]: Created slice Slice /system/dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged.
Mar  1 04:57:04 np0005634532 systemd[1]: Started dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service.
Mar  1 04:57:04 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:57:04 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:57:04 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:57:04.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:57:05 np0005634532 python3.9[221120]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtproxyd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Mar  1 04:57:05 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:57:05 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000024s ======
Mar  1 04:57:05 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:57:05.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Mar  1 04:57:05 np0005634532 systemd[1]: Reloading.
Mar  1 04:57:05 np0005634532 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Mar  1 04:57:05 np0005634532 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Mar  1 04:57:05 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[179826]: 01/03/2026 09:57:05 : epoch 69a40cf5 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1dd00098c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:57:05 np0005634532 systemd[1]: Starting libvirt proxy daemon admin socket...
Mar  1 04:57:05 np0005634532 systemd[1]: Starting libvirt proxy daemon read-only socket...
Mar  1 04:57:05 np0005634532 systemd[1]: Listening on libvirt proxy daemon admin socket.
Mar  1 04:57:05 np0005634532 systemd[1]: Listening on libvirt proxy daemon read-only socket.
Mar  1 04:57:05 np0005634532 systemd[1]: Starting libvirt proxy daemon...
Mar  1 04:57:05 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[179826]: 01/03/2026 09:57:05 : epoch 69a40cf5 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1dc4003f60 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:57:05 np0005634532 systemd[1]: Started libvirt proxy daemon.
Mar  1 04:57:05 np0005634532 setroubleshoot[220932]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l bfe15a56-0211-4df6-81cc-ac7ce75591b4
Mar  1 04:57:05 np0005634532 setroubleshoot[220932]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.

*****  Plugin dac_override (91.4 confidence) suggests   **********************

If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system
Then turn on full auditing to get path information about the offending file and generate the error again.
Do

Turn on full auditing
# auditctl -w /etc/shadow -p w
Try to recreate AVC. Then execute
# ausearch -m avc -ts recent
If you see PATH record check ownership/permissions on file, and fix it,
otherwise report as a bugzilla.

*****  Plugin catchall (9.59 confidence) suggests   **************************

If you believe that virtlogd should have the dac_read_search capability by default.
Then you should report this as a bug.
You can generate a local policy module to allow this access.
Do
allow this access for now by executing:
# ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd
# semodule -X 300 -i my-virtlogd.pp
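The catchall suggestion above quotes the standard local-policy workflow: extract virtlogd's AVC records and feed them to audit2allow. A sketch that wraps exactly the commands quoted in the message (run as root); whether the resulting module should actually be loaded is a policy decision the log does not settle.

# Sketch: the audit2allow workflow quoted in the setroubleshoot
# message. Builds my-virtlogd.pp from virtlogd's AVC records.
# Note: ausearch exits non-zero when there are no matching records,
# which check=True will surface as an exception.
import subprocess

avc = subprocess.run(
    ["ausearch", "-c", "virtlogd", "--raw"],
    capture_output=True, check=True,
).stdout
subprocess.run(
    ["audit2allow", "-M", "my-virtlogd"],
    input=avc, check=True,
)
# Review my-virtlogd.te before loading, then:
#   semodule -X 300 -i my-virtlogd.pp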
Mar  1 04:57:06 np0005634532 python3.9[221342]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtqemud.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Mar  1 04:57:06 np0005634532 systemd[1]: Reloading.
Mar  1 04:57:06 np0005634532 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Mar  1 04:57:06 np0005634532 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Mar  1 04:57:06 np0005634532 systemd[1]: Listening on libvirt locking daemon socket.
Mar  1 04:57:06 np0005634532 systemd[1]: Starting libvirt QEMU daemon socket...
Mar  1 04:57:06 np0005634532 systemd[1]: Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Mar  1 04:57:06 np0005634532 systemd[1]: Starting Virtual Machine and Container Registration Service...
Mar  1 04:57:06 np0005634532 systemd[1]: Listening on libvirt QEMU daemon socket.
Mar  1 04:57:06 np0005634532 systemd[1]: Starting libvirt QEMU daemon admin socket...
Mar  1 04:57:06 np0005634532 systemd[1]: Starting libvirt QEMU daemon read-only socket...
Mar  1 04:57:06 np0005634532 systemd[1]: Listening on libvirt QEMU daemon read-only socket.
Mar  1 04:57:06 np0005634532 systemd[1]: Listening on libvirt QEMU daemon admin socket.
Mar  1 04:57:06 np0005634532 systemd[1]: Started Virtual Machine and Container Registration Service.
Mar  1 04:57:06 np0005634532 systemd[1]: Starting libvirt QEMU daemon...
Mar  1 04:57:06 np0005634532 systemd[1]: Started libvirt QEMU daemon.
Mar  1 04:57:06 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 04:57:06 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v431: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Mar  1 04:57:06 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[179826]: 01/03/2026 09:57:06 : epoch 69a40cf5 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1da8003dd0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:57:06 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:57:06 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000024s ======
Mar  1 04:57:06 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:57:06.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Mar  1 04:57:07 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T09:57:07.048Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
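Alertmanager's dispatcher above fails to POST to the Ceph dashboard webhook receivers on compute-1 and compute-2 within its deadline (no such error is logged for compute-0). A quick reachability probe for one receiver, assuming plain HTTP exactly as in the logged URL; the empty alerts payload and the five-second timeout are illustrative, not what alertmanager itself sends.

# Sketch: probe a dashboard webhook receiver the way alertmanager
# does, with a short timeout to surface hangs like the
# "context deadline exceeded" above.
import json
import urllib.request

url = "http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver"
payload = json.dumps({"alerts": []}).encode()
req = urllib.request.Request(
    url, data=payload, headers={"Content-Type": "application/json"})
try:
    with urllib.request.urlopen(req, timeout=5) as resp:
        print(resp.status)
except OSError as exc:
    print("receiver unreachable:", exc)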
Mar  1 04:57:07 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: ::ffff:192.168.122.100 - - [01/Mar/2026:09:57:07] "GET /metrics HTTP/1.1" 200 48343 "" "Prometheus/2.51.0"
Mar  1 04:57:07 np0005634532 ceph-mgr[76134]: [prometheus INFO cherrypy.access.140604152982112] ::ffff:192.168.122.100 - - [01/Mar/2026:09:57:07] "GET /metrics HTTP/1.1" 200 48343 "" "Prometheus/2.51.0"
Mar  1 04:57:07 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:57:07 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 04:57:07 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:57:07.065 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 04:57:07 np0005634532 python3.9[221566]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtsecretd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Mar  1 04:57:07 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[179826]: 01/03/2026 09:57:07 : epoch 69a40cf5 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1dac003e50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:57:07 np0005634532 systemd[1]: Reloading.
Mar  1 04:57:07 np0005634532 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Mar  1 04:57:07 np0005634532 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Mar  1 04:57:07 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[179826]: 01/03/2026 09:57:07 : epoch 69a40cf5 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1dd00098c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:57:07 np0005634532 systemd[1]: Starting libvirt secret daemon socket...
Mar  1 04:57:07 np0005634532 systemd[1]: Listening on libvirt secret daemon socket.
Mar  1 04:57:07 np0005634532 systemd[1]: Starting libvirt secret daemon admin socket...
Mar  1 04:57:07 np0005634532 systemd[1]: Starting libvirt secret daemon read-only socket...
Mar  1 04:57:07 np0005634532 systemd[1]: Listening on libvirt secret daemon admin socket.
Mar  1 04:57:07 np0005634532 systemd[1]: Listening on libvirt secret daemon read-only socket.
Mar  1 04:57:07 np0005634532 systemd[1]: Starting libvirt secret daemon...
Mar  1 04:57:07 np0005634532 systemd[1]: Started libvirt secret daemon.
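
Annotation: the ansible systemd call at 04:57:07 (daemon_reload=True, state=restarted) produces everything from "Reloading." through the socket-activated virtsecretd startup above. It amounts to two systemctl invocations; a minimal sketch, assuming root on the node:

    import subprocess

    def restart_with_reload(unit: str) -> None:
        # daemon_reload=True: re-read unit files first (the "Reloading." line)
        subprocess.run(["systemctl", "daemon-reload"], check=True)
        # state=restarted: the restart that pulls in the daemon's socket units
        subprocess.run(["systemctl", "restart", unit], check=True)

    restart_with_reload("virtsecretd.service")
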
Mar  1 04:57:08 np0005634532 python3.9[221792]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:57:08 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v432: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Mar  1 04:57:08 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[179826]: 01/03/2026 09:57:08 : epoch 69a40cf5 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1dc4003f60 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:57:08 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:57:08 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 04:57:08 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:57:08.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 04:57:09 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:57:09 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:57:09 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:57:09.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:57:09 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[179826]: 01/03/2026 09:57:09 : epoch 69a40cf5 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1da8003dd0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:57:09 np0005634532 python3.9[221945]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.conf'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Mar  1 04:57:09 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[179826]: 01/03/2026 09:57:09 : epoch 69a40cf5 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1dac003e50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:57:10 np0005634532 python3.9[222099]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail;#012echo ceph#012awk -F '=' '/fsid/ {print $2}' /var/lib/openstack/config/ceph/ceph.conf | xargs#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
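
Annotation: the shell pipeline above extracts the cluster fsid from ceph.conf (awk -F '=' '/fsid/ {print $2}' piped through xargs to trim whitespace). A Python equivalent, assuming the fsid sits in the [global] section as it normally does:

    import configparser

    cp = configparser.ConfigParser()
    cp.read("/var/lib/openstack/config/ceph/ceph.conf")
    # xargs in the original only strips surrounding whitespace
    fsid = cp["global"]["fsid"].strip()
    print(fsid)  # on this node: 437b1e74-f995-5d64-af1d-257ce01d77ab, per the unit names below
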
Mar  1 04:57:10 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v433: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Mar  1 04:57:10 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[179826]: 01/03/2026 09:57:10 : epoch 69a40cf5 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1dd00098c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:57:10 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:57:10 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000024s ======
Mar  1 04:57:10 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:57:10.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Mar  1 04:57:10 np0005634532 python3.9[222256]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.keyring'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Mar  1 04:57:11 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:57:11 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:57:11 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:57:11.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:57:11 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[179826]: 01/03/2026 09:57:11 : epoch 69a40cf5 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1dc4003f60 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:57:11 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[179826]: 01/03/2026 09:57:11 : epoch 69a40cf5 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1da8003dd0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:57:11 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 04:57:11 np0005634532 python3.9[222406]: ansible-ansible.legacy.stat Invoked with path=/tmp/secret.xml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Mar  1 04:57:12 np0005634532 python3.9[222528]: ansible-ansible.legacy.copy Invoked with dest=/tmp/secret.xml mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1772359031.421695-3378-181272407184965/.source.xml follow=False _original_basename=secret.xml.j2 checksum=55fa72df6895964d4e65e48842d0879e7c05aa7e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:57:12 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v434: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Mar  1 04:57:12 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[179826]: 01/03/2026 09:57:12 : epoch 69a40cf5 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1dac003e50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:57:12 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:57:12 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 04:57:12 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:57:12.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 04:57:13 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:57:13 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:57:13 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:57:13.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:57:13 np0005634532 python3.9[222707]: ansible-ansible.legacy.command Invoked with _raw_params=virsh secret-undefine 437b1e74-f995-5d64-af1d-257ce01d77ab#012virsh secret-define --file /tmp/secret.xml#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Mar  1 04:57:13 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[179826]: 01/03/2026 09:57:13 : epoch 69a40cf5 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1dd00098c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:57:13 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[179826]: 01/03/2026 09:57:13 : epoch 69a40cf5 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1dc4003f60 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:57:13 np0005634532 python3.9[222869]: ansible-ansible.builtin.file Invoked with path=/tmp/secret.xml state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
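
Annotation: lines 04:57:11 through 04:57:13 stage a libvirt secret: copy a rendered secret.xml to /tmp, undefine any stale secret with the same UUID, define the new one, then delete the temp file. The XML itself is not logged (content=NOT_LOGGING_PARAMETER); what follows is a hypothetical reconstruction of the usual Ceph-client secret, defined through libvirt-python rather than virsh. Only the UUID is taken from the log (it is the one passed to virsh secret-undefine); the usage name and the ephemeral/private flags are assumptions:

    import libvirt  # python3-libvirt

    SECRET_XML = """\
    <secret ephemeral='no' private='no'>
      <uuid>437b1e74-f995-5d64-af1d-257ce01d77ab</uuid>
      <usage type='ceph'>
        <name>client.openstack secret</name>  <!-- assumed usage name -->
      </usage>
    </secret>
    """

    conn = libvirt.open("qemu:///system")
    try:
        # Same effect as: virsh secret-define --file /tmp/secret.xml
        conn.secretDefineXML(SECRET_XML)
    finally:
        conn.close()
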
Mar  1 04:57:14 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v435: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Mar  1 04:57:14 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[179826]: 01/03/2026 09:57:14 : epoch 69a40cf5 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1da8003dd0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:57:14 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:57:14 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 04:57:14 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:57:14.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 04:57:15 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:57:15 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:57:15 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:57:15.077 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:57:15 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[179826]: 01/03/2026 09:57:15 : epoch 69a40cf5 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1dac003ff0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:57:15 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[179826]: 01/03/2026 09:57:15 : epoch 69a40cf5 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1dd00098c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:57:15 np0005634532 systemd[1]: dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service: Deactivated successfully.
Mar  1 04:57:15 np0005634532 systemd[1]: setroubleshootd.service: Deactivated successfully.
Mar  1 04:57:16 np0005634532 python3.9[223338]: ansible-ansible.legacy.copy Invoked with dest=/etc/ceph/ceph.conf group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/config/ceph/ceph.conf backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:57:16 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v436: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Mar  1 04:57:16 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 04:57:16 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[179826]: 01/03/2026 09:57:16 : epoch 69a40cf5 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1dc4003f60 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:57:16 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:57:16 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:57:16 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:57:16.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:57:16 np0005634532 python3.9[223492]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/libvirt.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Mar  1 04:57:17 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T09:57:17.050Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Mar  1 04:57:17 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: ::ffff:192.168.122.100 - - [01/Mar/2026:09:57:17] "GET /metrics HTTP/1.1" 200 48343 "" "Prometheus/2.51.0"
Mar  1 04:57:17 np0005634532 ceph-mgr[76134]: [prometheus INFO cherrypy.access.140604152982112] ::ffff:192.168.122.100 - - [01/Mar/2026:09:57:17] "GET /metrics HTTP/1.1" 200 48343 "" "Prometheus/2.51.0"
Mar  1 04:57:17 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:57:17 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:57:17 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:57:17.081 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:57:17 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[179826]: 01/03/2026 09:57:17 : epoch 69a40cf5 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1da8003dd0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:57:17 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[179826]: 01/03/2026 09:57:17 : epoch 69a40cf5 : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1dac004010 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:57:17 np0005634532 python3.9[223616]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/libvirt.yaml mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1772359036.529034-3543-101411293673820/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=5ca83b1310a74c5e48c4c3d4640e1cb8fdac1061 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:57:17 np0005634532 ceph-mgr[76134]: [balancer INFO root] Optimize plan auto_2026-03-01_09:57:17
Mar  1 04:57:17 np0005634532 ceph-mgr[76134]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Mar  1 04:57:17 np0005634532 ceph-mgr[76134]: [balancer INFO root] do_upmap
Mar  1 04:57:17 np0005634532 ceph-mgr[76134]: [balancer INFO root] pools ['.mgr', '.rgw.root', 'cephfs.cephfs.data', 'default.rgw.control', 'cephfs.cephfs.meta', '.nfs', 'default.rgw.meta', 'images', 'default.rgw.log', 'vms', 'volumes', 'backups']
Mar  1 04:57:17 np0005634532 ceph-mgr[76134]: [balancer INFO root] prepared 0/10 upmap changes
Mar  1 04:57:17 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Mar  1 04:57:17 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Mar  1 04:57:17 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Mar  1 04:57:17 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Mar  1 04:57:17 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Mar  1 04:57:17 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Mar  1 04:57:17 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Mar  1 04:57:17 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Mar  1 04:57:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] _maybe_adjust
Mar  1 04:57:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 04:57:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Mar  1 04:57:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 04:57:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Mar  1 04:57:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 04:57:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Mar  1 04:57:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 04:57:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Mar  1 04:57:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 04:57:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Mar  1 04:57:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 04:57:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Mar  1 04:57:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 04:57:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Mar  1 04:57:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 04:57:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Mar  1 04:57:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 04:57:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Mar  1 04:57:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 04:57:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Mar  1 04:57:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 04:57:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Mar  1 04:57:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 04:57:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Mar  1 04:57:17 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Mar  1 04:57:17 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: vms, start_after=
Mar  1 04:57:17 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Mar  1 04:57:17 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: vms, start_after=
Mar  1 04:57:17 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: volumes, start_after=
Mar  1 04:57:17 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: volumes, start_after=
Mar  1 04:57:17 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: backups, start_after=
Mar  1 04:57:17 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: backups, start_after=
Mar  1 04:57:17 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: images, start_after=
Mar  1 04:57:17 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: images, start_after=
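
Annotation: the pg_autoscaler targets above are consistent with pg_target = usage_ratio x bias x K with K = 300 (plausibly mon_target_pg_per_osd=100 across 3 OSDs backing the 60 GiB cluster; the OSD count is an inference, not logged here). Quantization then snaps the target to a power of two, subject to per-pool minimums, which is why cephfs.cephfs.meta lands on 16 rather than 1. A quick check against three of the logged lines:

    # Values copied verbatim from the pg_autoscaler lines above.
    for pool, ratio, bias, target in [
        (".mgr",               7.185749983720779e-06,  1.0, 0.0021557249951162337),
        ("cephfs.cephfs.meta", 5.087256625643029e-07,  4.0, 0.0006104707950771635),
        (".rgw.root",          3.8154424692322717e-07, 1.0, 0.00011446327407696816),
    ]:
        print(pool, target / (ratio * bias))  # each prints ~300.0
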
Mar  1 04:57:18 np0005634532 python3.9[223773]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:57:18 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v437: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Mar  1 04:57:18 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[179826]: 01/03/2026 09:57:18 : epoch 69a40cf5 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1dd00098c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:57:18 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:57:18 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:57:18 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:57:18.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:57:19 np0005634532 python3.9[223927]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Mar  1 04:57:19 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:57:19 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:57:19 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:57:19.083 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:57:19 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[179826]: 01/03/2026 09:57:19 : epoch 69a40cf5 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1da8003dd0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:57:19 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[179826]: 01/03/2026 09:57:19 : epoch 69a40cf5 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1dc8002130 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:57:19 np0005634532 python3.9[224006]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:57:20 np0005634532 podman[224131]: 2026-03-01 09:57:20.10123597 +0000 UTC m=+0.070883640 container health_status eba5e7c5bf1cb0694498c3b5d0f9b901dec36f2482ec40aef98fed1e15d9f18d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:bf1f44b6bfa655654f556ae3adca2582892e72550d916a5b0a8dbacb0e210bbe, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '08b9467e1b7e95537191fb2fa6825d176e65d7d6be048e9a0925db064969bbde-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:bf1f44b6bfa655654f556ae3adca2582892e72550d916a5b0a8dbacb0e210bbe', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260223, io.buildah.version=1.43.0, tcib_managed=true)
Mar  1 04:57:20 np0005634532 python3.9[224175]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Mar  1 04:57:20 np0005634532 python3.9[224266]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.wm0po98z recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:57:20 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v438: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Mar  1 04:57:20 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[179826]: 01/03/2026 09:57:20 : epoch 69a40cf5 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1dac0040c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:57:20 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:57:20 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:57:20 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:57:20.966 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:57:21 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:57:21 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:57:21 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:57:21.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:57:21 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[179826]: 01/03/2026 09:57:21 : epoch 69a40cf5 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1dd00098c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:57:21 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 04:57:22 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v439: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 254 B/s rd, 0 op/s
Mar  1 04:57:22 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:57:22 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000024s ======
Mar  1 04:57:22 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:57:22.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Mar  1 04:57:23 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:57:23 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:57:23 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:57:23.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:57:23 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[179826]: 01/03/2026 09:57:23 : epoch 69a40cf5 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1db4001090 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:57:23 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[179826]: 01/03/2026 09:57:23 : epoch 69a40cf5 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1dc8002130 fd 49 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:57:23 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[179826]: 01/03/2026 09:57:23 : epoch 69a40cf5 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1db4001090 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:57:23 np0005634532 python3.9[224423]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Mar  1 04:57:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:57:23.870 167541 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Mar  1 04:57:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:57:23.871 167541 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Mar  1 04:57:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:57:23.871 167541 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Mar  1 04:57:24 np0005634532 python3.9[224503]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:57:24 np0005634532 python3.9[224657]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
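
Annotation: nft -j list ruleset emits the current ruleset as libnftables JSON, which is what the edpm firewall role inspects before rewriting its files. A minimal reader for that output; the top-level "nftables" array is part of nft's JSON schema, the rest is a sketch:

    import json
    import subprocess

    out = subprocess.run(["nft", "-j", "list", "ruleset"],
                         check=True, capture_output=True, text=True).stdout
    objects = json.loads(out)["nftables"]          # top-level array in nft's JSON output
    tables = [o["table"]["name"] for o in objects if "table" in o]
    print(tables)
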
Mar  1 04:57:24 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v440: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 509 B/s rd, 0 op/s
Mar  1 04:57:24 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:57:24 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:57:24 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:57:24.970 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:57:25 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[179826]: 01/03/2026 09:57:25 : epoch 69a40cf5 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1dd40013a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:57:25 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[179826]: 01/03/2026 09:57:25 : epoch 69a40cf5 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1dd00098c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:57:25 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:57:25 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000024s ======
Mar  1 04:57:25 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:57:25.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Mar  1 04:57:25 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[179826]: 01/03/2026 09:57:25 : epoch 69a40cf5 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1dac0040c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:57:25 np0005634532 python3[224811]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Mar  1 04:57:26 np0005634532 podman[224936]: 2026-03-01 09:57:26.085788874 +0000 UTC m=+0.045181947 container health_status 1df4dec302d8b40bce93714986cfc892519e8a31436399020d0034ae17ac64d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a5baae9e21dffd42c66f546b0b62f95c6af9f409d05e01e7483de42d5dd2a382, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '08b9467e1b7e95537191fb2fa6825d176e65d7d6be048e9a0925db064969bbde-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a5baae9e21dffd42c66f546b0b62f95c6af9f409d05e01e7483de42d5dd2a382', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
Mar  1 04:57:26 np0005634532 python3.9[224984]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Mar  1 04:57:26 np0005634532 python3.9[225065]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:57:26 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 04:57:26 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v441: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 254 B/s rd, 0 op/s
Mar  1 04:57:26 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:57:26 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:57:26 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:57:26.972 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:57:27 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T09:57:27.051Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Mar  1 04:57:27 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: ::ffff:192.168.122.100 - - [01/Mar/2026:09:57:27] "GET /metrics HTTP/1.1" 200 48342 "" "Prometheus/2.51.0"
Mar  1 04:57:27 np0005634532 ceph-mgr[76134]: [prometheus INFO cherrypy.access.140604152982112] ::ffff:192.168.122.100 - - [01/Mar/2026:09:57:27] "GET /metrics HTTP/1.1" 200 48342 "" "Prometheus/2.51.0"
Mar  1 04:57:27 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[179826]: 01/03/2026 09:57:27 : epoch 69a40cf5 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1db4001090 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:57:27 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[179826]: 01/03/2026 09:57:27 : epoch 69a40cf5 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1dd4001eb0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:57:27 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:57:27 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 04:57:27 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:57:27.123 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 04:57:27 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[179826]: 01/03/2026 09:57:27 : epoch 69a40cf5 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1dd00098c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:57:27 np0005634532 python3.9[225218]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Mar  1 04:57:28 np0005634532 python3.9[225345]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1772359046.9971921-3810-42849134277466/.source.nft follow=False _original_basename=jump-chain.j2 checksum=3ce353c89bce3b135a0ed688d4e338b2efb15185 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:57:28 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v442: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 254 B/s rd, 0 op/s
Mar  1 04:57:28 np0005634532 python3.9[225499]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Mar  1 04:57:28 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:57:28 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000024s ======
Mar  1 04:57:28 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:57:28.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Mar  1 04:57:29 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[179826]: 01/03/2026 09:57:29 : epoch 69a40cf5 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1dac0040c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:57:29 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[179826]: 01/03/2026 09:57:29 : epoch 69a40cf5 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1dac0040c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:57:29 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:57:29 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:57:29 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:57:29.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:57:29 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[179826]: 01/03/2026 09:57:29 : epoch 69a40cf5 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1dac0040c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:57:29 np0005634532 python3.9[225578]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:57:30 np0005634532 python3.9[225732]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Mar  1 04:57:30 np0005634532 python3.9[225812]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:57:30 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v443: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 254 B/s rd, 0 op/s
Mar  1 04:57:30 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:57:30 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000024s ======
Mar  1 04:57:30 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:57:30.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Mar  1 04:57:31 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-haproxy-nfs-cephfs-compute-0-wdbjdw[99156]: [WARNING] 059/095731 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Mar  1 04:57:31 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[179826]: 01/03/2026 09:57:31 : epoch 69a40cf5 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1dac0040c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:57:31 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[179826]: 01/03/2026 09:57:31 : epoch 69a40cf5 : compute-0 : ganesha.nfsd-2[svc_17] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1dac0040c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:57:31 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:57:31 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:57:31 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:57:31.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:57:31 np0005634532 python3.9[225965]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Mar  1 04:57:31 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[179826]: 01/03/2026 09:57:31 : epoch 69a40cf5 : compute-0 : ganesha.nfsd-2[svc_20] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f1dac0040c0 fd 48 proxy ignored for local
Mar  1 04:57:31 np0005634532 kernel: ganesha.nfsd[224291]: segfault at 50 ip 00007f1e5a74c32e sp 00007f1e067fb210 error 4 in libntirpc.so.5.8[7f1e5a731000+2c000] likely on CPU 2 (core 0, socket 2)
Mar  1 04:57:31 np0005634532 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Mar  1 04:57:31 np0005634532 systemd[1]: Started Process Core Dump (PID 225974/UID 0).
Mar  1 04:57:31 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 04:57:31 np0005634532 python3.9[226093]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1772359050.7109997-3927-71741416187782/.source.nft follow=False _original_basename=ruleset.j2 checksum=ac3ce8ce2d33fa5fe0a79b0c811c97734ce43fa5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:57:32 np0005634532 systemd-coredump[225990]: Process 179857 (ganesha.nfsd) of user 0 dumped core.
    Stack trace of thread 63:
    #0  0x00007f1e5a74c32e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
    ELF object binary architecture: AMD x86-64
Mar  1 04:57:32 np0005634532 systemd[1]: systemd-coredump@7-225974-0.service: Deactivated successfully.
Mar  1 04:57:32 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Mar  1 04:57:32 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Mar  1 04:57:32 np0005634532 python3.9[226250]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:57:32 np0005634532 podman[226255]: 2026-03-01 09:57:32.554752875 +0000 UTC m=+0.030949652 container died c0100ab5e15258ec9a278d8139cc929e814efbf0e504cb472430246eb14d8f5d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Mar  1 04:57:32 np0005634532 systemd[1]: var-lib-containers-storage-overlay-78707e483ddb6dfd19f44f06dbb158ca080746a935f33bcdb868c16b2fcbcdcc-merged.mount: Deactivated successfully.
Mar  1 04:57:32 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v444: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 254 B/s rd, 0 op/s
Mar  1 04:57:32 np0005634532 podman[226255]: 2026-03-01 09:57:32.793548376 +0000 UTC m=+0.269745163 container remove c0100ab5e15258ec9a278d8139cc929e814efbf0e504cb472430246eb14d8f5d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Mar  1 04:57:32 np0005634532 systemd[1]: ceph-437b1e74-f995-5d64-af1d-257ce01d77ab@nfs.cephfs.2.0.compute-0.ljexyw.service: Main process exited, code=exited, status=139/n/a
Mar  1 04:57:32 np0005634532 systemd[1]: ceph-437b1e74-f995-5d64-af1d-257ce01d77ab@nfs.cephfs.2.0.compute-0.ljexyw.service: Failed with result 'exit-code'.
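
Annotation: this is the one hard failure in the section. ganesha.nfsd (the nfs.cephfs.2.0 container, PID 179857) had been logging svc_vc_recv proxy-header errors on fd 48 throughout; haproxy had already marked an NFS backend DOWN at 09:57:31, the kernel then recorded a segfault inside libntirpc.so.5.8, systemd-coredump captured the core, and the service exited status=139, i.e. 128+SIGSEGV. While the journal and coredump storage are still present on the node, the dump can be examined with coredumpctl; a sketch, assuming default systemd-coredump storage:

    import subprocess

    # Summary and stack for the dumped PID from the log lines above.
    subprocess.run(["coredumpctl", "info", "179857"], check=True)
    # Or, with matching debuginfo installed, drop into gdb:
    # subprocess.run(["coredumpctl", "debug", "179857"], check=True)
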
Mar  1 04:57:32 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:57:32 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000024s ======
Mar  1 04:57:32 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:57:32.977 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Mar  1 04:57:33 np0005634532 python3.9[226475]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Mar  1 04:57:33 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:57:33 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:57:33 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:57:33.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:57:33 np0005634532 python3.9[226631]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:57:34 np0005634532 python3.9[226786]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Mar  1 04:57:34 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v445: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Mar  1 04:57:34 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:57:34 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:57:34 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:57:34.979 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:57:35 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:57:35 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:57:35 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:57:35.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:57:35 np0005634532 python3.9[226940]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Mar  1 04:57:36 np0005634532 python3.9[227096]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Mar  1 04:57:36 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 04:57:36 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v446: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Mar  1 04:57:36 np0005634532 python3.9[227253]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
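Taken together, the three tasks at 04:57:35-36 are a change-marker pattern: stat the .changed file a previous render step may have left, apply flushes, rules and jump updates only if it exists, then delete it so an unchanged rerun skips the reload. As one script:

    # Idempotent rule reload keyed on a render-time marker file.
    marker=/etc/nftables/edpm-rules.nft.changed
    if [ -f "$marker" ]; then
        cat /etc/nftables/edpm-flushes.nft \
            /etc/nftables/edpm-rules.nft \
            /etc/nftables/edpm-update-jumps.nft | nft -f -
        rm -f "$marker"
    fi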
Mar  1 04:57:36 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:57:36 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 04:57:36 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:57:36.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 04:57:37 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T09:57:37.051Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
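Alertmanager's ceph-dashboard receiver keeps failing to POST to the dashboard's prometheus_receiver endpoint on compute-1 and compute-2. Replaying the failing call separates a routing problem from a dead service; the 2-second cap mirrors the dispatcher's context deadline:

    # Replay the webhook POST that alertmanager cannot deliver.
    curl -m 2 -s -o /dev/null -w '%{http_code}\n' \
        -X POST -d '{}' \
        http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver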
Mar  1 04:57:37 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: ::ffff:192.168.122.100 - - [01/Mar/2026:09:57:37] "GET /metrics HTTP/1.1" 200 48338 "" "Prometheus/2.51.0"
Mar  1 04:57:37 np0005634532 ceph-mgr[76134]: [prometheus INFO cherrypy.access.140604152982112] ::ffff:192.168.122.100 - - [01/Mar/2026:09:57:37] "GET /metrics HTTP/1.1" 200 48338 "" "Prometheus/2.51.0"
Mar  1 04:57:37 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:57:37 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000024s ======
Mar  1 04:57:37 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:57:37.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Mar  1 04:57:37 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-haproxy-nfs-cephfs-compute-0-wdbjdw[99156]: [WARNING] 059/095737 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
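haproxy drops nfs.cephfs.2 from the backend on a refused TCP connect, which matches the ganesha unit that just segfaulted; the server returns at 04:57:59 once the restarted daemon listens again. Runtime server state can be read over the admin socket; the socket path below is an assumption, not taken from this log:

    # Inspect live backend state (socket path is an assumption; see the
    # "stats socket" line of the generated haproxy.cfg for the real one).
    echo "show servers state backend" | \
        socat stdio UNIX-CONNECT:/var/lib/haproxy/haproxy.sock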
Mar  1 04:57:37 np0005634532 python3.9[227406]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Mar  1 04:57:38 np0005634532 python3.9[227531]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1772359057.176522-4143-66043082460903/.source.target follow=False _original_basename=edpm_libvirt.target checksum=13035a1aa0f414c677b14be9a5a363b6623d393c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:57:38 np0005634532 python3.9[227685]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt_guests.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Mar  1 04:57:38 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v447: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Mar  1 04:57:38 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:57:38 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:57:38 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:57:38.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:57:39 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:57:39 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:57:39 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:57:39.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:57:39 np0005634532 python3.9[227809]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt_guests.service mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1772359058.35851-4188-230549753868935/.source.service follow=False _original_basename=edpm_libvirt_guests.service checksum=db83430a42fc2ccfd6ed8b56ebf04f3dff9cd0cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:57:39 np0005634532 python3.9[227962]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virt-guest-shutdown.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Mar  1 04:57:40 np0005634532 python3.9[228088]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virt-guest-shutdown.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1772359059.5482068-4233-119228917449146/.source.target follow=False _original_basename=virt-guest-shutdown.target checksum=49ca149619c596cbba877418629d2cf8f7b0f5cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:57:40 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v448: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Mar  1 04:57:40 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:57:40 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 04:57:40 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:57:40.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 04:57:41 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:57:41 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:57:41 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:57:41.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:57:41 np0005634532 python3.9[228241]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt.target state=restarted daemon_reexec=False scope=system no_block=False force=None masked=None
Mar  1 04:57:41 np0005634532 systemd[1]: Reloading.
Mar  1 04:57:41 np0005634532 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Mar  1 04:57:41 np0005634532 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Mar  1 04:57:41 np0005634532 systemd[1]: Reached target edpm_libvirt.target.
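The ansible systemd module call above bundles a daemon-reload, an enable, and a restart of edpm_libvirt.target; the "Reloading." lines and the generator chatter are the daemon-reload. By hand the sequence is:

    # Equivalent of the ansible systemd task: reload units, enable, restart.
    systemctl daemon-reload
    systemctl enable edpm_libvirt.target
    systemctl restart edpm_libvirt.target
    systemctl is-active edpm_libvirt.target   # "active" once the target is reached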
Mar  1 04:57:41 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 04:57:42 np0005634532 python3.9[228442]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt_guests daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Mar  1 04:57:42 np0005634532 systemd[1]: Reloading.
Mar  1 04:57:42 np0005634532 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Mar  1 04:57:42 np0005634532 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Mar  1 04:57:42 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v449: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 85 B/s wr, 0 op/s
Mar  1 04:57:42 np0005634532 systemd[1]: Reloading.
Mar  1 04:57:42 np0005634532 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Mar  1 04:57:42 np0005634532 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Mar  1 04:57:42 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:57:42 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:57:42 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:57:42.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:57:43 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:57:43 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:57:43 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:57:43.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:57:43 np0005634532 systemd[1]: ceph-437b1e74-f995-5d64-af1d-257ce01d77ab@nfs.cephfs.2.0.compute-0.ljexyw.service: Scheduled restart job, restart counter is at 8.
Mar  1 04:57:43 np0005634532 systemd[1]: Stopped Ceph nfs.cephfs.2.0.compute-0.ljexyw for 437b1e74-f995-5d64-af1d-257ce01d77ab.
Mar  1 04:57:43 np0005634532 systemd[1]: Starting Ceph nfs.cephfs.2.0.compute-0.ljexyw for 437b1e74-f995-5d64-af1d-257ce01d77ab...
Mar  1 04:57:43 np0005634532 podman[228603]: 2026-03-01 09:57:43.374480089 +0000 UTC m=+0.039711389 container create a57ebcf6750112db210220de1f025aaf61a68fa4b2b55a340c886fbd7479c05a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Mar  1 04:57:43 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e45f52de3124181581ed1f03742eb2f3bf7990362f8f65484705dc440f097b55/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Mar  1 04:57:43 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e45f52de3124181581ed1f03742eb2f3bf7990362f8f65484705dc440f097b55/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Mar  1 04:57:43 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e45f52de3124181581ed1f03742eb2f3bf7990362f8f65484705dc440f097b55/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Mar  1 04:57:43 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e45f52de3124181581ed1f03742eb2f3bf7990362f8f65484705dc440f097b55/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.2.0.compute-0.ljexyw-rgw/keyring supports timestamps until 2038 (0x7fffffff)
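The kernel notes that the XFS filesystem backing these overlay mounts lacks the bigtime feature, so inode timestamps saturate at 2038-01-19 (0x7fffffff seconds); with bigtime the limit moves out to 2486. Whether a given filesystem has the feature can be checked (the mount point below is an assumption; adjust to the actual XFS mount, often /):

    # bigtime=1 -> timestamps good past 2038; bigtime=0 matches the warning.
    xfs_info /var/lib/containers | grep -o 'bigtime=[01]'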
Mar  1 04:57:43 np0005634532 podman[228603]: 2026-03-01 09:57:43.426784897 +0000 UTC m=+0.092016267 container init a57ebcf6750112db210220de1f025aaf61a68fa4b2b55a340c886fbd7479c05a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Mar  1 04:57:43 np0005634532 podman[228603]: 2026-03-01 09:57:43.43341942 +0000 UTC m=+0.098650730 container start a57ebcf6750112db210220de1f025aaf61a68fa4b2b55a340c886fbd7479c05a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Mar  1 04:57:43 np0005634532 bash[228603]: a57ebcf6750112db210220de1f025aaf61a68fa4b2b55a340c886fbd7479c05a
Mar  1 04:57:43 np0005634532 podman[228603]: 2026-03-01 09:57:43.355207945 +0000 UTC m=+0.020439265 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Mar  1 04:57:43 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 09:57:43 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Mar  1 04:57:43 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 09:57:43 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Mar  1 04:57:43 np0005634532 systemd[1]: Started Ceph nfs.cephfs.2.0.compute-0.ljexyw for 437b1e74-f995-5d64-af1d-257ce01d77ab.
Mar  1 04:57:43 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 09:57:43 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Mar  1 04:57:43 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 09:57:43 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Mar  1 04:57:43 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 09:57:43 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Mar  1 04:57:43 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 09:57:43 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Mar  1 04:57:43 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 09:57:43 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Mar  1 04:57:43 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 09:57:43 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
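On startup ganesha enters a 90-second NFSv4 grace window so previous clients can reclaim locks and opens; with the RADOS recovery backend it reads the client list from the recovery db (the "Failed to lst kv ret=-2" at 04:57:49 below just means no recovery object exists yet) and lifts grace early at 04:57:55 once the reclaim count is zero. The window length is a ganesha.conf knob; an illustrative fragment, not the deployed configuration:

    # Illustrative ganesha.conf fragment (assumption, not the deployed file):
    # the grace window the log reports as "duration 90".
    cat > /tmp/grace-example.conf <<'EOF'
    NFSv4 {
        Grace_Period = 90;
        Lease_Lifetime = 60;
    }
    EOF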
Mar  1 04:57:43 np0005634532 systemd[1]: session-53.scope: Deactivated successfully.
Mar  1 04:57:43 np0005634532 systemd[1]: session-53.scope: Consumed 2min 58.638s CPU time.
Mar  1 04:57:43 np0005634532 systemd-logind[832]: Session 53 logged out. Waiting for processes to exit.
Mar  1 04:57:43 np0005634532 systemd-logind[832]: Removed session 53.
Mar  1 04:57:43 np0005634532 rsyslogd[1019]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Mar  1 04:57:44 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v450: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 767 B/s wr, 3 op/s
Mar  1 04:57:44 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:57:44 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:57:44 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:57:44.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:57:45 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:57:45 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:57:45 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:57:45.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:57:46 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 04:57:46 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v451: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 767 B/s wr, 3 op/s
Mar  1 04:57:46 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:57:46 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 04:57:46 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:57:46.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 04:57:47 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T09:57:47.054Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Mar  1 04:57:47 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T09:57:47.054Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Mar  1 04:57:47 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: ::ffff:192.168.122.100 - - [01/Mar/2026:09:57:47] "GET /metrics HTTP/1.1" 200 48338 "" "Prometheus/2.51.0"
Mar  1 04:57:47 np0005634532 ceph-mgr[76134]: [prometheus INFO cherrypy.access.140604152982112] ::ffff:192.168.122.100 - - [01/Mar/2026:09:57:47] "GET /metrics HTTP/1.1" 200 48338 "" "Prometheus/2.51.0"
Mar  1 04:57:47 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:57:47 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:57:47 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:57:47.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:57:47 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Mar  1 04:57:47 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
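The mgr polls the OSD blocklist on a timer as part of its volumes/cephadm housekeeping, which is what these audit entries record. The same query from any admin node:

    # The query behind the audit line above, run by hand.
    ceph osd blocklist ls --format json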
Mar  1 04:57:47 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Mar  1 04:57:47 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Mar  1 04:57:48 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Mar  1 04:57:48 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Mar  1 04:57:48 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Mar  1 04:57:48 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Mar  1 04:57:48 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v452: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 1.2 KiB/s wr, 4 op/s
Mar  1 04:57:48 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:57:48 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:57:48 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:57:48.993 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:57:49 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:57:49 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:57:49 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:57:49.417 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:57:49 np0005634532 systemd-logind[832]: New session 54 of user zuul.
Mar  1 04:57:49 np0005634532 systemd[1]: Started Session 54 of User zuul.
Mar  1 04:57:49 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 09:57:49 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[main] rados_kv_traverse :CLIENT ID :EVENT :Failed to lst kv ret=-2
Mar  1 04:57:49 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 09:57:49 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[main] rados_cluster_read_clids :CLIENT ID :EVENT :Failed to traverse recovery db: -2
Mar  1 04:57:49 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 09:57:49 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Mar  1 04:57:49 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 09:57:49 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Mar  1 04:57:50 np0005634532 podman[228796]: 2026-03-01 09:57:50.277658664 +0000 UTC m=+0.058113902 container health_status eba5e7c5bf1cb0694498c3b5d0f9b901dec36f2482ec40aef98fed1e15d9f18d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:bf1f44b6bfa655654f556ae3adca2582892e72550d916a5b0a8dbacb0e210bbe, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '08b9467e1b7e95537191fb2fa6825d176e65d7d6be048e9a0925db064969bbde-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:bf1f44b6bfa655654f556ae3adca2582892e72550d916a5b0a8dbacb0e210bbe', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0)
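The health_status=healthy events come from podman running the healthcheck configured for the container ('test': '/openstack/healthcheck', bind-mounted from the host). It can be fired out of schedule:

    # Run the container's configured healthcheck once; exit 0 == healthy.
    podman healthcheck run ovn_controller && echo healthy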
Mar  1 04:57:50 np0005634532 python3.9[228832]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Mar  1 04:57:50 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v453: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 1.1 KiB/s wr, 3 op/s
Mar  1 04:57:50 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:57:50 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 04:57:50 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:57:50.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 04:57:51 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:57:51 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000024s ======
Mar  1 04:57:51 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:57:51.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Mar  1 04:57:51 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 04:57:51 np0005634532 python3.9[229002]: ansible-ansible.builtin.service_facts Invoked
Mar  1 04:57:51 np0005634532 network[229019]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Mar  1 04:57:51 np0005634532 network[229020]: 'network-scripts' will be removed from distribution in near future.
Mar  1 04:57:51 np0005634532 network[229021]: It is advised to switch to 'NetworkManager' instead for network management.
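The three 'network' lines are the deprecated network-scripts package announcing itself while service_facts enumerates units; the distribution's advice is to manage interfaces with NetworkManager instead. A quick look at what NetworkManager already owns before retiring the legacy service:

    # List devices and their NetworkManager state.
    nmcli -t -f DEVICE,STATE device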
Mar  1 04:57:52 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v454: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 1.1 KiB/s wr, 3 op/s
Mar  1 04:57:52 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:57:52 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 04:57:52 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:57:52.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 04:57:53 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-haproxy-nfs-cephfs-compute-0-wdbjdw[99156]: [WARNING] 059/095753 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 1ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Mar  1 04:57:53 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:57:53 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:57:53 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:57:53.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:57:54 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v455: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 3.8 KiB/s rd, 1.7 KiB/s wr, 5 op/s
Mar  1 04:57:55 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:57:55 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 04:57:55 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:57:54.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 04:57:55 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:57:55 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:57:55 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:57:55.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:57:55 np0005634532 python3.9[229324]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Mar  1 04:57:55 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 09:57:55 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[main] rados_cluster_end_grace :CLIENT ID :EVENT :Failed to remove rec-0000000000000019:nfs.cephfs.2: -2
Mar  1 04:57:55 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 09:57:55 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Mar  1 04:57:55 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 09:57:55 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Mar  1 04:57:55 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 09:57:55 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Mar  1 04:57:55 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 09:57:55 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Mar  1 04:57:55 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 09:57:55 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Mar  1 04:57:55 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 09:57:55 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Mar  1 04:57:55 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 09:57:55 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Mar  1 04:57:55 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 09:57:55 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Mar  1 04:57:55 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 09:57:55 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Mar  1 04:57:55 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 09:57:55 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Mar  1 04:57:55 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 09:57:55 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Mar  1 04:57:55 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 09:57:55 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Mar  1 04:57:55 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 09:57:55 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Mar  1 04:57:55 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 09:57:55 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Mar  1 04:57:55 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 09:57:55 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Mar  1 04:57:55 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 09:57:55 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Mar  1 04:57:55 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 09:57:55 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Mar  1 04:57:55 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 09:57:55 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Mar  1 04:57:55 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 09:57:55 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Mar  1 04:57:55 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 09:57:55 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Mar  1 04:57:55 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 09:57:55 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Mar  1 04:57:55 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 09:57:55 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Mar  1 04:57:55 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 09:57:55 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Mar  1 04:57:55 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 09:57:55 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Mar  1 04:57:55 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 09:57:55 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Mar  1 04:57:55 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 09:57:55 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Mar  1 04:57:55 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 09:57:55 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Mar  1 04:57:56 np0005634532 podman[229393]: 2026-03-01 09:57:56.242897795 +0000 UTC m=+0.060557992 container health_status 1df4dec302d8b40bce93714986cfc892519e8a31436399020d0034ae17ac64d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a5baae9e21dffd42c66f546b0b62f95c6af9f409d05e01e7483de42d5dd2a382, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '08b9467e1b7e95537191fb2fa6825d176e65d7d6be048e9a0925db064969bbde-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a5baae9e21dffd42c66f546b0b62f95c6af9f409d05e01e7483de42d5dd2a382', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Mar  1 04:57:56 np0005634532 python3.9[229438]: ansible-ansible.legacy.dnf Invoked with name=['iscsi-initiator-utils'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Mar  1 04:57:56 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 04:57:56 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v456: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 1023 B/s wr, 3 op/s
Mar  1 04:57:57 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:57:57 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000024s ======
Mar  1 04:57:57 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:57:57.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Mar  1 04:57:57 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T09:57:57.054Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Mar  1 04:57:57 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: ::ffff:192.168.122.100 - - [01/Mar/2026:09:57:57] "GET /metrics HTTP/1.1" 200 48345 "" "Prometheus/2.51.0"
Mar  1 04:57:57 np0005634532 ceph-mgr[76134]: [prometheus INFO cherrypy.access.140604152982112] ::ffff:192.168.122.100 - - [01/Mar/2026:09:57:57] "GET /metrics HTTP/1.1" 200 48345 "" "Prometheus/2.51.0"
Mar  1 04:57:57 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 09:57:57 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff884000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:57:57 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 09:57:57 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8780016c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:57:57 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 09:57:57 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff860000b60 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:57:57 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:57:57 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:57:57 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:57:57.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:57:58 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v457: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.1 KiB/s rd, 1023 B/s wr, 3 op/s
Mar  1 04:57:59 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:57:59 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:57:59 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:57:59.003 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:57:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-haproxy-nfs-cephfs-compute-0-wdbjdw[99156]: [WARNING] 059/095759 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Mar  1 04:57:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 09:57:59 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8780016c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:57:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 09:57:59 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff884000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:57:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 09:57:59 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff864000d00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:57:59 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:57:59 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:57:59 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:57:59.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:58:00 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v458: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 597 B/s wr, 2 op/s
Mar  1 04:58:01 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:58:01 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 04:58:01 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:58:01.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 04:58:01 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 09:58:01 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff884000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:58:01 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 09:58:01 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8780023d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:58:01 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 09:58:01 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8600016a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:58:01 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:58:01 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:58:01 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:58:01.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:58:01 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 04:58:02 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Mar  1 04:58:02 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Mar  1 04:58:02 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v459: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 597 B/s wr, 2 op/s
Mar  1 04:58:02 np0005634532 python3.9[229606]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated/iscsid/etc/iscsi follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Mar  1 04:58:03 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:58:03 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 04:58:03 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:58:03.006 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 04:58:03 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 09:58:03 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8780023d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:58:03 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 09:58:03 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8600016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:58:03 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 09:58:03 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8840089f0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:58:03 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:58:03 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:58:03 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:58:03.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:58:03 np0005634532 python3.9[229809]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/restorecon -nvr /etc/iscsi /var/lib/iscsi _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
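The -n in that restorecon invocation makes it a dry run: combined with -v it only lists files under /etc/iscsi and /var/lib/iscsi whose SELinux labels differ from policy, changing nothing. Dropping -n would apply the relabel:

    # Dry-run (-n) reports would-be relabels; the commented form applies them.
    /usr/sbin/restorecon -nvr /etc/iscsi /var/lib/iscsi
    # /usr/sbin/restorecon -vr /etc/iscsi /var/lib/iscsi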
Mar  1 04:58:03 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Mar  1 04:58:03 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Mar  1 04:58:04 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Mar  1 04:58:04 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Mar  1 04:58:04 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Mar  1 04:58:04 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 04:58:04 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Mar  1 04:58:04 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 04:58:04 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Mar  1 04:58:04 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Mar  1 04:58:04 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Mar  1 04:58:04 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Mar  1 04:58:04 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Mar  1 04:58:04 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Mar  1 04:58:04 np0005634532 python3.9[230048]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/.initiator_reset follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Mar  1 04:58:04 np0005634532 podman[230089]: 2026-03-01 09:58:04.591898526 +0000 UTC m=+0.056398030 container create 74cd5b234e9820784bbf5fc619af9b31244c4b12eb7bbae3e276da129c44cd2c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_ritchie, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325)
Mar  1 04:58:04 np0005634532 podman[230089]: 2026-03-01 09:58:04.554850114 +0000 UTC m=+0.019349618 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Mar  1 04:58:04 np0005634532 systemd[1]: Started libpod-conmon-74cd5b234e9820784bbf5fc619af9b31244c4b12eb7bbae3e276da129c44cd2c.scope.
Mar  1 04:58:04 np0005634532 systemd[1]: Started libcrun container.
Mar  1 04:58:04 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v460: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 597 B/s wr, 2 op/s
Mar  1 04:58:04 np0005634532 podman[230089]: 2026-03-01 09:58:04.76145185 +0000 UTC m=+0.225951344 container init 74cd5b234e9820784bbf5fc619af9b31244c4b12eb7bbae3e276da129c44cd2c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_ritchie, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Mar  1 04:58:04 np0005634532 podman[230089]: 2026-03-01 09:58:04.76672202 +0000 UTC m=+0.231221524 container start 74cd5b234e9820784bbf5fc619af9b31244c4b12eb7bbae3e276da129c44cd2c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_ritchie, CEPH_REF=squid, io.buildah.version=1.40.1, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Mar  1 04:58:04 np0005634532 bold_ritchie[230105]: 167 167
Mar  1 04:58:04 np0005634532 systemd[1]: libpod-74cd5b234e9820784bbf5fc619af9b31244c4b12eb7bbae3e276da129c44cd2c.scope: Deactivated successfully.
Mar  1 04:58:04 np0005634532 podman[230089]: 2026-03-01 09:58:04.778642234 +0000 UTC m=+0.243141748 container attach 74cd5b234e9820784bbf5fc619af9b31244c4b12eb7bbae3e276da129c44cd2c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_ritchie, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Mar  1 04:58:04 np0005634532 podman[230089]: 2026-03-01 09:58:04.77970763 +0000 UTC m=+0.244207134 container died 74cd5b234e9820784bbf5fc619af9b31244c4b12eb7bbae3e276da129c44cd2c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_ritchie, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Mar  1 04:58:04 np0005634532 systemd[1]: var-lib-containers-storage-overlay-991d356f6e579bf7d42d07f0816a94e2bb56b7a4012620c64d88602726d69a2d-merged.mount: Deactivated successfully.
Mar  1 04:58:04 np0005634532 podman[230089]: 2026-03-01 09:58:04.835590516 +0000 UTC m=+0.300090020 container remove 74cd5b234e9820784bbf5fc619af9b31244c4b12eb7bbae3e276da129c44cd2c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_ritchie, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Mar  1 04:58:04 np0005634532 systemd[1]: libpod-conmon-74cd5b234e9820784bbf5fc619af9b31244c4b12eb7bbae3e276da129c44cd2c.scope: Deactivated successfully.
Mar  1 04:58:04 np0005634532 podman[230156]: 2026-03-01 09:58:04.987172978 +0000 UTC m=+0.073581353 container create 5d6145a8721ca2e8c3ace74b29b0c6b92669f4f8640b1c627e270efa739d4f94 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_ramanujan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Mar  1 04:58:05 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:58:05 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 04:58:05 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:58:05.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 04:58:05 np0005634532 podman[230156]: 2026-03-01 09:58:04.935743942 +0000 UTC m=+0.022152347 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Mar  1 04:58:05 np0005634532 systemd[1]: Started libpod-conmon-5d6145a8721ca2e8c3ace74b29b0c6b92669f4f8640b1c627e270efa739d4f94.scope.
Mar  1 04:58:05 np0005634532 systemd[1]: Started libcrun container.
Mar  1 04:58:05 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0cb805db4d51272de754db15b4676850ffe09aab264e67fa3ee171f6afe51bb1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Mar  1 04:58:05 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0cb805db4d51272de754db15b4676850ffe09aab264e67fa3ee171f6afe51bb1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Mar  1 04:58:05 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0cb805db4d51272de754db15b4676850ffe09aab264e67fa3ee171f6afe51bb1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Mar  1 04:58:05 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0cb805db4d51272de754db15b4676850ffe09aab264e67fa3ee171f6afe51bb1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Mar  1 04:58:05 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0cb805db4d51272de754db15b4676850ffe09aab264e67fa3ee171f6afe51bb1/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Mar  1 04:58:05 np0005634532 podman[230156]: 2026-03-01 09:58:05.10948662 +0000 UTC m=+0.195895005 container init 5d6145a8721ca2e8c3ace74b29b0c6b92669f4f8640b1c627e270efa739d4f94 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_ramanujan, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Mar  1 04:58:05 np0005634532 podman[230156]: 2026-03-01 09:58:05.114025241 +0000 UTC m=+0.200433626 container start 5d6145a8721ca2e8c3ace74b29b0c6b92669f4f8640b1c627e270efa739d4f94 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_ramanujan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Mar  1 04:58:05 np0005634532 podman[230156]: 2026-03-01 09:58:05.118214134 +0000 UTC m=+0.204622539 container attach 5d6145a8721ca2e8c3ace74b29b0c6b92669f4f8640b1c627e270efa739d4f94 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_ramanujan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Mar  1 04:58:05 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 09:58:05 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff868001140 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:58:05 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Mar  1 04:58:05 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 04:58:05 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 04:58:05 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Mar  1 04:58:05 np0005634532 affectionate_ramanujan[230172]: --> passed data devices: 0 physical, 1 LVM
Mar  1 04:58:05 np0005634532 affectionate_ramanujan[230172]: --> All data devices are unavailable
Mar  1 04:58:05 np0005634532 systemd[1]: libpod-5d6145a8721ca2e8c3ace74b29b0c6b92669f4f8640b1c627e270efa739d4f94.scope: Deactivated successfully.
Mar  1 04:58:05 np0005634532 podman[230156]: 2026-03-01 09:58:05.398382943 +0000 UTC m=+0.484791328 container died 5d6145a8721ca2e8c3ace74b29b0c6b92669f4f8640b1c627e270efa739d4f94 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_ramanujan, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Mar  1 04:58:05 np0005634532 systemd[1]: var-lib-containers-storage-overlay-0cb805db4d51272de754db15b4676850ffe09aab264e67fa3ee171f6afe51bb1-merged.mount: Deactivated successfully.
Mar  1 04:58:06 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:58:06 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 09:58:06 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8780023d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:58:06 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 09:58:06 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8600016a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:58:06 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:58:06 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:58:06.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:58:06 np0005634532 podman[230156]: 2026-03-01 09:58:06.243017099 +0000 UTC m=+1.329425484 container remove 5d6145a8721ca2e8c3ace74b29b0c6b92669f4f8640b1c627e270efa739d4f94 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_ramanujan, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Mar  1 04:58:06 np0005634532 systemd[1]: libpod-conmon-5d6145a8721ca2e8c3ace74b29b0c6b92669f4f8640b1c627e270efa739d4f94.scope: Deactivated successfully.
Mar  1 04:58:06 np0005634532 python3.9[230331]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/iscsi-iname _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Mar  1 04:58:06 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 04:58:06 np0005634532 podman[230447]: 2026-03-01 09:58:06.740672591 +0000 UTC m=+0.038396347 container create 4b3591cd3ce55201b30492e0905413745440061ed943ea00852a55105ec0bfe5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_goldstine, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Mar  1 04:58:06 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v461: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 B/s wr, 0 op/s
Mar  1 04:58:06 np0005634532 systemd[1]: Started libpod-conmon-4b3591cd3ce55201b30492e0905413745440061ed943ea00852a55105ec0bfe5.scope.
Mar  1 04:58:06 np0005634532 systemd[1]: Started libcrun container.
Mar  1 04:58:06 np0005634532 podman[230447]: 2026-03-01 09:58:06.724082282 +0000 UTC m=+0.021806058 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Mar  1 04:58:06 np0005634532 podman[230447]: 2026-03-01 09:58:06.828597456 +0000 UTC m=+0.126321222 container init 4b3591cd3ce55201b30492e0905413745440061ed943ea00852a55105ec0bfe5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_goldstine, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Mar  1 04:58:06 np0005634532 podman[230447]: 2026-03-01 09:58:06.835300261 +0000 UTC m=+0.133024037 container start 4b3591cd3ce55201b30492e0905413745440061ed943ea00852a55105ec0bfe5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_goldstine, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Mar  1 04:58:06 np0005634532 systemd[1]: libpod-4b3591cd3ce55201b30492e0905413745440061ed943ea00852a55105ec0bfe5.scope: Deactivated successfully.
Mar  1 04:58:06 np0005634532 hopeful_goldstine[230506]: 167 167
Mar  1 04:58:06 np0005634532 conmon[230506]: conmon 4b3591cd3ce55201b304 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-4b3591cd3ce55201b30492e0905413745440061ed943ea00852a55105ec0bfe5.scope/container/memory.events
Mar  1 04:58:06 np0005634532 podman[230447]: 2026-03-01 09:58:06.842106518 +0000 UTC m=+0.139830294 container attach 4b3591cd3ce55201b30492e0905413745440061ed943ea00852a55105ec0bfe5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_goldstine, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Mar  1 04:58:06 np0005634532 podman[230447]: 2026-03-01 09:58:06.842482468 +0000 UTC m=+0.140206224 container died 4b3591cd3ce55201b30492e0905413745440061ed943ea00852a55105ec0bfe5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_goldstine, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Mar  1 04:58:06 np0005634532 systemd[1]: var-lib-containers-storage-overlay-7dc222026d35b4872ea14fcced909ce8e0c69913e9db753ad7774f9860e24ae8-merged.mount: Deactivated successfully.
Mar  1 04:58:06 np0005634532 podman[230447]: 2026-03-01 09:58:06.907893588 +0000 UTC m=+0.205617354 container remove 4b3591cd3ce55201b30492e0905413745440061ed943ea00852a55105ec0bfe5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hopeful_goldstine, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Mar  1 04:58:06 np0005634532 systemd[1]: libpod-conmon-4b3591cd3ce55201b30492e0905413745440061ed943ea00852a55105ec0bfe5.scope: Deactivated successfully.
Mar  1 04:58:07 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:58:07 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 04:58:07 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:58:07.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 04:58:07 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: ::ffff:192.168.122.100 - - [01/Mar/2026:09:58:07] "GET /metrics HTTP/1.1" 200 48343 "" "Prometheus/2.51.0"
Mar  1 04:58:07 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T09:58:07.055Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Mar  1 04:58:07 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T09:58:07.055Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Mar  1 04:58:07 np0005634532 ceph-mgr[76134]: [prometheus INFO cherrypy.access.140604152982112] ::ffff:192.168.122.100 - - [01/Mar/2026:09:58:07] "GET /metrics HTTP/1.1" 200 48343 "" "Prometheus/2.51.0"
Mar  1 04:58:07 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T09:58:07.057Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Mar  1 04:58:07 np0005634532 podman[230619]: 2026-03-01 09:58:07.082110518 +0000 UTC m=+0.044674781 container create a025f1c12d79657cf8c6a7d769bf99b111562afdbc6a5abc4e05687ed5c9aacf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_pascal, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Mar  1 04:58:07 np0005634532 systemd[1]: Started libpod-conmon-a025f1c12d79657cf8c6a7d769bf99b111562afdbc6a5abc4e05687ed5c9aacf.scope.
Mar  1 04:58:07 np0005634532 podman[230619]: 2026-03-01 09:58:07.064894934 +0000 UTC m=+0.027459077 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Mar  1 04:58:07 np0005634532 systemd[1]: Started libcrun container.
Mar  1 04:58:07 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af90ac04105df746f24c1a28e47917438d90d688fb9ae87141ff06c561698289/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Mar  1 04:58:07 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af90ac04105df746f24c1a28e47917438d90d688fb9ae87141ff06c561698289/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Mar  1 04:58:07 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af90ac04105df746f24c1a28e47917438d90d688fb9ae87141ff06c561698289/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Mar  1 04:58:07 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af90ac04105df746f24c1a28e47917438d90d688fb9ae87141ff06c561698289/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Mar  1 04:58:07 np0005634532 python3.9[230613]: ansible-ansible.legacy.stat Invoked with path=/etc/iscsi/initiatorname.iscsi follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Mar  1 04:58:07 np0005634532 podman[230619]: 2026-03-01 09:58:07.195361706 +0000 UTC m=+0.157925839 container init a025f1c12d79657cf8c6a7d769bf99b111562afdbc6a5abc4e05687ed5c9aacf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_pascal, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Mar  1 04:58:07 np0005634532 podman[230619]: 2026-03-01 09:58:07.202636185 +0000 UTC m=+0.165200318 container start a025f1c12d79657cf8c6a7d769bf99b111562afdbc6a5abc4e05687ed5c9aacf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_pascal, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Mar  1 04:58:07 np0005634532 podman[230619]: 2026-03-01 09:58:07.211025942 +0000 UTC m=+0.173590165 container attach a025f1c12d79657cf8c6a7d769bf99b111562afdbc6a5abc4e05687ed5c9aacf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_pascal, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Mar  1 04:58:07 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 09:58:07 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8780023d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:58:07 np0005634532 practical_pascal[230635]: {
Mar  1 04:58:07 np0005634532 practical_pascal[230635]:    "0": [
Mar  1 04:58:07 np0005634532 practical_pascal[230635]:        {
Mar  1 04:58:07 np0005634532 practical_pascal[230635]:            "devices": [
Mar  1 04:58:07 np0005634532 practical_pascal[230635]:                "/dev/loop3"
Mar  1 04:58:07 np0005634532 practical_pascal[230635]:            ],
Mar  1 04:58:07 np0005634532 practical_pascal[230635]:            "lv_name": "ceph_lv0",
Mar  1 04:58:07 np0005634532 practical_pascal[230635]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Mar  1 04:58:07 np0005634532 practical_pascal[230635]:            "lv_size": "21470642176",
Mar  1 04:58:07 np0005634532 practical_pascal[230635]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=0RN1y3-WDwf-vmRn-3Uec-9cdU-WGcX-8Z6LEG,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=437b1e74-f995-5d64-af1d-257ce01d77ab,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=e5da778e-73b7-4ea1-8a91-750fe3f6aa68,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Mar  1 04:58:07 np0005634532 practical_pascal[230635]:            "lv_uuid": "0RN1y3-WDwf-vmRn-3Uec-9cdU-WGcX-8Z6LEG",
Mar  1 04:58:07 np0005634532 practical_pascal[230635]:            "name": "ceph_lv0",
Mar  1 04:58:07 np0005634532 practical_pascal[230635]:            "path": "/dev/ceph_vg0/ceph_lv0",
Mar  1 04:58:07 np0005634532 practical_pascal[230635]:            "tags": {
Mar  1 04:58:07 np0005634532 practical_pascal[230635]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Mar  1 04:58:07 np0005634532 practical_pascal[230635]:                "ceph.block_uuid": "0RN1y3-WDwf-vmRn-3Uec-9cdU-WGcX-8Z6LEG",
Mar  1 04:58:07 np0005634532 practical_pascal[230635]:                "ceph.cephx_lockbox_secret": "",
Mar  1 04:58:07 np0005634532 practical_pascal[230635]:                "ceph.cluster_fsid": "437b1e74-f995-5d64-af1d-257ce01d77ab",
Mar  1 04:58:07 np0005634532 practical_pascal[230635]:                "ceph.cluster_name": "ceph",
Mar  1 04:58:07 np0005634532 practical_pascal[230635]:                "ceph.crush_device_class": "",
Mar  1 04:58:07 np0005634532 practical_pascal[230635]:                "ceph.encrypted": "0",
Mar  1 04:58:07 np0005634532 practical_pascal[230635]:                "ceph.osd_fsid": "e5da778e-73b7-4ea1-8a91-750fe3f6aa68",
Mar  1 04:58:07 np0005634532 practical_pascal[230635]:                "ceph.osd_id": "0",
Mar  1 04:58:07 np0005634532 practical_pascal[230635]:                "ceph.osdspec_affinity": "default_drive_group",
Mar  1 04:58:07 np0005634532 practical_pascal[230635]:                "ceph.type": "block",
Mar  1 04:58:07 np0005634532 practical_pascal[230635]:                "ceph.vdo": "0",
Mar  1 04:58:07 np0005634532 practical_pascal[230635]:                "ceph.with_tpm": "0"
Mar  1 04:58:07 np0005634532 practical_pascal[230635]:            },
Mar  1 04:58:07 np0005634532 practical_pascal[230635]:            "type": "block",
Mar  1 04:58:07 np0005634532 practical_pascal[230635]:            "vg_name": "ceph_vg0"
Mar  1 04:58:07 np0005634532 practical_pascal[230635]:        }
Mar  1 04:58:07 np0005634532 practical_pascal[230635]:    ]
Mar  1 04:58:07 np0005634532 practical_pascal[230635]: }
Mar  1 04:58:07 np0005634532 systemd[1]: libpod-a025f1c12d79657cf8c6a7d769bf99b111562afdbc6a5abc4e05687ed5c9aacf.scope: Deactivated successfully.
Mar  1 04:58:07 np0005634532 podman[230619]: 2026-03-01 09:58:07.486694469 +0000 UTC m=+0.449258592 container died a025f1c12d79657cf8c6a7d769bf99b111562afdbc6a5abc4e05687ed5c9aacf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_pascal, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Mar  1 04:58:07 np0005634532 systemd[1]: var-lib-containers-storage-overlay-af90ac04105df746f24c1a28e47917438d90d688fb9ae87141ff06c561698289-merged.mount: Deactivated successfully.
Mar  1 04:58:07 np0005634532 podman[230619]: 2026-03-01 09:58:07.580051288 +0000 UTC m=+0.542615411 container remove a025f1c12d79657cf8c6a7d769bf99b111562afdbc6a5abc4e05687ed5c9aacf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_pascal, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Mar  1 04:58:07 np0005634532 systemd[1]: libpod-conmon-a025f1c12d79657cf8c6a7d769bf99b111562afdbc6a5abc4e05687ed5c9aacf.scope: Deactivated successfully.
Mar  1 04:58:07 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 09:58:07 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8780023d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:58:07 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 09:58:07 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff884009310 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:58:07 np0005634532 python3.9[230778]: ansible-ansible.legacy.copy Invoked with dest=/etc/iscsi/initiatorname.iscsi mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1772359086.7444186-240-233752209544046/.source.iscsi _original_basename=.xigzmb0l follow=False checksum=d4d74227979b09d641f96f48327070a05289ae22 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:58:08 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-haproxy-nfs-cephfs-compute-0-wdbjdw[99156]: [WARNING] 059/095808 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Mar  1 04:58:08 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:58:08 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 04:58:08 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:58:08.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 04:58:08 np0005634532 podman[230919]: 2026-03-01 09:58:08.150515043 +0000 UTC m=+0.066623111 container create b9fab960a1851b1ca091df3a216d48751ce7606876904b5879a6455735b51f12 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_clarke, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Mar  1 04:58:08 np0005634532 systemd[1]: Started libpod-conmon-b9fab960a1851b1ca091df3a216d48751ce7606876904b5879a6455735b51f12.scope.
Mar  1 04:58:08 np0005634532 podman[230919]: 2026-03-01 09:58:08.122380251 +0000 UTC m=+0.038488359 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Mar  1 04:58:08 np0005634532 systemd[1]: Started libcrun container.
Mar  1 04:58:08 np0005634532 podman[230919]: 2026-03-01 09:58:08.251492039 +0000 UTC m=+0.167600177 container init b9fab960a1851b1ca091df3a216d48751ce7606876904b5879a6455735b51f12 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_clarke, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Mar  1 04:58:08 np0005634532 podman[230919]: 2026-03-01 09:58:08.259537818 +0000 UTC m=+0.175645886 container start b9fab960a1851b1ca091df3a216d48751ce7606876904b5879a6455735b51f12 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_clarke, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True)
Mar  1 04:58:08 np0005634532 podman[230919]: 2026-03-01 09:58:08.263564347 +0000 UTC m=+0.179672425 container attach b9fab960a1851b1ca091df3a216d48751ce7606876904b5879a6455735b51f12 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_clarke, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Mar  1 04:58:08 np0005634532 confident_clarke[230965]: 167 167
Mar  1 04:58:08 np0005634532 systemd[1]: libpod-b9fab960a1851b1ca091df3a216d48751ce7606876904b5879a6455735b51f12.scope: Deactivated successfully.
Mar  1 04:58:08 np0005634532 podman[230919]: 2026-03-01 09:58:08.267086033 +0000 UTC m=+0.183194121 container died b9fab960a1851b1ca091df3a216d48751ce7606876904b5879a6455735b51f12 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_clarke, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Mar  1 04:58:08 np0005634532 systemd[1]: var-lib-containers-storage-overlay-41c0784dff7b840f8e01f0ba105cabddbca891f61ea610efa968e36df42c5c15-merged.mount: Deactivated successfully.
Mar  1 04:58:08 np0005634532 podman[230919]: 2026-03-01 09:58:08.308872162 +0000 UTC m=+0.224980260 container remove b9fab960a1851b1ca091df3a216d48751ce7606876904b5879a6455735b51f12 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_clarke, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Mar  1 04:58:08 np0005634532 systemd[1]: libpod-conmon-b9fab960a1851b1ca091df3a216d48751ce7606876904b5879a6455735b51f12.scope: Deactivated successfully.
Mar  1 04:58:08 np0005634532 podman[231035]: 2026-03-01 09:58:08.476254633 +0000 UTC m=+0.054459661 container create 3bbd654b3b241c5b81bbf1a3da8f9ee859a6cc54aa0400ebc201847ba836b2a0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_easley, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Mar  1 04:58:08 np0005634532 systemd[1]: Started libpod-conmon-3bbd654b3b241c5b81bbf1a3da8f9ee859a6cc54aa0400ebc201847ba836b2a0.scope.
Mar  1 04:58:08 np0005634532 systemd[1]: Started libcrun container.
Mar  1 04:58:08 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a5581731de89efaaff3c7cf30858689d176b7e08c95803f2fa240a17116195d1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Mar  1 04:58:08 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a5581731de89efaaff3c7cf30858689d176b7e08c95803f2fa240a17116195d1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Mar  1 04:58:08 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a5581731de89efaaff3c7cf30858689d176b7e08c95803f2fa240a17116195d1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Mar  1 04:58:08 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a5581731de89efaaff3c7cf30858689d176b7e08c95803f2fa240a17116195d1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Mar  1 04:58:08 np0005634532 podman[231035]: 2026-03-01 09:58:08.452317264 +0000 UTC m=+0.030522372 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Mar  1 04:58:08 np0005634532 podman[231035]: 2026-03-01 09:58:08.551740792 +0000 UTC m=+0.129945820 container init 3bbd654b3b241c5b81bbf1a3da8f9ee859a6cc54aa0400ebc201847ba836b2a0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_easley, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True)
Mar  1 04:58:08 np0005634532 podman[231035]: 2026-03-01 09:58:08.562413105 +0000 UTC m=+0.140618123 container start 3bbd654b3b241c5b81bbf1a3da8f9ee859a6cc54aa0400ebc201847ba836b2a0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_easley, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2)
Mar  1 04:58:08 np0005634532 podman[231035]: 2026-03-01 09:58:08.566872495 +0000 UTC m=+0.145077523 container attach 3bbd654b3b241c5b81bbf1a3da8f9ee859a6cc54aa0400ebc201847ba836b2a0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_easley, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Mar  1 04:58:08 np0005634532 python3.9[231078]: ansible-ansible.builtin.file Invoked with mode=0600 path=/etc/iscsi/.initiator_reset state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:58:08 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v462: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Mar  1 04:58:09 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:58:09 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:58:09 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:58:09.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:58:09 np0005634532 lvm[231271]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Mar  1 04:58:09 np0005634532 lvm[231271]: VG ceph_vg0 finished
Mar  1 04:58:09 np0005634532 sharp_easley[231081]: {}
Mar  1 04:58:09 np0005634532 systemd[1]: libpod-3bbd654b3b241c5b81bbf1a3da8f9ee859a6cc54aa0400ebc201847ba836b2a0.scope: Deactivated successfully.
Mar  1 04:58:09 np0005634532 podman[231035]: 2026-03-01 09:58:09.2327711 +0000 UTC m=+0.810976128 container died 3bbd654b3b241c5b81bbf1a3da8f9ee859a6cc54aa0400ebc201847ba836b2a0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_easley, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Mar  1 04:58:09 np0005634532 systemd[1]: var-lib-containers-storage-overlay-a5581731de89efaaff3c7cf30858689d176b7e08c95803f2fa240a17116195d1-merged.mount: Deactivated successfully.
Mar  1 04:58:09 np0005634532 podman[231035]: 2026-03-01 09:58:09.278365703 +0000 UTC m=+0.856570721 container remove 3bbd654b3b241c5b81bbf1a3da8f9ee859a6cc54aa0400ebc201847ba836b2a0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_easley, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Mar  1 04:58:09 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 09:58:09 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff864001820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:58:09 np0005634532 systemd[1]: libpod-conmon-3bbd654b3b241c5b81bbf1a3da8f9ee859a6cc54aa0400ebc201847ba836b2a0.scope: Deactivated successfully.
Mar  1 04:58:09 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Mar  1 04:58:09 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 04:58:09 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Mar  1 04:58:09 np0005634532 python3.9[231310]: ansible-ansible.builtin.lineinfile Invoked with insertafter=^#node.session.auth.chap.algs line=node.session.auth.chap_algs = SHA3-256,SHA256,SHA1,MD5 path=/etc/iscsi/iscsid.conf regexp=^node.session.auth.chap_algs state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
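The lineinfile task above pins the CHAP algorithm list that iscsid will offer. A minimal Python sketch of the same idempotent edit, assuming the stock iscsid.conf layout (the real module also handles ownership, SELinux context, and atomic writes):

    import re
    from pathlib import Path

    LINE = "node.session.auth.chap_algs = SHA3-256,SHA256,SHA1,MD5"

    def ensure_chap_algs(path=Path("/etc/iscsi/iscsid.conf")):
        lines = path.read_text(encoding="utf-8").splitlines()
        for i, l in enumerate(lines):
            if re.match(r"^node\.session\.auth\.chap_algs", l):
                if l == LINE:
                    return False                # already as requested
                lines[i] = LINE                 # rewrite the existing setting
                break
        else:
            # insertafter=^#node.session.auth.chap.algs, per the logged task
            for i, l in enumerate(lines):
                if re.match(r"^#node\.session\.auth\.chap\.algs", l):
                    lines.insert(i + 1, LINE)
                    break
            else:
                lines.append(LINE)              # no anchor found: append at EOF
        path.write_text("\n".join(lines) + "\n", encoding="utf-8")
        return True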
Mar  1 04:58:09 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 04:58:09 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 09:58:09 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff864001820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:58:09 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 09:58:09 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8780023d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:58:09 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 04:58:09 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 04:58:10 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:58:10 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:58:10 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:58:10.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:58:10 np0005634532 python3.9[231504]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Mar  1 04:58:10 np0005634532 systemd[1]: Listening on Open-iSCSI iscsid Socket.
Mar  1 04:58:10 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v463: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Mar  1 04:58:11 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:58:11 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 04:58:11 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:58:11.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 04:58:11 np0005634532 python3.9[231661]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Mar  1 04:58:11 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 09:58:11 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff884009310 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:58:11 np0005634532 systemd[1]: Reloading.
Mar  1 04:58:11 np0005634532 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Mar  1 04:58:11 np0005634532 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Mar  1 04:58:11 np0005634532 systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Mar  1 04:58:11 np0005634532 systemd[1]: Starting Open-iSCSI...
Mar  1 04:58:11 np0005634532 kernel: Loading iSCSI transport class v2.0-870.
Mar  1 04:58:11 np0005634532 systemd[1]: Started Open-iSCSI.
Mar  1 04:58:11 np0005634532 systemd[1]: Starting Logout off all iSCSI sessions on shutdown...
Mar  1 04:58:11 np0005634532 systemd[1]: Finished Logout off all iSCSI sessions on shutdown.
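The two systemd_service tasks in this stretch (iscsid.socket, then iscsid) amount to enabling and starting each unit. A sketch of the equivalent calls; note the Ansible module talks to systemd directly rather than shelling out:

    import subprocess

    for unit in ("iscsid.socket", "iscsid"):
        # enabled=True state=started is roughly `systemctl enable --now <unit>`
        subprocess.run(["systemctl", "enable", "--now", unit], check=True)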
Mar  1 04:58:11 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 04:58:11 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 09:58:11 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff864001820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:58:11 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 09:58:11 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff864001820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:58:12 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:58:12 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:58:12 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:58:12.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:58:12 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v464: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Mar  1 04:58:12 np0005634532 python3.9[231867]: ansible-ansible.builtin.service_facts Invoked
Mar  1 04:58:12 np0005634532 network[231884]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Mar  1 04:58:12 np0005634532 network[231885]: 'network-scripts' will be removed from distribution in near future.
Mar  1 04:58:12 np0005634532 network[231886]: It is advised to switch to 'NetworkManager' instead for network management.
Mar  1 04:58:13 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:58:13 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:58:13 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:58:13.016 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:58:13 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 09:58:13 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8780023d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:58:13 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 09:58:13 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff884009310 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:58:13 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 09:58:13 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff864001820 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:58:14 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:58:14 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:58:14 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:58:14.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:58:14 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v465: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Mar  1 04:58:15 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:58:15 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 04:58:15 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:58:15.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 04:58:15 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 09:58:15 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8680022e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:58:15 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 09:58:15 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8780023d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:58:15 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 09:58:15 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff88400a410 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:58:16 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:58:16 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000024s ======
Mar  1 04:58:16 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:58:16.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Mar  1 04:58:16 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 04:58:16 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v466: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Mar  1 04:58:16 np0005634532 python3.9[232189]: ansible-ansible.legacy.dnf Invoked with name=['device-mapper-multipath'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Mar  1 04:58:16 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 09:58:16 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Mar  1 04:58:17 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:58:17 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:58:17 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:58:17.021 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:58:17 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: ::ffff:192.168.122.100 - - [01/Mar/2026:09:58:17] "GET /metrics HTTP/1.1" 200 48343 "" "Prometheus/2.51.0"
Mar  1 04:58:17 np0005634532 ceph-mgr[76134]: [prometheus INFO cherrypy.access.140604152982112] ::ffff:192.168.122.100 - - [01/Mar/2026:09:58:17] "GET /metrics HTTP/1.1" 200 48343 "" "Prometheus/2.51.0"
Mar  1 04:58:17 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T09:58:17.058Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
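The dispatcher error above means both dashboard webhook receivers timed out. A throwaway reachability probe for the same endpoints (hypothetical helper, not part of the deployed stack):

    import urllib.request

    for host in ("compute-1", "compute-2"):
        url = f"http://{host}.ctlplane.example.com:8443/api/prometheus_receiver"
        try:
            urllib.request.urlopen(url, data=b"{}", timeout=5)  # POST
            print(url, "-> reachable")
        except Exception as exc:
            print(url, "->", exc)               # e.g. timed out, refused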
Mar  1 04:58:17 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 09:58:17 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff864001820 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:58:17 np0005634532 ceph-mgr[76134]: [balancer INFO root] Optimize plan auto_2026-03-01_09:58:17
Mar  1 04:58:17 np0005634532 ceph-mgr[76134]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Mar  1 04:58:17 np0005634532 ceph-mgr[76134]: [balancer INFO root] do_upmap
Mar  1 04:58:17 np0005634532 ceph-mgr[76134]: [balancer INFO root] pools ['.rgw.root', 'images', 'default.rgw.meta', '.nfs', 'cephfs.cephfs.data', 'default.rgw.log', 'volumes', 'backups', 'default.rgw.control', '.mgr', 'cephfs.cephfs.meta', 'vms']
Mar  1 04:58:17 np0005634532 ceph-mgr[76134]: [balancer INFO root] prepared 0/10 upmap changes
Mar  1 04:58:17 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Mar  1 04:58:17 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Mar  1 04:58:17 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Mar  1 04:58:17 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Mar  1 04:58:17 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Mar  1 04:58:17 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Mar  1 04:58:17 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Mar  1 04:58:17 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Mar  1 04:58:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] _maybe_adjust
Mar  1 04:58:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 04:58:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Mar  1 04:58:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 04:58:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Mar  1 04:58:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 04:58:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Mar  1 04:58:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 04:58:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Mar  1 04:58:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 04:58:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Mar  1 04:58:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 04:58:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Mar  1 04:58:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 04:58:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Mar  1 04:58:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 04:58:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Mar  1 04:58:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 04:58:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Mar  1 04:58:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 04:58:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Mar  1 04:58:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 04:58:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Mar  1 04:58:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 04:58:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
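The pg_autoscaler lines above are self-consistent: each logged pg target is capacity_ratio * bias * total PG budget, and the numbers reproduce exactly if the budget is 300 (assuming the default mon_target_pg_per_osd of 100 across this cluster's 3 OSDs). Re-deriving a few of them:

    # pg_target = capacity_ratio * bias * budget (budget assumed 3 OSDs * 100)
    BUDGET = 300
    for pool, ratio, bias in (
        (".mgr",               7.185749983720779e-06, 1.0),
        ("cephfs.cephfs.meta", 5.087256625643029e-07, 4.0),
        (".rgw.root",          3.8154424692322717e-07, 1.0),
        ("default.rgw.log",    2.1620840658982875e-06, 1.0),
    ):
        print(pool, ratio * bias * BUDGET)   # matches the logged pg targets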
Mar  1 04:58:17 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Mar  1 04:58:17 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: vms, start_after=
Mar  1 04:58:17 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Mar  1 04:58:17 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: vms, start_after=
Mar  1 04:58:17 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: volumes, start_after=
Mar  1 04:58:17 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: backups, start_after=
Mar  1 04:58:17 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: volumes, start_after=
Mar  1 04:58:17 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: images, start_after=
Mar  1 04:58:17 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: backups, start_after=
Mar  1 04:58:17 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: images, start_after=
Mar  1 04:58:17 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 09:58:17 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff864001820 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:58:17 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 09:58:17 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8780023d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:58:18 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:58:18 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:58:18 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:58:18.129 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:58:18 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v467: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Mar  1 04:58:18 np0005634532 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Mar  1 04:58:18 np0005634532 systemd[1]: Starting man-db-cache-update.service...
Mar  1 04:58:18 np0005634532 systemd[1]: Reloading.
Mar  1 04:58:18 np0005634532 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Mar  1 04:58:18 np0005634532 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Mar  1 04:58:19 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:58:19 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:58:19 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:58:19.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:58:19 np0005634532 systemd[1]: Queuing reload/restart jobs for marked units…
Mar  1 04:58:19 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 09:58:19 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff88400a410 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:58:19 np0005634532 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Mar  1 04:58:19 np0005634532 systemd[1]: Finished man-db-cache-update.service.
Mar  1 04:58:19 np0005634532 systemd[1]: run-r2ae5765260ed476e855e1fb27980404a.service: Deactivated successfully.
Mar  1 04:58:19 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 09:58:19 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff864001820 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:58:19 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 09:58:19 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff864001820 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:58:19 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 09:58:19 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Mar  1 04:58:19 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 09:58:19 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Mar  1 04:58:20 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:58:20 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:58:20 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:58:20.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:58:20 np0005634532 podman[232398]: 2026-03-01 09:58:20.387379101 +0000 UTC m=+0.072418155 container health_status eba5e7c5bf1cb0694498c3b5d0f9b901dec36f2482ec40aef98fed1e15d9f18d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:bf1f44b6bfa655654f556ae3adca2582892e72550d916a5b0a8dbacb0e210bbe, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.build-date=20260223, tcib_managed=true, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '08b9467e1b7e95537191fb2fa6825d176e65d7d6be048e9a0925db064969bbde-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:bf1f44b6bfa655654f556ae3adca2582892e72550d916a5b0a8dbacb0e210bbe', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Mar  1 04:58:20 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v468: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 597 B/s wr, 1 op/s
Mar  1 04:58:20 np0005634532 python3.9[232552]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Mar  1 04:58:21 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:58:21 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:58:21 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:58:21.025 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:58:21 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 09:58:21 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8780023d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:58:21 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 04:58:21 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 09:58:21 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff88400a5b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:58:21 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 09:58:21 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff88400a5b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:58:21 np0005634532 python3.9[232705]: ansible-community.general.modprobe Invoked with name=dm-multipath state=present params= persistent=disabled
Mar  1 04:58:22 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:58:22 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000024s ======
Mar  1 04:58:22 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:58:22.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Mar  1 04:58:22 np0005634532 python3.9[232864]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/dm-multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Mar  1 04:58:22 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v469: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s rd, 597 B/s wr, 1 op/s
Mar  1 04:58:23 np0005634532 python3.9[232988]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/dm-multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1772359102.178965-504-72875112660878/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=065061c60917e4f67cecc70d12ce55e42f9d0b3f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
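Together, the modprobe task and the dm-multipath.conf drop-in make the module available both immediately and on every boot (systemd-modules-load reads /etc/modules-load.d/*.conf early in startup). A compact sketch of that pair of steps:

    import subprocess
    from pathlib import Path

    def persist_module(name="dm-multipath"):
        subprocess.run(["modprobe", name], check=True)      # load now
        conf = Path("/etc/modules-load.d") / f"{name}.conf"
        conf.write_text(name + "\n", encoding="utf-8")      # reload at boot
        conf.chmod(0o644)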
Mar  1 04:58:23 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:58:23 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:58:23 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:58:23.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:58:23 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 09:58:23 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Mar  1 04:58:23 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 09:58:23 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff88400a5b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:58:23 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 09:58:23 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8780023d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:58:23 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 09:58:23 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff88400a5b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:58:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:58:23.871 167541 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Mar  1 04:58:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:58:23.871 167541 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Mar  1 04:58:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:58:23.871 167541 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Mar  1 04:58:23 np0005634532 python3.9[233141]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=dm-multipath  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:58:24 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:58:24 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:58:24 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:58:24.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:58:24 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v470: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 1023 B/s wr, 3 op/s
Mar  1 04:58:25 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:58:25 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:58:25 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:58:25.030 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:58:25 np0005634532 python3.9[233296]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Mar  1 04:58:25 np0005634532 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Mar  1 04:58:25 np0005634532 systemd[1]: Stopped Load Kernel Modules.
Mar  1 04:58:25 np0005634532 systemd[1]: Stopping Load Kernel Modules...
Mar  1 04:58:25 np0005634532 systemd[1]: Starting Load Kernel Modules...
Mar  1 04:58:25 np0005634532 systemd[1]: Finished Load Kernel Modules.
Mar  1 04:58:25 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 09:58:25 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff88400a5b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:58:25 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 09:58:25 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff864003880 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:58:25 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 09:58:25 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8780023d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:58:25 np0005634532 python3.9[233453]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/restorecon -nvr /etc/multipath _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Mar  1 04:58:26 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:58:26 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:58:26 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:58:26.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:58:26 np0005634532 podman[233481]: 2026-03-01 09:58:26.408366895 +0000 UTC m=+0.091286379 container health_status 1df4dec302d8b40bce93714986cfc892519e8a31436399020d0034ae17ac64d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a5baae9e21dffd42c66f546b0b62f95c6af9f409d05e01e7483de42d5dd2a382, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '08b9467e1b7e95537191fb2fa6825d176e65d7d6be048e9a0925db064969bbde-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a5baae9e21dffd42c66f546b0b62f95c6af9f409d05e01e7483de42d5dd2a382', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Mar  1 04:58:26 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 04:58:26 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v471: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 1023 B/s wr, 3 op/s
Mar  1 04:58:26 np0005634532 python3.9[233630]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Mar  1 04:58:27 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:58:27 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:58:27 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:58:27.033 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:58:27 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: ::ffff:192.168.122.100 - - [01/Mar/2026:09:58:27] "GET /metrics HTTP/1.1" 200 48345 "" "Prometheus/2.51.0"
Mar  1 04:58:27 np0005634532 ceph-mgr[76134]: [prometheus INFO cherrypy.access.140604152982112] ::ffff:192.168.122.100 - - [01/Mar/2026:09:58:27] "GET /metrics HTTP/1.1" 200 48345 "" "Prometheus/2.51.0"
Mar  1 04:58:27 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T09:58:27.059Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Mar  1 04:58:27 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 09:58:27 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8780023d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:58:27 np0005634532 python3.9[233783]: ansible-ansible.legacy.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Mar  1 04:58:27 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 09:58:27 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff868002d00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:58:27 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 09:58:27 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff864003880 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:58:28 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-haproxy-nfs-cephfs-compute-0-wdbjdw[99156]: [WARNING] 059/095828 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Mar  1 04:58:28 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:58:28 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:58:28 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:58:28.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:58:28 np0005634532 python3.9[233908]: ansible-ansible.legacy.copy Invoked with dest=/etc/multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1772359107.1586795-657-271350040064068/.source.conf _original_basename=multipath.conf follow=False checksum=bf02ab264d3d648048a81f3bacec8bc58db93162 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:58:28 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v472: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 1023 B/s wr, 3 op/s
Mar  1 04:58:28 np0005634532 python3.9[234062]: ansible-ansible.legacy.command Invoked with _raw_params=grep -q '^blacklist\s*{' /etc/multipath.conf _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Mar  1 04:58:28 np0005634532 ceph-mon[75825]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #33. Immutable memtables: 0.
Mar  1 04:58:28 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-09:58:28.965736) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Mar  1 04:58:28 np0005634532 ceph-mon[75825]: rocksdb: [db/flush_job.cc:856] [default] [JOB 13] Flushing memtable with next log file: 33
Mar  1 04:58:28 np0005634532 ceph-mon[75825]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772359108965801, "job": 13, "event": "flush_started", "num_memtables": 1, "num_entries": 4200, "num_deletes": 502, "total_data_size": 8621093, "memory_usage": 8754272, "flush_reason": "Manual Compaction"}
Mar  1 04:58:28 np0005634532 ceph-mon[75825]: rocksdb: [db/flush_job.cc:885] [default] [JOB 13] Level-0 flush table #34: started
Mar  1 04:58:29 np0005634532 ceph-mon[75825]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772359109026520, "cf_name": "default", "job": 13, "event": "table_file_creation", "file_number": 34, "file_size": 8367052, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13133, "largest_seqno": 17332, "table_properties": {"data_size": 8349249, "index_size": 12057, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 4677, "raw_key_size": 36518, "raw_average_key_size": 19, "raw_value_size": 8312768, "raw_average_value_size": 4481, "num_data_blocks": 527, "num_entries": 1855, "num_filter_entries": 1855, "num_deletions": 502, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772358663, "oldest_key_time": 1772358663, "file_creation_time": 1772359108, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d85a3bc5-3dc5-432f-9fab-fa926ce32d3d", "db_session_id": "FJWJGIYC2V5ZEQGX709M", "orig_file_number": 34, "seqno_to_time_mapping": "N/A"}}
Mar  1 04:58:29 np0005634532 ceph-mon[75825]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 13] Flush lasted 60838 microseconds, and 14437 cpu microseconds.
Mar  1 04:58:29 np0005634532 ceph-mon[75825]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Mar  1 04:58:29 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-09:58:29.026578) [db/flush_job.cc:967] [default] [JOB 13] Level-0 flush table #34: 8367052 bytes OK
Mar  1 04:58:29 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-09:58:29.026602) [db/memtable_list.cc:519] [default] Level-0 commit table #34 started
Mar  1 04:58:29 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-09:58:29.027777) [db/memtable_list.cc:722] [default] Level-0 commit table #34: memtable #1 done
Mar  1 04:58:29 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-09:58:29.027797) EVENT_LOG_v1 {"time_micros": 1772359109027791, "job": 13, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Mar  1 04:58:29 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-09:58:29.027817) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Mar  1 04:58:29 np0005634532 ceph-mon[75825]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 13] Try to delete WAL files size 8604307, prev total WAL file size 8604307, number of live WAL files 2.
Mar  1 04:58:29 np0005634532 ceph-mon[75825]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000030.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Mar  1 04:58:29 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-09:58:29.029404) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031303034' seq:72057594037927935, type:22 .. '7061786F730031323536' seq:0, type:0; will stop at (end)
Mar  1 04:58:29 np0005634532 ceph-mon[75825]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 14] Compacting 1@0 + 1@6 files to L6, score -1.00
Mar  1 04:58:29 np0005634532 ceph-mon[75825]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 13 Base level 0, inputs: [34(8170KB)], [32(11MB)]
Mar  1 04:58:29 np0005634532 ceph-mon[75825]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772359109029493, "job": 14, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [34], "files_L6": [32], "score": -1, "input_data_size": 20731145, "oldest_snapshot_seqno": -1}
Mar  1 04:58:29 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:58:29 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:58:29 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:58:29.035 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:58:29 np0005634532 ceph-mon[75825]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 14] Generated table #35: 5031 keys, 15842968 bytes, temperature: kUnknown
Mar  1 04:58:29 np0005634532 ceph-mon[75825]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772359109116778, "cf_name": "default", "job": 14, "event": "table_file_creation", "file_number": 35, "file_size": 15842968, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 15804438, "index_size": 24852, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12613, "raw_key_size": 125837, "raw_average_key_size": 25, "raw_value_size": 15708430, "raw_average_value_size": 3122, "num_data_blocks": 1047, "num_entries": 5031, "num_filter_entries": 5031, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772358052, "oldest_key_time": 0, "file_creation_time": 1772359109, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d85a3bc5-3dc5-432f-9fab-fa926ce32d3d", "db_session_id": "FJWJGIYC2V5ZEQGX709M", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Mar  1 04:58:29 np0005634532 ceph-mon[75825]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Mar  1 04:58:29 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-09:58:29.117203) [db/compaction/compaction_job.cc:1663] [default] [JOB 14] Compacted 1@0 + 1@6 files to L6 => 15842968 bytes
Mar  1 04:58:29 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-09:58:29.118497) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 237.2 rd, 181.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(8.0, 11.8 +0.0 blob) out(15.1 +0.0 blob), read-write-amplify(4.4) write-amplify(1.9) OK, records in: 6053, records dropped: 1022 output_compression: NoCompression
Mar  1 04:58:29 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-09:58:29.118518) EVENT_LOG_v1 {"time_micros": 1772359109118507, "job": 14, "event": "compaction_finished", "compaction_time_micros": 87416, "compaction_time_cpu_micros": 24608, "output_level": 6, "num_output_files": 1, "total_output_size": 15842968, "num_input_records": 6053, "num_output_records": 5031, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Mar  1 04:58:29 np0005634532 ceph-mon[75825]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000034.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Mar  1 04:58:29 np0005634532 ceph-mon[75825]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772359109119461, "job": 14, "event": "table_file_deletion", "file_number": 34}
Mar  1 04:58:29 np0005634532 ceph-mon[75825]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000032.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Mar  1 04:58:29 np0005634532 ceph-mon[75825]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772359109120744, "job": 14, "event": "table_file_deletion", "file_number": 32}
Mar  1 04:58:29 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-09:58:29.029298) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Mar  1 04:58:29 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-09:58:29.120862) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Mar  1 04:58:29 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-09:58:29.120871) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Mar  1 04:58:29 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-09:58:29.120873) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Mar  1 04:58:29 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-09:58:29.120875) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Mar  1 04:58:29 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-09:58:29.120878) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
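A consistency check on the JOB 14 summary: its throughput and amplification figures follow directly from the byte counts logged with the flush and compaction events (8367052 B of L0 input from table #34, 20731145 B read in total, 15842968 B written over 87416 µs):

    l0_in, total_in = 8367052, 20731145    # table #34 size, input_data_size
    out, t_us = 15842968, 87416            # total_output_size, compaction_time_micros
    print(f"MB/sec: {total_in / t_us:.1f} rd, {out / t_us:.1f} wr")   # 237.2, 181.2
    print(f"read-write-amplify({(total_in + out) / l0_in:.1f})",      # 4.4
          f"write-amplify({out / l0_in:.1f})")                        # 1.9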
Mar  1 04:58:29 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 09:58:29 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff88400a5b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:58:29 np0005634532 python3.9[234217]: ansible-ansible.builtin.lineinfile Invoked with line=blacklist { path=/etc/multipath.conf state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:58:29 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 09:58:29 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8780023d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:58:29 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 09:58:29 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8780023d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:58:30 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:58:30 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:58:30 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:58:30.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:58:30 np0005634532 python3.9[234372]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^(blacklist {) replace=\1\n} backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:58:30 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v473: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Mar  1 04:58:31 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:58:31 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:58:31 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:58:31.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:58:31 np0005634532 python3.9[234525]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^blacklist\s*{\n[\s]+devnode \"\.\*\" replace=blacklist { backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:58:31 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 09:58:31 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff868003620 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:58:31 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 04:58:31 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 09:58:31 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff88400a5b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:58:31 np0005634532 python3.9[234678]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        find_multipaths yes path=/etc/multipath.conf regexp=^\s+find_multipaths state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:58:31 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 09:58:31 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff858000d00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:58:32 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:58:32 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 04:58:32 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:58:32.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 04:58:32 np0005634532 python3.9[234833]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        recheck_wwid yes path=/etc/multipath.conf regexp=^\s+recheck_wwid state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:58:32 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Mar  1 04:58:32 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Mar  1 04:58:32 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v474: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Mar  1 04:58:33 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:58:33 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:58:33 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:58:33.041 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:58:33 np0005634532 python3.9[234986]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        skip_kpartx yes path=/etc/multipath.conf regexp=^\s+skip_kpartx state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:58:33 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 09:58:33 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8780023d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:58:33 np0005634532 python3.9[235166]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        user_friendly_names no path=/etc/multipath.conf regexp=^\s+user_friendly_names state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:58:33 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 09:58:33 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff868003620 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:58:33 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 09:58:33 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff88400a5b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:58:34 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:58:34 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:58:34 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:58:34.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:58:34 np0005634532 python3.9[235321]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
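[annotation] The ansible lineinfile/replace tasks above (pids 234217 through 235321) incrementally rewrite /etc/multipath.conf: open a blacklist stanza, close it, drop any devnode ".*" entry, then pin four defaults. Reconstructed purely from the task parameters (the file itself is never printed, and the insertafter=^defaults anchors imply a pre-existing defaults stanza), the result would look roughly like:

    # reconstructed from the Ansible task parameters above; not a verbatim dump
    defaults {
            find_multipaths yes
            recheck_wwid yes
            skip_kpartx yes
            user_friendly_names no
    }
    blacklist {
    }

The stat task at 04:58:34 then re-reads the file, and multipathd is (re)started against it a few seconds later.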
Mar  1 04:58:34 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v475: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 426 B/s wr, 2 op/s
Mar  1 04:58:35 np0005634532 python3.9[235476]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/bin/true _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Mar  1 04:58:35 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:58:35 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 04:58:35 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:58:35.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 04:58:35 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 09:58:35 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff858001840 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:58:35 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 09:58:35 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8780023d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:58:35 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 09:58:35 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8780023d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:58:35 np0005634532 python3.9[235630]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=multipathd.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Mar  1 04:58:35 np0005634532 systemd[1]: Listening on multipathd control socket.
Mar  1 04:58:36 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:58:36 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 04:58:36 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:58:36.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 04:58:36 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 04:58:36 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v476: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Mar  1 04:58:36 np0005634532 python3.9[235789]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=multipathd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Mar  1 04:58:36 np0005634532 systemd[1]: Starting Wait for udev To Complete Device Initialization...
Mar  1 04:58:36 np0005634532 udevadm[235794]: systemd-udev-settle.service is deprecated. Please fix multipathd.service not to pull it in.
Mar  1 04:58:36 np0005634532 systemd[1]: Finished Wait for udev To Complete Device Initialization.
Mar  1 04:58:36 np0005634532 systemd[1]: Starting Device-Mapper Multipath Device Controller...
Mar  1 04:58:36 np0005634532 multipathd[235797]: --------start up--------
Mar  1 04:58:36 np0005634532 multipathd[235797]: read /etc/multipath.conf
Mar  1 04:58:36 np0005634532 multipathd[235797]: path checkers start up
Mar  1 04:58:36 np0005634532 systemd[1]: Started Device-Mapper Multipath Device Controller.
Mar  1 04:58:37 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:58:37 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000024s ======
Mar  1 04:58:37 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:58:37.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Mar  1 04:58:37 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: ::ffff:192.168.122.100 - - [01/Mar/2026:09:58:37] "GET /metrics HTTP/1.1" 200 48343 "" "Prometheus/2.51.0"
Mar  1 04:58:37 np0005634532 ceph-mgr[76134]: [prometheus INFO cherrypy.access.140604152982112] ::ffff:192.168.122.100 - - [01/Mar/2026:09:58:37] "GET /metrics HTTP/1.1" 200 48343 "" "Prometheus/2.51.0"
Mar  1 04:58:37 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T09:58:37.060Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Mar  1 04:58:37 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 09:58:37 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff88400a5b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:58:37 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 09:58:37 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff858001840 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:58:37 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 09:58:37 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff860001070 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:58:37 np0005634532 python3.9[235959]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Mar  1 04:58:38 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:58:38 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:58:38 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:58:38.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:58:38 np0005634532 python3.9[236114]: ansible-community.general.modprobe Invoked with name=nvme-fabrics state=present params= persistent=disabled
Mar  1 04:58:38 np0005634532 kernel: Key type psk registered
Mar  1 04:58:38 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v477: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Mar  1 04:58:39 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:58:39 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:58:39 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:58:39.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:58:39 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 09:58:39 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8780023d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:58:39 np0005634532 python3.9[236278]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/nvme-fabrics.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Mar  1 04:58:39 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 09:58:39 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff88400a5b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:58:39 np0005634532 python3.9[236402]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/nvme-fabrics.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1772359118.8958898-1047-6396850725447/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=783c778f0c68cc414f35486f234cbb1cf3f9bbff backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:58:39 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 09:58:39 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff858001840 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:58:40 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:58:40 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:58:40 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:58:40.158 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:58:40 np0005634532 python3.9[236557]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=nvme-fabrics  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:58:40 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v478: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Mar  1 04:58:41 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:58:41 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:58:41 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:58:41.055 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
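[annotation] The once-per-second anonymous "HEAD / HTTP/1.0" probes from 192.168.122.100 and 192.168.122.102 running through this whole window are load-balancer-style health checks against radosgw, not user traffic. A quick way to summarize them from an extract of this log is a sketch like the following (the input path is an assumption):

    import re
    import statistics
    from collections import defaultdict

    # Matches the beast access lines, e.g.
    # beast: 0x...: 192.168.122.102 - anonymous [...] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
    BEAST = re.compile(r'beast: \S+: (\S+) - \S+ .*latency=([0-9.]+)s')

    lat = defaultdict(list)
    with open("/var/log/messages", encoding="utf-8", errors="replace") as f:
        for line in f:
            m = BEAST.search(line)
            if m:
                lat[m.group(1)].append(float(m.group(2)))

    for ip, xs in sorted(lat.items()):
        print(f"{ip}: n={len(xs)} mean={statistics.mean(xs):.6f}s max={max(xs):.6f}s")

On this excerpt that would show the two probe sources alternating at ~2 s intervals each, with latencies of 0-1 ms.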
Mar  1 04:58:41 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 09:58:41 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff860001070 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:58:41 np0005634532 python3.9[236710]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Mar  1 04:58:41 np0005634532 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Mar  1 04:58:41 np0005634532 systemd[1]: Stopped Load Kernel Modules.
Mar  1 04:58:41 np0005634532 systemd[1]: Stopping Load Kernel Modules...
Mar  1 04:58:41 np0005634532 systemd[1]: Starting Load Kernel Modules...
Mar  1 04:58:41 np0005634532 systemd[1]: Finished Load Kernel Modules.
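[annotation] The module-persistence steps above amount to a one-line config: modprobe loads nvme-fabrics immediately (the "Key type psk registered" kernel line is a side effect of its dependencies), and the templated copy plus the /etc/modules entry make it persistent. Given the template name and the line= parameters, the dropped-in file almost certainly contains just:

    # /etc/modules-load.d/nvme-fabrics.conf (inferred from the tasks above)
    nvme-fabrics

which systemd-modules-load.service picks up on the restart logged at 04:58:41.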
Mar  1 04:58:41 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 04:58:41 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 09:58:41 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8780023d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:58:41 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 09:58:41 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff88400a5b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:58:42 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:58:42 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:58:42 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:58:42.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:58:42 np0005634532 python3.9[236869]: ansible-ansible.legacy.dnf Invoked with name=['nvme-cli'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Mar  1 04:58:42 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v479: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Mar  1 04:58:43 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:58:43 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:58:43 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:58:43.059 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:58:43 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 09:58:43 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff858002cb0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:58:43 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 09:58:43 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff860001070 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:58:43 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 09:58:43 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8780023d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:58:44 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:58:44 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:58:44 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:58:44.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:58:44 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v480: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Mar  1 04:58:44 np0005634532 systemd[1]: Reloading.
Mar  1 04:58:44 np0005634532 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Mar  1 04:58:44 np0005634532 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Mar  1 04:58:45 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:58:45 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:58:45 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:58:45.061 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:58:45 np0005634532 systemd[1]: Reloading.
Mar  1 04:58:45 np0005634532 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Mar  1 04:58:45 np0005634532 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Mar  1 04:58:45 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 09:58:45 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff88400a5b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:58:45 np0005634532 systemd-logind[832]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Mar  1 04:58:45 np0005634532 systemd-logind[832]: Watching system buttons on /dev/input/event0 (Power Button)
Mar  1 04:58:45 np0005634532 lvm[236998]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Mar  1 04:58:45 np0005634532 lvm[236998]: VG ceph_vg0 finished
Mar  1 04:58:45 np0005634532 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Mar  1 04:58:45 np0005634532 systemd[1]: Starting man-db-cache-update.service...
Mar  1 04:58:45 np0005634532 systemd[1]: Reloading.
Mar  1 04:58:45 np0005634532 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Mar  1 04:58:45 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 09:58:45 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff858002cb0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:58:45 np0005634532 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Mar  1 04:58:45 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 09:58:45 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff860002550 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:58:46 np0005634532 systemd[1]: Queuing reload/restart jobs for marked units…
Mar  1 04:58:46 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:58:46 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000024s ======
Mar  1 04:58:46 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:58:46.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Mar  1 04:58:46 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 04:58:46 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v481: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Mar  1 04:58:46 np0005634532 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Mar  1 04:58:46 np0005634532 systemd[1]: Finished man-db-cache-update.service.
Mar  1 04:58:46 np0005634532 systemd[1]: man-db-cache-update.service: Consumed 1.292s CPU time.
Mar  1 04:58:46 np0005634532 systemd[1]: run-rcdbd275396ca472fa680acb5a572f0f9.service: Deactivated successfully.
Mar  1 04:58:47 np0005634532 ceph-mgr[76134]: [prometheus INFO cherrypy.access.140604152982112] ::ffff:192.168.122.100 - - [01/Mar/2026:09:58:47] "GET /metrics HTTP/1.1" 200 48343 "" "Prometheus/2.51.0"
Mar  1 04:58:47 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: ::ffff:192.168.122.100 - - [01/Mar/2026:09:58:47] "GET /metrics HTTP/1.1" 200 48343 "" "Prometheus/2.51.0"
Mar  1 04:58:47 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T09:58:47.061Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Mar  1 04:58:47 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:58:47 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:58:47 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:58:47.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:58:47 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 09:58:47 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8780023d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:58:47 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Mar  1 04:58:47 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
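[annotation] The handle_command/audit pair above recurs every ~15 s: it is the mgr polling the OSD blocklist, not an operator action. The same query can be reproduced by hand; a minimal sketch, assuming an admin keyring is available on the node:

    import json
    import subprocess

    # "osd blocklist ls" is exactly the command prefix seen in the mon audit log;
    # with --format json the listing goes to stdout (status text goes to stderr).
    res = subprocess.run(
        ["ceph", "osd", "blocklist", "ls", "--format", "json"],
        capture_output=True, text=True, check=True,
    )
    entries = json.loads(res.stdout or "[]")
    print(f"{len(entries)} blocklisted client(s)")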
Mar  1 04:58:47 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Mar  1 04:58:47 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Mar  1 04:58:47 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Mar  1 04:58:47 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Mar  1 04:58:47 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Mar  1 04:58:47 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Mar  1 04:58:47 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 09:58:47 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff88400a5b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:58:47 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 09:58:47 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff858002cb0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:58:48 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-haproxy-nfs-cephfs-compute-0-wdbjdw[99156]: [WARNING] 059/095848 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Mar  1 04:58:48 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:58:48 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:58:48 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:58:48.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:58:48 np0005634532 python3.9[238375]: ansible-ansible.builtin.systemd_service Invoked with name=iscsid state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Mar  1 04:58:48 np0005634532 systemd[1]: Stopping Open-iSCSI...
Mar  1 04:58:48 np0005634532 iscsid[231708]: iscsid shutting down.
Mar  1 04:58:48 np0005634532 systemd[1]: iscsid.service: Deactivated successfully.
Mar  1 04:58:48 np0005634532 systemd[1]: Stopped Open-iSCSI.
Mar  1 04:58:48 np0005634532 systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Mar  1 04:58:48 np0005634532 systemd[1]: Starting Open-iSCSI...
Mar  1 04:58:48 np0005634532 systemd[1]: Started Open-iSCSI.
Mar  1 04:58:48 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v482: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Mar  1 04:58:49 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:58:49 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 04:58:49 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:58:49.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 04:58:49 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 09:58:49 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff860002550 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:58:49 np0005634532 python3.9[238533]: ansible-ansible.builtin.systemd_service Invoked with name=multipathd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Mar  1 04:58:49 np0005634532 systemd[1]: Stopping Device-Mapper Multipath Device Controller...
Mar  1 04:58:49 np0005634532 multipathd[235797]: exit (signal)
Mar  1 04:58:49 np0005634532 multipathd[235797]: --------shut down-------
Mar  1 04:58:49 np0005634532 systemd[1]: multipathd.service: Deactivated successfully.
Mar  1 04:58:49 np0005634532 systemd[1]: Stopped Device-Mapper Multipath Device Controller.
Mar  1 04:58:49 np0005634532 systemd[1]: Starting Device-Mapper Multipath Device Controller...
Mar  1 04:58:49 np0005634532 multipathd[238539]: --------start up--------
Mar  1 04:58:49 np0005634532 multipathd[238539]: read /etc/multipath.conf
Mar  1 04:58:49 np0005634532 multipathd[238539]: path checkers start up
Mar  1 04:58:49 np0005634532 systemd[1]: Started Device-Mapper Multipath Device Controller.
Mar  1 04:58:49 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 09:58:49 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff878003cb0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:58:49 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 09:58:49 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff88400a5b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:58:50 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:58:50 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:58:50 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:58:50.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:58:50 np0005634532 python3.9[238698]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Mar  1 04:58:50 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v483: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Mar  1 04:58:51 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:58:51 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:58:51 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:58:51.069 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:58:51 np0005634532 podman[238827]: 2026-03-01 09:58:51.218801087 +0000 UTC m=+0.095660126 container health_status eba5e7c5bf1cb0694498c3b5d0f9b901dec36f2482ec40aef98fed1e15d9f18d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:bf1f44b6bfa655654f556ae3adca2582892e72550d916a5b0a8dbacb0e210bbe, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '08b9467e1b7e95537191fb2fa6825d176e65d7d6be048e9a0925db064969bbde-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:bf1f44b6bfa655654f556ae3adca2582892e72550d916a5b0a8dbacb0e210bbe', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller)
Mar  1 04:58:51 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 09:58:51 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff858003db0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:58:51 np0005634532 python3.9[238873]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/ssh/ssh_known_hosts state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:58:51 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 04:58:51 np0005634532 ceph-mon[75825]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #36. Immutable memtables: 0.
Mar  1 04:58:51 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-09:58:51.732973) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Mar  1 04:58:51 np0005634532 ceph-mon[75825]: rocksdb: [db/flush_job.cc:856] [default] [JOB 15] Flushing memtable with next log file: 36
Mar  1 04:58:51 np0005634532 ceph-mon[75825]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772359131733027, "job": 15, "event": "flush_started", "num_memtables": 1, "num_entries": 422, "num_deletes": 250, "total_data_size": 391563, "memory_usage": 399328, "flush_reason": "Manual Compaction"}
Mar  1 04:58:51 np0005634532 ceph-mon[75825]: rocksdb: [db/flush_job.cc:885] [default] [JOB 15] Level-0 flush table #37: started
Mar  1 04:58:51 np0005634532 ceph-mon[75825]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772359131736488, "cf_name": "default", "job": 15, "event": "table_file_creation", "file_number": 37, "file_size": 306340, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 17333, "largest_seqno": 17754, "table_properties": {"data_size": 303981, "index_size": 459, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 837, "raw_key_size": 6081, "raw_average_key_size": 19, "raw_value_size": 299312, "raw_average_value_size": 956, "num_data_blocks": 21, "num_entries": 313, "num_filter_entries": 313, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772359109, "oldest_key_time": 1772359109, "file_creation_time": 1772359131, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d85a3bc5-3dc5-432f-9fab-fa926ce32d3d", "db_session_id": "FJWJGIYC2V5ZEQGX709M", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Mar  1 04:58:51 np0005634532 ceph-mon[75825]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 15] Flush lasted 3554 microseconds, and 1425 cpu microseconds.
Mar  1 04:58:51 np0005634532 ceph-mon[75825]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Mar  1 04:58:51 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-09:58:51.736525) [db/flush_job.cc:967] [default] [JOB 15] Level-0 flush table #37: 306340 bytes OK
Mar  1 04:58:51 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-09:58:51.736544) [db/memtable_list.cc:519] [default] Level-0 commit table #37 started
Mar  1 04:58:51 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-09:58:51.737727) [db/memtable_list.cc:722] [default] Level-0 commit table #37: memtable #1 done
Mar  1 04:58:51 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-09:58:51.737744) EVENT_LOG_v1 {"time_micros": 1772359131737739, "job": 15, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Mar  1 04:58:51 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-09:58:51.737759) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Mar  1 04:58:51 np0005634532 ceph-mon[75825]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 15] Try to delete WAL files size 388995, prev total WAL file size 388995, number of live WAL files 2.
Mar  1 04:58:51 np0005634532 ceph-mon[75825]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000033.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Mar  1 04:58:51 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-09:58:51.738154) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D67727374617400323530' seq:72057594037927935, type:22 .. '6D67727374617400353031' seq:0, type:0; will stop at (end)
Mar  1 04:58:51 np0005634532 ceph-mon[75825]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 16] Compacting 1@0 + 1@6 files to L6, score -1.00
Mar  1 04:58:51 np0005634532 ceph-mon[75825]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 15 Base level 0, inputs: [37(299KB)], [35(15MB)]
Mar  1 04:58:51 np0005634532 ceph-mon[75825]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772359131738193, "job": 16, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [37], "files_L6": [35], "score": -1, "input_data_size": 16149308, "oldest_snapshot_seqno": -1}
Mar  1 04:58:51 np0005634532 ceph-mon[75825]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 16] Generated table #38: 4841 keys, 12170482 bytes, temperature: kUnknown
Mar  1 04:58:51 np0005634532 ceph-mon[75825]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772359131799393, "cf_name": "default", "job": 16, "event": "table_file_creation", "file_number": 38, "file_size": 12170482, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12137563, "index_size": 19689, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 12165, "raw_key_size": 122293, "raw_average_key_size": 25, "raw_value_size": 12049243, "raw_average_value_size": 2488, "num_data_blocks": 821, "num_entries": 4841, "num_filter_entries": 4841, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772358052, "oldest_key_time": 0, "file_creation_time": 1772359131, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d85a3bc5-3dc5-432f-9fab-fa926ce32d3d", "db_session_id": "FJWJGIYC2V5ZEQGX709M", "orig_file_number": 38, "seqno_to_time_mapping": "N/A"}}
Mar  1 04:58:51 np0005634532 ceph-mon[75825]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Mar  1 04:58:51 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-09:58:51.799651) [db/compaction/compaction_job.cc:1663] [default] [JOB 16] Compacted 1@0 + 1@6 files to L6 => 12170482 bytes
Mar  1 04:58:51 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-09:58:51.801264) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 263.6 rd, 198.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.3, 15.1 +0.0 blob) out(11.6 +0.0 blob), read-write-amplify(92.4) write-amplify(39.7) OK, records in: 5344, records dropped: 503 output_compression: NoCompression
Mar  1 04:58:51 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-09:58:51.801287) EVENT_LOG_v1 {"time_micros": 1772359131801277, "job": 16, "event": "compaction_finished", "compaction_time_micros": 61268, "compaction_time_cpu_micros": 24329, "output_level": 6, "num_output_files": 1, "total_output_size": 12170482, "num_input_records": 5344, "num_output_records": 4841, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Mar  1 04:58:51 np0005634532 ceph-mon[75825]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000037.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Mar  1 04:58:51 np0005634532 ceph-mon[75825]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772359131801437, "job": 16, "event": "table_file_deletion", "file_number": 37}
Mar  1 04:58:51 np0005634532 ceph-mon[75825]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000035.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Mar  1 04:58:51 np0005634532 ceph-mon[75825]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772359131803514, "job": 16, "event": "table_file_deletion", "file_number": 35}
Mar  1 04:58:51 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-09:58:51.738082) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Mar  1 04:58:51 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-09:58:51.803596) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Mar  1 04:58:51 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-09:58:51.803602) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Mar  1 04:58:51 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-09:58:51.803604) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Mar  1 04:58:51 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-09:58:51.803606) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Mar  1 04:58:51 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-09:58:51.803608) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
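[annotation] The flush/compaction cycle above (jobs 15 and 16) is routine mon store maintenance: a ~390 KB memtable is flushed to a 306 KB L0 table (#37), which is then manually compacted together with the existing 15 MB L6 file (#35) into a single 12 MB L6 file (#38); the final lsm_state [0, 0, 0, 0, 0, 0, 1] means one SST remains, at level 6. The EVENT_LOG_v1 payloads are plain JSON at the end of each line, so the cycle can be summarized mechanically; a sketch (the input path is an assumption):

    import json
    import re

    EVENT = re.compile(r'EVENT_LOG_v1 (\{.*\})')

    def rocksdb_events(path):
        # Yields the JSON payload of every EVENT_LOG_v1 line in a syslog extract.
        with open(path, encoding="utf-8", errors="replace") as f:
            for line in f:
                m = EVENT.search(line)
                if m:
                    yield json.loads(m.group(1))

    for ev in rocksdb_events("/var/log/messages"):
        if ev.get("event") == "compaction_finished":
            print("job", ev["job"], ev["compaction_time_micros"], "us ->", ev["lsm_state"])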
Mar  1 04:58:51 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 09:58:51 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff860002550 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:58:51 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 09:58:51 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff878003cb0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:58:52 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:58:52 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 04:58:52 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:58:52.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 04:58:52 np0005634532 python3.9[239036]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Mar  1 04:58:52 np0005634532 systemd[1]: Reloading.
Mar  1 04:58:52 np0005634532 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Mar  1 04:58:52 np0005634532 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Mar  1 04:58:52 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v484: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Mar  1 04:58:53 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:58:53 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:58:53 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:58:53.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:58:53 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 09:58:53 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff88400a5b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:58:53 np0005634532 python3.9[239233]: ansible-ansible.builtin.service_facts Invoked
Mar  1 04:58:53 np0005634532 network[239250]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Mar  1 04:58:53 np0005634532 network[239251]: 'network-scripts' will be removed from distribution in near future.
Mar  1 04:58:53 np0005634532 network[239252]: It is advised to switch to 'NetworkManager' instead for network management.
Mar  1 04:58:53 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 09:58:53 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff858003db0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:58:53 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 09:58:53 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff860003650 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:58:54 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:58:54 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000024s ======
Mar  1 04:58:54 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:58:54.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Mar  1 04:58:54 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v485: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Mar  1 04:58:55 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:58:55 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:58:55 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:58:55.075 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:58:55 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 09:58:55 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff878003cb0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:58:55 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 09:58:55 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff88400a5b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:58:55 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 09:58:55 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff858003db0 fd 48 proxy header rest len failed header rlen = % (will set dead)
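[annotation] The recurring ganesha.nfsd svc_vc_recv EVENT lines are the NFS server dropping connections on fd 48 whose proxy-protocol header it cannot parse (the literal "rlen = %" appears to be an upstream log-formatting defect, so the offending length is never printed); the pattern is consistent with the haproxy Layer4 checks seen at 04:58:48. Counting them per worker thread is one way to gauge the churn; a sketch (path assumed):

    import re
    from collections import Counter

    # Captures the worker tag, e.g. svc_2, from ganesha.nfsd-2[svc_2] ... svc_vc_recv lines.
    SVC = re.compile(r'ganesha\.nfsd-\d+\[(\w+)\] rpc :TIRPC :EVENT :svc_vc_recv')

    counts = Counter()
    with open("/var/log/messages", encoding="utf-8", errors="replace") as f:
        for line in f:
            m = SVC.search(line)
            if m:
                counts[m.group(1)] += 1

    for svc, n in counts.most_common():
        print(svc, n)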
Mar  1 04:58:56 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:58:56 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:58:56 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:58:56.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:58:56 np0005634532 podman[239375]: 2026-03-01 09:58:56.536203007 +0000 UTC m=+0.069387089 container health_status 1df4dec302d8b40bce93714986cfc892519e8a31436399020d0034ae17ac64d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a5baae9e21dffd42c66f546b0b62f95c6af9f409d05e01e7483de42d5dd2a382, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '08b9467e1b7e95537191fb2fa6825d176e65d7d6be048e9a0925db064969bbde-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a5baae9e21dffd42c66f546b0b62f95c6af9f409d05e01e7483de42d5dd2a382', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Mar  1 04:58:56 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 04:58:56 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v486: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Mar  1 04:58:56 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 09:58:56 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Mar  1 04:58:57 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: ::ffff:192.168.122.100 - - [01/Mar/2026:09:58:57] "GET /metrics HTTP/1.1" 200 48342 "" "Prometheus/2.51.0"
Mar  1 04:58:57 np0005634532 ceph-mgr[76134]: [prometheus INFO cherrypy.access.140604152982112] ::ffff:192.168.122.100 - - [01/Mar/2026:09:58:57] "GET /metrics HTTP/1.1" 200 48342 "" "Prometheus/2.51.0"
Mar  1 04:58:57 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T09:58:57.061Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
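The dispatcher error above shows alertmanager timing out while POSTing to the ceph-dashboard webhook receivers on compute-1 and compute-2. A quick reachability probe, sketched below with the URLs copied from the log (a plain GET is enough to distinguish a dead endpoint from one that answers, even with a method error):

    import urllib.error
    import urllib.request

    for host in ("compute-1.ctlplane.example.com", "compute-2.ctlplane.example.com"):
        url = f"http://{host}:8443/api/prometheus_receiver"
        try:
            urllib.request.urlopen(url, timeout=5)
            print(url, "reachable")
        except urllib.error.HTTPError as exc:
            # An HTTP error still means the endpoint answered (e.g. 405 for GET)
            print(url, f"reachable (HTTP {exc.code})")
        except urllib.error.URLError as exc:
            # Matches the "context deadline exceeded" symptom in the log
            print(url, "unreachable:", exc.reason)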
Mar  1 04:58:57 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:58:57 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:58:57 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:58:57.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:58:57 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 09:58:57 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff858003db0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:58:57 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 09:58:57 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff878003cb0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:58:57 np0005634532 python3.9[239575]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Mar  1 04:58:57 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 09:58:57 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff88400a5b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:58:58 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:58:58 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 04:58:58 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:58:58.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 04:58:58 np0005634532 python3.9[239731]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_migration_target.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Mar  1 04:58:58 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v487: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Mar  1 04:58:59 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:58:59 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:58:59 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:58:59.079 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:58:59 np0005634532 python3.9[239885]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api_cron.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Mar  1 04:58:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 09:58:59 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff858003db0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:58:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 09:58:59 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Mar  1 04:58:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 09:58:59 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Mar  1 04:58:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 09:58:59 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff860003f70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:58:59 np0005634532 python3.9[240039]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Mar  1 04:59:00 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:59:00 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 04:59:00 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:59:00.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 04:59:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 09:59:00 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_6] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff860003f70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:59:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 09:59:00 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Mar  1 04:59:00 np0005634532 python3.9[240195]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_conductor.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Mar  1 04:59:00 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v488: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Mar  1 04:59:01 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:59:01 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:59:01 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:59:01.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:59:01 np0005634532 python3.9[240351]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_metadata.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Mar  1 04:59:01 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 09:59:01 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff860003f70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:59:01 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 04:59:01 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 09:59:01 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff864000ea0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:59:01 np0005634532 python3.9[240505]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_scheduler.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Mar  1 04:59:02 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 09:59:02 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff888001110 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:59:02 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:59:02 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000024s ======
Mar  1 04:59:02 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:59:02.189 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Mar  1 04:59:02 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Mar  1 04:59:02 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Mar  1 04:59:02 np0005634532 python3.9[240661]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_vnc_proxy.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
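The ansible-ansible.builtin.systemd_service entries between 04:58:57 and 04:59:02 record a playbook stopping and disabling the old TripleO nova units on this node (state=stopped, enabled=False). A sketch of the equivalent operation — the unit list is copied from those entries; systemctl via subprocess stands in for the Ansible module, it is not the playbook's actual code:

    import subprocess

    TRIPLEO_NOVA_UNITS = [
        "tripleo_nova_compute.service",
        "tripleo_nova_migration_target.service",
        "tripleo_nova_api_cron.service",
        "tripleo_nova_api.service",
        "tripleo_nova_conductor.service",
        "tripleo_nova_metadata.service",
        "tripleo_nova_scheduler.service",
        "tripleo_nova_vnc_proxy.service",
    ]

    for unit in TRIPLEO_NOVA_UNITS:
        # state=stopped in the logged module arguments
        subprocess.run(["systemctl", "stop", unit], check=False)
        # enabled=False in the logged module arguments
        subprocess.run(["systemctl", "disable", unit], check=False)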
Mar  1 04:59:02 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v489: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Mar  1 04:59:03 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:59:03 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:59:03 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:59:03.085 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:59:03 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 09:59:03 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff858003db0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:59:03 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 09:59:03 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Mar  1 04:59:03 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 09:59:03 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff860003f70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:59:04 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 09:59:04 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff864000ea0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:59:04 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:59:04 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:59:04 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:59:04.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:59:04 np0005634532 systemd[1]: virtnodedevd.service: Deactivated successfully.
Mar  1 04:59:04 np0005634532 python3.9[240818]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:59:04 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v490: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 1023 B/s wr, 3 op/s
Mar  1 04:59:05 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:59:05 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:59:05 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:59:05.087 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:59:05 np0005634532 python3.9[240971]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:59:05 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 09:59:05 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff888001110 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:59:05 np0005634532 systemd[1]: virtproxyd.service: Deactivated successfully.
Mar  1 04:59:05 np0005634532 python3.9[241125]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:59:05 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 09:59:05 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff858003db0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:59:06 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 09:59:06 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff860003f70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:59:06 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:59:06 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:59:06 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:59:06.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:59:06 np0005634532 python3.9[241279]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:59:06 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 04:59:06 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v491: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 1023 B/s wr, 3 op/s
Mar  1 04:59:06 np0005634532 python3.9[241433]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:59:07 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: ::ffff:192.168.122.100 - - [01/Mar/2026:09:59:07] "GET /metrics HTTP/1.1" 200 48346 "" "Prometheus/2.51.0"
Mar  1 04:59:07 np0005634532 ceph-mgr[76134]: [prometheus INFO cherrypy.access.140604152982112] ::ffff:192.168.122.100 - - [01/Mar/2026:09:59:07] "GET /metrics HTTP/1.1" 200 48346 "" "Prometheus/2.51.0"
Mar  1 04:59:07 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T09:59:07.063Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Mar  1 04:59:07 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:59:07 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:59:07 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:59:07.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:59:07 np0005634532 python3.9[241586]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:59:07 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 09:59:07 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff864001db0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:59:07 np0005634532 python3.9[241741]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:59:07 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 09:59:07 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8880022a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:59:08 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 09:59:08 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff858003db0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:59:08 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:59:08 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 04:59:08 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:59:08.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 04:59:08 np0005634532 python3.9[241896]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:59:08 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v492: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.6 KiB/s rd, 1023 B/s wr, 3 op/s
Mar  1 04:59:09 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:59:09 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:59:09 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:59:09.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:59:09 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 09:59:09 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff860003f70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:59:09 np0005634532 python3.9[242049]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:59:09 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 09:59:09 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff864001db0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:59:10 np0005634532 python3.9[242245]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:59:10 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 09:59:10 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8880022a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:59:10 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:59:10 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:59:10 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:59:10.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:59:10 np0005634532 podman[242452]: 2026-03-01 09:59:10.399058607 +0000 UTC m=+0.051041727 container exec 6664049ace048b4adddae1365cde7c16773d892ae197c1350f58a5e5b1183392 (image=quay.io/ceph/ceph:v19, name=ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mon-compute-0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Mar  1 04:59:10 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-haproxy-nfs-cephfs-compute-0-wdbjdw[99156]: [WARNING] 059/095910 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Mar  1 04:59:10 np0005634532 podman[242452]: 2026-03-01 09:59:10.485349772 +0000 UTC m=+0.137332872 container exec_died 6664049ace048b4adddae1365cde7c16773d892ae197c1350f58a5e5b1183392 (image=quay.io/ceph/ceph:v19, name=ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Mar  1 04:59:10 np0005634532 python3.9[242496]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:59:10 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v493: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 426 B/s wr, 2 op/s
Mar  1 04:59:10 np0005634532 podman[242718]: 2026-03-01 09:59:10.899410816 +0000 UTC m=+0.050311320 container exec 1fcbd8a251307aedf47cfda71fbf2d9d43c45db96f7135811488f98478336155 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Mar  1 04:59:10 np0005634532 podman[242718]: 2026-03-01 09:59:10.914447236 +0000 UTC m=+0.065347720 container exec_died 1fcbd8a251307aedf47cfda71fbf2d9d43c45db96f7135811488f98478336155 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Mar  1 04:59:11 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:59:11 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:59:11 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:59:11.093 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:59:11 np0005634532 python3.9[242804]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:59:11 np0005634532 podman[242843]: 2026-03-01 09:59:11.113854586 +0000 UTC m=+0.045561333 container exec a57ebcf6750112db210220de1f025aaf61a68fa4b2b55a340c886fbd7479c05a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Mar  1 04:59:11 np0005634532 podman[242843]: 2026-03-01 09:59:11.124230101 +0000 UTC m=+0.055936818 container exec_died a57ebcf6750112db210220de1f025aaf61a68fa4b2b55a340c886fbd7479c05a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True)
Mar  1 04:59:11 np0005634532 podman[242957]: 2026-03-01 09:59:11.279284159 +0000 UTC m=+0.041182055 container exec ae199754250e4490488a372482082794890684d286b81ae5dbba7ee71da86544 (image=quay.io/ceph/haproxy:2.3, name=ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-haproxy-nfs-cephfs-compute-0-wdbjdw)
Mar  1 04:59:11 np0005634532 podman[242957]: 2026-03-01 09:59:11.289318356 +0000 UTC m=+0.051216262 container exec_died ae199754250e4490488a372482082794890684d286b81ae5dbba7ee71da86544 (image=quay.io/ceph/haproxy:2.3, name=ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-haproxy-nfs-cephfs-compute-0-wdbjdw)
Mar  1 04:59:11 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 09:59:11 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8880022a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:59:11 np0005634532 podman[243124]: 2026-03-01 09:59:11.471803399 +0000 UTC m=+0.058051860 container exec 05be6442ca5291b879ba04bc347755b274cffc38684767a6255c033b9adb2149 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-keepalived-nfs-cephfs-compute-0-qbujzh, name=keepalived, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, release=1793, io.openshift.tags=Ceph keepalived, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, description=keepalived for Ceph, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-type=git, architecture=x86_64, vendor=Red Hat, Inc., io.k8s.display-name=Keepalived on RHEL 9, summary=Provides keepalived on RHEL 9 for Ceph., io.buildah.version=1.28.2, io.openshift.expose-services=, build-date=2023-02-22T09:23:20, version=2.2.4, com.redhat.component=keepalived-container)
Mar  1 04:59:11 np0005634532 podman[243124]: 2026-03-01 09:59:11.487469125 +0000 UTC m=+0.073717566 container exec_died 05be6442ca5291b879ba04bc347755b274cffc38684767a6255c033b9adb2149 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-keepalived-nfs-cephfs-compute-0-qbujzh, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1793, distribution-scope=public, io.buildah.version=1.28.2, io.openshift.expose-services=, io.openshift.tags=Ceph keepalived, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, version=2.2.4, description=keepalived for Ceph, architecture=x86_64, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.k8s.display-name=Keepalived on RHEL 9, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides keepalived on RHEL 9 for Ceph., vcs-type=git, vendor=Red Hat, Inc., com.redhat.component=keepalived-container, build-date=2023-02-22T09:23:20, name=keepalived)
Mar  1 04:59:11 np0005634532 python3.9[243138]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:59:11 np0005634532 podman[243189]: 2026-03-01 09:59:11.677550135 +0000 UTC m=+0.042978109 container exec 42a0f9f88cfb3d053b26a487d34447f1453d64f75d2848aac14e4c543e6921a8 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Mar  1 04:59:11 np0005634532 podman[243189]: 2026-03-01 09:59:11.707461941 +0000 UTC m=+0.072889915 container exec_died 42a0f9f88cfb3d053b26a487d34447f1453d64f75d2848aac14e4c543e6921a8 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Mar  1 04:59:11 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 04:59:11 np0005634532 podman[243365]: 2026-03-01 09:59:11.855729082 +0000 UTC m=+0.038962510 container exec 40811b6273ff1144683076a6db09789183012c587046fdcbb700953f5b5ef2ca (image=quay.io/ceph/grafana:10.4.0, name=ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Mar  1 04:59:11 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 09:59:11 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff860003f70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:59:12 np0005634532 podman[243365]: 2026-03-01 09:59:12.012356298 +0000 UTC m=+0.195589726 container exec_died 40811b6273ff1144683076a6db09789183012c587046fdcbb700953f5b5ef2ca (image=quay.io/ceph/grafana:10.4.0, name=ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Mar  1 04:59:12 np0005634532 python3.9[243444]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:59:12 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 09:59:12 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff864001f50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:59:12 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:59:12 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:59:12 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:59:12.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:59:12 np0005634532 podman[243603]: 2026-03-01 09:59:12.303686601 +0000 UTC m=+0.041115453 container exec 225e705b62c2d5344d04ef96534eb44bff62165a23fc4cc7bc75fa8452e91ac1 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Mar  1 04:59:12 np0005634532 podman[243603]: 2026-03-01 09:59:12.362440488 +0000 UTC m=+0.099869310 container exec_died 225e705b62c2d5344d04ef96534eb44bff62165a23fc4cc7bc75fa8452e91ac1 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Mar  1 04:59:12 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Mar  1 04:59:12 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 04:59:12 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Mar  1 04:59:12 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 04:59:12 np0005634532 python3.9[243721]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:59:12 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v494: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 426 B/s wr, 2 op/s
Mar  1 04:59:12 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Mar  1 04:59:12 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Mar  1 04:59:12 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Mar  1 04:59:12 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Mar  1 04:59:12 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Mar  1 04:59:12 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 04:59:12 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Mar  1 04:59:12 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 04:59:12 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Mar  1 04:59:12 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Mar  1 04:59:12 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Mar  1 04:59:12 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Mar  1 04:59:12 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Mar  1 04:59:12 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Mar  1 04:59:13 np0005634532 python3.9[243956]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
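After the services are stopped, the ansible-ansible.builtin.file entries (state=absent) remove the leftover unit files, first from /usr/lib/systemd/system and then from /etc/systemd/system. A minimal equivalent, with paths and unit names taken from the logged invocations (note the logged tasks pass daemon_reload=False, so no reload is issued here either):

    from pathlib import Path

    UNIT_DIRS = ("/usr/lib/systemd/system", "/etc/systemd/system")
    UNITS = (
        "tripleo_nova_compute.service",
        "tripleo_nova_migration_target.service",
        "tripleo_nova_api_cron.service",
        "tripleo_nova_api.service",
        "tripleo_nova_conductor.service",
        "tripleo_nova_metadata.service",
        "tripleo_nova_scheduler.service",
        "tripleo_nova_vnc_proxy.service",
    )

    for unit_dir in UNIT_DIRS:
        for unit in UNITS:
            # state=absent is idempotent: a missing file is not an error
            (Path(unit_dir) / unit).unlink(missing_ok=True)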
Mar  1 04:59:13 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:59:13 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:59:13 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:59:13.095 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:59:13 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 09:59:13 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8880022a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:59:13 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 04:59:13 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 04:59:13 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Mar  1 04:59:13 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 04:59:13 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 04:59:13 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
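The mon audit entries between 04:59:02 and 04:59:13 show the cephadm mgr module polling the cluster with ordinary mon commands ("osd blocklist ls", "osd tree" with states=destroyed, "config generate-minimal-conf", "auth get"). These can be issued from any host with an admin keyring; a sketch of the blocklist query, assuming the usual list-of-entries JSON shape:

    import json
    import subprocess

    out = subprocess.run(
        ["ceph", "osd", "blocklist", "ls", "--format", "json"],
        check=True, capture_output=True, text=True,
    ).stdout
    entries = json.loads(out) if out.strip() else []
    for entry in entries:
        # Each entry carries the blocklisted address and its expiry time
        print(entry.get("addr"), "until", entry.get("until"))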
Mar  1 04:59:13 np0005634532 podman[244072]: 2026-03-01 09:59:13.465936997 +0000 UTC m=+0.038328204 container create e29668fa7d1e0839165ab1b8a4357115a65049bf5eff1081301d1d0c4dc9c236 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_yalow, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2)
Mar  1 04:59:13 np0005634532 systemd[1]: Started libpod-conmon-e29668fa7d1e0839165ab1b8a4357115a65049bf5eff1081301d1d0c4dc9c236.scope.
Mar  1 04:59:13 np0005634532 systemd[1]: Started libcrun container.
Mar  1 04:59:13 np0005634532 podman[244072]: 2026-03-01 09:59:13.536804452 +0000 UTC m=+0.109195709 container init e29668fa7d1e0839165ab1b8a4357115a65049bf5eff1081301d1d0c4dc9c236 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_yalow, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Mar  1 04:59:13 np0005634532 podman[244072]: 2026-03-01 09:59:13.542709638 +0000 UTC m=+0.115100845 container start e29668fa7d1e0839165ab1b8a4357115a65049bf5eff1081301d1d0c4dc9c236 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_yalow, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Mar  1 04:59:13 np0005634532 podman[244072]: 2026-03-01 09:59:13.449407911 +0000 UTC m=+0.021799138 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Mar  1 04:59:13 np0005634532 hungry_yalow[244089]: 167 167
Mar  1 04:59:13 np0005634532 podman[244072]: 2026-03-01 09:59:13.546422529 +0000 UTC m=+0.118813786 container attach e29668fa7d1e0839165ab1b8a4357115a65049bf5eff1081301d1d0c4dc9c236 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_yalow, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Mar  1 04:59:13 np0005634532 systemd[1]: libpod-e29668fa7d1e0839165ab1b8a4357115a65049bf5eff1081301d1d0c4dc9c236.scope: Deactivated successfully.
Mar  1 04:59:13 np0005634532 podman[244072]: 2026-03-01 09:59:13.547751832 +0000 UTC m=+0.120143059 container died e29668fa7d1e0839165ab1b8a4357115a65049bf5eff1081301d1d0c4dc9c236 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_yalow, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Mar  1 04:59:13 np0005634532 systemd[1]: var-lib-containers-storage-overlay-c3345be714468a75710d013ec6b18e374a95aae3b70b6299b887c1f5006b8f8e-merged.mount: Deactivated successfully.
Mar  1 04:59:13 np0005634532 podman[244072]: 2026-03-01 09:59:13.586226699 +0000 UTC m=+0.158617926 container remove e29668fa7d1e0839165ab1b8a4357115a65049bf5eff1081301d1d0c4dc9c236 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=hungry_yalow, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1)
Mar  1 04:59:13 np0005634532 systemd[1]: libpod-conmon-e29668fa7d1e0839165ab1b8a4357115a65049bf5eff1081301d1d0c4dc9c236.scope: Deactivated successfully.
Mar  1 04:59:13 np0005634532 podman[244113]: 2026-03-01 09:59:13.723290834 +0000 UTC m=+0.041307488 container create 68f8091423092dadcf242a3ec6f9513fdb400421a2f8e6d25d77116cdbacdf6e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_lederberg, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Mar  1 04:59:13 np0005634532 systemd[1]: Started libpod-conmon-68f8091423092dadcf242a3ec6f9513fdb400421a2f8e6d25d77116cdbacdf6e.scope.
Mar  1 04:59:13 np0005634532 systemd[1]: Started libcrun container.
Mar  1 04:59:13 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5285f27ab212d7f8033b2f6fc860cad94916f5941e6f7f271383156694592d59/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Mar  1 04:59:13 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5285f27ab212d7f8033b2f6fc860cad94916f5941e6f7f271383156694592d59/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Mar  1 04:59:13 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5285f27ab212d7f8033b2f6fc860cad94916f5941e6f7f271383156694592d59/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Mar  1 04:59:13 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5285f27ab212d7f8033b2f6fc860cad94916f5941e6f7f271383156694592d59/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Mar  1 04:59:13 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5285f27ab212d7f8033b2f6fc860cad94916f5941e6f7f271383156694592d59/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Mar  1 04:59:13 np0005634532 podman[244113]: 2026-03-01 09:59:13.787747211 +0000 UTC m=+0.105763885 container init 68f8091423092dadcf242a3ec6f9513fdb400421a2f8e6d25d77116cdbacdf6e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_lederberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Mar  1 04:59:13 np0005634532 podman[244113]: 2026-03-01 09:59:13.793598035 +0000 UTC m=+0.111614689 container start 68f8091423092dadcf242a3ec6f9513fdb400421a2f8e6d25d77116cdbacdf6e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_lederberg, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Mar  1 04:59:13 np0005634532 podman[244113]: 2026-03-01 09:59:13.796692711 +0000 UTC m=+0.114709365 container attach 68f8091423092dadcf242a3ec6f9513fdb400421a2f8e6d25d77116cdbacdf6e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_lederberg, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Mar  1 04:59:13 np0005634532 podman[244113]: 2026-03-01 09:59:13.703674501 +0000 UTC m=+0.021691195 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Mar  1 04:59:13 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 09:59:13 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8880022a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:59:14 np0005634532 nostalgic_lederberg[244129]: --> passed data devices: 0 physical, 1 LVM
Mar  1 04:59:14 np0005634532 nostalgic_lederberg[244129]: --> All data devices are unavailable
Mar  1 04:59:14 np0005634532 systemd[1]: libpod-68f8091423092dadcf242a3ec6f9513fdb400421a2f8e6d25d77116cdbacdf6e.scope: Deactivated successfully.
Mar  1 04:59:14 np0005634532 podman[244113]: 2026-03-01 09:59:14.104023238 +0000 UTC m=+0.422039892 container died 68f8091423092dadcf242a3ec6f9513fdb400421a2f8e6d25d77116cdbacdf6e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_lederberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Mar  1 04:59:14 np0005634532 systemd[1]: var-lib-containers-storage-overlay-5285f27ab212d7f8033b2f6fc860cad94916f5941e6f7f271383156694592d59-merged.mount: Deactivated successfully.
Mar  1 04:59:14 np0005634532 podman[244113]: 2026-03-01 09:59:14.145975641 +0000 UTC m=+0.463992295 container remove 68f8091423092dadcf242a3ec6f9513fdb400421a2f8e6d25d77116cdbacdf6e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nostalgic_lederberg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Mar  1 04:59:14 np0005634532 systemd[1]: libpod-conmon-68f8091423092dadcf242a3ec6f9513fdb400421a2f8e6d25d77116cdbacdf6e.scope: Deactivated successfully.
Mar  1 04:59:14 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 09:59:14 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8880022a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:59:14 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:59:14 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000024s ======
Mar  1 04:59:14 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:59:14.205 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Mar  1 04:59:14 np0005634532 python3.9[244314]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then#012  systemctl disable --now certmonger.service#012  test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service#012fi#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Mar  1 04:59:14 np0005634532 podman[244433]: 2026-03-01 09:59:14.679129647 +0000 UTC m=+0.039740689 container create d7c2fb3f5e31ed3c5e71c2dddeb21b1dd566cffc416d32d10b6dd1102e293dac (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_darwin, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Mar  1 04:59:14 np0005634532 systemd[1]: Started libpod-conmon-d7c2fb3f5e31ed3c5e71c2dddeb21b1dd566cffc416d32d10b6dd1102e293dac.scope.
Mar  1 04:59:14 np0005634532 systemd[1]: Started libcrun container.
Mar  1 04:59:14 np0005634532 podman[244433]: 2026-03-01 09:59:14.747524621 +0000 UTC m=+0.108135673 container init d7c2fb3f5e31ed3c5e71c2dddeb21b1dd566cffc416d32d10b6dd1102e293dac (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_darwin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2)
Mar  1 04:59:14 np0005634532 podman[244433]: 2026-03-01 09:59:14.660072538 +0000 UTC m=+0.020683590 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Mar  1 04:59:14 np0005634532 podman[244433]: 2026-03-01 09:59:14.759282061 +0000 UTC m=+0.119893083 container start d7c2fb3f5e31ed3c5e71c2dddeb21b1dd566cffc416d32d10b6dd1102e293dac (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_darwin, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325)
Mar  1 04:59:14 np0005634532 tender_darwin[244497]: 167 167
Mar  1 04:59:14 np0005634532 systemd[1]: libpod-d7c2fb3f5e31ed3c5e71c2dddeb21b1dd566cffc416d32d10b6dd1102e293dac.scope: Deactivated successfully.
Mar  1 04:59:14 np0005634532 conmon[244497]: conmon d7c2fb3f5e31ed3c5e71 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d7c2fb3f5e31ed3c5e71c2dddeb21b1dd566cffc416d32d10b6dd1102e293dac.scope/container/memory.events
Mar  1 04:59:14 np0005634532 podman[244433]: 2026-03-01 09:59:14.76576146 +0000 UTC m=+0.126372532 container attach d7c2fb3f5e31ed3c5e71c2dddeb21b1dd566cffc416d32d10b6dd1102e293dac (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_darwin, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Mar  1 04:59:14 np0005634532 podman[244433]: 2026-03-01 09:59:14.766681023 +0000 UTC m=+0.127292065 container died d7c2fb3f5e31ed3c5e71c2dddeb21b1dd566cffc416d32d10b6dd1102e293dac (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_darwin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2)
Mar  1 04:59:14 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v495: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 426 B/s wr, 2 op/s
Mar  1 04:59:14 np0005634532 systemd[1]: var-lib-containers-storage-overlay-71f54ad5a94f43b1bd9abd3c28e635ff71edaaace958b9557c15f456ec81cfa9-merged.mount: Deactivated successfully.
Mar  1 04:59:14 np0005634532 podman[244433]: 2026-03-01 09:59:14.802557196 +0000 UTC m=+0.163168218 container remove d7c2fb3f5e31ed3c5e71c2dddeb21b1dd566cffc416d32d10b6dd1102e293dac (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_darwin, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Mar  1 04:59:14 np0005634532 systemd[1]: libpod-conmon-d7c2fb3f5e31ed3c5e71c2dddeb21b1dd566cffc416d32d10b6dd1102e293dac.scope: Deactivated successfully.
Mar  1 04:59:14 np0005634532 podman[244523]: 2026-03-01 09:59:14.933373377 +0000 UTC m=+0.045228725 container create 7910b8e29de3910e56753455643b9bc052c369030d311b7986d92f909b44b06e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_agnesi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Mar  1 04:59:14 np0005634532 systemd[1]: Started libpod-conmon-7910b8e29de3910e56753455643b9bc052c369030d311b7986d92f909b44b06e.scope.
Mar  1 04:59:15 np0005634532 podman[244523]: 2026-03-01 09:59:14.914537593 +0000 UTC m=+0.026392951 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Mar  1 04:59:15 np0005634532 systemd[1]: Started libcrun container.
Mar  1 04:59:15 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f7fa5cfcbcdb698fa2bd55d7ed9df3f6b221de0e6c1f6e69a1b4bfb3085e7ac3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Mar  1 04:59:15 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f7fa5cfcbcdb698fa2bd55d7ed9df3f6b221de0e6c1f6e69a1b4bfb3085e7ac3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Mar  1 04:59:15 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f7fa5cfcbcdb698fa2bd55d7ed9df3f6b221de0e6c1f6e69a1b4bfb3085e7ac3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Mar  1 04:59:15 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f7fa5cfcbcdb698fa2bd55d7ed9df3f6b221de0e6c1f6e69a1b4bfb3085e7ac3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Mar  1 04:59:15 np0005634532 podman[244523]: 2026-03-01 09:59:15.029127934 +0000 UTC m=+0.140983302 container init 7910b8e29de3910e56753455643b9bc052c369030d311b7986d92f909b44b06e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_agnesi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325)
Mar  1 04:59:15 np0005634532 podman[244523]: 2026-03-01 09:59:15.040416752 +0000 UTC m=+0.152272110 container start 7910b8e29de3910e56753455643b9bc052c369030d311b7986d92f909b44b06e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_agnesi, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Mar  1 04:59:15 np0005634532 podman[244523]: 2026-03-01 09:59:15.044087523 +0000 UTC m=+0.155942901 container attach 7910b8e29de3910e56753455643b9bc052c369030d311b7986d92f909b44b06e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_agnesi, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid)
Mar  1 04:59:15 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:59:15 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:59:15 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:59:15.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:59:15 np0005634532 python3.9[244616]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Mar  1 04:59:15 np0005634532 intelligent_agnesi[244586]: {
Mar  1 04:59:15 np0005634532 intelligent_agnesi[244586]:    "0": [
Mar  1 04:59:15 np0005634532 intelligent_agnesi[244586]:        {
Mar  1 04:59:15 np0005634532 intelligent_agnesi[244586]:            "devices": [
Mar  1 04:59:15 np0005634532 intelligent_agnesi[244586]:                "/dev/loop3"
Mar  1 04:59:15 np0005634532 intelligent_agnesi[244586]:            ],
Mar  1 04:59:15 np0005634532 intelligent_agnesi[244586]:            "lv_name": "ceph_lv0",
Mar  1 04:59:15 np0005634532 intelligent_agnesi[244586]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Mar  1 04:59:15 np0005634532 intelligent_agnesi[244586]:            "lv_size": "21470642176",
Mar  1 04:59:15 np0005634532 intelligent_agnesi[244586]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=0RN1y3-WDwf-vmRn-3Uec-9cdU-WGcX-8Z6LEG,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=437b1e74-f995-5d64-af1d-257ce01d77ab,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=e5da778e-73b7-4ea1-8a91-750fe3f6aa68,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Mar  1 04:59:15 np0005634532 intelligent_agnesi[244586]:            "lv_uuid": "0RN1y3-WDwf-vmRn-3Uec-9cdU-WGcX-8Z6LEG",
Mar  1 04:59:15 np0005634532 intelligent_agnesi[244586]:            "name": "ceph_lv0",
Mar  1 04:59:15 np0005634532 intelligent_agnesi[244586]:            "path": "/dev/ceph_vg0/ceph_lv0",
Mar  1 04:59:15 np0005634532 intelligent_agnesi[244586]:            "tags": {
Mar  1 04:59:15 np0005634532 intelligent_agnesi[244586]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Mar  1 04:59:15 np0005634532 intelligent_agnesi[244586]:                "ceph.block_uuid": "0RN1y3-WDwf-vmRn-3Uec-9cdU-WGcX-8Z6LEG",
Mar  1 04:59:15 np0005634532 intelligent_agnesi[244586]:                "ceph.cephx_lockbox_secret": "",
Mar  1 04:59:15 np0005634532 intelligent_agnesi[244586]:                "ceph.cluster_fsid": "437b1e74-f995-5d64-af1d-257ce01d77ab",
Mar  1 04:59:15 np0005634532 intelligent_agnesi[244586]:                "ceph.cluster_name": "ceph",
Mar  1 04:59:15 np0005634532 intelligent_agnesi[244586]:                "ceph.crush_device_class": "",
Mar  1 04:59:15 np0005634532 intelligent_agnesi[244586]:                "ceph.encrypted": "0",
Mar  1 04:59:15 np0005634532 intelligent_agnesi[244586]:                "ceph.osd_fsid": "e5da778e-73b7-4ea1-8a91-750fe3f6aa68",
Mar  1 04:59:15 np0005634532 intelligent_agnesi[244586]:                "ceph.osd_id": "0",
Mar  1 04:59:15 np0005634532 intelligent_agnesi[244586]:                "ceph.osdspec_affinity": "default_drive_group",
Mar  1 04:59:15 np0005634532 intelligent_agnesi[244586]:                "ceph.type": "block",
Mar  1 04:59:15 np0005634532 intelligent_agnesi[244586]:                "ceph.vdo": "0",
Mar  1 04:59:15 np0005634532 intelligent_agnesi[244586]:                "ceph.with_tpm": "0"
Mar  1 04:59:15 np0005634532 intelligent_agnesi[244586]:            },
Mar  1 04:59:15 np0005634532 intelligent_agnesi[244586]:            "type": "block",
Mar  1 04:59:15 np0005634532 intelligent_agnesi[244586]:            "vg_name": "ceph_vg0"
Mar  1 04:59:15 np0005634532 intelligent_agnesi[244586]:        }
Mar  1 04:59:15 np0005634532 intelligent_agnesi[244586]:    ]
Mar  1 04:59:15 np0005634532 intelligent_agnesi[244586]: }
Mar  1 04:59:15 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 09:59:15 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8640020f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:59:15 np0005634532 systemd[1]: libpod-7910b8e29de3910e56753455643b9bc052c369030d311b7986d92f909b44b06e.scope: Deactivated successfully.
Mar  1 04:59:15 np0005634532 podman[244523]: 2026-03-01 09:59:15.324764734 +0000 UTC m=+0.436620112 container died 7910b8e29de3910e56753455643b9bc052c369030d311b7986d92f909b44b06e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_agnesi, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Mar  1 04:59:15 np0005634532 systemd[1]: var-lib-containers-storage-overlay-f7fa5cfcbcdb698fa2bd55d7ed9df3f6b221de0e6c1f6e69a1b4bfb3085e7ac3-merged.mount: Deactivated successfully.
Mar  1 04:59:15 np0005634532 podman[244523]: 2026-03-01 09:59:15.377059391 +0000 UTC m=+0.488914749 container remove 7910b8e29de3910e56753455643b9bc052c369030d311b7986d92f909b44b06e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_agnesi, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Mar  1 04:59:15 np0005634532 systemd[1]: libpod-conmon-7910b8e29de3910e56753455643b9bc052c369030d311b7986d92f909b44b06e.scope: Deactivated successfully.
Mar  1 04:59:15 np0005634532 systemd[1]: virtsecretd.service: Deactivated successfully.
Mar  1 04:59:15 np0005634532 systemd[1]: virtqemud.service: Deactivated successfully.
Mar  1 04:59:15 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 09:59:15 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8880022a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:59:15 np0005634532 podman[244883]: 2026-03-01 09:59:15.891896017 +0000 UTC m=+0.045707226 container create dd7fe6920973aa3beb0f8eecc32e26a188a7412cb1d6d9caeb247184abca1943 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_newton, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True)
Mar  1 04:59:15 np0005634532 systemd[1]: Started libpod-conmon-dd7fe6920973aa3beb0f8eecc32e26a188a7412cb1d6d9caeb247184abca1943.scope.
Mar  1 04:59:15 np0005634532 systemd[1]: Started libcrun container.
Mar  1 04:59:15 np0005634532 podman[244883]: 2026-03-01 09:59:15.87251362 +0000 UTC m=+0.026324879 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Mar  1 04:59:15 np0005634532 podman[244883]: 2026-03-01 09:59:15.968976305 +0000 UTC m=+0.122787534 container init dd7fe6920973aa3beb0f8eecc32e26a188a7412cb1d6d9caeb247184abca1943 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_newton, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Mar  1 04:59:15 np0005634532 podman[244883]: 2026-03-01 09:59:15.975111636 +0000 UTC m=+0.128922885 container start dd7fe6920973aa3beb0f8eecc32e26a188a7412cb1d6d9caeb247184abca1943 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_newton, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Mar  1 04:59:15 np0005634532 podman[244883]: 2026-03-01 09:59:15.978569971 +0000 UTC m=+0.132381180 container attach dd7fe6920973aa3beb0f8eecc32e26a188a7412cb1d6d9caeb247184abca1943 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_newton, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid)
Mar  1 04:59:15 np0005634532 priceless_newton[244897]: 167 167
Mar  1 04:59:15 np0005634532 systemd[1]: libpod-dd7fe6920973aa3beb0f8eecc32e26a188a7412cb1d6d9caeb247184abca1943.scope: Deactivated successfully.
Mar  1 04:59:15 np0005634532 podman[244883]: 2026-03-01 09:59:15.980602311 +0000 UTC m=+0.134413550 container died dd7fe6920973aa3beb0f8eecc32e26a188a7412cb1d6d9caeb247184abca1943 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_newton, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Mar  1 04:59:16 np0005634532 systemd[1]: var-lib-containers-storage-overlay-c4910f1d91bb41e4551389f99f73c85a6f75ce8cb0a08f4a8bc9e9259ea58de1-merged.mount: Deactivated successfully.
Mar  1 04:59:16 np0005634532 podman[244883]: 2026-03-01 09:59:16.022203166 +0000 UTC m=+0.176014375 container remove dd7fe6920973aa3beb0f8eecc32e26a188a7412cb1d6d9caeb247184abca1943 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_newton, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Mar  1 04:59:16 np0005634532 systemd[1]: libpod-conmon-dd7fe6920973aa3beb0f8eecc32e26a188a7412cb1d6d9caeb247184abca1943.scope: Deactivated successfully.
Mar  1 04:59:16 np0005634532 python3.9[244874]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Mar  1 04:59:16 np0005634532 systemd[1]: Reloading.
Mar  1 04:59:16 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 09:59:16 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8880022a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:59:16 np0005634532 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Mar  1 04:59:16 np0005634532 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Mar  1 04:59:16 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:59:16 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:59:16 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:59:16.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:59:16 np0005634532 podman[244925]: 2026-03-01 09:59:16.221952714 +0000 UTC m=+0.069698127 container create 6941fcee89f3398c907f189b96dc830a79c1b098d9e83ad65b0320175d60e875 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_spence, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Mar  1 04:59:16 np0005634532 podman[244925]: 2026-03-01 09:59:16.200049834 +0000 UTC m=+0.047795277 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Mar  1 04:59:16 np0005634532 systemd[1]: Started libpod-conmon-6941fcee89f3398c907f189b96dc830a79c1b098d9e83ad65b0320175d60e875.scope.
Mar  1 04:59:16 np0005634532 systemd[1]: Started libcrun container.
Mar  1 04:59:16 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ca7020b003568e1466c5617ba84b3f2f008208f0c3faceb2a7e057876b6e2d5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Mar  1 04:59:16 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ca7020b003568e1466c5617ba84b3f2f008208f0c3faceb2a7e057876b6e2d5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Mar  1 04:59:16 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ca7020b003568e1466c5617ba84b3f2f008208f0c3faceb2a7e057876b6e2d5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Mar  1 04:59:16 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ca7020b003568e1466c5617ba84b3f2f008208f0c3faceb2a7e057876b6e2d5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Mar  1 04:59:16 np0005634532 podman[244925]: 2026-03-01 09:59:16.456474268 +0000 UTC m=+0.304219761 container init 6941fcee89f3398c907f189b96dc830a79c1b098d9e83ad65b0320175d60e875 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_spence, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, ceph=True)
Mar  1 04:59:16 np0005634532 podman[244925]: 2026-03-01 09:59:16.471731604 +0000 UTC m=+0.319477007 container start 6941fcee89f3398c907f189b96dc830a79c1b098d9e83ad65b0320175d60e875 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_spence, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid)
Mar  1 04:59:16 np0005634532 podman[244925]: 2026-03-01 09:59:16.475440025 +0000 UTC m=+0.323185458 container attach 6941fcee89f3398c907f189b96dc830a79c1b098d9e83ad65b0320175d60e875 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_spence, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Mar  1 04:59:16 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 04:59:16 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v496: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Mar  1 04:59:17 np0005634532 lvm[245207]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Mar  1 04:59:17 np0005634532 lvm[245207]: VG ceph_vg0 finished
Mar  1 04:59:17 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: ::ffff:192.168.122.100 - - [01/Mar/2026:09:59:17] "GET /metrics HTTP/1.1" 200 48346 "" "Prometheus/2.51.0"
Mar  1 04:59:17 np0005634532 ceph-mgr[76134]: [prometheus INFO cherrypy.access.140604152982112] ::ffff:192.168.122.100 - - [01/Mar/2026:09:59:17] "GET /metrics HTTP/1.1" 200 48346 "" "Prometheus/2.51.0"
Mar  1 04:59:17 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T09:59:17.063Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Mar  1 04:59:17 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T09:59:17.064Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Mar  1 04:59:17 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T09:59:17.064Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Mar  1 04:59:17 np0005634532 python3.9[245183]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Mar  1 04:59:17 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:59:17 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:59:17 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:59:17.099 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:59:17 np0005634532 nifty_spence[244980]: {}
Mar  1 04:59:17 np0005634532 systemd[1]: libpod-6941fcee89f3398c907f189b96dc830a79c1b098d9e83ad65b0320175d60e875.scope: Deactivated successfully.
Mar  1 04:59:17 np0005634532 podman[244925]: 2026-03-01 09:59:17.14012292 +0000 UTC m=+0.987868323 container died 6941fcee89f3398c907f189b96dc830a79c1b098d9e83ad65b0320175d60e875 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_spence, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Mar  1 04:59:17 np0005634532 systemd[1]: var-lib-containers-storage-overlay-8ca7020b003568e1466c5617ba84b3f2f008208f0c3faceb2a7e057876b6e2d5-merged.mount: Deactivated successfully.
Mar  1 04:59:17 np0005634532 podman[244925]: 2026-03-01 09:59:17.179014778 +0000 UTC m=+1.026760181 container remove 6941fcee89f3398c907f189b96dc830a79c1b098d9e83ad65b0320175d60e875 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_spence, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS)
Mar  1 04:59:17 np0005634532 systemd[1]: libpod-conmon-6941fcee89f3398c907f189b96dc830a79c1b098d9e83ad65b0320175d60e875.scope: Deactivated successfully.
Mar  1 04:59:17 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Mar  1 04:59:17 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 04:59:17 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Mar  1 04:59:17 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 04:59:17 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 09:59:17 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8880022a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:59:17 np0005634532 ceph-mgr[76134]: [balancer INFO root] Optimize plan auto_2026-03-01_09:59:17
Mar  1 04:59:17 np0005634532 ceph-mgr[76134]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Mar  1 04:59:17 np0005634532 ceph-mgr[76134]: [balancer INFO root] do_upmap
Mar  1 04:59:17 np0005634532 ceph-mgr[76134]: [balancer INFO root] pools ['default.rgw.meta', 'volumes', '.rgw.root', '.mgr', 'cephfs.cephfs.meta', 'vms', 'backups', 'default.rgw.log', 'cephfs.cephfs.data', 'default.rgw.control', '.nfs', 'images']
Mar  1 04:59:17 np0005634532 ceph-mgr[76134]: [balancer INFO root] prepared 0/10 upmap changes
Mar  1 04:59:17 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Mar  1 04:59:17 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Mar  1 04:59:17 np0005634532 python3.9[245402]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_migration_target.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Mar  1 04:59:17 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Mar  1 04:59:17 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Mar  1 04:59:17 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Mar  1 04:59:17 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Mar  1 04:59:17 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Mar  1 04:59:17 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Mar  1 04:59:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] _maybe_adjust
Mar  1 04:59:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 04:59:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Mar  1 04:59:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 04:59:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Mar  1 04:59:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 04:59:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Mar  1 04:59:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 04:59:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Mar  1 04:59:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 04:59:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Mar  1 04:59:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 04:59:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Mar  1 04:59:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 04:59:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Mar  1 04:59:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 04:59:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Mar  1 04:59:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 04:59:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Mar  1 04:59:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 04:59:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Mar  1 04:59:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 04:59:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Mar  1 04:59:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 04:59:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Mar  1 04:59:17 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Mar  1 04:59:17 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: vms, start_after=
Mar  1 04:59:17 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Mar  1 04:59:17 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: vms, start_after=
Mar  1 04:59:17 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: volumes, start_after=
Mar  1 04:59:17 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: volumes, start_after=
Mar  1 04:59:17 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: backups, start_after=
Mar  1 04:59:17 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: backups, start_after=
Mar  1 04:59:17 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: images, start_after=
Mar  1 04:59:17 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: images, start_after=
Mar  1 04:59:17 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 04:59:17 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 04:59:17 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 09:59:17 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff860003f70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:59:18 np0005634532 python3.9[245557]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api_cron.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Mar  1 04:59:18 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 09:59:18 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff88400a5b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:59:18 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:59:18 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000024s ======
Mar  1 04:59:18 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:59:18.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Mar  1 04:59:18 np0005634532 python3.9[245712]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Mar  1 04:59:18 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v497: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Mar  1 04:59:19 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:59:19 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 04:59:19 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:59:19.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 04:59:19 np0005634532 python3.9[245866]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_conductor.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Mar  1 04:59:19 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 09:59:19 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8640037c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:59:19 np0005634532 python3.9[246020]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_metadata.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Mar  1 04:59:19 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 09:59:19 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8880022a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:59:20 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 09:59:20 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8880022a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:59:20 np0005634532 python3.9[246175]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_scheduler.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Mar  1 04:59:20 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:59:20 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000024s ======
Mar  1 04:59:20 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:59:20.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Mar  1 04:59:20 np0005634532 python3.9[246330]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_vnc_proxy.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Mar  1 04:59:20 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v498: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Mar  1 04:59:21 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:59:21 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:59:21 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:59:21.104 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:59:21 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 09:59:21 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff88400a5b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:59:21 np0005634532 podman[246356]: 2026-03-01 09:59:21.381269403 +0000 UTC m=+0.064996621 container health_status eba5e7c5bf1cb0694498c3b5d0f9b901dec36f2482ec40aef98fed1e15d9f18d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:bf1f44b6bfa655654f556ae3adca2582892e72550d916a5b0a8dbacb0e210bbe, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '08b9467e1b7e95537191fb2fa6825d176e65d7d6be048e9a0925db064969bbde-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:bf1f44b6bfa655654f556ae3adca2582892e72550d916a5b0a8dbacb0e210bbe', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Mar  1 04:59:21 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 04:59:21 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 09:59:21 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8640040e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:59:22 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 09:59:22 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8880022a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:59:22 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:59:22 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000024s ======
Mar  1 04:59:22 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:59:22.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Mar  1 04:59:22 np0005634532 python3.9[246512]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Mar  1 04:59:22 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v499: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Mar  1 04:59:23 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:59:23 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:59:23 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:59:23.105 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:59:23 np0005634532 python3.9[246665]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/nova_nvme_cleaner setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Mar  1 04:59:23 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 09:59:23 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8880022a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:59:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:59:23.871 167541 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Mar  1 04:59:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:59:23.872 167541 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Mar  1 04:59:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 09:59:23.872 167541 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423

Mar  1 04:59:23 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 09:59:23 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff88400a5b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:59:24 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 09:59:24 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8640040e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:59:24 np0005634532 python3.9[246819]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Mar  1 04:59:24 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:59:24 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:59:24 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:59:24.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:59:24 np0005634532 python3.9[246973]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/_nova_secontext setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Mar  1 04:59:24 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v500: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Mar  1 04:59:25 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:59:25 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 04:59:25 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:59:25.107 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 04:59:25 np0005634532 python3.9[247126]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova/instances setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
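The ansible-ansible.builtin.file entries in this stretch all create directories with explicit ownership, mode and an SELinux type of container_file_t. A hedged sketch of the same effect for one of them (path and owner taken from the log line above; calling the chcon CLI is an assumption standing in for ansible's libselinux bindings):

    import grp
    import os
    import pwd
    import subprocess

    path = "/var/lib/nova/instances"
    os.makedirs(path, mode=0o755, exist_ok=True)
    os.chown(path, pwd.getpwnam("zuul").pw_uid, grp.getgrnam("zuul").gr_gid)
    os.chmod(path, 0o755)  # makedirs' mode is masked by umask, so set it again
    # setype=container_file_t as logged; chcon applies the same context
    subprocess.run(["chcon", "-t", "container_file_t", path], check=True)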
Mar  1 04:59:25 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 09:59:25 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8640040e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:59:25 np0005634532 python3.9[247279]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/etc/ceph setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Mar  1 04:59:25 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 09:59:25 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8640040e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:59:26 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 09:59:26 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8640040e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:59:26 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:59:26 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:59:26 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:59:26.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:59:26 np0005634532 python3.9[247433]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Mar  1 04:59:26 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 04:59:26 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v501: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Mar  1 04:59:26 np0005634532 podman[247560]: 2026-03-01 09:59:26.779116273 +0000 UTC m=+0.057947478 container health_status 1df4dec302d8b40bce93714986cfc892519e8a31436399020d0034ae17ac64d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a5baae9e21dffd42c66f546b0b62f95c6af9f409d05e01e7483de42d5dd2a382, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '08b9467e1b7e95537191fb2fa6825d176e65d7d6be048e9a0925db064969bbde-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a5baae9e21dffd42c66f546b0b62f95c6af9f409d05e01e7483de42d5dd2a382', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Mar  1 04:59:27 np0005634532 python3.9[247608]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/nvme setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Mar  1 04:59:27 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: ::ffff:192.168.122.100 - - [01/Mar/2026:09:59:27] "GET /metrics HTTP/1.1" 200 48341 "" "Prometheus/2.51.0"
Mar  1 04:59:27 np0005634532 ceph-mgr[76134]: [prometheus INFO cherrypy.access.140604152982112] ::ffff:192.168.122.100 - - [01/Mar/2026:09:59:27] "GET /metrics HTTP/1.1" 200 48341 "" "Prometheus/2.51.0"
Mar  1 04:59:27 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T09:59:27.065Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Mar  1 04:59:27 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:59:27 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 04:59:27 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:59:27.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 04:59:27 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 09:59:27 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8640040e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:59:27 np0005634532 python3.9[247761]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/run/openvswitch setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Mar  1 04:59:27 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 09:59:27 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8640040e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:59:28 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 09:59:28 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8640040e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:59:28 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:59:28 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:59:28 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:59:28.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:59:28 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v502: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Mar  1 04:59:29 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:59:29 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:59:29 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:59:29.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:59:29 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 09:59:29 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8640040e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:59:29 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 09:59:29 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff860003f90 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:59:30 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 09:59:30 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff88400a5b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:59:30 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:59:30 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:59:30 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:59:30.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:59:30 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v503: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Mar  1 04:59:31 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:59:31 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:59:31 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:59:31.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:59:31 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 09:59:31 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff88400a5b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:59:31 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 04:59:31 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 09:59:31 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8640040e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:59:32 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 09:59:32 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff860004130 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:59:32 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:59:32 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:59:32 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:59:32.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:59:32 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Mar  1 04:59:32 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
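The handle_command/audit pair above is the mgr's periodic blocklist poll as the mon records it. The same query from the CLI, using the JSON format the dispatched command asks for:

    import json
    import subprocess

    # ceph osd blocklist ls --format json -> JSON array of blocklisted clients
    out = subprocess.run(
        ["ceph", "osd", "blocklist", "ls", "--format", "json"],
        check=True, capture_output=True, text=True,
    ).stdout
    print(json.loads(out or "[]"))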
Mar  1 04:59:32 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v504: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Mar  1 04:59:33 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:59:33 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:59:33 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:59:33.117 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:59:33 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 09:59:33 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff878001e80 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:59:33 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 09:59:33 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff88400a5b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:59:34 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 09:59:34 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff88400a5b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:59:34 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:59:34 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:59:34 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:59:34.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:59:34 np0005634532 python3.9[247951]: ansible-ansible.builtin.getent Invoked with database=passwd key=nova fail_key=True service=None split=None
Mar  1 04:59:34 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v505: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Mar  1 04:59:35 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:59:35 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:59:35 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:59:35.119 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:59:35 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 09:59:35 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8600046f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:59:35 np0005634532 python3.9[248105]: ansible-ansible.builtin.group Invoked with gid=42436 name=nova state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Mar  1 04:59:35 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 09:59:35 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff868002810 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:59:36 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 09:59:36 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff878000ea0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:59:36 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:59:36 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:59:36 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:59:36.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:59:36 np0005634532 python3.9[248266]: ansible-ansible.builtin.user Invoked with comment=nova user group=nova groups=['libvirt'] name=nova shell=/bin/sh state=present uid=42436 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
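The getent/group/user invocations above provision the nova account with uid/gid 42436, supplementary group libvirt and /bin/sh. A sketch of the equivalent shadow-utils calls (all values copied from the logged parameters; running them via subprocess rather than ansible is the assumption):

    import subprocess

    subprocess.run(["groupadd", "--gid", "42436", "nova"], check=True)
    subprocess.run([
        "useradd",
        "--uid", "42436",
        "--gid", "nova",
        "--groups", "libvirt",
        "--comment", "nova user",
        "--shell", "/bin/sh",
        "--create-home",
        "nova",
    ], check=True)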
Mar  1 04:59:36 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 04:59:36 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v506: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Mar  1 04:59:37 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: ::ffff:192.168.122.100 - - [01/Mar/2026:09:59:37] "GET /metrics HTTP/1.1" 200 48345 "" "Prometheus/2.51.0"
Mar  1 04:59:37 np0005634532 ceph-mgr[76134]: [prometheus INFO cherrypy.access.140604152982112] ::ffff:192.168.122.100 - - [01/Mar/2026:09:59:37] "GET /metrics HTTP/1.1" 200 48345 "" "Prometheus/2.51.0"
Mar  1 04:59:37 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T09:59:37.066Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Mar  1 04:59:37 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T09:59:37.066Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Mar  1 04:59:37 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:59:37 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000024s ======
Mar  1 04:59:37 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:59:37.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Mar  1 04:59:37 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 09:59:37 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8640040e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:59:37 np0005634532 systemd-logind[832]: New session 55 of user zuul.
Mar  1 04:59:37 np0005634532 systemd[1]: Started Session 55 of User zuul.
Mar  1 04:59:37 np0005634532 systemd[1]: session-55.scope: Deactivated successfully.
Mar  1 04:59:37 np0005634532 systemd-logind[832]: Session 55 logged out. Waiting for processes to exit.
Mar  1 04:59:37 np0005634532 systemd-logind[832]: Removed session 55.
Mar  1 04:59:37 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 09:59:37 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8600046f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:59:38 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 09:59:38 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff868002810 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:59:38 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:59:38 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:59:38 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:59:38.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:59:38 np0005634532 python3.9[248453]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/nova/nova-blank.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Mar  1 04:59:38 np0005634532 python3.9[248530]: ansible-ansible.legacy.file Invoked with mode=0644 setype=container_file_t dest=/var/lib/openstack/nova/nova-blank.conf _original_basename=nova-blank.conf recurse=False state=file path=/var/lib/openstack/nova/nova-blank.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Mar  1 04:59:38 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v507: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Mar  1 04:59:39 np0005634532 python3.9[248680]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/nova/ssh-config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Mar  1 04:59:39 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:59:39 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:59:39 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:59:39.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:59:39 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 09:59:39 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff878000ea0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:59:39 np0005634532 python3.9[248801]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/nova/ssh-config mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1772359178.7548194-2673-67581259746727/.source _original_basename=ssh-config follow=False checksum=4297f735c41bdc1ff52d72e6f623a02242f37958 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Mar  1 04:59:39 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 09:59:39 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8640040e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:59:40 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 09:59:40 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8600046f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:59:40 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:59:40 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000024s ======
Mar  1 04:59:40 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:59:40.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Mar  1 04:59:40 np0005634532 python3.9[248953]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/nova/nova_statedir_ownership.py follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Mar  1 04:59:40 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v508: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Mar  1 04:59:40 np0005634532 python3.9[249074]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/nova/nova_statedir_ownership.py mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1772359179.9031463-2673-146661849436500/.source.py _original_basename=nova_statedir_ownership.py follow=False checksum=c6c8a3cfefa5efd60ceb1408c4e977becedb71e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
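The stat/copy pairs in this stretch report a sha1 checksum for each deployed file (for example c6c8a3cf... for nova_statedir_ownership.py above). A minimal sketch of reproducing such a checksum locally; the path is taken from the log, and the only claim is that the digest algorithm is sha1, as the checksum_algorithm parameter states:

    import hashlib

    def sha1_of(path: str, bufsize: int = 65536) -> str:
        # Stream the file so large configs don't have to fit in memory.
        h = hashlib.sha1()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(bufsize), b""):
                h.update(chunk)
        return h.hexdigest()

    print(sha1_of("/var/lib/openstack/nova/nova_statedir_ownership.py"))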
Mar  1 04:59:41 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:59:41 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:59:41 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:59:41.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:59:41 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 09:59:41 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff868002810 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:59:41 np0005634532 python3.9[249224]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/nova/run-on-host follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Mar  1 04:59:41 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 04:59:41 np0005634532 python3.9[249345]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/nova/run-on-host mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1772359181.0537086-2673-125276950538055/.source _original_basename=run-on-host follow=False checksum=93aba8edc83d5878604a66d37fea2f12b60bdea2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Mar  1 04:59:41 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 09:59:41 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff868002810 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:59:42 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 09:59:42 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8640040e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:59:42 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:59:42 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000024s ======
Mar  1 04:59:42 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:59:42.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Mar  1 04:59:42 np0005634532 python3.9[249497]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/nova/02-nova-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Mar  1 04:59:42 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v509: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Mar  1 04:59:43 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:59:43 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 04:59:43 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:59:43.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 04:59:43 np0005634532 python3.9[249618]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/nova/02-nova-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1772359182.2749555-2835-228092536843256/.source.conf follow=False _original_basename=02-nova-host-specific.conf.j2 checksum=1feba546d0beacad9258164ab79b8a747685ccc8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Mar  1 04:59:43 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 09:59:43 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8600046f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:59:43 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 09:59:43 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff868002810 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:59:43 np0005634532 python3.9[249771]: ansible-ansible.builtin.file Invoked with group=nova mode=0700 owner=nova path=/home/nova/.ssh state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:59:44 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 09:59:44 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff868002810 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:59:44 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:59:44 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:59:44 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:59:44.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:59:44 np0005634532 python3.9[249926]: ansible-ansible.legacy.copy Invoked with dest=/home/nova/.ssh/authorized_keys group=nova mode=0600 owner=nova remote_src=True src=/var/lib/openstack/nova/ssh-publickey backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:59:44 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v510: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Mar  1 04:59:45 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:59:45 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 04:59:45 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:59:45.132 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 04:59:45 np0005634532 python3.9[250079]: ansible-ansible.builtin.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Mar  1 04:59:45 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 09:59:45 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8640040e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:59:45 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 09:59:45 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8600046f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:59:46 np0005634532 python3.9[250233]: ansible-ansible.legacy.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Mar  1 04:59:46 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 09:59:46 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff868002810 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:59:46 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:59:46 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:59:46 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:59:46.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:59:46 np0005634532 python3.9[250358]: ansible-ansible.legacy.copy Invoked with attributes=+i dest=/var/lib/nova/compute_id group=nova mode=0400 owner=nova src=/home/zuul/.ansible/tmp/ansible-tmp-1772359185.627311-2952-52524501815602/.source _original_basename=.vg3i_j2n follow=False checksum=d43abdc04c679b84db4073c1907524104d1fd038 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None
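attributes=+i in the copy above marks /var/lib/nova/compute_id immutable once written. A sketch of the same effect; ansible applies file attributes via chattr, and the flag itself is standard e2fsprogs:

    import subprocess

    # +i: immutable; the file can no longer be modified, renamed or unlinked
    subprocess.run(["chattr", "+i", "/var/lib/nova/compute_id"], check=True)
    # Verify with: lsattr /var/lib/nova/compute_id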
Mar  1 04:59:46 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 04:59:46 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v511: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Mar  1 04:59:47 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: ::ffff:192.168.122.100 - - [01/Mar/2026:09:59:47] "GET /metrics HTTP/1.1" 200 48345 "" "Prometheus/2.51.0"
Mar  1 04:59:47 np0005634532 ceph-mgr[76134]: [prometheus INFO cherrypy.access.140604152982112] ::ffff:192.168.122.100 - - [01/Mar/2026:09:59:47] "GET /metrics HTTP/1.1" 200 48345 "" "Prometheus/2.51.0"
Mar  1 04:59:47 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T09:59:47.067Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Mar  1 04:59:47 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T09:59:47.067Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Mar  1 04:59:47 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T09:59:47.067Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
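The alertmanager entries above show both ceph-dashboard webhooks timing out against http://compute-N.ctlplane.example.com:8443/api/prometheus_receiver. A small probe in the same shape; the payload fields are assumptions modelled on alertmanager's webhook JSON, not captured traffic:

    import json
    import urllib.request

    url = "http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver"
    payload = {"status": "firing", "alerts": []}  # assumed minimal body
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    try:
        with urllib.request.urlopen(req, timeout=5) as resp:
            print("receiver answered:", resp.status)
    except OSError as exc:  # URLError subclasses OSError; covers the dial timeouts
        print("unreachable:", exc)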
Mar  1 04:59:47 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:59:47 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 04:59:47 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:59:47.134 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 04:59:47 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 09:59:47 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff868002810 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:59:47 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Mar  1 04:59:47 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Mar  1 04:59:47 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Mar  1 04:59:47 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Mar  1 04:59:47 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Mar  1 04:59:47 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Mar  1 04:59:47 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Mar  1 04:59:47 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Mar  1 04:59:47 np0005634532 python3.9[250510]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Mar  1 04:59:47 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 09:59:47 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8640040e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:59:48 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 09:59:48 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8640040e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:59:48 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:59:48 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:59:48 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:59:48.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:59:48 np0005634532 python3.9[250668]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:59:48 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v512: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Mar  1 04:59:49 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:59:49 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:59:49 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:59:49.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:59:49 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 09:59:49 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff868002810 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:59:49 np0005634532 python3.9[250821]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Mar  1 04:59:49 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 09:59:49 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff868002810 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:59:50 np0005634532 python3.9[250972]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/nova_compute_init state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 04:59:50 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 09:59:50 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8600046f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:59:50 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:59:50 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000024s ======
Mar  1 04:59:50 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:59:50.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Mar  1 04:59:50 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v513: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Mar  1 04:59:51 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:59:51 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:59:51 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:59:51.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:59:51 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 09:59:51 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8640040e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:59:51 np0005634532 podman[251245]: 2026-03-01 09:59:51.535047526 +0000 UTC m=+0.094967905 container health_status eba5e7c5bf1cb0694498c3b5d0f9b901dec36f2482ec40aef98fed1e15d9f18d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:bf1f44b6bfa655654f556ae3adca2582892e72550d916a5b0a8dbacb0e210bbe, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '08b9467e1b7e95537191fb2fa6825d176e65d7d6be048e9a0925db064969bbde-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:bf1f44b6bfa655654f556ae3adca2582892e72550d916a5b0a8dbacb0e210bbe', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.schema-version=1.0)
Mar  1 04:59:51 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 04:59:51 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 09:59:51 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff868002810 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:59:52 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 09:59:52 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff868002810 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:59:52 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-haproxy-nfs-cephfs-compute-0-wdbjdw[99156]: [WARNING] 059/095952 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Mar  1 04:59:52 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:59:52 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 04:59:52 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:59:52.265 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 04:59:52 np0005634532 python3.9[251426]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/nova_compute_init config_pattern=*.json debug=False
Mar  1 04:59:52 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v514: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Mar  1 04:59:53 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:59:53 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000024s ======
Mar  1 04:59:53 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:59:53.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Mar  1 04:59:53 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 09:59:53 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8600046f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:59:53 np0005634532 python3.9[251579]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Mar  1 04:59:53 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 09:59:53 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8640040e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:59:54 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 09:59:54 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff868002810 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:59:54 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:59:54 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:59:54 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:59:54.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:59:54 np0005634532 python3[251759]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/nova_compute_init config_id=nova_compute_init config_overrides={} config_patterns=*.json containers=['nova_compute_init'] log_base_path=/var/log/containers/stdouts debug=False
Mar  1 04:59:54 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v515: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Mar  1 04:59:55 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:59:55 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000024s ======
Mar  1 04:59:55 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:59:55.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Mar  1 04:59:55 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 09:59:55 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff878002d00 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:59:55 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 09:59:55 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8600046f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:59:56 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 09:59:56 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8640040e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:59:56 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:59:56 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:59:56 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:59:56.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:59:56 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 04:59:56 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v516: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Mar  1 04:59:56 np0005634532 podman[251814]: 2026-03-01 09:59:56.894894571 +0000 UTC m=+0.048419832 container health_status 1df4dec302d8b40bce93714986cfc892519e8a31436399020d0034ae17ac64d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a5baae9e21dffd42c66f546b0b62f95c6af9f409d05e01e7483de42d5dd2a382, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '08b9467e1b7e95537191fb2fa6825d176e65d7d6be048e9a0925db064969bbde-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a5baae9e21dffd42c66f546b0b62f95c6af9f409d05e01e7483de42d5dd2a382', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, container_name=ovn_metadata_agent, io.buildah.version=1.43.0)
Mar  1 04:59:57 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: ::ffff:192.168.122.100 - - [01/Mar/2026:09:59:57] "GET /metrics HTTP/1.1" 200 48345 "" "Prometheus/2.51.0"
Mar  1 04:59:57 np0005634532 ceph-mgr[76134]: [prometheus INFO cherrypy.access.140604152982112] ::ffff:192.168.122.100 - - [01/Mar/2026:09:59:57] "GET /metrics HTTP/1.1" 200 48345 "" "Prometheus/2.51.0"
Mar  1 04:59:57 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T09:59:57.068Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Mar  1 04:59:57 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T09:59:57.069Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Mar  1 04:59:57 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:59:57 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:59:57 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:59:57.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:59:57 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 09:59:57 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff868002810 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:59:57 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 09:59:57 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff878003620 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:59:58 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 09:59:58 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8600046f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:59:58 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:59:58 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:59:58 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:09:59:58.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:59:58 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v517: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 682 B/s rd, 85 B/s wr, 0 op/s
Mar  1 04:59:59 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 04:59:59 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 04:59:59 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:09:59:59.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 04:59:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 09:59:59 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8640040e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 04:59:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 09:59:59 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff868002810 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:00:00 np0005634532 ceph-mon[75825]: log_channel(cluster) log [INF] : overall HEALTH_OK
Mar  1 05:00:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:00:00 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff878003620 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:00:00 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:00:00 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.002000049s ======
Mar  1 05:00:00 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:00:00.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000049s
Mar  1 05:00:00 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v518: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Mar  1 05:00:01 np0005634532 ceph-mon[75825]: overall HEALTH_OK
Mar  1 05:00:01 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:00:01 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:00:01 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:00:01.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:00:01 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:00:01 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8600046f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:00:01 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:00:01 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Mar  1 05:00:01 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:00:01 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:00:01 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff878003620 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:00:02 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:00:02 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff868002810 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:00:02 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:00:02 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 05:00:02 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:00:02.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 05:00:02 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Mar  1 05:00:02 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Mar  1 05:00:02 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v519: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 85 B/s wr, 0 op/s
Mar  1 05:00:02 np0005634532 podman[251771]: 2026-03-01 10:00:02.819787693 +0000 UTC m=+8.019559930 image pull 7e637240710437807d86f704ec92f4417e40d6b1f76088848cab504c91655fe5 quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:c37f1275cfc51fc2227bf56a2be6262158f62fb30ca651f154bed25112d6d355
Mar  1 05:00:02 np0005634532 podman[251881]: 2026-03-01 10:00:02.986769497 +0000 UTC m=+0.065817428 container create 431eacd7efb5d4afbd4933c532d546165bdd0f2965f01e52fbafe131f186cc9f (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:c37f1275cfc51fc2227bf56a2be6262158f62fb30ca651f154bed25112d6d355, name=nova_compute_init, org.label-schema.schema-version=1.0, config_data={'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False, 'EDPM_CONFIG_HASH': '08e38e514bc79f65b494f100befbfbcc9d744711daee2ed7859d469aa8abeea3'}, 'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:c37f1275cfc51fc2227bf56a2be6262158f62fb30ca651f154bed25112d6d355', 'net': 'none', 'privileged': False, 'restart': 'never', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, tcib_managed=true, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=nova_compute_init, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=nova_compute_init, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Mar  1 05:00:02 np0005634532 podman[251881]: 2026-03-01 10:00:02.944542489 +0000 UTC m=+0.023590400 image pull 7e637240710437807d86f704ec92f4417e40d6b1f76088848cab504c91655fe5 quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:c37f1275cfc51fc2227bf56a2be6262158f62fb30ca651f154bed25112d6d355
Mar  1 05:00:02 np0005634532 python3[251759]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute_init --conmon-pidfile /run/nova_compute_init.pid --env NOVA_STATEDIR_OWNERSHIP_SKIP=/var/lib/nova/compute_id --env __OS_DEBUG=False --env EDPM_CONFIG_HASH=08e38e514bc79f65b494f100befbfbcc9d744711daee2ed7859d469aa8abeea3 --label config_id=nova_compute_init --label container_name=nova_compute_init --label managed_by=edpm_ansible --label config_data={'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False, 'EDPM_CONFIG_HASH': '08e38e514bc79f65b494f100befbfbcc9d744711daee2ed7859d469aa8abeea3'}, 'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:c37f1275cfc51fc2227bf56a2be6262158f62fb30ca651f154bed25112d6d355', 'net': 'none', 'privileged': False, 'restart': 'never', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']} --log-driver journald --log-level info --network none --privileged=False --security-opt label=disable --user root --volume /dev/log:/dev/log --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z --volume /var/lib/openstack/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:c37f1275cfc51fc2227bf56a2be6262158f62fb30ca651f154bed25112d6d355 bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init
Mar  1 05:00:03 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:00:03 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:00:03 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:00:03.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:00:03 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:00:03 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8640040e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:00:03 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:00:03 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8600046f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:00:04 np0005634532 python3.9[252074]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Mar  1 05:00:04 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:00:04 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff878003620 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:00:04 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:00:04 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:00:04 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:00:04.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:00:04 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:00:04 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Mar  1 05:00:04 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:00:04 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Mar  1 05:00:04 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v520: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Mar  1 05:00:05 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:00:05 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:00:05 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:00:05.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:00:05 np0005634532 python3.9[252227]: ansible-ansible.builtin.slurp Invoked with src=/var/lib/edpm-config/deployed_services.yaml
Mar  1 05:00:05 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:00:05 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff868002810 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:00:05 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:00:05 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8640040e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:00:06 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:00:06 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff888001ac0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:00:06 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:00:06 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 05:00:06 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:00:06.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 05:00:06 np0005634532 python3.9[252384]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/deployed_services.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Mar  1 05:00:06 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:00:06 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v521: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 597 B/s wr, 1 op/s
Mar  1 05:00:06 np0005634532 python3.9[252510]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/deployed_services.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1772359205.9243119-3330-188678573044895/.source.yaml _original_basename=.d0x_9rib follow=False checksum=d9b8a2181e7484b2a723b34ad710dbcdd58350f7 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 05:00:07 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: ::ffff:192.168.122.100 - - [01/Mar/2026:10:00:07] "GET /metrics HTTP/1.1" 200 48343 "" "Prometheus/2.51.0"
Mar  1 05:00:07 np0005634532 ceph-mgr[76134]: [prometheus INFO cherrypy.access.140604152982112] ::ffff:192.168.122.100 - - [01/Mar/2026:10:00:07] "GET /metrics HTTP/1.1" 200 48343 "" "Prometheus/2.51.0"
Mar  1 05:00:07 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:00:07.069Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Mar  1 05:00:07 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:00:07 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 05:00:07 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:00:07.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 05:00:07 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:00:07 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff878003620 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:00:07 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:00:07 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Mar  1 05:00:07 np0005634532 python3.9[252663]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 05:00:07 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:00:07 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff868002810 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:00:08 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:00:08 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8640040e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:00:08 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:00:08 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 05:00:08 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:00:08.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 05:00:08 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v522: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 1023 B/s wr, 3 op/s
Mar  1 05:00:09 np0005634532 python3.9[252818]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Mar  1 05:00:09 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:00:09 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 05:00:09 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:00:09.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 05:00:09 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:00:09 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff888001ac0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:00:09 np0005634532 python3.9[252971]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/nova_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Mar  1 05:00:09 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:00:09 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff878003620 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:00:10 np0005634532 python3.9[253096]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/nova_compute.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1772359209.2002435-3429-221860156256695/.source.json _original_basename=.2la4lh41 follow=False checksum=0018389a48392615f4a8869cad43008a907328ff backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 05:00:10 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:00:10 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff878003620 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:00:10 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:00:10 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000024s ======
Mar  1 05:00:10 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:00:10.294 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Mar  1 05:00:10 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v523: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 938 B/s wr, 2 op/s
Mar  1 05:00:10 np0005634532 python3.9[253247]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/nova_compute state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 05:00:11 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:00:11 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:00:11 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:00:11.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:00:11 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:00:11 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8640040e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:00:11 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:00:11 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:00:11 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff888001ac0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:00:12 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:00:12 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff878003620 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:00:12 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:00:12 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:00:12 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:00:12.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:00:12 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v524: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 938 B/s wr, 2 op/s
Mar  1 05:00:12 np0005634532 python3.9[253673]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/nova_compute config_pattern=*.json debug=False
Mar  1 05:00:13 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:00:13 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 05:00:13 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:00:13.162 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 05:00:13 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:00:13 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff878003620 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:00:13 np0005634532 python3.9[253826]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Mar  1 05:00:13 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:00:13 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8640040e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:00:14 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-haproxy-nfs-cephfs-compute-0-wdbjdw[99156]: [WARNING] 059/100014 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Mar  1 05:00:14 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:00:14 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff888001ac0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:00:14 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:00:14 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:00:14 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:00:14.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:00:14 np0005634532 python3[254006]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/nova_compute config_id=nova_compute config_overrides={} config_patterns=*.json containers=['nova_compute'] log_base_path=/var/log/containers/stdouts debug=False
Mar  1 05:00:14 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v525: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 938 B/s wr, 2 op/s
Mar  1 05:00:14 np0005634532 podman[254045]: 2026-03-01 10:00:14.994536975 +0000 UTC m=+0.063165013 container create f62d2091cb66c1b50b916da035b36356165fa487135d71e10f5fe8390617ca1b (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:c37f1275cfc51fc2227bf56a2be6262158f62fb30ca651f154bed25112d6d355, name=nova_compute, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '08b9467e1b7e95537191fb2fa6825d176e65d7d6be048e9a0925db064969bbde-1a042de936b2d110ee8d2a8cbebfb950a6f3e21b0a41acc6ce59d0ee581b683e-08e38e514bc79f65b494f100befbfbcc9d744711daee2ed7859d469aa8abeea3'}, 'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:c37f1275cfc51fc2227bf56a2be6262158f62fb30ca651f154bed25112d6d355', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'nova', 'volumes': ['/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/nova:/var/lib/kolla/config_files/src:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/src/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=nova_compute, container_name=nova_compute, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.43.0, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Mar  1 05:00:14 np0005634532 podman[254045]: 2026-03-01 10:00:14.952086702 +0000 UTC m=+0.020714770 image pull 7e637240710437807d86f704ec92f4417e40d6b1f76088848cab504c91655fe5 quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:c37f1275cfc51fc2227bf56a2be6262158f62fb30ca651f154bed25112d6d355
Mar  1 05:00:15 np0005634532 python3[254006]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute --conmon-pidfile /run/nova_compute.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=08b9467e1b7e95537191fb2fa6825d176e65d7d6be048e9a0925db064969bbde-1a042de936b2d110ee8d2a8cbebfb950a6f3e21b0a41acc6ce59d0ee581b683e-08e38e514bc79f65b494f100befbfbcc9d744711daee2ed7859d469aa8abeea3 --label config_id=nova_compute --label container_name=nova_compute --label managed_by=edpm_ansible --label config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '08b9467e1b7e95537191fb2fa6825d176e65d7d6be048e9a0925db064969bbde-1a042de936b2d110ee8d2a8cbebfb950a6f3e21b0a41acc6ce59d0ee581b683e-08e38e514bc79f65b494f100befbfbcc9d744711daee2ed7859d469aa8abeea3'}, 'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:c37f1275cfc51fc2227bf56a2be6262158f62fb30ca651f154bed25112d6d355', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'nova', 'volumes': ['/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/nova:/var/lib/kolla/config_files/src:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/src/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']} --log-driver journald --log-level info --network host --pid host --privileged=True --user nova --volume /var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/openstack/nova:/var/lib/kolla/config_files/src:ro --volume /var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /etc/localtime:/etc/localtime:ro --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /var/lib/libvirt:/var/lib/libvirt --volume /run/libvirt:/run/libvirt:shared --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/iscsi:/var/lib/iscsi --volume /etc/multipath:/etc/multipath --volume /etc/multipath.conf:/etc/multipath.conf:ro,Z --volume /etc/iscsi:/etc/iscsi:ro --volume /etc/nvme:/etc/nvme --volume /var/lib/openstack/config/ceph:/var/lib/kolla/config_files/src/ceph:ro --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:c37f1275cfc51fc2227bf56a2be6262158f62fb30ca651f154bed25112d6d355 kolla_start
Mar  1 05:00:15 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:00:15 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:00:15 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:00:15.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:00:15 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:00:15 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff878003620 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:00:15 np0005634532 python3.9[254236]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Mar  1 05:00:15 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:00:15 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff878003620 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:00:16 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:00:16 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8640040e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:00:16 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:00:16 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 05:00:16 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:00:16.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 05:00:16 np0005634532 python3.9[254393]: ansible-file Invoked with path=/etc/systemd/system/edpm_nova_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 05:00:16 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:00:16 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v526: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 426 B/s wr, 1 op/s
Mar  1 05:00:16 np0005634532 python3.9[254470]: ansible-stat Invoked with path=/etc/systemd/system/edpm_nova_compute_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Mar  1 05:00:17 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: ::ffff:192.168.122.100 - - [01/Mar/2026:10:00:17] "GET /metrics HTTP/1.1" 200 48343 "" "Prometheus/2.51.0"
Mar  1 05:00:17 np0005634532 ceph-mgr[76134]: [prometheus INFO cherrypy.access.140604152982112] ::ffff:192.168.122.100 - - [01/Mar/2026:10:00:17] "GET /metrics HTTP/1.1" 200 48343 "" "Prometheus/2.51.0"
Mar  1 05:00:17 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:00:17.071Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Mar  1 05:00:17 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:00:17 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:00:17 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:00:17.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:00:17 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:00:17 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff878003620 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:00:17 np0005634532 python3.9[254624]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1772359216.9404857-3663-13082129893611/source dest=/etc/systemd/system/edpm_nova_compute.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Mar  1 05:00:17 np0005634532 ceph-mgr[76134]: [balancer INFO root] Optimize plan auto_2026-03-01_10:00:17
Mar  1 05:00:17 np0005634532 ceph-mgr[76134]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Mar  1 05:00:17 np0005634532 ceph-mgr[76134]: [balancer INFO root] do_upmap
Mar  1 05:00:17 np0005634532 ceph-mgr[76134]: [balancer INFO root] pools ['backups', 'default.rgw.control', 'default.rgw.meta', 'cephfs.cephfs.meta', 'vms', '.rgw.root', '.nfs', 'cephfs.cephfs.data', '.mgr', 'volumes', 'default.rgw.log', 'images']
Mar  1 05:00:17 np0005634532 ceph-mgr[76134]: [balancer INFO root] prepared 0/10 upmap changes
Mar  1 05:00:17 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Mar  1 05:00:17 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Mar  1 05:00:17 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Mar  1 05:00:17 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Mar  1 05:00:17 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Mar  1 05:00:17 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Mar  1 05:00:17 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Mar  1 05:00:17 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Mar  1 05:00:17 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Mar  1 05:00:17 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: vms, start_after=
Mar  1 05:00:17 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Mar  1 05:00:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] _maybe_adjust
Mar  1 05:00:17 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: volumes, start_after=
Mar  1 05:00:17 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: vms, start_after=
Mar  1 05:00:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:00:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Mar  1 05:00:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:00:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Mar  1 05:00:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:00:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Mar  1 05:00:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:00:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Mar  1 05:00:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:00:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Mar  1 05:00:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:00:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Mar  1 05:00:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:00:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Mar  1 05:00:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:00:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Mar  1 05:00:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:00:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Mar  1 05:00:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:00:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Mar  1 05:00:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:00:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Mar  1 05:00:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:00:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Mar  1 05:00:17 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: backups, start_after=
Mar  1 05:00:17 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: volumes, start_after=
Mar  1 05:00:17 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: images, start_after=
Mar  1 05:00:17 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: backups, start_after=
Mar  1 05:00:17 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: images, start_after=
Mar  1 05:00:17 np0005634532 python3.9[254751]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Mar  1 05:00:17 np0005634532 systemd[1]: Reloading.
Mar  1 05:00:17 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:00:17 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff888003270 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:00:17 np0005634532 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Mar  1 05:00:17 np0005634532 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Mar  1 05:00:18 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:00:18 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 05:00:18 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:00:18.306 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
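[editor's note] The radosgw "beast:" access line above packs client, request, status and latency into one record; a small parser fitted to this log's field layout (other beast builds may format differently):

    import re

    BEAST = re.compile(
        r'beast: \S+: (?P<ip>\S+) - (?P<user>\S+) '
        r'\[(?P<ts>[^\]]+)\] "(?P<req>[^"]+)" (?P<status>\d+) '
        r'.*latency=(?P<latency>[\d.]+)s'
    )

    line = ('beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous '
            '[01/Mar/2026:10:00:18.306 +0000] "HEAD / HTTP/1.0" 200 0 '
            '- - - latency=0.001000025s')
    m = BEAST.search(line)
    print(m["ip"], m["status"], m["latency"])  # 192.168.122.102 200 0.001000025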
Mar  1 05:00:18 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:00:18 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff868002810 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:00:18 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Mar  1 05:00:18 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Mar  1 05:00:18 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Mar  1 05:00:18 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Mar  1 05:00:18 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Mar  1 05:00:18 np0005634532 python3.9[254905]: ansible-systemd Invoked with state=restarted name=edpm_nova_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Mar  1 05:00:18 np0005634532 systemd[1]: Reloading.
Mar  1 05:00:18 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v527: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Mar  1 05:00:18 np0005634532 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Mar  1 05:00:18 np0005634532 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Mar  1 05:00:18 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 05:00:18 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Mar  1 05:00:18 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 05:00:18 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Mar  1 05:00:18 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Mar  1 05:00:18 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Mar  1 05:00:18 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Mar  1 05:00:18 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Mar  1 05:00:18 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
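[editor's note] The mon_command calls audited above can be replayed by hand through librados; a sketch assuming the python3-rados bindings and a readable ceph.conf plus admin keyring on the host:

    import json
    import rados

    with rados.Rados(conffile="/etc/ceph/ceph.conf") as cluster:
        ret, out, errs = cluster.mon_command(
            json.dumps({"prefix": "config generate-minimal-conf"}), b"")
        print(out.decode())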
Mar  1 05:00:19 np0005634532 systemd[1]: Starting nova_compute container...
Mar  1 05:00:19 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:00:19 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:00:19 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:00:19.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:00:19 np0005634532 systemd[1]: Started libcrun container.
Mar  1 05:00:19 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/551bcfa6bc51857e60a041a254542f85238a033f6f416edd18eee02c5a69fed1/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Mar  1 05:00:19 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/551bcfa6bc51857e60a041a254542f85238a033f6f416edd18eee02c5a69fed1/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Mar  1 05:00:19 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/551bcfa6bc51857e60a041a254542f85238a033f6f416edd18eee02c5a69fed1/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Mar  1 05:00:19 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/551bcfa6bc51857e60a041a254542f85238a033f6f416edd18eee02c5a69fed1/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Mar  1 05:00:19 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/551bcfa6bc51857e60a041a254542f85238a033f6f416edd18eee02c5a69fed1/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
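[editor's note] The kernel's "supports timestamps until 2038 (0x7fffffff)" notes above refer to the 32-bit time_t limit; a quick check of the date that value decodes to:

    from datetime import datetime, timezone

    print(datetime.fromtimestamp(0x7FFFFFFF, tz=timezone.utc))
    # 2038-01-19 03:14:07+00:00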
Mar  1 05:00:19 np0005634532 podman[254977]: 2026-03-01 10:00:19.217633717 +0000 UTC m=+0.120470092 container init f62d2091cb66c1b50b916da035b36356165fa487135d71e10f5fe8390617ca1b (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:c37f1275cfc51fc2227bf56a2be6262158f62fb30ca651f154bed25112d6d355, name=nova_compute, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '08b9467e1b7e95537191fb2fa6825d176e65d7d6be048e9a0925db064969bbde-1a042de936b2d110ee8d2a8cbebfb950a6f3e21b0a41acc6ce59d0ee581b683e-08e38e514bc79f65b494f100befbfbcc9d744711daee2ed7859d469aa8abeea3'}, 'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:c37f1275cfc51fc2227bf56a2be6262158f62fb30ca651f154bed25112d6d355', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'nova', 'volumes': ['/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/nova:/var/lib/kolla/config_files/src:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/src/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=nova_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, container_name=nova_compute, tcib_managed=true, io.buildah.version=1.43.0)
Mar  1 05:00:19 np0005634532 podman[254977]: 2026-03-01 10:00:19.223482931 +0000 UTC m=+0.126319306 container start f62d2091cb66c1b50b916da035b36356165fa487135d71e10f5fe8390617ca1b (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:c37f1275cfc51fc2227bf56a2be6262158f62fb30ca651f154bed25112d6d355, name=nova_compute, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=nova_compute, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '08b9467e1b7e95537191fb2fa6825d176e65d7d6be048e9a0925db064969bbde-1a042de936b2d110ee8d2a8cbebfb950a6f3e21b0a41acc6ce59d0ee581b683e-08e38e514bc79f65b494f100befbfbcc9d744711daee2ed7859d469aa8abeea3'}, 'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:c37f1275cfc51fc2227bf56a2be6262158f62fb30ca651f154bed25112d6d355', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'nova', 'volumes': ['/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/nova:/var/lib/kolla/config_files/src:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/src/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.vendor=CentOS, config_id=nova_compute, io.buildah.version=1.43.0)
Mar  1 05:00:19 np0005634532 podman[254977]: nova_compute
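[editor's note] The podman init/start events above embed the whole container definition as a config_data={...} payload in Python repr syntax. A hedged helper to slice it out by balanced-brace scanning (assumes no braces inside the payload's strings, which holds for these lines):

    import ast

    def extract_config_data(event_line: str) -> dict:
        start = event_line.index("config_data=") + len("config_data=")
        depth = 0
        for i, ch in enumerate(event_line[start:], start):
            if ch == "{":
                depth += 1
            elif ch == "}":
                depth -= 1
                if depth == 0:
                    return ast.literal_eval(event_line[start:i + 1])
        raise ValueError("unterminated config_data payload")

    # extract_config_data(line)["volumes"] lists the bind mounts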
Mar  1 05:00:19 np0005634532 nova_compute[255017]: + sudo -E kolla_set_configs
Mar  1 05:00:19 np0005634532 systemd[1]: Started nova_compute container.
Mar  1 05:00:19 np0005634532 nova_compute[255017]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Mar  1 05:00:19 np0005634532 nova_compute[255017]: INFO:__main__:Validating config file
Mar  1 05:00:19 np0005634532 nova_compute[255017]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Mar  1 05:00:19 np0005634532 nova_compute[255017]: INFO:__main__:Copying service configuration files
Mar  1 05:00:19 np0005634532 nova_compute[255017]: INFO:__main__:Deleting /etc/nova/nova.conf
Mar  1 05:00:19 np0005634532 nova_compute[255017]: INFO:__main__:Copying /var/lib/kolla/config_files/src/nova-blank.conf to /etc/nova/nova.conf
Mar  1 05:00:19 np0005634532 nova_compute[255017]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Mar  1 05:00:19 np0005634532 nova_compute[255017]: INFO:__main__:Copying /var/lib/kolla/config_files/src/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Mar  1 05:00:19 np0005634532 nova_compute[255017]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Mar  1 05:00:19 np0005634532 nova_compute[255017]: INFO:__main__:Copying /var/lib/kolla/config_files/src/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Mar  1 05:00:19 np0005634532 nova_compute[255017]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Mar  1 05:00:19 np0005634532 nova_compute[255017]: INFO:__main__:Copying /var/lib/kolla/config_files/src/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Mar  1 05:00:19 np0005634532 nova_compute[255017]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Mar  1 05:00:19 np0005634532 nova_compute[255017]: INFO:__main__:Copying /var/lib/kolla/config_files/src/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Mar  1 05:00:19 np0005634532 nova_compute[255017]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Mar  1 05:00:19 np0005634532 nova_compute[255017]: INFO:__main__:Copying /var/lib/kolla/config_files/src/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Mar  1 05:00:19 np0005634532 nova_compute[255017]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Mar  1 05:00:19 np0005634532 nova_compute[255017]: INFO:__main__:Deleting /etc/ceph
Mar  1 05:00:19 np0005634532 nova_compute[255017]: INFO:__main__:Creating directory /etc/ceph
Mar  1 05:00:19 np0005634532 nova_compute[255017]: INFO:__main__:Setting permission for /etc/ceph
Mar  1 05:00:19 np0005634532 nova_compute[255017]: INFO:__main__:Copying /var/lib/kolla/config_files/src/ceph/ceph.conf to /etc/ceph/ceph.conf
Mar  1 05:00:19 np0005634532 nova_compute[255017]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Mar  1 05:00:19 np0005634532 nova_compute[255017]: INFO:__main__:Copying /var/lib/kolla/config_files/src/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Mar  1 05:00:19 np0005634532 nova_compute[255017]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Mar  1 05:00:19 np0005634532 nova_compute[255017]: INFO:__main__:Copying /var/lib/kolla/config_files/src/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Mar  1 05:00:19 np0005634532 nova_compute[255017]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Mar  1 05:00:19 np0005634532 nova_compute[255017]: INFO:__main__:Copying /var/lib/kolla/config_files/src/ssh-config to /var/lib/nova/.ssh/config
Mar  1 05:00:19 np0005634532 nova_compute[255017]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Mar  1 05:00:19 np0005634532 nova_compute[255017]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Mar  1 05:00:19 np0005634532 nova_compute[255017]: INFO:__main__:Copying /var/lib/kolla/config_files/src/run-on-host to /usr/sbin/iscsiadm
Mar  1 05:00:19 np0005634532 nova_compute[255017]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Mar  1 05:00:19 np0005634532 nova_compute[255017]: INFO:__main__:Writing out command to execute
Mar  1 05:00:19 np0005634532 nova_compute[255017]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Mar  1 05:00:19 np0005634532 nova_compute[255017]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Mar  1 05:00:19 np0005634532 nova_compute[255017]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Mar  1 05:00:19 np0005634532 nova_compute[255017]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Mar  1 05:00:19 np0005634532 nova_compute[255017]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Mar  1 05:00:19 np0005634532 nova_compute[255017]: ++ cat /run_command
Mar  1 05:00:19 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:00:19 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8640040e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:00:19 np0005634532 nova_compute[255017]: + CMD=nova-compute
Mar  1 05:00:19 np0005634532 nova_compute[255017]: + ARGS=
Mar  1 05:00:19 np0005634532 nova_compute[255017]: + sudo kolla_copy_cacerts
Mar  1 05:00:19 np0005634532 nova_compute[255017]: + [[ ! -n '' ]]
Mar  1 05:00:19 np0005634532 nova_compute[255017]: + . kolla_extend_start
Mar  1 05:00:19 np0005634532 nova_compute[255017]: Running command: 'nova-compute'
Mar  1 05:00:19 np0005634532 nova_compute[255017]: + echo 'Running command: '\''nova-compute'\'''
Mar  1 05:00:19 np0005634532 nova_compute[255017]: + umask 0022
Mar  1 05:00:19 np0005634532 nova_compute[255017]: + exec nova-compute
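[editor's note] The kolla_set_configs trace above (validate config.json, copy each source into place, write out the command, exec it) boils down to a loop like this sketch; field names follow the standard kolla config.json layout, and ownership/permission handling is elided:

    import json
    import shutil

    with open("/var/lib/kolla/config_files/config.json") as fh:
        cfg = json.load(fh)

    for item in cfg.get("config_files", []):
        shutil.copy(item["source"], item["dest"])  # e.g. 01-nova.conf
    print("Running command:", cfg["command"])      # then: exec nova-compute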
Mar  1 05:00:19 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Mar  1 05:00:19 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 05:00:19 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 05:00:19 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Mar  1 05:00:19 np0005634532 podman[255093]: 2026-03-01 10:00:19.487527541 +0000 UTC m=+0.036321004 container create 20710aa7211778dbb2220a9617953e55a422bf725e8fa3ae0c04295b192c27b8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_brahmagupta, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Mar  1 05:00:19 np0005634532 systemd[1]: Started libpod-conmon-20710aa7211778dbb2220a9617953e55a422bf725e8fa3ae0c04295b192c27b8.scope.
Mar  1 05:00:19 np0005634532 systemd[1]: Started libcrun container.
Mar  1 05:00:19 np0005634532 podman[255093]: 2026-03-01 10:00:19.544623074 +0000 UTC m=+0.093416557 container init 20710aa7211778dbb2220a9617953e55a422bf725e8fa3ae0c04295b192c27b8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_brahmagupta, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Mar  1 05:00:19 np0005634532 podman[255093]: 2026-03-01 10:00:19.551711329 +0000 UTC m=+0.100504792 container start 20710aa7211778dbb2220a9617953e55a422bf725e8fa3ae0c04295b192c27b8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_brahmagupta, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Mar  1 05:00:19 np0005634532 podman[255093]: 2026-03-01 10:00:19.556674191 +0000 UTC m=+0.105467674 container attach 20710aa7211778dbb2220a9617953e55a422bf725e8fa3ae0c04295b192c27b8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_brahmagupta, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Mar  1 05:00:19 np0005634532 systemd[1]: libpod-20710aa7211778dbb2220a9617953e55a422bf725e8fa3ae0c04295b192c27b8.scope: Deactivated successfully.
Mar  1 05:00:19 np0005634532 stoic_brahmagupta[255110]: 167 167
Mar  1 05:00:19 np0005634532 podman[255093]: 2026-03-01 10:00:19.562493814 +0000 UTC m=+0.111287547 container died 20710aa7211778dbb2220a9617953e55a422bf725e8fa3ae0c04295b192c27b8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_brahmagupta, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Mar  1 05:00:19 np0005634532 conmon[255110]: conmon 20710aa7211778dbb222 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-20710aa7211778dbb2220a9617953e55a422bf725e8fa3ae0c04295b192c27b8.scope/container/memory.events
Mar  1 05:00:19 np0005634532 podman[255093]: 2026-03-01 10:00:19.468652587 +0000 UTC m=+0.017446070 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Mar  1 05:00:19 np0005634532 systemd[1]: var-lib-containers-storage-overlay-fd2223639664911065367fe40913adc3f0676654ae95aedb0b2c9732f2f630db-merged.mount: Deactivated successfully.
Mar  1 05:00:19 np0005634532 podman[255093]: 2026-03-01 10:00:19.634833762 +0000 UTC m=+0.183627225 container remove 20710aa7211778dbb2220a9617953e55a422bf725e8fa3ae0c04295b192c27b8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_brahmagupta, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Mar  1 05:00:19 np0005634532 systemd[1]: libpod-conmon-20710aa7211778dbb2220a9617953e55a422bf725e8fa3ae0c04295b192c27b8.scope: Deactivated successfully.
Mar  1 05:00:19 np0005634532 podman[255132]: 2026-03-01 10:00:19.743287598 +0000 UTC m=+0.035826682 container create 3808bccc079de5117b13a2a8333bf27e01f3ea11da7ba2f776eb43ea78d34135 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_benz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True)
Mar  1 05:00:19 np0005634532 systemd[1]: Started libpod-conmon-3808bccc079de5117b13a2a8333bf27e01f3ea11da7ba2f776eb43ea78d34135.scope.
Mar  1 05:00:19 np0005634532 podman[255132]: 2026-03-01 10:00:19.726805142 +0000 UTC m=+0.019344226 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Mar  1 05:00:19 np0005634532 systemd[1]: Started libcrun container.
Mar  1 05:00:19 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d878bbc6bbc332d8fa38b3b4c3cd5df0324c3996c9beb8d85809856fffa60a0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Mar  1 05:00:19 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d878bbc6bbc332d8fa38b3b4c3cd5df0324c3996c9beb8d85809856fffa60a0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Mar  1 05:00:19 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d878bbc6bbc332d8fa38b3b4c3cd5df0324c3996c9beb8d85809856fffa60a0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Mar  1 05:00:19 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d878bbc6bbc332d8fa38b3b4c3cd5df0324c3996c9beb8d85809856fffa60a0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Mar  1 05:00:19 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d878bbc6bbc332d8fa38b3b4c3cd5df0324c3996c9beb8d85809856fffa60a0/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Mar  1 05:00:19 np0005634532 podman[255132]: 2026-03-01 10:00:19.846418792 +0000 UTC m=+0.138957886 container init 3808bccc079de5117b13a2a8333bf27e01f3ea11da7ba2f776eb43ea78d34135 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_benz, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Mar  1 05:00:19 np0005634532 podman[255132]: 2026-03-01 10:00:19.856180942 +0000 UTC m=+0.148720026 container start 3808bccc079de5117b13a2a8333bf27e01f3ea11da7ba2f776eb43ea78d34135 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_benz, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Mar  1 05:00:19 np0005634532 podman[255132]: 2026-03-01 10:00:19.861463382 +0000 UTC m=+0.154002466 container attach 3808bccc079de5117b13a2a8333bf27e01f3ea11da7ba2f776eb43ea78d34135 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_benz, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Mar  1 05:00:19 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:00:19 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff878003620 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:00:20 np0005634532 brave_benz[255149]: --> passed data devices: 0 physical, 1 LVM
Mar  1 05:00:20 np0005634532 brave_benz[255149]: --> All data devices are unavailable
Mar  1 05:00:20 np0005634532 systemd[1]: libpod-3808bccc079de5117b13a2a8333bf27e01f3ea11da7ba2f776eb43ea78d34135.scope: Deactivated successfully.
Mar  1 05:00:20 np0005634532 podman[255132]: 2026-03-01 10:00:20.237403083 +0000 UTC m=+0.529942177 container died 3808bccc079de5117b13a2a8333bf27e01f3ea11da7ba2f776eb43ea78d34135 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_benz, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_REF=squid)
Mar  1 05:00:20 np0005634532 systemd[1]: var-lib-containers-storage-overlay-2d878bbc6bbc332d8fa38b3b4c3cd5df0324c3996c9beb8d85809856fffa60a0-merged.mount: Deactivated successfully.
Mar  1 05:00:20 np0005634532 podman[255132]: 2026-03-01 10:00:20.275407487 +0000 UTC m=+0.567946591 container remove 3808bccc079de5117b13a2a8333bf27e01f3ea11da7ba2f776eb43ea78d34135 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_benz, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Mar  1 05:00:20 np0005634532 systemd[1]: libpod-conmon-3808bccc079de5117b13a2a8333bf27e01f3ea11da7ba2f776eb43ea78d34135.scope: Deactivated successfully.
Mar  1 05:00:20 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:00:20 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:00:20 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:00:20.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:00:20 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:00:20 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff888003270 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:00:20 np0005634532 python3.9[255349]: ansible-ansible.builtin.slurp Invoked with src=/var/lib/edpm-config/deployed_services.yaml
Mar  1 05:00:20 np0005634532 podman[255421]: 2026-03-01 10:00:20.748067715 +0000 UTC m=+0.037642136 container create e8aed88f3350965995ba220dcf1fd2b7d9fde75465703c827cb1016063e09fa0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_cray, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Mar  1 05:00:20 np0005634532 systemd[1]: Started libpod-conmon-e8aed88f3350965995ba220dcf1fd2b7d9fde75465703c827cb1016063e09fa0.scope.
Mar  1 05:00:20 np0005634532 systemd[1]: Started libcrun container.
Mar  1 05:00:20 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v528: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Mar  1 05:00:20 np0005634532 podman[255421]: 2026-03-01 10:00:20.810151271 +0000 UTC m=+0.099725712 container init e8aed88f3350965995ba220dcf1fd2b7d9fde75465703c827cb1016063e09fa0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_cray, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Mar  1 05:00:20 np0005634532 podman[255421]: 2026-03-01 10:00:20.815632426 +0000 UTC m=+0.105206847 container start e8aed88f3350965995ba220dcf1fd2b7d9fde75465703c827cb1016063e09fa0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_cray, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Mar  1 05:00:20 np0005634532 podman[255421]: 2026-03-01 10:00:20.818295971 +0000 UTC m=+0.107870382 container attach e8aed88f3350965995ba220dcf1fd2b7d9fde75465703c827cb1016063e09fa0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_cray, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Mar  1 05:00:20 np0005634532 stoic_cray[255438]: 167 167
Mar  1 05:00:20 np0005634532 systemd[1]: libpod-e8aed88f3350965995ba220dcf1fd2b7d9fde75465703c827cb1016063e09fa0.scope: Deactivated successfully.
Mar  1 05:00:20 np0005634532 conmon[255438]: conmon e8aed88f3350965995ba <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e8aed88f3350965995ba220dcf1fd2b7d9fde75465703c827cb1016063e09fa0.scope/container/memory.events
Mar  1 05:00:20 np0005634532 podman[255421]: 2026-03-01 10:00:20.820544066 +0000 UTC m=+0.110118487 container died e8aed88f3350965995ba220dcf1fd2b7d9fde75465703c827cb1016063e09fa0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_cray, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Mar  1 05:00:20 np0005634532 podman[255421]: 2026-03-01 10:00:20.733251891 +0000 UTC m=+0.022826342 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Mar  1 05:00:20 np0005634532 systemd[1]: var-lib-containers-storage-overlay-07fba6e35e741c1d12b6dee7477e3778cbee69295148205c571b68ddc2270c32-merged.mount: Deactivated successfully.
Mar  1 05:00:20 np0005634532 podman[255421]: 2026-03-01 10:00:20.85161408 +0000 UTC m=+0.141188501 container remove e8aed88f3350965995ba220dcf1fd2b7d9fde75465703c827cb1016063e09fa0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_cray, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Mar  1 05:00:20 np0005634532 systemd[1]: libpod-conmon-e8aed88f3350965995ba220dcf1fd2b7d9fde75465703c827cb1016063e09fa0.scope: Deactivated successfully.
Mar  1 05:00:20 np0005634532 podman[255462]: 2026-03-01 10:00:20.970500292 +0000 UTC m=+0.035720799 container create 2ace0b8f3ee80f89a81af0a42bb379e35a9f748f8f5c98eb92b15957146a2a6d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_shannon, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid)
Mar  1 05:00:21 np0005634532 systemd[1]: Started libpod-conmon-2ace0b8f3ee80f89a81af0a42bb379e35a9f748f8f5c98eb92b15957146a2a6d.scope.
Mar  1 05:00:21 np0005634532 systemd[1]: Started libcrun container.
Mar  1 05:00:21 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2190c089f6d86b39ad5ce2cedcbf074e5078d7ff36c8e4447a43118d5a021738/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Mar  1 05:00:21 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2190c089f6d86b39ad5ce2cedcbf074e5078d7ff36c8e4447a43118d5a021738/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Mar  1 05:00:21 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2190c089f6d86b39ad5ce2cedcbf074e5078d7ff36c8e4447a43118d5a021738/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Mar  1 05:00:21 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2190c089f6d86b39ad5ce2cedcbf074e5078d7ff36c8e4447a43118d5a021738/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Mar  1 05:00:21 np0005634532 podman[255462]: 2026-03-01 10:00:21.049228108 +0000 UTC m=+0.114448615 container init 2ace0b8f3ee80f89a81af0a42bb379e35a9f748f8f5c98eb92b15957146a2a6d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_shannon, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Mar  1 05:00:21 np0005634532 podman[255462]: 2026-03-01 10:00:20.957331249 +0000 UTC m=+0.022551746 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Mar  1 05:00:21 np0005634532 podman[255462]: 2026-03-01 10:00:21.053956554 +0000 UTC m=+0.119177041 container start 2ace0b8f3ee80f89a81af0a42bb379e35a9f748f8f5c98eb92b15957146a2a6d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_shannon, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Mar  1 05:00:21 np0005634532 podman[255462]: 2026-03-01 10:00:21.057226074 +0000 UTC m=+0.122446571 container attach 2ace0b8f3ee80f89a81af0a42bb379e35a9f748f8f5c98eb92b15957146a2a6d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_shannon, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True)
Mar  1 05:00:21 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:00:21 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:00:21 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:00:21.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:00:21 np0005634532 dazzling_shannon[255479]: {
Mar  1 05:00:21 np0005634532 dazzling_shannon[255479]:    "0": [
Mar  1 05:00:21 np0005634532 dazzling_shannon[255479]:        {
Mar  1 05:00:21 np0005634532 dazzling_shannon[255479]:            "devices": [
Mar  1 05:00:21 np0005634532 dazzling_shannon[255479]:                "/dev/loop3"
Mar  1 05:00:21 np0005634532 dazzling_shannon[255479]:            ],
Mar  1 05:00:21 np0005634532 dazzling_shannon[255479]:            "lv_name": "ceph_lv0",
Mar  1 05:00:21 np0005634532 dazzling_shannon[255479]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Mar  1 05:00:21 np0005634532 dazzling_shannon[255479]:            "lv_size": "21470642176",
Mar  1 05:00:21 np0005634532 dazzling_shannon[255479]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=0RN1y3-WDwf-vmRn-3Uec-9cdU-WGcX-8Z6LEG,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=437b1e74-f995-5d64-af1d-257ce01d77ab,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=e5da778e-73b7-4ea1-8a91-750fe3f6aa68,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Mar  1 05:00:21 np0005634532 dazzling_shannon[255479]:            "lv_uuid": "0RN1y3-WDwf-vmRn-3Uec-9cdU-WGcX-8Z6LEG",
Mar  1 05:00:21 np0005634532 dazzling_shannon[255479]:            "name": "ceph_lv0",
Mar  1 05:00:21 np0005634532 dazzling_shannon[255479]:            "path": "/dev/ceph_vg0/ceph_lv0",
Mar  1 05:00:21 np0005634532 dazzling_shannon[255479]:            "tags": {
Mar  1 05:00:21 np0005634532 dazzling_shannon[255479]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Mar  1 05:00:21 np0005634532 dazzling_shannon[255479]:                "ceph.block_uuid": "0RN1y3-WDwf-vmRn-3Uec-9cdU-WGcX-8Z6LEG",
Mar  1 05:00:21 np0005634532 dazzling_shannon[255479]:                "ceph.cephx_lockbox_secret": "",
Mar  1 05:00:21 np0005634532 dazzling_shannon[255479]:                "ceph.cluster_fsid": "437b1e74-f995-5d64-af1d-257ce01d77ab",
Mar  1 05:00:21 np0005634532 dazzling_shannon[255479]:                "ceph.cluster_name": "ceph",
Mar  1 05:00:21 np0005634532 dazzling_shannon[255479]:                "ceph.crush_device_class": "",
Mar  1 05:00:21 np0005634532 dazzling_shannon[255479]:                "ceph.encrypted": "0",
Mar  1 05:00:21 np0005634532 dazzling_shannon[255479]:                "ceph.osd_fsid": "e5da778e-73b7-4ea1-8a91-750fe3f6aa68",
Mar  1 05:00:21 np0005634532 dazzling_shannon[255479]:                "ceph.osd_id": "0",
Mar  1 05:00:21 np0005634532 dazzling_shannon[255479]:                "ceph.osdspec_affinity": "default_drive_group",
Mar  1 05:00:21 np0005634532 dazzling_shannon[255479]:                "ceph.type": "block",
Mar  1 05:00:21 np0005634532 dazzling_shannon[255479]:                "ceph.vdo": "0",
Mar  1 05:00:21 np0005634532 dazzling_shannon[255479]:                "ceph.with_tpm": "0"
Mar  1 05:00:21 np0005634532 dazzling_shannon[255479]:            },
Mar  1 05:00:21 np0005634532 dazzling_shannon[255479]:            "type": "block",
Mar  1 05:00:21 np0005634532 dazzling_shannon[255479]:            "vg_name": "ceph_vg0"
Mar  1 05:00:21 np0005634532 dazzling_shannon[255479]:        }
Mar  1 05:00:21 np0005634532 dazzling_shannon[255479]:    ]
Mar  1 05:00:21 np0005634532 dazzling_shannon[255479]: }
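[editor's note] The dazzling_shannon[...] block above is one JSON document (ceph-volume lvm list output) split across journal entries; stripping the syslog prefix and rejoining the remainders recovers it. The input filename is a placeholder for wherever this extract lives:

    import json
    import re

    PREFIX = re.compile(r"^.*?dazzling_shannon\[\d+\]: ")
    payload = []
    with open("messages.extract") as fh:  # placeholder path
        for raw in fh:
            m = PREFIX.match(raw)
            if m:
                payload.append(raw[m.end():].rstrip("\n"))

    osds = json.loads("\n".join(payload))
    print(osds["0"][0]["lv_path"])  # /dev/ceph_vg0/ceph_lv0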
Mar  1 05:00:21 np0005634532 systemd[1]: libpod-2ace0b8f3ee80f89a81af0a42bb379e35a9f748f8f5c98eb92b15957146a2a6d.scope: Deactivated successfully.
Mar  1 05:00:21 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:00:21 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff868002810 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:00:21 np0005634532 podman[255489]: 2026-03-01 10:00:21.373296383 +0000 UTC m=+0.028582393 container died 2ace0b8f3ee80f89a81af0a42bb379e35a9f748f8f5c98eb92b15957146a2a6d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_shannon, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True)
Mar  1 05:00:21 np0005634532 systemd[1]: var-lib-containers-storage-overlay-2190c089f6d86b39ad5ce2cedcbf074e5078d7ff36c8e4447a43118d5a021738-merged.mount: Deactivated successfully.
Mar  1 05:00:21 np0005634532 podman[255489]: 2026-03-01 10:00:21.405376172 +0000 UTC m=+0.060662162 container remove 2ace0b8f3ee80f89a81af0a42bb379e35a9f748f8f5c98eb92b15957146a2a6d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dazzling_shannon, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Mar  1 05:00:21 np0005634532 systemd[1]: libpod-conmon-2ace0b8f3ee80f89a81af0a42bb379e35a9f748f8f5c98eb92b15957146a2a6d.scope: Deactivated successfully.
Mar  1 05:00:21 np0005634532 podman[255678]: 2026-03-01 10:00:21.645134465 +0000 UTC m=+0.063172304 container health_status eba5e7c5bf1cb0694498c3b5d0f9b901dec36f2482ec40aef98fed1e15d9f18d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:bf1f44b6bfa655654f556ae3adca2582892e72550d916a5b0a8dbacb0e210bbe, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '08b9467e1b7e95537191fb2fa6825d176e65d7d6be048e9a0925db064969bbde-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:bf1f44b6bfa655654f556ae3adca2582892e72550d916a5b0a8dbacb0e210bbe', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Mar  1 05:00:21 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:00:21 np0005634532 python3.9[255682]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/deployed_services.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Mar  1 05:00:21 np0005634532 nova_compute[255017]: 2026-03-01 10:00:21.789 255021 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Mar  1 05:00:21 np0005634532 nova_compute[255017]: 2026-03-01 10:00:21.790 255021 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Mar  1 05:00:21 np0005634532 nova_compute[255017]: 2026-03-01 10:00:21.790 255021 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Mar  1 05:00:21 np0005634532 nova_compute[255017]: 2026-03-01 10:00:21.790 255021 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs#033[00m
Mar  1 05:00:21 np0005634532 podman[255783]: 2026-03-01 10:00:21.921929569 +0000 UTC m=+0.037314559 container create 9684a95aab2d925409983c318e2f54a7f7ff21f62e9a0550be9ffd33a8f42e1f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_boyd, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Mar  1 05:00:21 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:00:21 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8640040e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:00:21 np0005634532 systemd[1]: Started libpod-conmon-9684a95aab2d925409983c318e2f54a7f7ff21f62e9a0550be9ffd33a8f42e1f.scope.
Mar  1 05:00:21 np0005634532 nova_compute[255017]: 2026-03-01 10:00:21.957 255021 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Mar  1 05:00:21 np0005634532 nova_compute[255017]: 2026-03-01 10:00:21.975 255021 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.017s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Mar  1 05:00:21 np0005634532 nova_compute[255017]: 2026-03-01 10:00:21.975 255021 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473#033[00m
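The grep against /sbin/iscsiadm above is a capability probe, not a failure: the code checks whether the literal string node.session.scan was compiled into the binary, and exit status 1 (hence "failed. Not Retrying.") just means this iscsiadm predates manual-scan support. The same probe, reproduced:

    import subprocess

    # grep -F exits 0 if the fixed string occurs anywhere in the file
    # (even a binary), 1 if it does not; the log shows it returned 1.
    res = subprocess.run(
        ["grep", "-F", "node.session.scan", "/sbin/iscsiadm"],
        capture_output=True,
    )
    print("supported" if res.returncode == 0 else "not supported")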
Mar  1 05:00:21 np0005634532 systemd[1]: Started libcrun container.
Mar  1 05:00:21 np0005634532 podman[255783]: 2026-03-01 10:00:21.992485683 +0000 UTC m=+0.107870693 container init 9684a95aab2d925409983c318e2f54a7f7ff21f62e9a0550be9ffd33a8f42e1f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_boyd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Mar  1 05:00:21 np0005634532 podman[255783]: 2026-03-01 10:00:21.997905066 +0000 UTC m=+0.113290056 container start 9684a95aab2d925409983c318e2f54a7f7ff21f62e9a0550be9ffd33a8f42e1f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_boyd, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Mar  1 05:00:22 np0005634532 podman[255783]: 2026-03-01 10:00:22.001041493 +0000 UTC m=+0.116426493 container attach 9684a95aab2d925409983c318e2f54a7f7ff21f62e9a0550be9ffd33a8f42e1f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_boyd, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Mar  1 05:00:22 np0005634532 infallible_boyd[255838]: 167 167
Mar  1 05:00:22 np0005634532 systemd[1]: libpod-9684a95aab2d925409983c318e2f54a7f7ff21f62e9a0550be9ffd33a8f42e1f.scope: Deactivated successfully.
Mar  1 05:00:22 np0005634532 podman[255783]: 2026-03-01 10:00:22.003839642 +0000 UTC m=+0.119224632 container died 9684a95aab2d925409983c318e2f54a7f7ff21f62e9a0550be9ffd33a8f42e1f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_boyd, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid)
Mar  1 05:00:22 np0005634532 podman[255783]: 2026-03-01 10:00:21.908835627 +0000 UTC m=+0.024220637 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Mar  1 05:00:22 np0005634532 systemd[1]: var-lib-containers-storage-overlay-2257218d89b2d52c24b76315bc8de6d363c324e633a36e409c0fa09920cb20a2-merged.mount: Deactivated successfully.
Mar  1 05:00:22 np0005634532 podman[255783]: 2026-03-01 10:00:22.04728837 +0000 UTC m=+0.162673360 container remove 9684a95aab2d925409983c318e2f54a7f7ff21f62e9a0550be9ffd33a8f42e1f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_boyd, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True)
Mar  1 05:00:22 np0005634532 systemd[1]: libpod-conmon-9684a95aab2d925409983c318e2f54a7f7ff21f62e9a0550be9ffd33a8f42e1f.scope: Deactivated successfully.
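The whole create/start/attach/died/remove burst for "infallible_boyd" lasts well under a second, and its only output is the pair "167 167". That shape is consistent with cephadm probing the ceph uid and gid baked into the image, an inference rather than something the log states. A hypothetical by-hand reproduction with the image digest from the log:

    import subprocess

    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec")

    # Run stat inside the image to read the owner of /var/lib/ceph;
    # both the entrypoint override and the target path are assumptions.
    out = subprocess.run(
        ["podman", "run", "--rm", "--entrypoint", "stat", IMAGE,
         "-c", "%u %g", "/var/lib/ceph"],
        capture_output=True, text=True, check=True,
    ).stdout
    print(out.strip())   # "167 167" on this image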
Mar  1 05:00:22 np0005634532 podman[255918]: 2026-03-01 10:00:22.172285082 +0000 UTC m=+0.044566776 container create 5dcd68085a89ab26cfe2712187e43796638dba1b3ecabc48e9ddc30234dcb272 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_mirzakhani, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid)
Mar  1 05:00:22 np0005634532 systemd[1]: Started libpod-conmon-5dcd68085a89ab26cfe2712187e43796638dba1b3ecabc48e9ddc30234dcb272.scope.
Mar  1 05:00:22 np0005634532 python3.9[255912]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/deployed_services.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1772359221.3592176-3798-119992727316357/.source.yaml _original_basename=.f742t3qo follow=False checksum=b25ff0455b142e1a326a449c4c083c9fe126a52d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
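This copy and the ansible-ansible.legacy.stat call at 10:00:21 are the two halves of Ansible's idempotent file copy: checksum the destination first, transfer only on mismatch. Roughly, in Python (the destination path and sha1 are the values from the log; the staging path is hypothetical):

    import hashlib
    import shutil

    DEST = "/var/lib/edpm-config/deployed_services.yaml"
    SRC  = "/tmp/.source.yaml"   # hypothetical staging copy
    WANT = "b25ff0455b142e1a326a449c4c083c9fe126a52d"

    def sha1(path: str) -> str:
        with open(path, "rb") as f:
            return hashlib.sha1(f.read()).hexdigest()

    try:
        changed = sha1(DEST) != WANT
    except FileNotFoundError:
        changed = True          # destination absent: always copy
    if changed:
        shutil.copyfile(SRC, DEST)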
Mar  1 05:00:22 np0005634532 systemd[1]: Started libcrun container.
Mar  1 05:00:22 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d80cbe17658b1d9f7b640c07b3487e81188f47f24632e1cb91493d06cae04eeb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Mar  1 05:00:22 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d80cbe17658b1d9f7b640c07b3487e81188f47f24632e1cb91493d06cae04eeb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Mar  1 05:00:22 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d80cbe17658b1d9f7b640c07b3487e81188f47f24632e1cb91493d06cae04eeb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Mar  1 05:00:22 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d80cbe17658b1d9f7b640c07b3487e81188f47f24632e1cb91493d06cae04eeb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
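The four xfs notices above are informational: the filesystem backing /var/lib/containers was created without the XFS bigtime feature, so its inode timestamps top out at 2038-01-19 (0x7fffffff seconds since the epoch), and the kernel repeats the warning on each bind remount into the container. One way to check a mount for the feature, assuming xfsprogs is installed and that /var/lib/containers is the relevant mount point:

    import subprocess

    # xfs_info prints the superblock geometry; bigtime=1 means 64-bit
    # timestamps, bigtime=0 means the 2038 limit seen in the log.
    info = subprocess.run(
        ["xfs_info", "/var/lib/containers"],
        capture_output=True, text=True, check=True,
    ).stdout
    if "bigtime=1" in info:
        print("64-bit timestamps, no 2038 limit")
    else:
        print("32-bit timestamps, limited to 2038")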
Mar  1 05:00:22 np0005634532 podman[255918]: 2026-03-01 10:00:22.156251238 +0000 UTC m=+0.028532962 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Mar  1 05:00:22 np0005634532 podman[255918]: 2026-03-01 10:00:22.257560268 +0000 UTC m=+0.129841972 container init 5dcd68085a89ab26cfe2712187e43796638dba1b3ecabc48e9ddc30234dcb272 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_mirzakhani, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325)
Mar  1 05:00:22 np0005634532 podman[255918]: 2026-03-01 10:00:22.262540831 +0000 UTC m=+0.134822525 container start 5dcd68085a89ab26cfe2712187e43796638dba1b3ecabc48e9ddc30234dcb272 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_mirzakhani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Mar  1 05:00:22 np0005634532 podman[255918]: 2026-03-01 10:00:22.265722119 +0000 UTC m=+0.138003833 container attach 5dcd68085a89ab26cfe2712187e43796638dba1b3ecabc48e9ddc30234dcb272 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_mirzakhani, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Mar  1 05:00:22 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:00:22 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 05:00:22 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:00:22.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
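An anonymous "HEAD / HTTP/1.0" answered 200 in about a millisecond is the usual shape of a load-balancer health probe against radosgw; that is an inference, since the log only records the client address 192.168.122.102. Sending the same request by hand (host and port are hypothetical, as the beast line does not include the listening endpoint):

    import http.client

    RGW_HOST, RGW_PORT = "rgw.example.test", 8080   # assumed endpoint

    conn = http.client.HTTPConnection(RGW_HOST, RGW_PORT, timeout=5)
    conn.request("HEAD", "/")
    print(conn.getresponse().status)   # a healthy RGW answers 200
    conn.close()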
Mar  1 05:00:22 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:00:22 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff878003620 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.531 255021 INFO nova.virt.driver [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.676 255021 INFO nova.compute.provider_config [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.689 255021 DEBUG oslo_concurrency.lockutils [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.689 255021 DEBUG oslo_concurrency.lockutils [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.689 255021 DEBUG oslo_concurrency.lockutils [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
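The Acquiring/Acquired/Releasing triplet is oslo.concurrency's lock helper logging its enter and exit: oslo.service serializes startup through this process-local "singleton_lock" (oslo_concurrency.lock_path, dumped further down, only matters for external file locks). Equivalent minimal usage:

    from oslo_concurrency import lockutils

    # With lockutils logging at DEBUG, the same three lines appear
    # around this block.
    with lockutils.lock("singleton_lock"):
        pass   # nova holds the lock around service setup/registration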
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.690 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.690 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.690 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.690 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.690 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.690 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.690 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.691 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.691 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.691 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.691 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.691 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.691 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.691 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.692 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.692 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.692 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.692 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.692 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.692 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] console_host                   = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.692 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.693 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.693 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.693 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.693 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.693 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.693 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.694 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.694 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.694 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.694 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.694 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.694 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.694 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.695 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.695 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.695 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.695 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.695 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.695 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.696 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.696 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.696 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.696 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.696 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.697 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.697 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.697 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.697 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.697 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.697 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.697 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.698 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.698 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.698 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.698 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.698 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.698 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.698 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.699 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.699 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.699 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.699 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.699 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.699 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.699 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.700 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.700 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.700 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.700 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.700 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.700 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.700 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.701 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.701 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.701 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.701 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.701 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.701 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.701 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.702 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] my_block_storage_ip            = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.702 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] my_ip                          = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.702 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.702 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.702 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.703 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.703 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.703 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.703 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.703 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.703 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.704 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.704 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.704 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.704 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.704 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.704 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.705 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.705 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.705 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.705 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.705 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.705 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.705 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.706 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.706 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.706 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.706 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.706 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.706 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.706 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.706 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.707 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.707 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.707 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.707 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.707 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.707 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.707 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.708 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.708 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.708 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.708 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.708 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.708 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.708 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.709 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.709 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.709 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.709 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.709 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.709 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.709 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.710 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.710 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.710 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.710 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.710 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.710 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.710 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.711 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.711 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.711 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.711 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.711 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.711 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.711 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.712 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
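[editor's note] The entries above are emitted from cfg.py:2602 and cover options in oslo.config's DEFAULT namespace ("name = value"); everything that follows is emitted from cfg.py:2609 and covers group-scoped options ("group.name = value"). The whole dump is produced by oslo.config's ConfigOpts.log_opt_values(), which nova's service startup invokes at DEBUG level. A minimal, self-contained sketch of that mechanism (option names here are illustrative, not nova's full set):

    # Reproduce a dump like the one above with stock oslo.config.
    import logging

    from oslo_config import cfg

    logging.basicConfig(level=logging.DEBUG)
    LOG = logging.getLogger(__name__)

    conf = cfg.ConfigOpts()
    # DEFAULT-namespace option: logged first, as "name = value".
    conf.register_opts([cfg.BoolOpt('use_cow_images', default=True)])
    # Group-scoped option: logged afterwards as "group.name = value".
    conf.register_opts(
        [cfg.StrOpt('lock_path', default='/var/lib/nova/tmp')],
        group='oslo_concurrency')

    conf([], project='nova')          # parse (no CLI args, no config files)
    conf.log_opt_values(LOG, logging.DEBUG)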
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.712 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.712 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
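[editor's note] oslo_concurrency.lock_path (/var/lib/nova/tmp here) is the directory where oslo.concurrency creates its external, file-based locks; processes on the same host serialize on those files. A hedged sketch of that usage, with a made-up lock name and a throwaway path rather than the deployment's values:

    import os

    from oslo_concurrency import lockutils

    os.makedirs('/tmp/demo-locks', exist_ok=True)
    lockutils.set_defaults(lock_path='/tmp/demo-locks')

    @lockutils.synchronized('demo-lock', external=True)
    def critical_section():
        # Only one process on this host can be in here at a time; the
        # lock is backed by a file created under lock_path.
        print('holding the file lock')

    critical_section()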
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.712 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.712 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.712 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.712 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.713 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.713 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.713 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.713 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.713 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.713 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.714 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.714 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.714 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.714 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.714 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.714 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.714 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.715 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.715 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.715 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.715 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.715 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.715 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.715 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.716 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.716 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.716 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.716 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
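[editor's note] In the [api] group above, vendordata_providers is ['StaticJSON'] but vendordata_jsonfile_path is None, so the metadata service presumably has no static document to serve and guests should see an effectively empty vendor_data.json. For illustration only, a file that vendordata_jsonfile_path could point at (path and keys below are made up for the example):

    import json

    doc = {'msg': 'example vendordata', 'tier': 'dev'}
    with open('/tmp/vendordata.json', 'w') as f:
        json.dump(doc, f)
    # Guests would then fetch it from the metadata service at
    # http://169.254.169.254/openstack/latest/vendor_data.json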
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.716 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.716 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.716 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.717 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.717 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.717 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.717 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.717 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.717 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.718 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.718 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.718 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.718 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.718 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.719 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.719 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.719 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.719 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.719 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.720 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.720 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.720 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.720 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.720 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.720 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.720 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.721 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.721 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.721 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.721 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.721 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.721 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
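[editor's note] The [cache] group above selects the in-process oslo_cache.dict backend with a 600 s expiration; the memcache_* knobs only matter for memcached-family backends, and cache.backend_argument prints as **** because it is registered as a secret option. A minimal sketch of building a dogpile cache region from these options with oslo.cache (overrides stand in for a config file):

    from oslo_cache import core as cache
    from oslo_config import cfg

    conf = cfg.ConfigOpts()
    cache.configure(conf)                      # registers the [cache] opts
    conf([], project='demo')
    conf.set_override('backend', 'oslo_cache.dict', group='cache')
    conf.set_override('enabled', True, group='cache')

    region = cache.create_region()
    cache.configure_cache_region(conf, region)

    region.set('greeting', 'hello')
    print(region.get('greeting'))              # -> 'hello'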
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.721 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.722 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.722 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.722 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.722 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.722 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.722 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.722 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.723 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.723 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.723 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.723 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.723 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] cinder.os_region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.723 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.724 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
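[editor's note] cinder.catalog_info above is a colon-separated triple, <service_type>:<service_name>:<endpoint_type>; nova uses it to pick cinder's endpoint out of the keystone service catalog, with the region pinned by cinder.os_region_name (regionOne). Decomposed:

    catalog_info = 'volumev3:cinderv3:internalURL'
    service_type, service_name, interface = catalog_info.split(':')
    assert (service_type, service_name, interface) == \
        ('volumev3', 'cinderv3', 'internalURL')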
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.724 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.724 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.724 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.724 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.724 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.724 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.725 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.725 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.725 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.725 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.725 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.725 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
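[editor's note] In the [compute] group above, max_disk_devices_to_attach = -1 means no per-instance limit on attached disk devices, and vmdk_allowed_types whitelists which VMDK subtypes image processing will accept. A hedged sketch of the -1-means-unlimited semantics (the helper name is mine, not nova's):

    def can_attach(current_devices: int, max_disk_devices_to_attach: int) -> bool:
        # -1 is the documented sentinel for "unlimited".
        if max_disk_devices_to_attach == -1:
            return True
        return current_devices < max_disk_devices_to_attach

    assert can_attach(26, -1)       # unlimited, as configured above
    assert not can_attach(2, 2)     # a limit of 2 is already exhausted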
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.725 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.726 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.726 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.726 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.726 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.726 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.726 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.726 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.727 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.727 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.727 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.727 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.727 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.727 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.727 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.727 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.728 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.728 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.728 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.728 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.728 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.728 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.728 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.729 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
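[editor's note] The [cyborg] group above is all defaults: service_type accelerator, valid_interfaces ['internal', 'public'], and no endpoint_override, so any cyborg client would fall back to keystone catalog discovery. A sketch of that endpoint selection with keystoneauth; the auth URL and credentials are placeholders, not values from this log:

    from keystoneauth1 import adapter, session
    from keystoneauth1.identity import v3

    auth = v3.Password(auth_url='http://keystone.example:5000/v3',
                       username='nova', password='secret',
                       project_name='service',
                       user_domain_name='Default',
                       project_domain_name='Default')
    sess = session.Session(auth=auth)
    cyborg = adapter.Adapter(session=sess,
                             service_type='accelerator',
                             interface=['internal', 'public'])
    # cyborg.get('/v2/device_profiles') would now hit the first catalog
    # endpoint matching the service type and interface preference.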
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.729 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.729 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.729 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.729 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.729 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.729 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.730 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.730 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.730 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.730 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.730 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.730 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.730 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.731 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.731 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.731 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.731 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.731 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.731 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.731 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
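[editor's note] database.connection and database.slave_connection print as **** above because they are registered with secret=True; log_opt_values() masks secret options so credentials never reach the log. A self-contained demonstration (the URL below is a placeholder):

    import logging

    from oslo_config import cfg

    logging.basicConfig(level=logging.DEBUG)
    LOG = logging.getLogger(__name__)

    conf = cfg.ConfigOpts()
    conf.register_opts(
        [cfg.StrOpt('connection', secret=True,
                    default='mysql+pymysql://nova:hunter2@db/nova')],
        group='database')
    conf([], project='demo')
    conf.log_opt_values(LOG, logging.DEBUG)   # value is printed as ****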
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.732 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.732 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.732 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.732 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.732 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.732 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.732 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.732 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.733 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.733 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.733 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.733 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.733 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.733 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.733 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.734 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.734 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.734 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.734 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.734 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
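[editor's note] [api_database] mirrors [database] but points at nova's separate API database; on a compute node these settings are largely inert, since nova-compute goes through the conductor's RPC interface rather than either database directly. The masked connection values are SQLAlchemy URLs; an illustrative shape, with every component a placeholder rather than this deployment's values:

    from sqlalchemy.engine import URL

    url = URL.create('mysql+pymysql', username='nova_api', password='secret',
                     host='db.example', database='nova_api')
    # render_as_string(hide_password=True) masks the credential, much like
    # oslo.config's secret-option handling above (SQLAlchemy 1.4+ API).
    print(url.render_as_string(hide_password=True))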
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.734 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.734 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.735 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.735 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.735 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.735 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.735 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.735 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.735 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.736 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.736 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.736 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.736 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.736 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.736 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.737 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.737 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.737 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.737 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.737 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.738 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.738 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.738 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.738 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.738 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.738 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.738 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.739 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.739 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.739 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.739 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.739 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.739 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.739 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
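[editor's note] The [glance] group above restricts endpoint selection to the internal interface in regionOne and sets num_retries = 3, which as I read the option means a failed image transfer is retried up to three times after the initial attempt. A hedged sketch of that retry semantics only; the download() stub stands in for a glance image fetch and is made up:

    import time

    def with_retries(func, num_retries=3, delay=1.0):
        last_exc = None
        for attempt in range(1 + num_retries):   # first try + num_retries
            try:
                return func()
            except IOError as exc:
                last_exc = exc
                time.sleep(delay)
        raise last_exc

    def download():
        raise IOError('image service unavailable')

    try:
        with_retries(download, num_retries=3)
    except IOError as exc:
        print('gave up after 4 attempts:', exc)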
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.740 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.740 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.740 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.740 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.740 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.740 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.741 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.741 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.741 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.741 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.742 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.742 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.742 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.742 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.742 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.743 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.743 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.743 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.743 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.743 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.744 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.744 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.744 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.744 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.744 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.744 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.745 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
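[annotation] The image_cache.* values are all in seconds: the cache manager pass runs every 2400 s (40 min), unused resized bases become removable after 3600 s (1 h), and unused originals after 86400 s (24 h). A sketch of how such values flow in from a config file; the temporary file stands in for /etc/nova/nova.conf:

    import tempfile
    import textwrap

    from oslo_config import cfg

    INI = textwrap.dedent("""
        [image_cache]
        manager_interval = 2400
        remove_unused_base_images = True
        remove_unused_original_minimum_age_seconds = 86400
    """)

    with tempfile.NamedTemporaryFile('w', suffix='.conf', delete=False) as f:
        f.write(INI)

    conf = cfg.ConfigOpts()
    conf.register_opts(
        [
            cfg.IntOpt('manager_interval', default=2400),
            cfg.BoolOpt('remove_unused_base_images', default=True),
            cfg.IntOpt('remove_unused_original_minimum_age_seconds',
                       default=86400),
        ],
        group='image_cache',
    )
    conf(args=[], default_config_files=[f.name])

    assert conf.image_cache.manager_interval == 2400   # i.e. every 40 minutes
    assert conf.image_cache.remove_unused_original_minimum_age_seconds == 86400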
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.745 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.745 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.745 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.745 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.745 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.746 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.746 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.746 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.746 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.746 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.746 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.746 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.746 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.747 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.747 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.747 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.747 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.747 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.747 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.747 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.748 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.748 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.748 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.748 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.748 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.748 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
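[annotation] ironic.valid_interfaces = ['internal', 'public'] is a list-valued option (cfg.ListOpt), parsed from a comma-separated string in the config file. A sketch with stock oslo.config, option name as dumped above:

    from oslo_config import cfg

    conf = cfg.ConfigOpts()
    conf.register_opts(
        [cfg.ListOpt('valid_interfaces', default=['internal', 'public'])],
        group='ironic',
    )
    conf(args=[])

    # The nova.conf spelling of the dumped value would be:
    #   [ironic]
    #   valid_interfaces = internal,public
    assert conf.ironic.valid_interfaces == ['internal', 'public']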
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.748 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.749 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
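[annotation] key_manager.fixed_key prints as **** because oslo.config masks any option registered with secret=True when log_opt_values writes the dump; every non-secret value is shown verbatim. A sketch of the masking (the override value here is made up):

    import logging

    from oslo_config import cfg

    logging.basicConfig(level=logging.DEBUG)
    LOG = logging.getLogger(__name__)

    conf = cfg.ConfigOpts()
    conf.register_opts([cfg.StrOpt('fixed_key', secret=True)],
                       group='key_manager')
    conf(args=[])
    conf.set_override('fixed_key', 'deadbeef' * 8, group='key_manager')

    conf.log_opt_values(LOG, logging.DEBUG)   # key_manager.fixed_key = ****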
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.749 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.749 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.749 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.749 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.749 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] barbican.barbican_region_name  = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.749 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.750 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.750 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.750 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.750 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.750 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.750 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.750 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.751 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.751 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.751 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.751 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.751 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.751 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.751 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.752 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.752 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.752 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.752 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.752 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.752 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.753 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.753 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.753 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.753 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.753 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.753 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.754 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.754 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.754 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.754 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.754 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.754 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.754 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.755 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.755 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.755 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
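[annotation] The vault.* group is the option set for the Vault key-manager backend (castellan's, on a stock deployment), idle here since key_manager.backend = barbican; vault_url, kv_mountpoint and kv_version all sit at their defaults. A sketch of adjusting such a group programmatically with stock oslo.config calls; the endpoint URL is invented:

    from oslo_config import cfg

    conf = cfg.ConfigOpts()
    conf.register_opts(
        [
            cfg.StrOpt('vault_url', default='http://127.0.0.1:8200'),
            cfg.StrOpt('kv_mountpoint', default='secret'),
            cfg.IntOpt('kv_version', default=2),
        ],
        group='vault',
    )
    conf(args=[])

    # set_override wins over both the default and any config-file value:
    conf.set_override('vault_url', 'https://vault.example.net:8200',
                      group='vault')
    assert conf.vault.vault_url == 'https://vault.example.net:8200'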
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.755 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.755 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.755 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.755 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.756 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.756 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.756 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.756 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.756 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.756 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.756 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.756 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.757 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.757 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.757 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.757 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.757 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.757 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.758 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
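[annotation] The keystone.* list, like glance.* and ironic.* above, matches the standard keystoneauth1 session options (cafile, certfile, keyfile, insecure, timeout, collect_timing, split_loggers) plus adapter options (service_type, valid_interfaces, region_name, endpoint_override, the *_retries pairs, version bounds), registered once per service the compute node talks to. A sketch using keystoneauth1's loading helpers, group name chosen to match the dump; the per-service defaults are nova's, not keystoneauth1's:

    from keystoneauth1 import loading as ks_loading
    from oslo_config import cfg

    conf = cfg.ConfigOpts()
    ks_loading.register_session_conf_options(conf, 'keystone')
    ks_loading.register_adapter_conf_options(conf, 'keystone')
    conf(args=[])

    # All session/adapter knobs are now addressable under the group;
    # unset options read back as None, exactly as dumped above.
    print(conf.keystone.service_type)      # None here; nova defaults 'identity'
    print(conf.keystone.valid_interfaces)  # None here; ['internal', 'public'] above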
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.758 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.758 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.758 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.758 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.758 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.759 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.759 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.759 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.759 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.759 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.759 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.760 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.760 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.760 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.760 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.760 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.760 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.760 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.761 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.761 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.761 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.761 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.761 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.761 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.761 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.762 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.762 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.762 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.762 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.762 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.762 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.762 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.763 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.763 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.763 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.763 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.763 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.763 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.764 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.764 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.764 255021 WARNING oslo_config.cfg [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Mar  1 05:00:22 np0005634532 nova_compute[255017]: live_migration_uri is deprecated for removal in favor of two other options that
Mar  1 05:00:22 np0005634532 nova_compute[255017]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Mar  1 05:00:22 np0005634532 nova_compute[255017]: and ``live_migration_inbound_addr`` respectively.
Mar  1 05:00:22 np0005634532 nova_compute[255017]: ).  Its value may be silently ignored in the future.#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.764 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
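[annotation] The multi-line WARNING above is oslo.config's standard deprecation notice, raised because nova.conf on this node still sets libvirt.live_migration_uri; the two replacements it names, live_migration_scheme and live_migration_inbound_addr, are both still None in this dump. The notice comes from registering the option as deprecated, roughly as below (reason text quoted from the warning itself):

    from oslo_config import cfg

    live_migration_uri = cfg.StrOpt(
        'live_migration_uri',
        deprecated_for_removal=True,
        deprecated_reason=(
            'live_migration_uri is deprecated for removal in favor of two '
            'other options that allow to change live migration scheme and '
            'target URI: ``live_migration_scheme`` and '
            '``live_migration_inbound_addr`` respectively.'
        ),
    )

    conf = cfg.ConfigOpts()
    conf.register_opts([live_migration_uri], group='libvirt')
    conf(args=[])
    # oslo.config logs the 'Deprecated: Option "live_migration_uri" from
    # group "libvirt" is deprecated for removal (...)' warning once the
    # deprecated option is actually given a value, as nova.conf does here.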
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.764 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.765 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.765 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.765 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.765 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.765 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.765 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.765 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.766 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.766 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.766 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.766 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.766 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.766 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.767 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.767 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.767 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.767 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] libvirt.rbd_secret_uuid        = 437b1e74-f995-5d64-af1d-257ce01d77ab log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.767 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.767 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.767 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.767 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.768 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.768 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.768 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.768 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.768 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.768 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.769 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.769 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.769 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.769 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.769 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.769 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.770 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.770 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.770 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.770 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.770 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.770 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.770 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.771 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.771 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.771 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.771 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.771 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.771 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.772 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.772 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.772 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.772 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.772 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.772 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.773 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.773 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.773 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.773 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.773 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.773 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.774 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.774 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.774 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.774 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.774 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.774 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.774 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.775 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.775 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.775 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.775 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.775 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.775 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.776 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.776 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.776 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.776 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.776 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.776 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.777 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.777 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.777 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.777 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.777 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.777 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.777 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.778 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.778 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.778 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.778 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.778 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.778 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.778 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.779 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.779 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.779 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.779 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.779 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.779 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.779 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.780 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.780 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.780 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.780 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.780 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.780 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.780 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.781 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.781 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.781 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.781 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.781 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.781 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.781 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.781 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.782 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.782 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.782 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.782 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.782 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.782 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.782 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.783 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.783 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.783 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.783 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.783 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.783 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.783 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.784 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.784 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.784 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.784 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.784 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.784 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.784 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.785 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.785 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.785 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.785 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.785 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.786 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.786 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.786 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.786 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.786 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.786 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.786 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.787 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.787 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.787 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.787 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.787 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.787 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.787 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.788 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.788 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.788 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.788 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.788 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.788 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.788 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.789 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.789 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.789 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.789 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.789 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.789 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.789 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.789 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.790 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.790 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.790 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.790 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.790 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.790 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.790 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.792 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.792 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.793 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.793 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.794 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.794 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.794 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.795 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.795 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.795 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.796 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.796 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.796 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.797 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.797 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.797 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.798 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.798 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.798 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.799 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.799 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.800 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:22 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v529: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 B/s wr, 0 op/s
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.800 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.801 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.801 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.802 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.802 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.802 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.803 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.803 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.803 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.803 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.804 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.804 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.804 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.805 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.805 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.805 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.806 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.806 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.806 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.807 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.807 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.808 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.808 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.808 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.809 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.809 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.809 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.810 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.810 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.810 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.810 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.811 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.811 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.811 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.812 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.812 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.812 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.813 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.813 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.813 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.814 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.814 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.814 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.814 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.815 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.815 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.815 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.816 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.816 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.817 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.817 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.817 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.818 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.818 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] vnc.server_proxyclient_address = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.818 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.819 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.819 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
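[Editorial sketch] The vnc.* values above are the effective [vnc] settings for this host: the console is enabled, the instance-side VNC server listens on all addresses (::0), and proxy traffic is directed to 192.168.122.100 through the noVNC proxy URL shown. As a minimal nova.conf fragment, every value below copied from the dump itself (none of these are asserted defaults):

    [vnc]
    enabled = true
    auth_schemes = none
    novncproxy_base_url = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html
    server_listen = ::0
    server_proxyclient_address = 192.168.122.100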
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.819 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.820 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.820 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.820 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.820 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.821 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.821 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.821 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.822 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.822 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.822 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.823 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.823 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.823 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.823 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.824 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.824 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.825 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.825 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.825 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.826 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.826 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.826 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.827 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.827 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.827 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.828 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 lvm[256034]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.828 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 lvm[256034]: VG ceph_vg0 finished
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.828 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.829 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.829 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.829 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.829 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.830 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.830 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.831 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.831 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.831 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.832 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.832 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.832 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.833 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.833 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.833 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.834 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.834 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.834 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.835 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.835 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.835 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.835 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.836 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.836 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.836 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.837 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.837 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.837 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.838 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.838 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.838 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.839 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.839 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.839 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.840 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.840 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.840 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.840 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.840 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.841 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.841 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.841 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.841 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.841 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.842 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.842 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.842 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.842 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.842 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.843 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.843 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.843 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
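[Editorial sketch] Of the oslo_messaging_rabbit options above, the deployment-specific choices are durable queues (amqp_durable_queues = True) backed by RabbitMQ quorum queues (rabbit_quorum_queue = True), with the quorum delivery and memory limits left at 0, i.e. unlimited. Written as the corresponding nova.conf fragment, values again copied from the dump:

    [oslo_messaging_rabbit]
    amqp_durable_queues = true
    rabbit_quorum_queue = true
    rabbit_quorum_delivery_limit = 0
    rabbit_quorum_max_memory_bytes = 0
    rabbit_quorum_max_memory_length = 0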
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.843 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.844 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.844 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.844 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.844 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.844 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.845 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.845 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.845 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.845 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.845 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.846 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.846 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.846 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.846 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.846 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.847 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.847 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.847 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.847 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.848 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.848 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.848 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.848 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.848 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.849 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.849 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.849 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.849 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.849 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.850 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.850 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.850 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.850 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.851 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.851 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.851 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.851 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.851 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.851 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.852 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.852 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.852 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.852 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.852 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.853 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.853 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.853 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.853 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.853 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.854 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.854 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.854 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.854 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.854 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.855 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.855 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.855 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.855 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.856 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.856 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.856 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.856 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.856 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.857 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.857 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.857 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.857 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.857 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.858 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.858 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.858 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.858 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.858 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.859 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.860 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.861 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.861 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.861 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.863 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.863 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.863 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.863 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.863 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.864 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.864 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.864 255021 DEBUG oslo_service.service [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
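[Editorial sketch] Everything between the two asterisk banners is emitted by oslo.config's ConfigOpts.log_opt_values() — the cfg.py:2609 call site named on each line — which oslo.service invokes at startup when debug logging is enabled; options registered with secret=True (passwords, transport URLs) are masked as ****. A self-contained Python sketch of the same mechanism, using a small hypothetical option set rather than nova's real one:

    import logging

    from oslo_config import cfg

    logging.basicConfig(level=logging.DEBUG)
    LOG = logging.getLogger(__name__)

    CONF = cfg.CONF
    CONF.register_opts(
        [
            cfg.BoolOpt('enabled', default=True),
            cfg.PortOpt('vnc_port', default=5900),
            cfg.StrOpt('host_password', secret=True),  # printed as ****
        ],
        group='vnc',
    )

    CONF([])  # parse with no CLI args so option values become readable

    # One DEBUG line per registered option, bracketed by asterisk rows --
    # the same framing visible in the journal above.
    CONF.log_opt_values(LOG, logging.DEBUG)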
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.865 255021 INFO nova.service [-] Starting compute node (version 27.5.2-0.20260220085704.5cfeecb.el9)#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.878 255021 DEBUG nova.virt.libvirt.host [None req-d7d4f2c5-95eb-450c-a544-456b3b4ebee9 - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.879 255021 DEBUG nova.virt.libvirt.host [None req-d7d4f2c5-95eb-450c-a544-456b3b4ebee9 - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.879 255021 DEBUG nova.virt.libvirt.host [None req-d7d4f2c5-95eb-450c-a544-456b3b4ebee9 - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620#033[00m
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.879 255021 DEBUG nova.virt.libvirt.host [None req-d7d4f2c5-95eb-450c-a544-456b3b4ebee9 - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503#033[00m
Mar  1 05:00:22 np0005634532 gallant_mirzakhani[255936]: {}
Mar  1 05:00:22 np0005634532 systemd[1]: Starting libvirt QEMU daemon...
Mar  1 05:00:22 np0005634532 systemd[1]: libpod-5dcd68085a89ab26cfe2712187e43796638dba1b3ecabc48e9ddc30234dcb272.scope: Deactivated successfully.
Mar  1 05:00:22 np0005634532 conmon[255936]: conmon 5dcd68085a89ab26cfe2 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-5dcd68085a89ab26cfe2712187e43796638dba1b3ecabc48e9ddc30234dcb272.scope/container/memory.events
Mar  1 05:00:22 np0005634532 podman[255918]: 2026-03-01 10:00:22.915368816 +0000 UTC m=+0.787650510 container died 5dcd68085a89ab26cfe2712187e43796638dba1b3ecabc48e9ddc30234dcb272 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_mirzakhani, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Mar  1 05:00:22 np0005634532 systemd[1]: Started libvirt QEMU daemon.
Mar  1 05:00:22 np0005634532 systemd[1]: var-lib-containers-storage-overlay-d80cbe17658b1d9f7b640c07b3487e81188f47f24632e1cb91493d06cae04eeb-merged.mount: Deactivated successfully.
Mar  1 05:00:22 np0005634532 podman[255918]: 2026-03-01 10:00:22.957362689 +0000 UTC m=+0.829644393 container remove 5dcd68085a89ab26cfe2712187e43796638dba1b3ecabc48e9ddc30234dcb272 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_mirzakhani, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.957 255021 DEBUG nova.virt.libvirt.host [None req-d7d4f2c5-95eb-450c-a544-456b3b4ebee9 - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7f58cf5e2040> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.964 255021 DEBUG nova.virt.libvirt.host [None req-d7d4f2c5-95eb-450c-a544-456b3b4ebee9 - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7f58cf5e2040> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.965 255021 INFO nova.virt.libvirt.driver [None req-d7d4f2c5-95eb-450c-a544-456b3b4ebee9 - - - - - -] Connection event '1' reason 'None'
Mar  1 05:00:22 np0005634532 systemd[1]: libpod-conmon-5dcd68085a89ab26cfe2712187e43796638dba1b3ecabc48e9ddc30234dcb272.scope: Deactivated successfully.
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.983 255021 WARNING nova.virt.libvirt.driver [None req-d7d4f2c5-95eb-450c-a544-456b3b4ebee9 - - - - - -] Cannot update service status on host "compute-0.ctlplane.example.com" since it is not registered.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.
Mar  1 05:00:22 np0005634532 nova_compute[255017]: 2026-03-01 10:00:22.983 255021 DEBUG nova.virt.libvirt.volume.mount [None req-d7d4f2c5-95eb-450c-a544-456b3b4ebee9 - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130
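[editor's sketch] The ComputeHostNotFound WARNING above is normal on a first start: the compute service record for compute-0.ctlplane.example.com does not exist in the cell database yet, so the driver cannot update its status; the record appears once service registration completes. One way to confirm registration afterwards, sketched with openstacksdk (the cloud name 'overcloud' is an assumption, not taken from this log):

    import openstack

    conn = openstack.connect(cloud='overcloud')  # assumed clouds.yaml entry
    for svc in conn.compute.services():
        # After registration, the host from the warning should be listed
        # with binary 'nova-compute' and state 'up'.
        print(svc.host, svc.binary, svc.status, svc.state)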
Mar  1 05:00:23 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Mar  1 05:00:23 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 05:00:23 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Mar  1 05:00:23 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 05:00:23 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:00:23 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:00:23 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:00:23.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:00:23 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:00:23 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff888003f80 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:00:23 np0005634532 python3.9[256249]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner_healthcheck.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
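[editor's sketch] The ansible task above stats a systemd unit file and asks for a sha1 checksum. A rough local equivalent of what ansible.builtin.stat does with those arguments (path, follow=False and checksum_algorithm=sha1 are from the log line; the rest is a sketch, and unlike the module, which reports exists=False, this raises FileNotFoundError if the path is absent):

    import hashlib
    import os

    path = '/etc/systemd/system/edpm_nova_nvme_cleaner_healthcheck.service'
    st = os.stat(path, follow_symlinks=False)  # follow=False in the task
    with open(path, 'rb') as f:
        sha1 = hashlib.sha1(f.read()).hexdigest()  # get_checksum=True
    print(st.st_size, oct(st.st_mode), sha1)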
Mar  1 05:00:23 np0005634532 nova_compute[255017]: 2026-03-01 10:00:23.770 255021 INFO nova.virt.libvirt.host [None req-d7d4f2c5-95eb-450c-a544-456b3b4ebee9 - - - - - -] Libvirt host capabilities <capabilities>
Mar  1 05:00:23 np0005634532 nova_compute[255017]: 
Mar  1 05:00:23 np0005634532 nova_compute[255017]:  <host>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:    <uuid>6160888c-43c9-4b54-bedd-c53838a90ca3</uuid>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:    <cpu>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <arch>x86_64</arch>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model>EPYC-Rome-v4</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <vendor>AMD</vendor>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <microcode version='16777317'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <signature family='23' model='49' stepping='0'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <maxphysaddr mode='emulate' bits='40'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <feature name='x2apic'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <feature name='tsc-deadline'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <feature name='osxsave'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <feature name='hypervisor'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <feature name='tsc_adjust'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <feature name='spec-ctrl'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <feature name='stibp'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <feature name='arch-capabilities'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <feature name='ssbd'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <feature name='cmp_legacy'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <feature name='topoext'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <feature name='virt-ssbd'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <feature name='lbrv'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <feature name='tsc-scale'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <feature name='vmcb-clean'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <feature name='pause-filter'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <feature name='pfthreshold'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <feature name='svme-addr-chk'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <feature name='rdctl-no'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <feature name='skip-l1dfl-vmentry'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <feature name='mds-no'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <feature name='pschange-mc-no'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <pages unit='KiB' size='4'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <pages unit='KiB' size='2048'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <pages unit='KiB' size='1048576'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:    </cpu>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:    <power_management>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <suspend_mem/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:    </power_management>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:    <iommu support='no'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:    <migration_features>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <live/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <uri_transports>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <uri_transport>tcp</uri_transport>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <uri_transport>rdma</uri_transport>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </uri_transports>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:    </migration_features>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:    <topology>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <cells num='1'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <cell id='0'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:          <memory unit='KiB'>7864280</memory>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:          <pages unit='KiB' size='4'>1966070</pages>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:          <pages unit='KiB' size='2048'>0</pages>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:          <pages unit='KiB' size='1048576'>0</pages>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:          <distances>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:            <sibling id='0' value='10'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:          </distances>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:          <cpus num='8'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:            <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:            <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:            <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:            <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:            <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:            <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:            <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:            <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:          </cpus>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        </cell>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </cells>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:    </topology>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:    <cache>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:    </cache>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:    <secmodel>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model>selinux</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <doi>0</doi>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:    </secmodel>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:    <secmodel>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model>dac</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <doi>0</doi>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <baselabel type='kvm'>+107:+107</baselabel>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <baselabel type='qemu'>+107:+107</baselabel>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:    </secmodel>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:  </host>
Mar  1 05:00:23 np0005634532 nova_compute[255017]: 
Mar  1 05:00:23 np0005634532 nova_compute[255017]:  <guest>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:    <os_type>hvm</os_type>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:    <arch name='i686'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <wordsize>32</wordsize>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <domain type='qemu'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <domain type='kvm'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:    </arch>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:    <features>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <pae/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <nonpae/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <acpi default='on' toggle='yes'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <apic default='on' toggle='no'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <cpuselection/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <deviceboot/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <disksnapshot default='on' toggle='no'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <externalSnapshot/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:    </features>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:  </guest>
Mar  1 05:00:23 np0005634532 nova_compute[255017]: 
Mar  1 05:00:23 np0005634532 nova_compute[255017]:  <guest>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:    <os_type>hvm</os_type>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:    <arch name='x86_64'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <wordsize>64</wordsize>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <domain type='qemu'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <domain type='kvm'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:    </arch>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:    <features>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <acpi default='on' toggle='yes'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <apic default='on' toggle='no'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <cpuselection/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <deviceboot/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <disksnapshot default='on' toggle='no'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <externalSnapshot/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:    </features>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:  </guest>
Mar  1 05:00:23 np0005634532 nova_compute[255017]: 
Mar  1 05:00:23 np0005634532 nova_compute[255017]: </capabilities>
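[editor's sketch] The <capabilities> document dumped above is plain XML and can be summarized with the Python standard library alone. A sketch that extracts the host CPU fields visible in the dump (caps_xml would hold the XML text, e.g. the return value of getCapabilities() from the earlier sketch):

    import xml.etree.ElementTree as ET

    def summarize_host_cpu(caps_xml: str) -> None:
        root = ET.fromstring(caps_xml)
        cpu = root.find('./host/cpu')
        print('arch: ', cpu.findtext('arch'))    # x86_64 in the dump above
        print('model:', cpu.findtext('model'))   # EPYC-Rome-v4
        print('features:', sorted(f.get('name') for f in cpu.findall('feature')))
        # Page sizes advertised above: 4 KiB, 2048 KiB (2 MiB), 1048576 KiB (1 GiB)
        print('pages KiB:', [p.get('size') for p in cpu.findall('pages')])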
Mar  1 05:00:23 np0005634532 nova_compute[255017]: 2026-03-01 10:00:23.781 255021 DEBUG nova.virt.libvirt.host [None req-d7d4f2c5-95eb-450c-a544-456b3b4ebee9 - - - - - -] Getting domain capabilities for i686 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Mar  1 05:00:23 np0005634532 nova_compute[255017]: 2026-03-01 10:00:23.803 255021 DEBUG nova.virt.libvirt.host [None req-d7d4f2c5-95eb-450c-a544-456b3b4ebee9 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Mar  1 05:00:23 np0005634532 nova_compute[255017]: <domainCapabilities>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:  <path>/usr/libexec/qemu-kvm</path>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:  <domain>kvm</domain>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:  <machine>pc-i440fx-rhel7.6.0</machine>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:  <arch>i686</arch>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:  <vcpu max='240'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:  <iothreads supported='yes'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:  <os supported='yes'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:    <enum name='firmware'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:    <loader supported='yes'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <enum name='type'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <value>rom</value>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <value>pflash</value>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </enum>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <enum name='readonly'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <value>yes</value>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <value>no</value>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </enum>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <enum name='secure'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <value>no</value>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </enum>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:    </loader>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:  </os>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:  <cpu>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:    <mode name='host-passthrough' supported='yes'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <enum name='hostPassthroughMigratable'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <value>on</value>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <value>off</value>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </enum>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:    </mode>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:    <mode name='maximum' supported='yes'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <enum name='maximumMigratable'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <value>on</value>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <value>off</value>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </enum>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:    </mode>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:    <mode name='host-model' supported='yes'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model fallback='forbid'>EPYC-Rome</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <vendor>AMD</vendor>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <maxphysaddr mode='passthrough' limit='40'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <feature policy='require' name='x2apic'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <feature policy='require' name='tsc-deadline'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <feature policy='require' name='hypervisor'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <feature policy='require' name='tsc_adjust'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <feature policy='require' name='spec-ctrl'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <feature policy='require' name='stibp'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <feature policy='require' name='ssbd'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <feature policy='require' name='cmp_legacy'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <feature policy='require' name='overflow-recov'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <feature policy='require' name='succor'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <feature policy='require' name='ibrs'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <feature policy='require' name='amd-ssbd'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <feature policy='require' name='virt-ssbd'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <feature policy='require' name='lbrv'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <feature policy='require' name='tsc-scale'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <feature policy='require' name='vmcb-clean'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <feature policy='require' name='flushbyasid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <feature policy='require' name='pause-filter'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <feature policy='require' name='pfthreshold'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <feature policy='require' name='svme-addr-chk'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <feature policy='require' name='lfence-always-serializing'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <feature policy='disable' name='xsaves'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:    </mode>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:    <mode name='custom' supported='yes'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='Broadwell'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='hle'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='rtm'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='Broadwell-IBRS'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='hle'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='rtm'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='Broadwell-noTSX'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='Broadwell-noTSX-IBRS'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='Broadwell-v1'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='hle'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='rtm'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='Broadwell-v2'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='Broadwell-v3'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='hle'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='rtm'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='Broadwell-v4'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='Cascadelake-Server'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512bw'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512cd'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512dq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512f'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vl'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vnni'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='hle'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pku'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='rtm'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='Cascadelake-Server-noTSX'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512bw'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512cd'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512dq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512f'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vl'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vnni'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='ibrs-all'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pku'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='Cascadelake-Server-v1'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512bw'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512cd'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512dq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512f'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vl'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vnni'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='hle'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pku'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='rtm'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='Cascadelake-Server-v2'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512bw'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512cd'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512dq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512f'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vl'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vnni'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='hle'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='ibrs-all'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pku'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='rtm'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='Cascadelake-Server-v3'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512bw'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512cd'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512dq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512f'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vl'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vnni'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='ibrs-all'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pku'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='Cascadelake-Server-v4'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512bw'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512cd'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512dq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512f'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vl'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vnni'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='ibrs-all'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pku'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='Cascadelake-Server-v5'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512bw'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512cd'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512dq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512f'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vl'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vnni'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='ibrs-all'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pku'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='xsaves'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='ClearwaterForest'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx-ifma'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx-ne-convert'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx-vnni'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx-vnni-int16'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx-vnni-int8'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='bhi-ctrl'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='bhi-no'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='bus-lock-detect'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='cldemote'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='cmpccxadd'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='ddpd-u'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='fbsdp-no'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='fsrm'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='fsrs'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='gfni'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='ibrs-all'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='intel-psfd'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='ipred-ctrl'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='lam'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='mcdt-no'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='movdir64b'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='movdiri'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pbrsb-no'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pku'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='prefetchiti'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='psdp-no'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='rrsba-ctrl'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='sbdr-ssdp-no'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='serialize'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='sha512'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='sm3'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='sm4'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='ss'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='vaes'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='vpclmulqdq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='xsaves'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='ClearwaterForest-v1'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx-ifma'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx-ne-convert'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx-vnni'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx-vnni-int16'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx-vnni-int8'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='bhi-ctrl'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='bhi-no'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='bus-lock-detect'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='cldemote'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='cmpccxadd'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='ddpd-u'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='fbsdp-no'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='fsrm'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='fsrs'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='gfni'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='ibrs-all'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='intel-psfd'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='ipred-ctrl'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='lam'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='mcdt-no'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='movdir64b'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='movdiri'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pbrsb-no'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pku'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='prefetchiti'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='psdp-no'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='rrsba-ctrl'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='sbdr-ssdp-no'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='serialize'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='sha512'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='sm3'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='sm4'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='ss'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='vaes'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='vpclmulqdq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='xsaves'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='Cooperlake'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512-bf16'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512bw'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512cd'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512dq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512f'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vl'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vnni'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='hle'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='ibrs-all'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pku'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='rtm'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='taa-no'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='Cooperlake-v1'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512-bf16'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512bw'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512cd'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512dq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512f'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vl'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vnni'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='hle'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='ibrs-all'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pku'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='rtm'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='taa-no'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='Cooperlake-v2'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512-bf16'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512bw'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512cd'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512dq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512f'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vl'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vnni'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='hle'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='ibrs-all'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pku'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='rtm'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='taa-no'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='xsaves'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='Denverton'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='mpx'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='Denverton-v1'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='mpx'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='Denverton-v2'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='Denverton-v3'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='xsaves'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='Dhyana-v2'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='xsaves'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='EPYC-Genoa'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='amd-psfd'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='auto-ibrs'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512-bf16'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512-vpopcntdq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512bitalg'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512bw'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512cd'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512dq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512f'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512ifma'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vbmi'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vbmi2'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vl'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vnni'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='fsrm'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='gfni'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='la57'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='no-nested-data-bp'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='null-sel-clr-base'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pku'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='stibp-always-on'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='vaes'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='vpclmulqdq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='xsaves'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='EPYC-Genoa-v1'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='amd-psfd'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='auto-ibrs'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512-bf16'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512-vpopcntdq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512bitalg'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512bw'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512cd'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512dq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512f'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512ifma'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vbmi'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vbmi2'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vl'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vnni'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='fsrm'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='gfni'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='la57'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='no-nested-data-bp'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='null-sel-clr-base'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pku'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='stibp-always-on'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='vaes'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='vpclmulqdq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='xsaves'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='EPYC-Genoa-v2'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='amd-psfd'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='auto-ibrs'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512-bf16'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512-vpopcntdq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512bitalg'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512bw'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512cd'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512dq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512f'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512ifma'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vbmi'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vbmi2'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vl'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vnni'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='fs-gs-base-ns'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='fsrm'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='gfni'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='la57'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='no-nested-data-bp'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='null-sel-clr-base'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='perfmon-v2'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pku'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='stibp-always-on'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='vaes'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='vpclmulqdq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='xsaves'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='EPYC-Milan'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='fsrm'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pku'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='xsaves'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='EPYC-Milan-v1'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='fsrm'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pku'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='xsaves'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='EPYC-Milan-v2'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='amd-psfd'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='fsrm'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='no-nested-data-bp'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='null-sel-clr-base'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pku'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='stibp-always-on'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='vaes'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='vpclmulqdq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='xsaves'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='EPYC-Milan-v3'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='amd-psfd'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='fsrm'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='no-nested-data-bp'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='null-sel-clr-base'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pku'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='stibp-always-on'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='vaes'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='vpclmulqdq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='xsaves'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='EPYC-Rome'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='xsaves'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='EPYC-Rome-v1'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='xsaves'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='EPYC-Rome-v2'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='xsaves'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='EPYC-Rome-v3'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='xsaves'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='EPYC-Turin'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='amd-psfd'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='auto-ibrs'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx-vnni'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512-bf16'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512-vp2intersect'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512-vpopcntdq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512bitalg'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512bw'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512cd'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512dq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512f'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512ifma'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vbmi'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vbmi2'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vl'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vnni'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='fs-gs-base-ns'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='fsrm'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='gfni'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='ibpb-brtype'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='la57'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='movdir64b'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='movdiri'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='no-nested-data-bp'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='null-sel-clr-base'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='perfmon-v2'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pku'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='prefetchi'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='sbpb'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='srso-user-kernel-no'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='stibp-always-on'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='vaes'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='vpclmulqdq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='xsaves'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='EPYC-Turin-v1'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='amd-psfd'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='auto-ibrs'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx-vnni'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512-bf16'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512-vp2intersect'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512-vpopcntdq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512bitalg'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512bw'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512cd'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512dq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512f'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512ifma'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vbmi'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vbmi2'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vl'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vnni'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='fs-gs-base-ns'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='fsrm'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='gfni'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='ibpb-brtype'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='la57'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='movdir64b'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='movdiri'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='no-nested-data-bp'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='null-sel-clr-base'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='perfmon-v2'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pku'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='prefetchi'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='sbpb'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='srso-user-kernel-no'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='stibp-always-on'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='vaes'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='vpclmulqdq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='xsaves'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='EPYC-v3'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='xsaves'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='EPYC-v4'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='xsaves'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='AMD'>EPYC-v5</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='EPYC-v5'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='xsaves'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='GraniteRapids'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='amx-bf16'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='amx-fp16'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='amx-int8'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='amx-tile'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx-vnni'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512-bf16'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512-fp16'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512-vpopcntdq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512bitalg'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512bw'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512cd'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512dq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512f'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512ifma'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vbmi'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vbmi2'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vl'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vnni'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='bus-lock-detect'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='fbsdp-no'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='fsrc'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='fsrm'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='fsrs'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='fzrm'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='gfni'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='hle'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='ibrs-all'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='la57'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='mcdt-no'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pbrsb-no'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pku'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='prefetchiti'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='psdp-no'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='rtm'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='sbdr-ssdp-no'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='serialize'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='taa-no'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='tsx-ldtrk'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='vaes'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='vpclmulqdq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='xfd'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='xsaves'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='GraniteRapids-v1'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='amx-bf16'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='amx-fp16'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='amx-int8'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='amx-tile'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx-vnni'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512-bf16'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512-fp16'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512-vpopcntdq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512bitalg'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512bw'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512cd'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512dq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512f'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512ifma'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vbmi'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vbmi2'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vl'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vnni'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='bus-lock-detect'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='fbsdp-no'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='fsrc'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='fsrm'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='fsrs'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='fzrm'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='gfni'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='hle'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='ibrs-all'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='la57'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='mcdt-no'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pbrsb-no'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pku'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='prefetchiti'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='psdp-no'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='rtm'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='sbdr-ssdp-no'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='serialize'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='taa-no'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='tsx-ldtrk'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='vaes'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='vpclmulqdq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='xfd'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='xsaves'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='GraniteRapids-v2'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='amx-bf16'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='amx-fp16'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='amx-int8'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='amx-tile'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx-vnni'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx10'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx10-128'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx10-256'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx10-512'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512-bf16'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512-fp16'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512-vpopcntdq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512bitalg'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512bw'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512cd'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512dq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512f'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512ifma'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vbmi'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vbmi2'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vl'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vnni'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='bus-lock-detect'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='cldemote'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='fbsdp-no'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='fsrc'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='fsrm'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='fsrs'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='fzrm'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='gfni'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='hle'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='ibrs-all'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='la57'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='mcdt-no'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='movdir64b'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='movdiri'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pbrsb-no'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pku'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='prefetchiti'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='psdp-no'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='rtm'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='sbdr-ssdp-no'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='serialize'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='ss'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='taa-no'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='tsx-ldtrk'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='vaes'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='vpclmulqdq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='xfd'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='xsaves'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='GraniteRapids-v3'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='amx-bf16'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='amx-fp16'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='amx-int8'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='amx-tile'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx-vnni'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx10'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx10-128'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx10-256'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx10-512'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512-bf16'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512-fp16'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512-vpopcntdq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512bitalg'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512bw'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512cd'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512dq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512f'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512ifma'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vbmi'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vbmi2'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vl'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vnni'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='bus-lock-detect'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='cldemote'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='fbsdp-no'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='fsrc'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='fsrm'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='fsrs'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='fzrm'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='gfni'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='hle'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='ibrs-all'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='la57'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='mcdt-no'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='movdir64b'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='movdiri'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pbrsb-no'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pku'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='prefetchiti'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='psdp-no'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='rtm'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='sbdr-ssdp-no'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='serialize'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='ss'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='taa-no'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='tsx-ldtrk'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='vaes'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='vpclmulqdq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='xfd'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='xsaves'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='Haswell'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='hle'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='rtm'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='Haswell-IBRS'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='hle'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='rtm'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='Haswell-noTSX'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='Haswell-noTSX-IBRS'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='Haswell-v1'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='hle'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='rtm'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='Haswell-v2'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='Haswell-v3'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='hle'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='rtm'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='Haswell-v4'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='Icelake-Server'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512-vpopcntdq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512bitalg'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512bw'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512cd'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512dq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512f'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vbmi'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vbmi2'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vl'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vnni'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='gfni'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='hle'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='la57'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pku'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='rtm'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='vaes'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='vpclmulqdq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='Icelake-Server-noTSX'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512-vpopcntdq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512bitalg'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512bw'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512cd'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512dq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512f'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vbmi'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vbmi2'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vl'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vnni'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='gfni'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='la57'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pku'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='vaes'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='vpclmulqdq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='Icelake-Server-v1'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512-vpopcntdq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512bitalg'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512bw'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512cd'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512dq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512f'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vbmi'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vbmi2'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vl'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vnni'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='gfni'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='hle'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='la57'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pku'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='rtm'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='vaes'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='vpclmulqdq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='Icelake-Server-v2'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512-vpopcntdq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512bitalg'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512bw'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512cd'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512dq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512f'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vbmi'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vbmi2'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vl'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vnni'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='gfni'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='la57'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pku'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='vaes'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='vpclmulqdq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='Icelake-Server-v3'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512-vpopcntdq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512bitalg'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512bw'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512cd'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512dq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512f'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vbmi'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vbmi2'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vl'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vnni'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='gfni'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='ibrs-all'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='la57'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pku'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='taa-no'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='vaes'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='vpclmulqdq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='Icelake-Server-v4'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512-vpopcntdq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512bitalg'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512bw'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512cd'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512dq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512f'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512ifma'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vbmi'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vbmi2'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vl'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vnni'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='fsrm'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='gfni'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='ibrs-all'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='la57'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pku'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='taa-no'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='vaes'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='vpclmulqdq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='Icelake-Server-v5'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512-vpopcntdq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512bitalg'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512bw'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512cd'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512dq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512f'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512ifma'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vbmi'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vbmi2'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vl'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vnni'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='fsrm'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='gfni'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='ibrs-all'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='la57'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pku'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='taa-no'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='vaes'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='vpclmulqdq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='xsaves'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='Icelake-Server-v6'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512-vpopcntdq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512bitalg'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512bw'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512cd'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512dq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512f'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512ifma'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vbmi'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vbmi2'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vl'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vnni'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='fsrm'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='gfni'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='ibrs-all'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='la57'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pku'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='taa-no'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='vaes'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='vpclmulqdq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='xsaves'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='Icelake-Server-v7'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512-vpopcntdq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512bitalg'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512bw'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512cd'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512dq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512f'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512ifma'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vbmi'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vbmi2'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vl'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vnni'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='fsrm'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='gfni'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='hle'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='ibrs-all'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='la57'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pku'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='rtm'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='taa-no'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='vaes'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='vpclmulqdq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='xsaves'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='IvyBridge'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='IvyBridge-IBRS'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='IvyBridge-v1'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='IvyBridge-v2'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='KnightsMill'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512-4fmaps'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512-4vnniw'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512-vpopcntdq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512cd'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512er'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512f'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512pf'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='ss'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='KnightsMill-v1'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512-4fmaps'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512-4vnniw'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512-vpopcntdq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512cd'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512er'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512f'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512pf'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='ss'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='Opteron_G4'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='fma4'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='xop'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='Opteron_G4-v1'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='fma4'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='xop'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='Opteron_G5'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='fma4'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='tbm'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='xop'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='Opteron_G5-v1'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='fma4'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='tbm'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='xop'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='SapphireRapids'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='amx-bf16'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='amx-int8'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='amx-tile'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx-vnni'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512-bf16'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512-fp16'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512-vpopcntdq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512bitalg'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512bw'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512cd'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512dq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512f'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512ifma'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vbmi'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vbmi2'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vl'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vnni'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='bus-lock-detect'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='fsrc'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='fsrm'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='fsrs'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='fzrm'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='gfni'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='hle'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='ibrs-all'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='la57'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pku'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='rtm'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='serialize'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='taa-no'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='tsx-ldtrk'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='vaes'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='vpclmulqdq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='xfd'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='xsaves'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='SapphireRapids-v1'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='amx-bf16'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='amx-int8'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='amx-tile'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx-vnni'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512-bf16'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512-fp16'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512-vpopcntdq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512bitalg'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512bw'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512cd'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512dq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512f'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512ifma'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vbmi'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vbmi2'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vl'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vnni'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='bus-lock-detect'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='fsrc'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='fsrm'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='fsrs'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='fzrm'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='gfni'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='hle'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='ibrs-all'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='la57'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pku'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='rtm'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='serialize'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='taa-no'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='tsx-ldtrk'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='vaes'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='vpclmulqdq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='xfd'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='xsaves'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='SapphireRapids-v2'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='amx-bf16'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='amx-int8'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='amx-tile'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx-vnni'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512-bf16'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512-fp16'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512-vpopcntdq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512bitalg'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512bw'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512cd'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512dq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512f'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512ifma'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vbmi'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vbmi2'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vl'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vnni'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='bus-lock-detect'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='fbsdp-no'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='fsrc'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='fsrm'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='fsrs'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='fzrm'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='gfni'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='hle'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='ibrs-all'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='la57'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pku'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='psdp-no'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='rtm'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='sbdr-ssdp-no'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='serialize'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='taa-no'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='tsx-ldtrk'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='vaes'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='vpclmulqdq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='xfd'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='xsaves'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='SapphireRapids-v3'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='amx-bf16'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='amx-int8'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='amx-tile'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx-vnni'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512-bf16'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512-fp16'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512-vpopcntdq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512bitalg'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512bw'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512cd'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512dq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512f'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512ifma'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vbmi'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vbmi2'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vl'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vnni'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='bus-lock-detect'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='cldemote'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='fbsdp-no'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='fsrc'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='fsrm'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='fsrs'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='fzrm'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='gfni'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='hle'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='ibrs-all'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='la57'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='movdir64b'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='movdiri'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pku'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='psdp-no'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='rtm'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='sbdr-ssdp-no'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='serialize'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='ss'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='taa-no'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='tsx-ldtrk'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='vaes'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='vpclmulqdq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='xfd'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='xsaves'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='SapphireRapids-v4'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='amx-bf16'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='amx-int8'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='amx-tile'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx-vnni'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512-bf16'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512-fp16'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512-vpopcntdq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512bitalg'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512bw'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512cd'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512dq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512f'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512ifma'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vbmi'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vbmi2'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vl'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vnni'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='bus-lock-detect'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='cldemote'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='fbsdp-no'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='fsrc'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='fsrm'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='fsrs'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='fzrm'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='gfni'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='hle'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='ibrs-all'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='la57'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='movdir64b'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='movdiri'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pku'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='psdp-no'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='rtm'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='sbdr-ssdp-no'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='serialize'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='ss'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='taa-no'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='tsx-ldtrk'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='vaes'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='vpclmulqdq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='xfd'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='xsaves'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='SierraForest'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx-ifma'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx-ne-convert'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx-vnni'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx-vnni-int8'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='bus-lock-detect'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='cmpccxadd'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='fbsdp-no'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='fsrm'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='fsrs'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='gfni'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='ibrs-all'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='mcdt-no'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pbrsb-no'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pku'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='psdp-no'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='sbdr-ssdp-no'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='serialize'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='vaes'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='vpclmulqdq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='xsaves'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='SierraForest-v1'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx-ifma'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx-ne-convert'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx-vnni'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx-vnni-int8'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='bus-lock-detect'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='cmpccxadd'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='fbsdp-no'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='fsrm'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='fsrs'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='gfni'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='ibrs-all'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='mcdt-no'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pbrsb-no'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pku'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='psdp-no'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='sbdr-ssdp-no'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='serialize'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='vaes'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='vpclmulqdq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='xsaves'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel'>SierraForest-v2</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='SierraForest-v2'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx-ifma'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx-ne-convert'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx-vnni'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx-vnni-int8'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='bhi-ctrl'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='bus-lock-detect'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='cldemote'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='cmpccxadd'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='fbsdp-no'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='fsrm'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='fsrs'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='gfni'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='ibrs-all'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='intel-psfd'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='ipred-ctrl'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='lam'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='mcdt-no'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='movdir64b'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='movdiri'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pbrsb-no'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pku'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='psdp-no'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='rrsba-ctrl'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='sbdr-ssdp-no'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='serialize'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='ss'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='vaes'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='vpclmulqdq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='xsaves'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel'>SierraForest-v3</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='SierraForest-v3'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx-ifma'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx-ne-convert'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx-vnni'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx-vnni-int8'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='bhi-ctrl'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='bus-lock-detect'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='cldemote'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='cmpccxadd'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='fbsdp-no'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='fsrm'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='fsrs'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='gfni'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='ibrs-all'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='intel-psfd'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='ipred-ctrl'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='lam'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='mcdt-no'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='movdir64b'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='movdiri'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pbrsb-no'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pku'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='psdp-no'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='rrsba-ctrl'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='sbdr-ssdp-no'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='serialize'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='ss'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='vaes'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='vpclmulqdq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='xsaves'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='Skylake-Client'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='hle'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='rtm'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='Skylake-Client-IBRS'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='hle'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='rtm'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='Skylake-Client-v1'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='hle'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='rtm'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='Skylake-Client-v2'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='hle'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='rtm'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='Skylake-Client-v3'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='Skylake-Client-v4'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='xsaves'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='Skylake-Server'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512bw'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512cd'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512dq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512f'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vl'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='hle'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pku'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='rtm'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='Skylake-Server-IBRS'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512bw'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512cd'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512dq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512f'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vl'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='hle'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pku'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='rtm'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512bw'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512cd'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512dq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512f'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vl'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pku'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='Skylake-Server-v1'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512bw'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512cd'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512dq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512f'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vl'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='hle'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pku'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='rtm'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='Skylake-Server-v2'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512bw'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512cd'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512dq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512f'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vl'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='hle'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pku'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='rtm'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='Skylake-Server-v3'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512bw'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512cd'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512dq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512f'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vl'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pku'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='Skylake-Server-v4'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512bw'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512cd'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512dq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512f'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vl'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pku'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='Skylake-Server-v5'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512bw'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512cd'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512dq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512f'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vl'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pku'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='xsaves'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='Snowridge'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='cldemote'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='core-capability'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='gfni'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='movdir64b'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='movdiri'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='mpx'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='split-lock-detect'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='Snowridge-v1'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='cldemote'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='core-capability'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='gfni'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='movdir64b'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='movdiri'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='mpx'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='split-lock-detect'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='Snowridge-v2'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='cldemote'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='core-capability'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='gfni'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='movdir64b'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='movdiri'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='split-lock-detect'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='Snowridge-v3'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='cldemote'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='core-capability'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='gfni'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='movdir64b'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='movdiri'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='split-lock-detect'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='xsaves'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='Snowridge-v4'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='cldemote'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='gfni'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='movdir64b'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='movdiri'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='xsaves'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='athlon'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='3dnow'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='3dnowext'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='athlon-v1'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='3dnow'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='3dnowext'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='core2duo'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='ss'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='core2duo-v1'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='ss'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='coreduo'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='ss'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='coreduo-v1'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='ss'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='n270'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='ss'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='n270-v1'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='ss'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='phenom'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='3dnow'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='3dnowext'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='phenom-v1'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='3dnow'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='3dnowext'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:    </mode>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:  </cpu>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:  <memoryBacking supported='yes'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:    <enum name='sourceType'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <value>file</value>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <value>anonymous</value>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <value>memfd</value>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:    </enum>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:  </memoryBacking>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:  <devices>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:    <disk supported='yes'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <enum name='diskDevice'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <value>disk</value>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <value>cdrom</value>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <value>floppy</value>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <value>lun</value>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </enum>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <enum name='bus'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <value>ide</value>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <value>fdc</value>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <value>scsi</value>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <value>virtio</value>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <value>usb</value>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <value>sata</value>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </enum>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <enum name='model'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <value>virtio</value>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <value>virtio-transitional</value>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <value>virtio-non-transitional</value>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </enum>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:    </disk>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:    <graphics supported='yes'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <enum name='type'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <value>vnc</value>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <value>egl-headless</value>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <value>dbus</value>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </enum>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:    </graphics>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:    <video supported='yes'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <enum name='modelType'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <value>vga</value>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <value>cirrus</value>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <value>virtio</value>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <value>none</value>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <value>bochs</value>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <value>ramfb</value>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </enum>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:    </video>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:    <hostdev supported='yes'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <enum name='mode'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <value>subsystem</value>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </enum>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <enum name='startupPolicy'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <value>default</value>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <value>mandatory</value>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <value>requisite</value>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <value>optional</value>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </enum>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <enum name='subsysType'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <value>usb</value>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <value>pci</value>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <value>scsi</value>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </enum>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <enum name='capsType'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <enum name='pciBackend'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:    </hostdev>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:    <rng supported='yes'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <enum name='model'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <value>virtio</value>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <value>virtio-transitional</value>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <value>virtio-non-transitional</value>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </enum>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <enum name='backendModel'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <value>random</value>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <value>egd</value>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <value>builtin</value>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </enum>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:    </rng>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:    <filesystem supported='yes'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <enum name='driverType'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <value>path</value>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <value>handle</value>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <value>virtiofs</value>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </enum>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:    </filesystem>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:    <tpm supported='yes'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <enum name='model'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <value>tpm-tis</value>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <value>tpm-crb</value>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </enum>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <enum name='backendModel'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <value>emulator</value>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <value>external</value>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </enum>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <enum name='backendVersion'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <value>2.0</value>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </enum>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:    </tpm>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:    <redirdev supported='yes'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <enum name='bus'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <value>usb</value>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </enum>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:    </redirdev>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:    <channel supported='yes'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <enum name='type'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <value>pty</value>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <value>unix</value>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </enum>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:    </channel>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:    <crypto supported='yes'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <enum name='model'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <enum name='type'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <value>qemu</value>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </enum>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <enum name='backendModel'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <value>builtin</value>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </enum>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:    </crypto>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:    <interface supported='yes'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <enum name='backendType'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <value>default</value>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <value>passt</value>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </enum>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:    </interface>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:    <panic supported='yes'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <enum name='model'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <value>isa</value>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <value>hyperv</value>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </enum>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:    </panic>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:    <console supported='yes'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <enum name='type'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <value>null</value>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <value>vc</value>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <value>pty</value>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <value>dev</value>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <value>file</value>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <value>pipe</value>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <value>stdio</value>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <value>udp</value>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <value>tcp</value>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <value>unix</value>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <value>qemu-vdagent</value>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <value>dbus</value>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </enum>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:    </console>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:  </devices>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:  <features>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:    <gic supported='no'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:    <vmcoreinfo supported='yes'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:    <genid supported='yes'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:    <backingStoreInput supported='yes'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:    <backup supported='yes'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:    <async-teardown supported='yes'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:    <s390-pv supported='no'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:    <ps2 supported='yes'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:    <tdx supported='no'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:    <sev supported='no'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:    <sgx supported='no'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:    <hyperv supported='yes'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <enum name='features'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <value>relaxed</value>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <value>vapic</value>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <value>spinlocks</value>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <value>vpindex</value>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <value>runtime</value>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <value>synic</value>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <value>stimer</value>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <value>reset</value>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <value>vendor_id</value>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <value>frequencies</value>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <value>reenlightenment</value>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <value>tlbflush</value>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <value>ipi</value>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <value>avic</value>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <value>emsr_bitmap</value>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <value>xmm_input</value>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </enum>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <defaults>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <spinlocks>4095</spinlocks>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <stimer_direct>on</stimer_direct>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <tlbflush_direct>on</tlbflush_direct>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <tlbflush_extended>on</tlbflush_extended>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <vendor_id>Linux KVM Hv</vendor_id>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </defaults>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:    </hyperv>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:    <launchSecurity supported='no'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:  </features>
Mar  1 05:00:23 np0005634532 nova_compute[255017]: </domainCapabilities>
Mar  1 05:00:23 np0005634532 nova_compute[255017]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Mar  1 05:00:23 np0005634532 nova_compute[255017]: 2026-03-01 10:00:23.809 255021 DEBUG nova.virt.libvirt.host [None req-d7d4f2c5-95eb-450c-a544-456b3b4ebee9 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Mar  1 05:00:23 np0005634532 nova_compute[255017]: <domainCapabilities>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:  <path>/usr/libexec/qemu-kvm</path>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:  <domain>kvm</domain>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:  <machine>pc-q35-rhel9.8.0</machine>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:  <arch>i686</arch>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:  <vcpu max='4096'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:  <iothreads supported='yes'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:  <os supported='yes'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:    <enum name='firmware'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:    <loader supported='yes'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <enum name='type'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <value>rom</value>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <value>pflash</value>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </enum>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <enum name='readonly'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <value>yes</value>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <value>no</value>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </enum>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <enum name='secure'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <value>no</value>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </enum>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:    </loader>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:  </os>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:  <cpu>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:    <mode name='host-passthrough' supported='yes'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <enum name='hostPassthroughMigratable'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <value>on</value>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <value>off</value>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </enum>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:    </mode>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:    <mode name='maximum' supported='yes'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <enum name='maximumMigratable'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <value>on</value>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <value>off</value>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </enum>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:    </mode>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:    <mode name='host-model' supported='yes'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model fallback='forbid'>EPYC-Rome</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <vendor>AMD</vendor>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <maxphysaddr mode='passthrough' limit='40'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <feature policy='require' name='x2apic'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <feature policy='require' name='tsc-deadline'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <feature policy='require' name='hypervisor'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <feature policy='require' name='tsc_adjust'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <feature policy='require' name='spec-ctrl'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <feature policy='require' name='stibp'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <feature policy='require' name='ssbd'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <feature policy='require' name='cmp_legacy'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <feature policy='require' name='overflow-recov'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <feature policy='require' name='succor'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <feature policy='require' name='ibrs'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <feature policy='require' name='amd-ssbd'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <feature policy='require' name='virt-ssbd'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <feature policy='require' name='lbrv'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <feature policy='require' name='tsc-scale'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <feature policy='require' name='vmcb-clean'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <feature policy='require' name='flushbyasid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <feature policy='require' name='pause-filter'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <feature policy='require' name='pfthreshold'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <feature policy='require' name='svme-addr-chk'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <feature policy='require' name='lfence-always-serializing'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <feature policy='disable' name='xsaves'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:    </mode>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:    <mode name='custom' supported='yes'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='Broadwell'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='hle'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='rtm'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='Broadwell-IBRS'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='hle'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='rtm'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='Broadwell-noTSX'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='Broadwell-noTSX-IBRS'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='Broadwell-v1'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='hle'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='rtm'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='Broadwell-v2'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='Broadwell-v3'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='hle'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='rtm'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='Broadwell-v4'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='Cascadelake-Server'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512bw'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512cd'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512dq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512f'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vl'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vnni'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='hle'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pku'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='rtm'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='Cascadelake-Server-noTSX'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512bw'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512cd'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512dq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512f'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vl'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vnni'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='ibrs-all'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pku'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='Cascadelake-Server-v1'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512bw'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512cd'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512dq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512f'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vl'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vnni'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='hle'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pku'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='rtm'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='Cascadelake-Server-v2'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512bw'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512cd'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512dq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512f'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vl'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vnni'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='hle'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='ibrs-all'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pku'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='rtm'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='Cascadelake-Server-v3'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512bw'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512cd'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512dq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512f'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vl'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vnni'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='ibrs-all'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pku'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='Cascadelake-Server-v4'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512bw'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512cd'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512dq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512f'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vl'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vnni'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='ibrs-all'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pku'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='Cascadelake-Server-v5'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512bw'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512cd'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512dq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512f'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vl'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vnni'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='ibrs-all'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pku'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='xsaves'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='ClearwaterForest'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx-ifma'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx-ne-convert'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx-vnni'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx-vnni-int16'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx-vnni-int8'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='bhi-ctrl'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='bhi-no'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='bus-lock-detect'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='cldemote'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='cmpccxadd'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='ddpd-u'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='fbsdp-no'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='fsrm'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='fsrs'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='gfni'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='ibrs-all'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='intel-psfd'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='ipred-ctrl'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='lam'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='mcdt-no'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='movdir64b'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='movdiri'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pbrsb-no'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pku'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='prefetchiti'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='psdp-no'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='rrsba-ctrl'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='sbdr-ssdp-no'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='serialize'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='sha512'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='sm3'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='sm4'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='ss'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='vaes'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='vpclmulqdq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='xsaves'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='ClearwaterForest-v1'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx-ifma'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx-ne-convert'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx-vnni'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx-vnni-int16'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx-vnni-int8'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='bhi-ctrl'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='bhi-no'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='bus-lock-detect'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='cldemote'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='cmpccxadd'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='ddpd-u'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='fbsdp-no'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='fsrm'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='fsrs'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='gfni'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='ibrs-all'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='intel-psfd'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='ipred-ctrl'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='lam'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='mcdt-no'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='movdir64b'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='movdiri'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pbrsb-no'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pku'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='prefetchiti'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='psdp-no'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='rrsba-ctrl'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='sbdr-ssdp-no'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='serialize'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='sha512'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='sm3'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='sm4'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='ss'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='vaes'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='vpclmulqdq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='xsaves'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='Cooperlake'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512-bf16'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512bw'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512cd'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512dq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512f'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vl'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vnni'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='hle'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='ibrs-all'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pku'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='rtm'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='taa-no'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='Cooperlake-v1'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512-bf16'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512bw'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512cd'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512dq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512f'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vl'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vnni'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='hle'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='ibrs-all'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pku'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='rtm'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='taa-no'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='Cooperlake-v2'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512-bf16'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512bw'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512cd'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512dq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512f'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vl'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vnni'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='hle'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='ibrs-all'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pku'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='rtm'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='taa-no'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='xsaves'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='Denverton'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='mpx'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='Denverton-v1'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='mpx'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='Denverton-v2'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='Denverton-v3'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='xsaves'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='Dhyana-v2'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='xsaves'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='EPYC-Genoa'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='amd-psfd'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='auto-ibrs'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512-bf16'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512-vpopcntdq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512bitalg'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512bw'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512cd'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512dq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512f'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512ifma'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vbmi'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vbmi2'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vl'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vnni'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='fsrm'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='gfni'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='la57'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='no-nested-data-bp'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='null-sel-clr-base'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pku'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='stibp-always-on'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='vaes'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='vpclmulqdq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='xsaves'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='EPYC-Genoa-v1'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='amd-psfd'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='auto-ibrs'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512-bf16'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512-vpopcntdq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512bitalg'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512bw'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512cd'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512dq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512f'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512ifma'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vbmi'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vbmi2'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vl'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vnni'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='fsrm'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='gfni'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='la57'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='no-nested-data-bp'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='null-sel-clr-base'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pku'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='stibp-always-on'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='vaes'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='vpclmulqdq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='xsaves'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='EPYC-Genoa-v2'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='amd-psfd'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='auto-ibrs'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512-bf16'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512-vpopcntdq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512bitalg'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512bw'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512cd'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512dq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512f'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512ifma'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vbmi'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vbmi2'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vl'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vnni'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='fs-gs-base-ns'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='fsrm'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='gfni'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='la57'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='no-nested-data-bp'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='null-sel-clr-base'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='perfmon-v2'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pku'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='stibp-always-on'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='vaes'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='vpclmulqdq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='xsaves'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='EPYC-Milan'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='fsrm'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pku'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='xsaves'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='EPYC-Milan-v1'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='fsrm'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pku'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='xsaves'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='EPYC-Milan-v2'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='amd-psfd'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='fsrm'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='no-nested-data-bp'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='null-sel-clr-base'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pku'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='stibp-always-on'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='vaes'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='vpclmulqdq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='xsaves'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='EPYC-Milan-v3'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='amd-psfd'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='fsrm'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='no-nested-data-bp'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='null-sel-clr-base'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pku'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='stibp-always-on'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='vaes'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='vpclmulqdq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='xsaves'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='EPYC-Rome'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='xsaves'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='EPYC-Rome-v1'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='xsaves'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='EPYC-Rome-v2'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='xsaves'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='EPYC-Rome-v3'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='xsaves'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='EPYC-Turin'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='amd-psfd'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='auto-ibrs'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx-vnni'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512-bf16'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512-vp2intersect'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512-vpopcntdq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512bitalg'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512bw'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512cd'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512dq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512f'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512ifma'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vbmi'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vbmi2'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vl'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vnni'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='fs-gs-base-ns'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='fsrm'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='gfni'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='ibpb-brtype'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='la57'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='movdir64b'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='movdiri'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='no-nested-data-bp'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='null-sel-clr-base'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='perfmon-v2'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pku'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='prefetchi'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='sbpb'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='srso-user-kernel-no'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='stibp-always-on'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='vaes'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='vpclmulqdq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='xsaves'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='EPYC-Turin-v1'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='amd-psfd'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='auto-ibrs'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx-vnni'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512-bf16'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512-vp2intersect'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512-vpopcntdq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512bitalg'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512bw'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512cd'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512dq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512f'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512ifma'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vbmi'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vbmi2'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vl'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vnni'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='fs-gs-base-ns'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='fsrm'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='gfni'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='ibpb-brtype'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='la57'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='movdir64b'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='movdiri'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='no-nested-data-bp'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='null-sel-clr-base'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='perfmon-v2'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pku'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='prefetchi'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='sbpb'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='srso-user-kernel-no'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='stibp-always-on'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='vaes'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='vpclmulqdq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='xsaves'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='EPYC-v3'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='xsaves'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='EPYC-v4'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='xsaves'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='AMD'>EPYC-v5</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='EPYC-v5'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='xsaves'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='GraniteRapids'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='amx-bf16'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='amx-fp16'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='amx-int8'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='amx-tile'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx-vnni'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512-bf16'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512-fp16'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512-vpopcntdq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512bitalg'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512bw'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512cd'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512dq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512f'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512ifma'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vbmi'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vbmi2'/>
Mar  1 05:00:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:00:23.872 167541 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vl'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vnni'/>
Mar  1 05:00:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:00:23.873 167541 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='bus-lock-detect'/>
Mar  1 05:00:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:00:23.873 167541 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='fbsdp-no'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='fsrc'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='fsrm'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='fsrs'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='fzrm'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='gfni'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='hle'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='ibrs-all'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='la57'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='mcdt-no'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pbrsb-no'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pku'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='prefetchiti'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='psdp-no'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='rtm'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='sbdr-ssdp-no'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='serialize'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='taa-no'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='tsx-ldtrk'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='vaes'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='vpclmulqdq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='xfd'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='xsaves'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='GraniteRapids-v1'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='amx-bf16'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='amx-fp16'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='amx-int8'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='amx-tile'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx-vnni'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512-bf16'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512-fp16'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512-vpopcntdq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512bitalg'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512bw'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512cd'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512dq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512f'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512ifma'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vbmi'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vbmi2'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vl'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vnni'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='bus-lock-detect'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='fbsdp-no'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='fsrc'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='fsrm'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='fsrs'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='fzrm'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='gfni'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='hle'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='ibrs-all'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='la57'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='mcdt-no'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pbrsb-no'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pku'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='prefetchiti'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='psdp-no'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='rtm'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='sbdr-ssdp-no'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='serialize'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='taa-no'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='tsx-ldtrk'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='vaes'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='vpclmulqdq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='xfd'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='xsaves'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='GraniteRapids-v2'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='amx-bf16'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='amx-fp16'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='amx-int8'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='amx-tile'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx-vnni'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx10'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx10-128'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx10-256'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx10-512'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512-bf16'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512-fp16'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512-vpopcntdq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512bitalg'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512bw'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512cd'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512dq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512f'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512ifma'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vbmi'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vbmi2'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vl'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vnni'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='bus-lock-detect'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='cldemote'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='fbsdp-no'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='fsrc'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='fsrm'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='fsrs'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='fzrm'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='gfni'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='hle'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='ibrs-all'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='la57'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='mcdt-no'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='movdir64b'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='movdiri'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pbrsb-no'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pku'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='prefetchiti'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='psdp-no'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='rtm'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='sbdr-ssdp-no'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='serialize'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='ss'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='taa-no'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='tsx-ldtrk'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='vaes'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='vpclmulqdq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='xfd'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='xsaves'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='GraniteRapids-v3'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='amx-bf16'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='amx-fp16'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='amx-int8'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='amx-tile'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx-vnni'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx10'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx10-128'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx10-256'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx10-512'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512-bf16'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512-fp16'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512-vpopcntdq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512bitalg'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512bw'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512cd'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512dq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512f'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512ifma'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vbmi'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vbmi2'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vl'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vnni'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='bus-lock-detect'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='cldemote'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='fbsdp-no'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='fsrc'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='fsrm'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='fsrs'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='fzrm'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='gfni'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='hle'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='ibrs-all'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='la57'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='mcdt-no'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='movdir64b'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='movdiri'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pbrsb-no'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pku'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='prefetchiti'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='psdp-no'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='rtm'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='sbdr-ssdp-no'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='serialize'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='ss'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='taa-no'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='tsx-ldtrk'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='vaes'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='vpclmulqdq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='xfd'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='xsaves'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='Haswell'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='hle'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='rtm'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='Haswell-IBRS'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='hle'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='rtm'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='Haswell-noTSX'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='Haswell-noTSX-IBRS'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='Haswell-v1'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='hle'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='rtm'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='Haswell-v2'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='Haswell-v3'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='hle'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='rtm'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='Haswell-v4'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='Icelake-Server'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512-vpopcntdq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512bitalg'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512bw'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512cd'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512dq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512f'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vbmi'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vbmi2'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vl'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vnni'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='gfni'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='hle'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='la57'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pku'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='rtm'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='vaes'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='vpclmulqdq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='Icelake-Server-noTSX'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512-vpopcntdq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512bitalg'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512bw'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512cd'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512dq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512f'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vbmi'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vbmi2'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vl'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vnni'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='gfni'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='la57'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pku'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='vaes'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='vpclmulqdq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='Icelake-Server-v1'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512-vpopcntdq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512bitalg'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512bw'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512cd'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512dq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512f'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vbmi'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vbmi2'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vl'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vnni'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='gfni'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='hle'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='la57'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pku'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='rtm'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='vaes'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='vpclmulqdq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='Icelake-Server-v2'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512-vpopcntdq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512bitalg'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512bw'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512cd'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512dq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512f'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vbmi'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vbmi2'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vl'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vnni'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='gfni'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='la57'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pku'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='vaes'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='vpclmulqdq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='Icelake-Server-v3'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512-vpopcntdq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512bitalg'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512bw'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512cd'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512dq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512f'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vbmi'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vbmi2'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vl'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vnni'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='gfni'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='ibrs-all'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='la57'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pku'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='taa-no'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='vaes'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='vpclmulqdq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='Icelake-Server-v4'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512-vpopcntdq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512bitalg'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512bw'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512cd'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512dq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512f'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512ifma'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vbmi'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vbmi2'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vl'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vnni'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='fsrm'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='gfni'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='ibrs-all'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='la57'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pku'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='taa-no'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='vaes'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='vpclmulqdq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='Icelake-Server-v5'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512-vpopcntdq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512bitalg'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512bw'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512cd'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512dq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512f'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512ifma'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vbmi'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vbmi2'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vl'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vnni'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='fsrm'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='gfni'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='ibrs-all'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='la57'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pku'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='taa-no'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='vaes'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='vpclmulqdq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='xsaves'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='Icelake-Server-v6'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512-vpopcntdq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512bitalg'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512bw'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512cd'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512dq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512f'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512ifma'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vbmi'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vbmi2'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vl'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vnni'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='fsrm'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='gfni'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='ibrs-all'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='la57'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pku'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='taa-no'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='vaes'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='vpclmulqdq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='xsaves'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='Icelake-Server-v7'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512-vpopcntdq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512bitalg'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512bw'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512cd'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512dq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512f'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512ifma'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vbmi'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vbmi2'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vl'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vnni'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='fsrm'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='gfni'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='hle'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='ibrs-all'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='la57'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pku'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='rtm'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='taa-no'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='vaes'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='vpclmulqdq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='xsaves'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='IvyBridge'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='IvyBridge-IBRS'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='IvyBridge-v1'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='IvyBridge-v2'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='KnightsMill'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512-4fmaps'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512-4vnniw'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512-vpopcntdq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512cd'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512er'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512f'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512pf'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='ss'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='KnightsMill-v1'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512-4fmaps'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512-4vnniw'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512-vpopcntdq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512cd'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512er'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512f'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512pf'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='ss'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='Opteron_G4'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='fma4'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='xop'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='Opteron_G4-v1'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='fma4'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='xop'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='Opteron_G5'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='fma4'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='tbm'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='xop'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='Opteron_G5-v1'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='fma4'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='tbm'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='xop'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='SapphireRapids'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='amx-bf16'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='amx-int8'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='amx-tile'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx-vnni'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512-bf16'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512-fp16'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512-vpopcntdq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512bitalg'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512bw'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512cd'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512dq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512f'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512ifma'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vbmi'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vbmi2'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vl'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vnni'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='bus-lock-detect'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='fsrc'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='fsrm'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='fsrs'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='fzrm'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='gfni'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='hle'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='ibrs-all'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='la57'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pku'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='rtm'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='serialize'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='taa-no'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='tsx-ldtrk'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='vaes'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='vpclmulqdq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='xfd'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='xsaves'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='SapphireRapids-v1'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='amx-bf16'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='amx-int8'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='amx-tile'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx-vnni'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512-bf16'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512-fp16'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512-vpopcntdq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512bitalg'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512bw'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512cd'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512dq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512f'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512ifma'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vbmi'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vbmi2'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vl'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vnni'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='bus-lock-detect'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='fsrc'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='fsrm'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='fsrs'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='fzrm'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='gfni'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='hle'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='ibrs-all'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='la57'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pku'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='rtm'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='serialize'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='taa-no'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='tsx-ldtrk'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='vaes'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='vpclmulqdq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='xfd'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='xsaves'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='SapphireRapids-v2'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='amx-bf16'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='amx-int8'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='amx-tile'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx-vnni'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512-bf16'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512-fp16'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512-vpopcntdq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512bitalg'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512bw'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512cd'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512dq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512f'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512ifma'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vbmi'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vbmi2'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vl'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vnni'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='bus-lock-detect'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='fbsdp-no'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='fsrc'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='fsrm'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='fsrs'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='fzrm'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='gfni'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='hle'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='ibrs-all'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='la57'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pku'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='psdp-no'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='rtm'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='sbdr-ssdp-no'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='serialize'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='taa-no'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='tsx-ldtrk'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='vaes'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='vpclmulqdq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='xfd'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='xsaves'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='SapphireRapids-v3'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='amx-bf16'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='amx-int8'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='amx-tile'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx-vnni'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512-bf16'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512-fp16'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512-vpopcntdq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512bitalg'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512bw'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512cd'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512dq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512f'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512ifma'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vbmi'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vbmi2'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vl'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vnni'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='bus-lock-detect'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='cldemote'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='fbsdp-no'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='fsrc'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='fsrm'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='fsrs'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='fzrm'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='gfni'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='hle'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='ibrs-all'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='la57'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='movdir64b'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='movdiri'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pku'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='psdp-no'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='rtm'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='sbdr-ssdp-no'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='serialize'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='ss'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='taa-no'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='tsx-ldtrk'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='vaes'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='vpclmulqdq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='xfd'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='xsaves'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='SapphireRapids-v4'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='amx-bf16'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='amx-int8'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='amx-tile'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx-vnni'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512-bf16'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512-fp16'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512-vpopcntdq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512bitalg'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512bw'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512cd'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512dq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512f'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512ifma'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vbmi'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vbmi2'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vl'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vnni'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='bus-lock-detect'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='cldemote'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='fbsdp-no'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='fsrc'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='fsrm'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='fsrs'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='fzrm'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='gfni'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='hle'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='ibrs-all'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='la57'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='movdir64b'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='movdiri'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pku'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='psdp-no'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='rtm'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='sbdr-ssdp-no'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='serialize'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='ss'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='taa-no'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='tsx-ldtrk'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='vaes'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='vpclmulqdq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='xfd'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='xsaves'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='SierraForest'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx-ifma'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx-ne-convert'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx-vnni'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx-vnni-int8'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='bus-lock-detect'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='cmpccxadd'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='fbsdp-no'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='fsrm'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='fsrs'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='gfni'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='ibrs-all'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='mcdt-no'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pbrsb-no'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pku'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='psdp-no'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='sbdr-ssdp-no'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='serialize'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='vaes'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='vpclmulqdq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='xsaves'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='SierraForest-v1'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx-ifma'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx-ne-convert'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx-vnni'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx-vnni-int8'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='bus-lock-detect'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='cmpccxadd'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='fbsdp-no'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='fsrm'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='fsrs'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='gfni'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='ibrs-all'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='mcdt-no'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pbrsb-no'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pku'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='psdp-no'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='sbdr-ssdp-no'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='serialize'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='vaes'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='vpclmulqdq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='xsaves'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel'>SierraForest-v2</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='SierraForest-v2'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx-ifma'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx-ne-convert'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx-vnni'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx-vnni-int8'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='bhi-ctrl'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='bus-lock-detect'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='cldemote'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='cmpccxadd'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='fbsdp-no'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='fsrm'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='fsrs'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='gfni'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='ibrs-all'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='intel-psfd'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='ipred-ctrl'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='lam'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='mcdt-no'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='movdir64b'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='movdiri'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pbrsb-no'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pku'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='psdp-no'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='rrsba-ctrl'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='sbdr-ssdp-no'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='serialize'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='ss'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='vaes'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='vpclmulqdq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='xsaves'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel'>SierraForest-v3</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='SierraForest-v3'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx-ifma'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx-ne-convert'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx-vnni'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx-vnni-int8'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='bhi-ctrl'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='bus-lock-detect'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='cldemote'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='cmpccxadd'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='fbsdp-no'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='fsrm'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='fsrs'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='gfni'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='ibrs-all'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='intel-psfd'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='ipred-ctrl'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='lam'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='mcdt-no'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='movdir64b'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='movdiri'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pbrsb-no'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pku'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='psdp-no'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='rrsba-ctrl'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='sbdr-ssdp-no'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='serialize'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='ss'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='vaes'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='vpclmulqdq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='xsaves'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='Skylake-Client'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='hle'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='rtm'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='Skylake-Client-IBRS'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='hle'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='rtm'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='Skylake-Client-v1'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='hle'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='rtm'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='Skylake-Client-v2'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='hle'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='rtm'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='Skylake-Client-v3'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='Skylake-Client-v4'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='xsaves'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='Skylake-Server'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512bw'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512cd'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512dq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512f'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vl'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='hle'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pku'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='rtm'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='Skylake-Server-IBRS'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512bw'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512cd'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512dq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512f'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vl'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='hle'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pku'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='rtm'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512bw'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512cd'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512dq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512f'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vl'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pku'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='Skylake-Server-v1'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512bw'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512cd'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512dq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512f'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vl'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='hle'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pku'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='rtm'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='Skylake-Server-v2'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512bw'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512cd'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512dq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512f'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vl'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='hle'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pku'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='rtm'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='Skylake-Server-v3'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512bw'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512cd'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512dq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512f'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vl'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pku'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='Skylake-Server-v4'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512bw'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512cd'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512dq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512f'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vl'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pku'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='Skylake-Server-v5'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512bw'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512cd'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512dq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512f'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vl'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pku'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='xsaves'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='Snowridge'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='cldemote'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='core-capability'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='gfni'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='movdir64b'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='movdiri'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='mpx'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='split-lock-detect'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='Snowridge-v1'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='cldemote'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='core-capability'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='gfni'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='movdir64b'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='movdiri'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='mpx'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='split-lock-detect'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='Snowridge-v2'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='cldemote'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='core-capability'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='gfni'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='movdir64b'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='movdiri'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='split-lock-detect'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='Snowridge-v3'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='cldemote'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='core-capability'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='gfni'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='movdir64b'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='movdiri'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='split-lock-detect'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='xsaves'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='Snowridge-v4'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='cldemote'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='gfni'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='movdir64b'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='movdiri'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='xsaves'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='athlon'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='3dnow'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='3dnowext'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='athlon-v1'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='3dnow'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='3dnowext'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='core2duo'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='ss'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='core2duo-v1'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='ss'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='coreduo'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='ss'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='coreduo-v1'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='ss'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='n270'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='ss'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='n270-v1'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='ss'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='phenom'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='3dnow'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='3dnowext'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='phenom-v1'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='3dnow'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='3dnowext'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:    </mode>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:  </cpu>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:  <memoryBacking supported='yes'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:    <enum name='sourceType'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <value>file</value>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <value>anonymous</value>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <value>memfd</value>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:    </enum>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:  </memoryBacking>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:  <devices>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:    <disk supported='yes'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <enum name='diskDevice'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <value>disk</value>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <value>cdrom</value>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <value>floppy</value>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <value>lun</value>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </enum>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <enum name='bus'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <value>fdc</value>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <value>scsi</value>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <value>virtio</value>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <value>usb</value>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <value>sata</value>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </enum>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <enum name='model'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <value>virtio</value>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <value>virtio-transitional</value>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <value>virtio-non-transitional</value>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </enum>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:    </disk>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:    <graphics supported='yes'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <enum name='type'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <value>vnc</value>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <value>egl-headless</value>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <value>dbus</value>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </enum>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:    </graphics>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:    <video supported='yes'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <enum name='modelType'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <value>vga</value>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <value>cirrus</value>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <value>virtio</value>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <value>none</value>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <value>bochs</value>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <value>ramfb</value>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </enum>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:    </video>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:    <hostdev supported='yes'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <enum name='mode'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <value>subsystem</value>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </enum>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <enum name='startupPolicy'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <value>default</value>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <value>mandatory</value>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <value>requisite</value>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <value>optional</value>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </enum>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <enum name='subsysType'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <value>usb</value>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <value>pci</value>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <value>scsi</value>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </enum>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <enum name='capsType'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <enum name='pciBackend'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:    </hostdev>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:    <rng supported='yes'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <enum name='model'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <value>virtio</value>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <value>virtio-transitional</value>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <value>virtio-non-transitional</value>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </enum>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <enum name='backendModel'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <value>random</value>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <value>egd</value>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <value>builtin</value>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </enum>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:    </rng>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:    <filesystem supported='yes'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <enum name='driverType'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <value>path</value>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <value>handle</value>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <value>virtiofs</value>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </enum>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:    </filesystem>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:    <tpm supported='yes'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <enum name='model'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <value>tpm-tis</value>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <value>tpm-crb</value>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </enum>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <enum name='backendModel'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <value>emulator</value>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <value>external</value>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </enum>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <enum name='backendVersion'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <value>2.0</value>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </enum>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:    </tpm>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:    <redirdev supported='yes'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <enum name='bus'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <value>usb</value>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </enum>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:    </redirdev>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:    <channel supported='yes'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <enum name='type'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <value>pty</value>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <value>unix</value>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </enum>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:    </channel>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:    <crypto supported='yes'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <enum name='model'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <enum name='type'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <value>qemu</value>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </enum>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <enum name='backendModel'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <value>builtin</value>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </enum>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:    </crypto>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:    <interface supported='yes'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <enum name='backendType'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <value>default</value>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <value>passt</value>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </enum>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:    </interface>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:    <panic supported='yes'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <enum name='model'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <value>isa</value>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <value>hyperv</value>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </enum>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:    </panic>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:    <console supported='yes'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <enum name='type'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <value>null</value>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <value>vc</value>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <value>pty</value>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <value>dev</value>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <value>file</value>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <value>pipe</value>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <value>stdio</value>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <value>udp</value>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <value>tcp</value>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <value>unix</value>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <value>qemu-vdagent</value>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <value>dbus</value>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </enum>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:    </console>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:  </devices>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:  <features>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:    <gic supported='no'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:    <vmcoreinfo supported='yes'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:    <genid supported='yes'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:    <backingStoreInput supported='yes'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:    <backup supported='yes'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:    <async-teardown supported='yes'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:    <s390-pv supported='no'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:    <ps2 supported='yes'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:    <tdx supported='no'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:    <sev supported='no'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:    <sgx supported='no'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:    <hyperv supported='yes'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <enum name='features'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <value>relaxed</value>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <value>vapic</value>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <value>spinlocks</value>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <value>vpindex</value>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <value>runtime</value>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <value>synic</value>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <value>stimer</value>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <value>reset</value>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <value>vendor_id</value>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <value>frequencies</value>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <value>reenlightenment</value>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <value>tlbflush</value>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <value>ipi</value>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <value>avic</value>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <value>emsr_bitmap</value>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <value>xmm_input</value>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </enum>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <defaults>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <spinlocks>4095</spinlocks>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <stimer_direct>on</stimer_direct>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <tlbflush_direct>on</tlbflush_direct>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <tlbflush_extended>on</tlbflush_extended>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <vendor_id>Linux KVM Hv</vendor_id>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </defaults>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:    </hyperv>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:    <launchSecurity supported='no'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:  </features>
Mar  1 05:00:23 np0005634532 nova_compute[255017]: </domainCapabilities>
Mar  1 05:00:23 np0005634532 nova_compute[255017]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
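
The XML above is libvirt's domainCapabilities document, which nova-compute fetches once per (arch, machine type) pair via the _get_domain_capabilities call named in the trailer. A minimal sketch, assuming the libvirt Python bindings and a local qemu:///system connection, of requesting the same documents directly; the emulator path, arch, machine types, and virt type are the values visible in this log (<path>, <arch>, the {'pc', 'q35'} set, <domain>kvm</domain>):

    # Sketch: fetch the domainCapabilities XML that nova logs above.
    # Assumes libvirt-python and a local qemu:///system socket, as on
    # this compute host.
    import libvirt

    conn = libvirt.open('qemu:///system')
    for machine in ('pc', 'q35'):
        caps_xml = conn.getDomainCapabilities(
            emulatorbin='/usr/libexec/qemu-kvm',  # <path> in the dump
            arch='x86_64',
            machine=machine,
            virttype='kvm',
        )
        print(caps_xml)
    conn.close()
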
Mar  1 05:00:23 np0005634532 nova_compute[255017]: 2026-03-01 10:00:23.853 255021 DEBUG nova.virt.libvirt.host [None req-d7d4f2c5-95eb-450c-a544-456b3b4ebee9 - - - - - -] Getting domain capabilities for x86_64 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
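
The next record dumps the same document for machine_type=pc. In each dump, <mode name='custom'> lists every named CPU model with usable='yes' or 'no', and each unusable model is paired with a <blockers> element naming the features the host lacks (the avx512*/erms/pku runs seen above for the Skylake-Server and Snowridge variants). A short sketch, assuming one dump has been saved to a hypothetical domcaps.xml, of summarizing that per-model data with the standard library:

    # Sketch: report usable custom-mode CPU models and what blocks the rest.
    # 'domcaps.xml' is a hypothetical file holding one dump from this log.
    import xml.etree.ElementTree as ET

    custom = ET.parse('domcaps.xml').getroot().find("./cpu/mode[@name='custom']")

    usable = [m.text for m in custom.findall('model') if m.get('usable') == 'yes']
    print('usable:', ', '.join(usable))

    for blockers in custom.findall('blockers'):
        feats = ', '.join(f.get('name') for f in blockers.findall('feature'))
        print(f"{blockers.get('model')} blocked by: {feats}")
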
Mar  1 05:00:23 np0005634532 nova_compute[255017]: 2026-03-01 10:00:23.858 255021 DEBUG nova.virt.libvirt.host [None req-d7d4f2c5-95eb-450c-a544-456b3b4ebee9 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Mar  1 05:00:23 np0005634532 nova_compute[255017]: <domainCapabilities>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:  <path>/usr/libexec/qemu-kvm</path>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:  <domain>kvm</domain>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:  <machine>pc-i440fx-rhel7.6.0</machine>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:  <arch>x86_64</arch>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:  <vcpu max='240'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:  <iothreads supported='yes'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:  <os supported='yes'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:    <enum name='firmware'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:    <loader supported='yes'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <enum name='type'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <value>rom</value>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <value>pflash</value>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </enum>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <enum name='readonly'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <value>yes</value>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <value>no</value>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </enum>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <enum name='secure'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <value>no</value>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </enum>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:    </loader>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:  </os>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:  <cpu>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:    <mode name='host-passthrough' supported='yes'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <enum name='hostPassthroughMigratable'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <value>on</value>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <value>off</value>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </enum>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:    </mode>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:    <mode name='maximum' supported='yes'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <enum name='maximumMigratable'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <value>on</value>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <value>off</value>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </enum>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:    </mode>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:    <mode name='host-model' supported='yes'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model fallback='forbid'>EPYC-Rome</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <vendor>AMD</vendor>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <maxphysaddr mode='passthrough' limit='40'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <feature policy='require' name='x2apic'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <feature policy='require' name='tsc-deadline'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <feature policy='require' name='hypervisor'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <feature policy='require' name='tsc_adjust'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <feature policy='require' name='spec-ctrl'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <feature policy='require' name='stibp'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <feature policy='require' name='ssbd'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <feature policy='require' name='cmp_legacy'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <feature policy='require' name='overflow-recov'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <feature policy='require' name='succor'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <feature policy='require' name='ibrs'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <feature policy='require' name='amd-ssbd'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <feature policy='require' name='virt-ssbd'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <feature policy='require' name='lbrv'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <feature policy='require' name='tsc-scale'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <feature policy='require' name='vmcb-clean'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <feature policy='require' name='flushbyasid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <feature policy='require' name='pause-filter'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <feature policy='require' name='pfthreshold'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <feature policy='require' name='svme-addr-chk'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <feature policy='require' name='lfence-always-serializing'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <feature policy='disable' name='xsaves'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:    </mode>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:    <mode name='custom' supported='yes'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='Broadwell'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='hle'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='rtm'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='Broadwell-IBRS'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='hle'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='rtm'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='Broadwell-noTSX'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='Broadwell-noTSX-IBRS'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='Broadwell-v1'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='hle'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='rtm'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='Broadwell-v2'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='Broadwell-v3'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='hle'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='rtm'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='Broadwell-v4'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='Cascadelake-Server'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512bw'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512cd'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512dq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512f'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vl'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vnni'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='hle'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pku'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='rtm'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='Cascadelake-Server-noTSX'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512bw'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512cd'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512dq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512f'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vl'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vnni'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='ibrs-all'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pku'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='Cascadelake-Server-v1'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512bw'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512cd'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512dq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512f'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vl'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vnni'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='hle'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pku'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='rtm'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='Cascadelake-Server-v2'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512bw'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512cd'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512dq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512f'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vl'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vnni'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='hle'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='ibrs-all'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pku'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='rtm'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='Cascadelake-Server-v3'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512bw'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512cd'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512dq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512f'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vl'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vnni'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='ibrs-all'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pku'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='Cascadelake-Server-v4'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512bw'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512cd'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512dq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512f'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vl'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vnni'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='ibrs-all'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pku'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='Cascadelake-Server-v5'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512bw'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512cd'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512dq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512f'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vl'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vnni'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='ibrs-all'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pku'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='xsaves'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='ClearwaterForest'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx-ifma'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx-ne-convert'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx-vnni'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx-vnni-int16'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx-vnni-int8'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='bhi-ctrl'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='bhi-no'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='bus-lock-detect'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='cldemote'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='cmpccxadd'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='ddpd-u'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='fbsdp-no'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='fsrm'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='fsrs'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='gfni'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='ibrs-all'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='intel-psfd'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='ipred-ctrl'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='lam'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='mcdt-no'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='movdir64b'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='movdiri'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pbrsb-no'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pku'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='prefetchiti'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='psdp-no'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='rrsba-ctrl'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='sbdr-ssdp-no'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='serialize'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='sha512'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='sm3'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='sm4'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='ss'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='vaes'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='vpclmulqdq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='xsaves'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='ClearwaterForest-v1'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx-ifma'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx-ne-convert'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx-vnni'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx-vnni-int16'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx-vnni-int8'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='bhi-ctrl'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='bhi-no'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='bus-lock-detect'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='cldemote'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='cmpccxadd'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='ddpd-u'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='fbsdp-no'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='fsrm'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='fsrs'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='gfni'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='ibrs-all'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='intel-psfd'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='ipred-ctrl'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='lam'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='mcdt-no'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='movdir64b'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='movdiri'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pbrsb-no'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pku'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='prefetchiti'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='psdp-no'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='rrsba-ctrl'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='sbdr-ssdp-no'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='serialize'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='sha512'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='sm3'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='sm4'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='ss'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='vaes'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='vpclmulqdq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='xsaves'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='Cooperlake'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512-bf16'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512bw'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512cd'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512dq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512f'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vl'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vnni'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='hle'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='ibrs-all'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pku'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='rtm'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='taa-no'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='Cooperlake-v1'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512-bf16'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512bw'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512cd'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512dq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512f'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vl'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vnni'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='hle'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='ibrs-all'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pku'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='rtm'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='taa-no'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='Cooperlake-v2'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512-bf16'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512bw'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512cd'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512dq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512f'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vl'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vnni'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='hle'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='ibrs-all'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pku'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='rtm'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='taa-no'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='xsaves'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='Denverton'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='mpx'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='Denverton-v1'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='mpx'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='Denverton-v2'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='Denverton-v3'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='xsaves'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='Dhyana-v2'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='xsaves'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='EPYC-Genoa'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='amd-psfd'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='auto-ibrs'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512-bf16'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512-vpopcntdq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512bitalg'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512bw'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512cd'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512dq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512f'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512ifma'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vbmi'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vbmi2'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vl'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vnni'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='fsrm'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='gfni'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='la57'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='no-nested-data-bp'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='null-sel-clr-base'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pku'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='stibp-always-on'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='vaes'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='vpclmulqdq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='xsaves'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='EPYC-Genoa-v1'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='amd-psfd'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='auto-ibrs'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512-bf16'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512-vpopcntdq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512bitalg'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512bw'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512cd'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512dq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512f'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512ifma'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vbmi'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vbmi2'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vl'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vnni'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='fsrm'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='gfni'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='la57'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='no-nested-data-bp'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='null-sel-clr-base'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pku'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='stibp-always-on'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='vaes'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='vpclmulqdq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='xsaves'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='EPYC-Genoa-v2'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='amd-psfd'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='auto-ibrs'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512-bf16'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512-vpopcntdq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512bitalg'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512bw'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512cd'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512dq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512f'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512ifma'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vbmi'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vbmi2'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vl'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vnni'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='fs-gs-base-ns'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='fsrm'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='gfni'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='la57'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='no-nested-data-bp'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='null-sel-clr-base'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='perfmon-v2'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pku'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='stibp-always-on'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='vaes'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='vpclmulqdq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='xsaves'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='EPYC-Milan'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='fsrm'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pku'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='xsaves'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='EPYC-Milan-v1'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='fsrm'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pku'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='xsaves'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='EPYC-Milan-v2'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='amd-psfd'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='fsrm'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='no-nested-data-bp'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='null-sel-clr-base'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pku'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='stibp-always-on'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='vaes'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='vpclmulqdq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='xsaves'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='EPYC-Milan-v3'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='amd-psfd'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='fsrm'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='no-nested-data-bp'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='null-sel-clr-base'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pku'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='stibp-always-on'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='vaes'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='vpclmulqdq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='xsaves'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='EPYC-Rome'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='xsaves'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='EPYC-Rome-v1'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='xsaves'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='EPYC-Rome-v2'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='xsaves'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='EPYC-Rome-v3'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='xsaves'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='EPYC-Turin'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='amd-psfd'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='auto-ibrs'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx-vnni'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512-bf16'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512-vp2intersect'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512-vpopcntdq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512bitalg'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512bw'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512cd'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512dq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512f'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512ifma'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vbmi'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vbmi2'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vl'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vnni'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='fs-gs-base-ns'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='fsrm'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='gfni'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='ibpb-brtype'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='la57'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='movdir64b'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='movdiri'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='no-nested-data-bp'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='null-sel-clr-base'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='perfmon-v2'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pku'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='prefetchi'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='sbpb'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='srso-user-kernel-no'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='stibp-always-on'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='vaes'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='vpclmulqdq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='xsaves'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='EPYC-Turin-v1'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='amd-psfd'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='auto-ibrs'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx-vnni'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512-bf16'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512-vp2intersect'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512-vpopcntdq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512bitalg'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512bw'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512cd'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512dq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512f'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512ifma'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vbmi'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vbmi2'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vl'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vnni'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='fs-gs-base-ns'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='fsrm'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='gfni'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='ibpb-brtype'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='la57'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='movdir64b'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='movdiri'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='no-nested-data-bp'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='null-sel-clr-base'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='perfmon-v2'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pku'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='prefetchi'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='sbpb'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='srso-user-kernel-no'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='stibp-always-on'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='vaes'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='vpclmulqdq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='xsaves'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='EPYC-v3'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='xsaves'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='EPYC-v4'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='xsaves'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='AMD'>EPYC-v5</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='EPYC-v5'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='xsaves'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='GraniteRapids'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='amx-bf16'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='amx-fp16'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='amx-int8'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='amx-tile'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx-vnni'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512-bf16'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512-fp16'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512-vpopcntdq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512bitalg'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512bw'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512cd'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512dq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512f'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512ifma'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vbmi'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vbmi2'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vl'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vnni'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='bus-lock-detect'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='fbsdp-no'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='fsrc'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='fsrm'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='fsrs'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='fzrm'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='gfni'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='hle'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='ibrs-all'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='la57'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='mcdt-no'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pbrsb-no'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pku'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='prefetchiti'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='psdp-no'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='rtm'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='sbdr-ssdp-no'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='serialize'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='taa-no'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='tsx-ldtrk'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='vaes'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='vpclmulqdq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='xfd'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='xsaves'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='GraniteRapids-v1'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='amx-bf16'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='amx-fp16'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='amx-int8'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='amx-tile'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx-vnni'/>
Mar  1 05:00:23 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:00:23 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_18] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff868002810 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512-bf16'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512-fp16'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512-vpopcntdq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512bitalg'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512bw'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512cd'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512dq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512f'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512ifma'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vbmi'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vbmi2'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vl'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vnni'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='bus-lock-detect'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='fbsdp-no'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='fsrc'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='fsrm'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='fsrs'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='fzrm'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='gfni'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='hle'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='ibrs-all'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='la57'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='mcdt-no'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pbrsb-no'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pku'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='prefetchiti'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='psdp-no'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='rtm'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='sbdr-ssdp-no'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='serialize'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='taa-no'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='tsx-ldtrk'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='vaes'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='vpclmulqdq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='xfd'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='xsaves'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='GraniteRapids-v2'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='amx-bf16'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='amx-fp16'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='amx-int8'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='amx-tile'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx-vnni'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx10'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx10-128'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx10-256'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx10-512'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512-bf16'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512-fp16'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512-vpopcntdq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512bitalg'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512bw'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512cd'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512dq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512f'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512ifma'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vbmi'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vbmi2'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vl'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vnni'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='bus-lock-detect'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='cldemote'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='fbsdp-no'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='fsrc'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='fsrm'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='fsrs'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='fzrm'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='gfni'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='hle'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='ibrs-all'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='la57'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='mcdt-no'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='movdir64b'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='movdiri'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pbrsb-no'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pku'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='prefetchiti'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='psdp-no'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='rtm'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='sbdr-ssdp-no'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='serialize'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='ss'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='taa-no'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='tsx-ldtrk'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='vaes'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='vpclmulqdq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='xfd'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='xsaves'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='GraniteRapids-v3'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='amx-bf16'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='amx-fp16'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='amx-int8'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='amx-tile'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx-vnni'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx10'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx10-128'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx10-256'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx10-512'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512-bf16'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512-fp16'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512-vpopcntdq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512bitalg'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512bw'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512cd'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512dq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512f'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512ifma'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vbmi'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vbmi2'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vl'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vnni'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='bus-lock-detect'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='cldemote'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='fbsdp-no'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='fsrc'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='fsrm'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='fsrs'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='fzrm'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='gfni'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='hle'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='ibrs-all'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='la57'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='mcdt-no'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='movdir64b'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='movdiri'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pbrsb-no'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pku'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='prefetchiti'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='psdp-no'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='rtm'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='sbdr-ssdp-no'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='serialize'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='ss'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='taa-no'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='tsx-ldtrk'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='vaes'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='vpclmulqdq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='xfd'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='xsaves'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='Haswell'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='hle'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='rtm'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='Haswell-IBRS'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='hle'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='rtm'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='Haswell-noTSX'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='Haswell-noTSX-IBRS'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='Haswell-v1'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='hle'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='rtm'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='Haswell-v2'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='Haswell-v3'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='hle'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='rtm'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='Haswell-v4'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='Icelake-Server'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512-vpopcntdq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512bitalg'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512bw'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512cd'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512dq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512f'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vbmi'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vbmi2'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vl'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vnni'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='gfni'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='hle'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='la57'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pku'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='rtm'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='vaes'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='vpclmulqdq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='Icelake-Server-noTSX'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512-vpopcntdq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512bitalg'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512bw'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512cd'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512dq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512f'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vbmi'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vbmi2'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vl'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vnni'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='gfni'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='la57'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pku'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='vaes'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='vpclmulqdq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='Icelake-Server-v1'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512-vpopcntdq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512bitalg'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512bw'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512cd'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512dq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512f'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vbmi'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vbmi2'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vl'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vnni'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='gfni'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='hle'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='la57'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pku'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='rtm'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='vaes'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='vpclmulqdq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='Icelake-Server-v2'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512-vpopcntdq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512bitalg'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512bw'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512cd'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512dq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512f'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vbmi'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vbmi2'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vl'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vnni'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='gfni'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='la57'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pku'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='vaes'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='vpclmulqdq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='Icelake-Server-v3'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512-vpopcntdq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512bitalg'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512bw'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512cd'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512dq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512f'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vbmi'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vbmi2'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vl'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vnni'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='gfni'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='ibrs-all'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='la57'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pku'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='taa-no'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='vaes'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='vpclmulqdq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='Icelake-Server-v4'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512-vpopcntdq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512bitalg'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512bw'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512cd'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512dq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512f'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512ifma'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vbmi'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vbmi2'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vl'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vnni'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='fsrm'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='gfni'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='ibrs-all'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='la57'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pku'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='taa-no'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='vaes'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='vpclmulqdq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='Icelake-Server-v5'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512-vpopcntdq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512bitalg'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512bw'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512cd'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512dq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512f'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512ifma'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vbmi'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vbmi2'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vl'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vnni'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='fsrm'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='gfni'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='ibrs-all'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='la57'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pku'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='taa-no'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='vaes'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='vpclmulqdq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='xsaves'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='Icelake-Server-v6'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512-vpopcntdq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512bitalg'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512bw'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512cd'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512dq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512f'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512ifma'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vbmi'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vbmi2'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vl'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vnni'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='fsrm'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='gfni'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='ibrs-all'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='la57'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pku'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='taa-no'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='vaes'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='vpclmulqdq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='xsaves'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='Icelake-Server-v7'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512-vpopcntdq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512bitalg'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512bw'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512cd'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512dq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512f'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512ifma'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vbmi'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vbmi2'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vl'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vnni'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='fsrm'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='gfni'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='hle'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='ibrs-all'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='la57'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pku'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='rtm'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='taa-no'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='vaes'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='vpclmulqdq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='xsaves'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='IvyBridge'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='IvyBridge-IBRS'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='IvyBridge-v1'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='IvyBridge-v2'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='KnightsMill'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512-4fmaps'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512-4vnniw'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512-vpopcntdq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512cd'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512er'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512f'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512pf'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='ss'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='KnightsMill-v1'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512-4fmaps'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512-4vnniw'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512-vpopcntdq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512cd'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512er'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512f'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512pf'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='ss'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='Opteron_G4'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='fma4'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='xop'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='Opteron_G4-v1'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='fma4'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='xop'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='Opteron_G5'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='fma4'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='tbm'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='xop'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='Opteron_G5-v1'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='fma4'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='tbm'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='xop'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='SapphireRapids'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='amx-bf16'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='amx-int8'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='amx-tile'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx-vnni'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512-bf16'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512-fp16'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512-vpopcntdq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512bitalg'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512bw'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512cd'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512dq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512f'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512ifma'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vbmi'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vbmi2'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vl'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vnni'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='bus-lock-detect'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='fsrc'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='fsrm'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='fsrs'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='fzrm'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='gfni'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='hle'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='ibrs-all'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='la57'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pku'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='rtm'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='serialize'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='taa-no'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='tsx-ldtrk'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='vaes'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='vpclmulqdq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='xfd'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='xsaves'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='SapphireRapids-v1'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='amx-bf16'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='amx-int8'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='amx-tile'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx-vnni'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512-bf16'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512-fp16'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512-vpopcntdq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512bitalg'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512bw'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512cd'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512dq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512f'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512ifma'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vbmi'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vbmi2'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vl'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vnni'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='bus-lock-detect'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='fsrc'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='fsrm'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='fsrs'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='fzrm'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='gfni'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='hle'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='ibrs-all'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='la57'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pku'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='rtm'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='serialize'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='taa-no'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='tsx-ldtrk'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='vaes'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='vpclmulqdq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='xfd'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='xsaves'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='SapphireRapids-v2'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='amx-bf16'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='amx-int8'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='amx-tile'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx-vnni'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512-bf16'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512-fp16'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512-vpopcntdq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512bitalg'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512bw'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512cd'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512dq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512f'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512ifma'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vbmi'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vbmi2'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vl'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vnni'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='bus-lock-detect'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='fbsdp-no'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='fsrc'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='fsrm'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='fsrs'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='fzrm'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='gfni'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='hle'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='ibrs-all'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='la57'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pku'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='psdp-no'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='rtm'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='sbdr-ssdp-no'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='serialize'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='taa-no'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='tsx-ldtrk'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='vaes'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='vpclmulqdq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='xfd'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='xsaves'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='SapphireRapids-v3'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='amx-bf16'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='amx-int8'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='amx-tile'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx-vnni'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512-bf16'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512-fp16'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512-vpopcntdq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512bitalg'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512bw'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512cd'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512dq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512f'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512ifma'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vbmi'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vbmi2'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vl'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vnni'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='bus-lock-detect'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='cldemote'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='fbsdp-no'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='fsrc'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='fsrm'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='fsrs'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='fzrm'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='gfni'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='hle'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='ibrs-all'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='la57'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='movdir64b'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='movdiri'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pku'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='psdp-no'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='rtm'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='sbdr-ssdp-no'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='serialize'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='ss'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='taa-no'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='tsx-ldtrk'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='vaes'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='vpclmulqdq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='xfd'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='xsaves'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='SapphireRapids-v4'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='amx-bf16'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='amx-int8'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='amx-tile'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx-vnni'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512-bf16'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512-fp16'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512-vpopcntdq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512bitalg'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512bw'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512cd'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512dq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512f'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512ifma'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vbmi'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vbmi2'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vl'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vnni'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='bus-lock-detect'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='cldemote'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='fbsdp-no'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='fsrc'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='fsrm'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='fsrs'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='fzrm'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='gfni'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='hle'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='ibrs-all'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='la57'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='movdir64b'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='movdiri'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pku'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='psdp-no'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='rtm'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='sbdr-ssdp-no'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='serialize'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='ss'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='taa-no'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='tsx-ldtrk'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='vaes'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='vpclmulqdq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='xfd'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='xsaves'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='SierraForest'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx-ifma'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx-ne-convert'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx-vnni'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx-vnni-int8'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='bus-lock-detect'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='cmpccxadd'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='fbsdp-no'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='fsrm'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='fsrs'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='gfni'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='ibrs-all'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='mcdt-no'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pbrsb-no'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pku'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='psdp-no'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='sbdr-ssdp-no'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='serialize'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='vaes'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='vpclmulqdq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='xsaves'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='SierraForest-v1'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx-ifma'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx-ne-convert'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx-vnni'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx-vnni-int8'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='bus-lock-detect'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='cmpccxadd'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='fbsdp-no'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='fsrm'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='fsrs'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='gfni'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='ibrs-all'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='mcdt-no'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pbrsb-no'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pku'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='psdp-no'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='sbdr-ssdp-no'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='serialize'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='vaes'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='vpclmulqdq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='xsaves'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel'>SierraForest-v2</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='SierraForest-v2'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx-ifma'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx-ne-convert'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx-vnni'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx-vnni-int8'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='bhi-ctrl'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='bus-lock-detect'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='cldemote'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='cmpccxadd'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='fbsdp-no'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='fsrm'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='fsrs'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='gfni'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='ibrs-all'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='intel-psfd'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='ipred-ctrl'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='lam'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='mcdt-no'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='movdir64b'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='movdiri'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pbrsb-no'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pku'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='psdp-no'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='rrsba-ctrl'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='sbdr-ssdp-no'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='serialize'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='ss'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='vaes'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='vpclmulqdq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='xsaves'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel'>SierraForest-v3</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='SierraForest-v3'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx-ifma'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx-ne-convert'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx-vnni'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx-vnni-int8'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='bhi-ctrl'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='bus-lock-detect'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='cldemote'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='cmpccxadd'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='fbsdp-no'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='fsrm'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='fsrs'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='gfni'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='ibrs-all'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='intel-psfd'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='ipred-ctrl'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='lam'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='mcdt-no'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='movdir64b'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='movdiri'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pbrsb-no'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pku'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='psdp-no'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='rrsba-ctrl'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='sbdr-ssdp-no'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='serialize'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='ss'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='vaes'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='vpclmulqdq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='xsaves'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='Skylake-Client'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='hle'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='rtm'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='Skylake-Client-IBRS'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='hle'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='rtm'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='Skylake-Client-v1'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='hle'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='rtm'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='Skylake-Client-v2'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='hle'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='rtm'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='Skylake-Client-v3'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='Skylake-Client-v4'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='xsaves'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='Skylake-Server'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512bw'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512cd'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512dq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512f'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vl'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='hle'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pku'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='rtm'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='Skylake-Server-IBRS'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512bw'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512cd'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512dq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512f'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vl'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='hle'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pku'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='rtm'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512bw'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512cd'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512dq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512f'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vl'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pku'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='Skylake-Server-v1'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512bw'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512cd'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512dq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512f'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vl'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='hle'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pku'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='rtm'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='Skylake-Server-v2'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512bw'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512cd'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512dq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512f'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vl'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='hle'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pku'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='rtm'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='Skylake-Server-v3'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512bw'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512cd'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512dq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512f'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vl'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pku'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='Skylake-Server-v4'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512bw'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512cd'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512dq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512f'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vl'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pku'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='Skylake-Server-v5'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512bw'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512cd'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512dq'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512f'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='avx512vl'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='pku'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='xsaves'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='Snowridge'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='cldemote'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='core-capability'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='gfni'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='movdir64b'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='movdiri'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='mpx'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='split-lock-detect'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='Snowridge-v1'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='cldemote'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='core-capability'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='gfni'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='movdir64b'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='movdiri'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='mpx'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='split-lock-detect'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='Snowridge-v2'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='cldemote'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='core-capability'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='gfni'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='movdir64b'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='movdiri'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='split-lock-detect'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='Snowridge-v3'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='cldemote'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='core-capability'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='gfni'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='movdir64b'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='movdiri'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='split-lock-detect'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='xsaves'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='Snowridge-v4'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='cldemote'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='gfni'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='movdir64b'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='movdiri'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='xsaves'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='athlon'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='3dnow'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='3dnowext'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='athlon-v1'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='3dnow'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='3dnowext'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='core2duo'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='ss'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='core2duo-v1'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='ss'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='coreduo'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='ss'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='coreduo-v1'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='ss'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='n270'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='ss'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='n270-v1'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='ss'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='phenom'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='3dnow'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='3dnowext'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <blockers model='phenom-v1'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='3dnow'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <feature name='3dnowext'/>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:    </mode>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:  </cpu>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:  <memoryBacking supported='yes'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:    <enum name='sourceType'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <value>file</value>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <value>anonymous</value>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <value>memfd</value>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:    </enum>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:  </memoryBacking>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:  <devices>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:    <disk supported='yes'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <enum name='diskDevice'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <value>disk</value>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <value>cdrom</value>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <value>floppy</value>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <value>lun</value>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      </enum>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:      <enum name='bus'>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <value>ide</value>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <value>fdc</value>
Mar  1 05:00:23 np0005634532 nova_compute[255017]:        <value>scsi</value>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <value>virtio</value>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <value>usb</value>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <value>sata</value>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      </enum>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <enum name='model'>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <value>virtio</value>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <value>virtio-transitional</value>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <value>virtio-non-transitional</value>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      </enum>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:    </disk>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:    <graphics supported='yes'>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <enum name='type'>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <value>vnc</value>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <value>egl-headless</value>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <value>dbus</value>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      </enum>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:    </graphics>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:    <video supported='yes'>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <enum name='modelType'>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <value>vga</value>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <value>cirrus</value>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <value>virtio</value>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <value>none</value>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <value>bochs</value>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <value>ramfb</value>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      </enum>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:    </video>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:    <hostdev supported='yes'>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <enum name='mode'>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <value>subsystem</value>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      </enum>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <enum name='startupPolicy'>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <value>default</value>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <value>mandatory</value>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <value>requisite</value>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <value>optional</value>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      </enum>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <enum name='subsysType'>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <value>usb</value>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <value>pci</value>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <value>scsi</value>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      </enum>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <enum name='capsType'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <enum name='pciBackend'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:    </hostdev>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:    <rng supported='yes'>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <enum name='model'>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <value>virtio</value>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <value>virtio-transitional</value>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <value>virtio-non-transitional</value>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      </enum>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <enum name='backendModel'>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <value>random</value>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <value>egd</value>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <value>builtin</value>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      </enum>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:    </rng>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:    <filesystem supported='yes'>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <enum name='driverType'>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <value>path</value>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <value>handle</value>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <value>virtiofs</value>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      </enum>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:    </filesystem>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:    <tpm supported='yes'>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <enum name='model'>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <value>tpm-tis</value>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <value>tpm-crb</value>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      </enum>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <enum name='backendModel'>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <value>emulator</value>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <value>external</value>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      </enum>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <enum name='backendVersion'>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <value>2.0</value>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      </enum>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:    </tpm>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:    <redirdev supported='yes'>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <enum name='bus'>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <value>usb</value>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      </enum>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:    </redirdev>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:    <channel supported='yes'>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <enum name='type'>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <value>pty</value>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <value>unix</value>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      </enum>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:    </channel>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:    <crypto supported='yes'>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <enum name='model'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <enum name='type'>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <value>qemu</value>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      </enum>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <enum name='backendModel'>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <value>builtin</value>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      </enum>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:    </crypto>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:    <interface supported='yes'>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <enum name='backendType'>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <value>default</value>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <value>passt</value>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      </enum>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:    </interface>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:    <panic supported='yes'>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <enum name='model'>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <value>isa</value>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <value>hyperv</value>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      </enum>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:    </panic>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:    <console supported='yes'>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <enum name='type'>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <value>null</value>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <value>vc</value>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <value>pty</value>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <value>dev</value>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <value>file</value>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <value>pipe</value>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <value>stdio</value>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <value>udp</value>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <value>tcp</value>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <value>unix</value>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <value>qemu-vdagent</value>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <value>dbus</value>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      </enum>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:    </console>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:  </devices>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:  <features>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:    <gic supported='no'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:    <vmcoreinfo supported='yes'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:    <genid supported='yes'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:    <backingStoreInput supported='yes'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:    <backup supported='yes'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:    <async-teardown supported='yes'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:    <s390-pv supported='no'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:    <ps2 supported='yes'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:    <tdx supported='no'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:    <sev supported='no'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:    <sgx supported='no'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:    <hyperv supported='yes'>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <enum name='features'>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <value>relaxed</value>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <value>vapic</value>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <value>spinlocks</value>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <value>vpindex</value>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <value>runtime</value>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <value>synic</value>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <value>stimer</value>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <value>reset</value>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <value>vendor_id</value>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <value>frequencies</value>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <value>reenlightenment</value>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <value>tlbflush</value>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <value>ipi</value>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <value>avic</value>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <value>emsr_bitmap</value>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <value>xmm_input</value>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      </enum>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <defaults>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <spinlocks>4095</spinlocks>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <stimer_direct>on</stimer_direct>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <tlbflush_direct>on</tlbflush_direct>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <tlbflush_extended>on</tlbflush_extended>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <vendor_id>Linux KVM Hv</vendor_id>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      </defaults>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:    </hyperv>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:    <launchSecurity supported='no'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:  </features>
Mar  1 05:00:24 np0005634532 nova_compute[255017]: </domainCapabilities>
Mar  1 05:00:24 np0005634532 nova_compute[255017]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
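[Editor's note: the <domainCapabilities> dump that ends above is the XML that nova's _get_domain_capabilities helper (host.py:1037, referenced in the record above) reads back from libvirt. A minimal sketch of issuing the same query directly, assuming libvirt-python is installed and a local qemu:///system socket is reachable; the emulator path, arch, machine type, and virt type below simply mirror the values reported in the dump:

    import xml.etree.ElementTree as ET
    import libvirt

    conn = libvirt.open("qemu:///system")
    # Same parameters nova passes through to libvirt:
    # emulator binary, arch, machine type, virt type, flags.
    caps_xml = conn.getDomainCapabilities(
        "/usr/libexec/qemu-kvm", "x86_64", "q35", "kvm", 0)
    conn.close()

    # List the named CPU models libvirt marks usable='yes' in custom mode,
    # i.e. the models a guest could be started with on this host as-is.
    root = ET.fromstring(caps_xml)
    usable = [m.text for m in root.findall(".//mode[@name='custom']/model")
              if m.get("usable") == "yes"]
    print(sorted(set(usable)))

The same XML is available from the CLI with: virsh domcapabilities --emulatorbin /usr/libexec/qemu-kvm --arch x86_64 --machine q35 --virttype kvm. On this host that would report, for example, the Westmere variants as usable and the Skylake-Server/Cascadelake families as blocked, matching the usable='no' blockers enumerated above.]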
Mar  1 05:00:24 np0005634532 nova_compute[255017]: 2026-03-01 10:00:23.929 255021 DEBUG nova.virt.libvirt.host [None req-d7d4f2c5-95eb-450c-a544-456b3b4ebee9 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Mar  1 05:00:24 np0005634532 nova_compute[255017]: <domainCapabilities>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:  <path>/usr/libexec/qemu-kvm</path>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:  <domain>kvm</domain>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:  <machine>pc-q35-rhel9.8.0</machine>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:  <arch>x86_64</arch>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:  <vcpu max='4096'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:  <iothreads supported='yes'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:  <os supported='yes'>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:    <enum name='firmware'>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <value>efi</value>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:    </enum>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:    <loader supported='yes'>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <enum name='type'>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <value>rom</value>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <value>pflash</value>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      </enum>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <enum name='readonly'>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <value>yes</value>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <value>no</value>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      </enum>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <enum name='secure'>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <value>yes</value>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <value>no</value>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      </enum>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:    </loader>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:  </os>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:  <cpu>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:    <mode name='host-passthrough' supported='yes'>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <enum name='hostPassthroughMigratable'>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <value>on</value>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <value>off</value>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      </enum>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:    </mode>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:    <mode name='maximum' supported='yes'>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <enum name='maximumMigratable'>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <value>on</value>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <value>off</value>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      </enum>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:    </mode>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:    <mode name='host-model' supported='yes'>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <model fallback='forbid'>EPYC-Rome</model>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <vendor>AMD</vendor>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <maxphysaddr mode='passthrough' limit='40'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <feature policy='require' name='x2apic'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <feature policy='require' name='tsc-deadline'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <feature policy='require' name='hypervisor'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <feature policy='require' name='tsc_adjust'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <feature policy='require' name='spec-ctrl'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <feature policy='require' name='stibp'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <feature policy='require' name='ssbd'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <feature policy='require' name='cmp_legacy'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <feature policy='require' name='overflow-recov'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <feature policy='require' name='succor'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <feature policy='require' name='ibrs'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <feature policy='require' name='amd-ssbd'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <feature policy='require' name='virt-ssbd'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <feature policy='require' name='lbrv'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <feature policy='require' name='tsc-scale'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <feature policy='require' name='vmcb-clean'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <feature policy='require' name='flushbyasid'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <feature policy='require' name='pause-filter'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <feature policy='require' name='pfthreshold'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <feature policy='require' name='svme-addr-chk'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <feature policy='require' name='lfence-always-serializing'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <feature policy='disable' name='xsaves'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:    </mode>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:    <mode name='custom' supported='yes'>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <blockers model='Broadwell'>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='hle'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='rtm'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <blockers model='Broadwell-IBRS'>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='hle'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='rtm'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <blockers model='Broadwell-noTSX'>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <blockers model='Broadwell-noTSX-IBRS'>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <blockers model='Broadwell-v1'>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='hle'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='rtm'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <blockers model='Broadwell-v2'>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <blockers model='Broadwell-v3'>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='hle'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='rtm'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <blockers model='Broadwell-v4'>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <blockers model='Cascadelake-Server'>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512bw'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512cd'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512dq'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512f'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512vl'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512vnni'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='hle'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='pku'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='rtm'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <blockers model='Cascadelake-Server-noTSX'>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512bw'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512cd'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512dq'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512f'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512vl'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512vnni'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='ibrs-all'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='pku'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <blockers model='Cascadelake-Server-v1'>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512bw'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512cd'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512dq'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512f'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512vl'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512vnni'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='hle'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='pku'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='rtm'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <blockers model='Cascadelake-Server-v2'>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512bw'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512cd'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512dq'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512f'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512vl'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512vnni'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='hle'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='ibrs-all'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='pku'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='rtm'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <blockers model='Cascadelake-Server-v3'>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512bw'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512cd'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512dq'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512f'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512vl'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512vnni'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='ibrs-all'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='pku'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <blockers model='Cascadelake-Server-v4'>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512bw'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512cd'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512dq'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512f'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512vl'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512vnni'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='ibrs-all'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='pku'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <blockers model='Cascadelake-Server-v5'>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512bw'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512cd'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512dq'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512f'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512vl'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512vnni'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='ibrs-all'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='pku'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='xsaves'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <blockers model='ClearwaterForest'>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx-ifma'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx-ne-convert'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx-vnni'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx-vnni-int16'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx-vnni-int8'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='bhi-ctrl'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='bhi-no'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='bus-lock-detect'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='cldemote'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='cmpccxadd'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='ddpd-u'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='fbsdp-no'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='fsrm'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='fsrs'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='gfni'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='ibrs-all'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='intel-psfd'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='ipred-ctrl'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='lam'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='mcdt-no'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='movdir64b'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='movdiri'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='pbrsb-no'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='pku'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='prefetchiti'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='psdp-no'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='rrsba-ctrl'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='sbdr-ssdp-no'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='serialize'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='sha512'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='sm3'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='sm4'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='ss'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='vaes'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='vpclmulqdq'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='xsaves'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <blockers model='ClearwaterForest-v1'>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx-ifma'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx-ne-convert'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx-vnni'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx-vnni-int16'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx-vnni-int8'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='bhi-ctrl'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='bhi-no'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='bus-lock-detect'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='cldemote'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='cmpccxadd'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='ddpd-u'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='fbsdp-no'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='fsrm'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='fsrs'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='gfni'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='ibrs-all'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='intel-psfd'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='ipred-ctrl'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='lam'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='mcdt-no'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='movdir64b'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='movdiri'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='pbrsb-no'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='pku'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='prefetchiti'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='psdp-no'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='rrsba-ctrl'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='sbdr-ssdp-no'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='serialize'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='sha512'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='sm3'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='sm4'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='ss'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='vaes'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='vpclmulqdq'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='xsaves'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <blockers model='Cooperlake'>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512-bf16'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512bw'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512cd'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512dq'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512f'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512vl'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512vnni'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='hle'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='ibrs-all'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='pku'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='rtm'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='taa-no'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <blockers model='Cooperlake-v1'>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512-bf16'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512bw'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512cd'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512dq'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512f'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512vl'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512vnni'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='hle'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='ibrs-all'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='pku'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='rtm'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='taa-no'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <blockers model='Cooperlake-v2'>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512-bf16'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512bw'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512cd'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512dq'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512f'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512vl'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512vnni'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='hle'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='ibrs-all'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='pku'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='rtm'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='taa-no'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='xsaves'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <blockers model='Denverton'>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='mpx'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <blockers model='Denverton-v1'>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='mpx'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <blockers model='Denverton-v2'>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <blockers model='Denverton-v3'>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='xsaves'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <blockers model='Dhyana-v2'>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='xsaves'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <blockers model='EPYC-Genoa'>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='amd-psfd'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='auto-ibrs'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512-bf16'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512-vpopcntdq'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512bitalg'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512bw'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512cd'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512dq'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512f'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512ifma'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512vbmi'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512vbmi2'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512vl'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512vnni'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='fsrm'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='gfni'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='la57'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='no-nested-data-bp'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='null-sel-clr-base'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='pku'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='stibp-always-on'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='vaes'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='vpclmulqdq'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='xsaves'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <blockers model='EPYC-Genoa-v1'>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='amd-psfd'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='auto-ibrs'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512-bf16'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512-vpopcntdq'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512bitalg'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512bw'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512cd'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512dq'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512f'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512ifma'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512vbmi'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512vbmi2'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512vl'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512vnni'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='fsrm'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='gfni'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='la57'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='no-nested-data-bp'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='null-sel-clr-base'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='pku'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='stibp-always-on'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='vaes'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='vpclmulqdq'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='xsaves'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <blockers model='EPYC-Genoa-v2'>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='amd-psfd'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='auto-ibrs'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512-bf16'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512-vpopcntdq'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512bitalg'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512bw'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512cd'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512dq'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512f'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512ifma'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512vbmi'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512vbmi2'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512vl'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512vnni'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='fs-gs-base-ns'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='fsrm'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='gfni'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='la57'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='no-nested-data-bp'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='null-sel-clr-base'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='perfmon-v2'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='pku'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='stibp-always-on'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='vaes'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='vpclmulqdq'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='xsaves'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <blockers model='EPYC-Milan'>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='fsrm'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='pku'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='xsaves'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <blockers model='EPYC-Milan-v1'>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='fsrm'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='pku'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='xsaves'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <blockers model='EPYC-Milan-v2'>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='amd-psfd'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='fsrm'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='no-nested-data-bp'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='null-sel-clr-base'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='pku'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='stibp-always-on'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='vaes'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='vpclmulqdq'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='xsaves'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <blockers model='EPYC-Milan-v3'>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='amd-psfd'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='fsrm'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='no-nested-data-bp'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='null-sel-clr-base'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='pku'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='stibp-always-on'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='vaes'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='vpclmulqdq'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='xsaves'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <blockers model='EPYC-Rome'>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='xsaves'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <blockers model='EPYC-Rome-v1'>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='xsaves'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <blockers model='EPYC-Rome-v2'>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='xsaves'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <blockers model='EPYC-Rome-v3'>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='xsaves'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <blockers model='EPYC-Turin'>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='amd-psfd'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='auto-ibrs'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx-vnni'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512-bf16'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512-vp2intersect'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512-vpopcntdq'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512bitalg'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512bw'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512cd'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512dq'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512f'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512ifma'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512vbmi'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512vbmi2'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512vl'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512vnni'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='fs-gs-base-ns'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='fsrm'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='gfni'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='ibpb-brtype'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='la57'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='movdir64b'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='movdiri'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='no-nested-data-bp'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='null-sel-clr-base'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='perfmon-v2'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='pku'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='prefetchi'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='sbpb'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='srso-user-kernel-no'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='stibp-always-on'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='vaes'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='vpclmulqdq'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='xsaves'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <blockers model='EPYC-Turin-v1'>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='amd-psfd'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='auto-ibrs'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx-vnni'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512-bf16'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512-vp2intersect'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512-vpopcntdq'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512bitalg'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512bw'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512cd'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512dq'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512f'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512ifma'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512vbmi'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512vbmi2'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512vl'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512vnni'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='fs-gs-base-ns'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='fsrm'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='gfni'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='ibpb-brtype'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='la57'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='movdir64b'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='movdiri'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='no-nested-data-bp'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='null-sel-clr-base'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='perfmon-v2'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='pku'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='prefetchi'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='sbpb'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='srso-user-kernel-no'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='stibp-always-on'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='vaes'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='vpclmulqdq'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='xsaves'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <blockers model='EPYC-v3'>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='xsaves'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <blockers model='EPYC-v4'>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='xsaves'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <model usable='no' vendor='AMD'>EPYC-v5</model>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <blockers model='EPYC-v5'>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='xsaves'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <blockers model='GraniteRapids'>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='amx-bf16'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='amx-fp16'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='amx-int8'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='amx-tile'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx-vnni'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512-bf16'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512-fp16'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512-vpopcntdq'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512bitalg'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512bw'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512cd'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512dq'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512f'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512ifma'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512vbmi'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512vbmi2'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512vl'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512vnni'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='bus-lock-detect'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='fbsdp-no'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='fsrc'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='fsrm'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='fsrs'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='fzrm'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='gfni'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='hle'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='ibrs-all'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='la57'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='mcdt-no'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='pbrsb-no'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='pku'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='prefetchiti'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='psdp-no'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='rtm'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='sbdr-ssdp-no'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='serialize'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='taa-no'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='tsx-ldtrk'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='vaes'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='vpclmulqdq'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='xfd'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='xsaves'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <blockers model='GraniteRapids-v1'>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='amx-bf16'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='amx-fp16'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='amx-int8'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='amx-tile'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx-vnni'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512-bf16'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512-fp16'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512-vpopcntdq'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512bitalg'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512bw'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512cd'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512dq'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512f'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512ifma'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512vbmi'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512vbmi2'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512vl'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512vnni'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='bus-lock-detect'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='fbsdp-no'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='fsrc'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='fsrm'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='fsrs'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='fzrm'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='gfni'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='hle'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='ibrs-all'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='la57'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='mcdt-no'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='pbrsb-no'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='pku'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='prefetchiti'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='psdp-no'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='rtm'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='sbdr-ssdp-no'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='serialize'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='taa-no'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='tsx-ldtrk'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='vaes'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='vpclmulqdq'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='xfd'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='xsaves'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <blockers model='GraniteRapids-v2'>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='amx-bf16'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='amx-fp16'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='amx-int8'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='amx-tile'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx-vnni'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx10'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx10-128'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx10-256'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx10-512'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512-bf16'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512-fp16'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512-vpopcntdq'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512bitalg'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512bw'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512cd'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512dq'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512f'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512ifma'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512vbmi'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512vbmi2'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512vl'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512vnni'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='bus-lock-detect'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='cldemote'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='fbsdp-no'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='fsrc'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='fsrm'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='fsrs'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='fzrm'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='gfni'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='hle'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='ibrs-all'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='la57'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='mcdt-no'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='movdir64b'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='movdiri'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='pbrsb-no'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='pku'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='prefetchiti'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='psdp-no'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='rtm'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='sbdr-ssdp-no'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='serialize'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='ss'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='taa-no'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='tsx-ldtrk'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='vaes'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='vpclmulqdq'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='xfd'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='xsaves'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <blockers model='GraniteRapids-v3'>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='amx-bf16'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='amx-fp16'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='amx-int8'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='amx-tile'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx-vnni'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx10'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx10-128'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx10-256'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx10-512'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512-bf16'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512-fp16'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512-vpopcntdq'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512bitalg'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512bw'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512cd'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512dq'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512f'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512ifma'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512vbmi'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512vbmi2'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512vl'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512vnni'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='bus-lock-detect'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='cldemote'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='fbsdp-no'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='fsrc'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='fsrm'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='fsrs'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='fzrm'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='gfni'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='hle'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='ibrs-all'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='la57'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='mcdt-no'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='movdir64b'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='movdiri'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='pbrsb-no'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='pku'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='prefetchiti'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='psdp-no'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='rtm'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='sbdr-ssdp-no'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='serialize'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='ss'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='taa-no'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='tsx-ldtrk'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='vaes'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='vpclmulqdq'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='xfd'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='xsaves'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <blockers model='Haswell'>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='hle'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='rtm'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <blockers model='Haswell-IBRS'>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='hle'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='rtm'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <blockers model='Haswell-noTSX'>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <blockers model='Haswell-noTSX-IBRS'>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <blockers model='Haswell-v1'>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='hle'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='rtm'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <blockers model='Haswell-v2'>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <blockers model='Haswell-v3'>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='hle'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='rtm'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <blockers model='Haswell-v4'>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <blockers model='Icelake-Server'>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512-vpopcntdq'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512bitalg'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512bw'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512cd'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512dq'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512f'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512vbmi'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512vbmi2'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512vl'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512vnni'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='gfni'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='hle'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='la57'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='pku'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='rtm'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='vaes'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='vpclmulqdq'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <blockers model='Icelake-Server-noTSX'>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512-vpopcntdq'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512bitalg'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512bw'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512cd'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512dq'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512f'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512vbmi'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512vbmi2'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512vl'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512vnni'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='gfni'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='la57'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='pku'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='vaes'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='vpclmulqdq'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <blockers model='Icelake-Server-v1'>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512-vpopcntdq'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512bitalg'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512bw'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512cd'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512dq'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512f'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512vbmi'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512vbmi2'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512vl'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512vnni'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='gfni'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='hle'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='la57'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='pku'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='rtm'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='vaes'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='vpclmulqdq'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <blockers model='Icelake-Server-v2'>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512-vpopcntdq'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512bitalg'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512bw'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512cd'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512dq'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512f'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512vbmi'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512vbmi2'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512vl'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512vnni'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='gfni'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='la57'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='pku'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='vaes'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='vpclmulqdq'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <blockers model='Icelake-Server-v3'>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512-vpopcntdq'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512bitalg'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512bw'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512cd'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512dq'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512f'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512vbmi'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512vbmi2'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512vl'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512vnni'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='gfni'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='ibrs-all'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='la57'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='pku'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='taa-no'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='vaes'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='vpclmulqdq'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <blockers model='Icelake-Server-v4'>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512-vpopcntdq'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512bitalg'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512bw'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512cd'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512dq'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512f'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512ifma'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512vbmi'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512vbmi2'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512vl'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512vnni'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='fsrm'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='gfni'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='ibrs-all'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='la57'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='pku'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='taa-no'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='vaes'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='vpclmulqdq'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <blockers model='Icelake-Server-v5'>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512-vpopcntdq'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512bitalg'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512bw'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512cd'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512dq'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512f'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512ifma'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512vbmi'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512vbmi2'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512vl'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512vnni'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='fsrm'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='gfni'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='ibrs-all'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='la57'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='pku'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='taa-no'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='vaes'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='vpclmulqdq'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='xsaves'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <blockers model='Icelake-Server-v6'>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512-vpopcntdq'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512bitalg'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512bw'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512cd'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512dq'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512f'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512ifma'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512vbmi'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512vbmi2'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512vl'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512vnni'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='fsrm'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='gfni'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='ibrs-all'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='la57'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='pku'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='taa-no'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='vaes'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='vpclmulqdq'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='xsaves'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <blockers model='Icelake-Server-v7'>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512-vpopcntdq'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512bitalg'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512bw'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512cd'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512dq'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512f'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512ifma'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512vbmi'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512vbmi2'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512vl'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512vnni'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='fsrm'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='gfni'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='hle'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='ibrs-all'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='la57'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='pku'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='rtm'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='taa-no'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='vaes'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='vpclmulqdq'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='xsaves'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <blockers model='IvyBridge'>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <blockers model='IvyBridge-IBRS'>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <blockers model='IvyBridge-v1'>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <blockers model='IvyBridge-v2'>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      </blockers>
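The IvyBridge entries above show how libvirt reports the same CPU model twice: once under its traditional alias name carrying a canonical='...' attribute, and once under the versioned canonical name itself. Any post-processing of this report should collapse an alias and its canonical entry into one record. A minimal sketch, assuming the attributes seen in this log; the dicts below are hypothetical stand-ins mirroring the four IvyBridge lines, not parsed output:

    # Sketch: collapse alias entries (those carrying a canonical attribute)
    # onto their versioned canonical names so each model is counted once.
    models = [
        {'name': 'IvyBridge',      'canonical': 'IvyBridge-v1', 'usable': 'no'},
        {'name': 'IvyBridge-IBRS', 'canonical': 'IvyBridge-v2', 'usable': 'no'},
        {'name': 'IvyBridge-v1',   'canonical': None,           'usable': 'no'},
        {'name': 'IvyBridge-v2',   'canonical': None,           'usable': 'no'},
    ]

    by_canonical = {}
    for m in models:
        key = m['canonical'] or m['name']   # alias rows resolve to their canonical name
        by_canonical[key] = m['usable']

    print(sorted(by_canonical))             # ['IvyBridge-v1', 'IvyBridge-v2']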
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <blockers model='KnightsMill'>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512-4fmaps'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512-4vnniw'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512-vpopcntdq'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512cd'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512er'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512f'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512pf'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='ss'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <blockers model='KnightsMill-v1'>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512-4fmaps'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512-4vnniw'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512-vpopcntdq'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512cd'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512er'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512f'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512pf'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='ss'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      </blockers>
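As the KnightsMill entries above illustrate, every model reported with usable='no' is paired with a <blockers> element naming exactly the features the host lacks for that model; a model is usable only when that list would be empty. A minimal parsing sketch, assuming the <cpu> fragment logged here has been saved to a well-formed local XML file (the file name domcaps.xml is a hypothetical example; the tags and attributes match the log lines):

    import xml.etree.ElementTree as ET

    # Parse a saved copy of this capabilities fragment.
    root = ET.parse('domcaps.xml').getroot()

    # <blockers model='X'> lists the features the host lacks for model X.
    blocked = {b.get('model'): [f.get('name') for f in b.findall('feature')]
               for b in root.iter('blockers')}

    for m in root.iter('model'):
        if m.get('usable') == 'yes':
            print(f"{m.text}: usable")
        else:
            print(f"{m.text}: blocked by {', '.join(blocked.get(m.text, []))}")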
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
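The four Nehalem entries here, together with the SandyBridge entries further down, are Intel models this host reports as usable without blockers; a management layer would typically walk a preference list from newest to oldest and take the first usable hit. A sketch of that selection, where both the preference order and the sample usable set are assumptions for illustration, not values read from this host:

    # Sketch of first-usable-wins CPU model selection.
    PREFERRED = ['SandyBridge-IBRS', 'SandyBridge', 'Nehalem-IBRS', 'Nehalem']

    def pick_model(usable):
        for model in PREFERRED:
            if model in usable:
                return model
        raise RuntimeError('no preferred CPU model is usable on this host')

    print(pick_model({'Nehalem', 'Nehalem-IBRS', 'SandyBridge', 'SandyBridge-IBRS'}))
    # -> SandyBridge-IBRS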
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <blockers model='Opteron_G4'>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='fma4'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='xop'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <blockers model='Opteron_G4-v1'>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='fma4'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='xop'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <blockers model='Opteron_G5'>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='fma4'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='tbm'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='xop'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <blockers model='Opteron_G5-v1'>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='fma4'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='tbm'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='xop'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <blockers model='SapphireRapids'>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='amx-bf16'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='amx-int8'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='amx-tile'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx-vnni'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512-bf16'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512-fp16'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512-vpopcntdq'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512bitalg'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512bw'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512cd'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512dq'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512f'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512ifma'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512vbmi'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512vbmi2'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512vl'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512vnni'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='bus-lock-detect'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='fsrc'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='fsrm'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='fsrs'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='fzrm'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='gfni'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='hle'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='ibrs-all'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='la57'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='pku'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='rtm'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='serialize'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='taa-no'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='tsx-ldtrk'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='vaes'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='vpclmulqdq'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='xfd'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='xsaves'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <blockers model='SapphireRapids-v1'>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='amx-bf16'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='amx-int8'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='amx-tile'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx-vnni'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512-bf16'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512-fp16'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512-vpopcntdq'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512bitalg'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512bw'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512cd'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512dq'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512f'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512ifma'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512vbmi'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512vbmi2'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512vl'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512vnni'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='bus-lock-detect'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='fsrc'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='fsrm'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='fsrs'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='fzrm'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='gfni'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='hle'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='ibrs-all'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='la57'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='pku'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='rtm'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='serialize'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='taa-no'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='tsx-ldtrk'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='vaes'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='vpclmulqdq'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='xfd'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='xsaves'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <blockers model='SapphireRapids-v2'>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='amx-bf16'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='amx-int8'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='amx-tile'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx-vnni'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512-bf16'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512-fp16'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512-vpopcntdq'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512bitalg'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512bw'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512cd'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512dq'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512f'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512ifma'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512vbmi'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512vbmi2'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512vl'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512vnni'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='bus-lock-detect'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='fbsdp-no'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='fsrc'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='fsrm'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='fsrs'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='fzrm'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='gfni'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='hle'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='ibrs-all'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='la57'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='pku'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='psdp-no'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='rtm'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='sbdr-ssdp-no'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='serialize'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='taa-no'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='tsx-ldtrk'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='vaes'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='vpclmulqdq'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='xfd'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='xsaves'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <blockers model='SapphireRapids-v3'>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='amx-bf16'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='amx-int8'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='amx-tile'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx-vnni'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512-bf16'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512-fp16'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512-vpopcntdq'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512bitalg'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512bw'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512cd'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512dq'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512f'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512ifma'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512vbmi'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512vbmi2'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512vl'/>
Mar  1 05:00:24 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512vnni'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='bus-lock-detect'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='cldemote'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='fbsdp-no'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='fsrc'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='fsrm'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='fsrs'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='fzrm'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='gfni'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='hle'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='ibrs-all'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='la57'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='movdir64b'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='movdiri'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='pku'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='psdp-no'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='rtm'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='sbdr-ssdp-no'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='serialize'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='ss'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='taa-no'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='tsx-ldtrk'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='vaes'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='vpclmulqdq'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='xfd'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='xsaves'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <blockers model='SapphireRapids-v4'>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='amx-bf16'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='amx-int8'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='amx-tile'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx-vnni'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512-bf16'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512-fp16'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512-vpopcntdq'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512bitalg'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512bw'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512cd'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512dq'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512f'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512ifma'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512vbmi'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512vbmi2'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512vl'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512vnni'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='bus-lock-detect'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='cldemote'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='fbsdp-no'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='fsrc'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='fsrm'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='fsrs'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='fzrm'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='gfni'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='hle'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='ibrs-all'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='la57'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='movdir64b'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='movdiri'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='pku'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='psdp-no'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='rtm'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='sbdr-ssdp-no'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='serialize'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='ss'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='taa-no'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='tsx-ldtrk'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='vaes'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='vpclmulqdq'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='xfd'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='xsaves'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <blockers model='SierraForest'>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx-ifma'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx-ne-convert'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx-vnni'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx-vnni-int8'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='bus-lock-detect'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='cmpccxadd'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='fbsdp-no'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='fsrm'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='fsrs'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='gfni'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='ibrs-all'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='mcdt-no'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='pbrsb-no'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='pku'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='psdp-no'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='sbdr-ssdp-no'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='serialize'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='vaes'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='vpclmulqdq'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='xsaves'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <blockers model='SierraForest-v1'>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx-ifma'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx-ne-convert'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx-vnni'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx-vnni-int8'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='bus-lock-detect'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='cmpccxadd'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='fbsdp-no'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='fsrm'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='fsrs'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='gfni'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='ibrs-all'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='mcdt-no'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='pbrsb-no'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='pku'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='psdp-no'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='sbdr-ssdp-no'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='serialize'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='vaes'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='vpclmulqdq'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='xsaves'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel'>SierraForest-v2</model>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <blockers model='SierraForest-v2'>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx-ifma'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx-ne-convert'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx-vnni'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx-vnni-int8'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='bhi-ctrl'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='bus-lock-detect'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='cldemote'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='cmpccxadd'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='fbsdp-no'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='fsrm'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='fsrs'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='gfni'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='ibrs-all'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='intel-psfd'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='ipred-ctrl'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='lam'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='mcdt-no'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='movdir64b'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='movdiri'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='pbrsb-no'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='pku'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='psdp-no'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='rrsba-ctrl'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='sbdr-ssdp-no'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='serialize'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='ss'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='vaes'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='vpclmulqdq'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='xsaves'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel'>SierraForest-v3</model>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <blockers model='SierraForest-v3'>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx-ifma'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx-ne-convert'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx-vnni'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx-vnni-int8'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='bhi-ctrl'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='bus-lock-detect'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='cldemote'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='cmpccxadd'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='fbsdp-no'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='fsrm'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='fsrs'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='gfni'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='ibrs-all'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='intel-psfd'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='ipred-ctrl'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='lam'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='mcdt-no'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='movdir64b'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='movdiri'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='pbrsb-no'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='pku'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='psdp-no'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='rrsba-ctrl'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='sbdr-ssdp-no'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='serialize'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='ss'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='vaes'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='vpclmulqdq'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='xsaves'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <blockers model='Skylake-Client'>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='hle'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='rtm'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <blockers model='Skylake-Client-IBRS'>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='hle'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='rtm'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <blockers model='Skylake-Client-v1'>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='hle'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='rtm'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <blockers model='Skylake-Client-v2'>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='hle'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='rtm'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <blockers model='Skylake-Client-v3'>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <blockers model='Skylake-Client-v4'>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='xsaves'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <blockers model='Skylake-Server'>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512bw'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512cd'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512dq'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512f'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512vl'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='hle'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='pku'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='rtm'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <blockers model='Skylake-Server-IBRS'>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512bw'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512cd'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512dq'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512f'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512vl'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='hle'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='pku'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='rtm'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512bw'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512cd'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512dq'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512f'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512vl'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='pku'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <blockers model='Skylake-Server-v1'>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512bw'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512cd'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512dq'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512f'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512vl'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='hle'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='pku'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='rtm'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <blockers model='Skylake-Server-v2'>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512bw'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512cd'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512dq'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512f'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512vl'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='hle'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='pku'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='rtm'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <blockers model='Skylake-Server-v3'>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512bw'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512cd'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512dq'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512f'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512vl'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='pku'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <blockers model='Skylake-Server-v4'>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512bw'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512cd'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512dq'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512f'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512vl'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='pku'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <blockers model='Skylake-Server-v5'>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512bw'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512cd'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512dq'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512f'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='avx512vl'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='invpcid'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='pcid'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='pku'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='xsaves'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <blockers model='Snowridge'>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='cldemote'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='core-capability'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='gfni'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='movdir64b'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='movdiri'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='mpx'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='split-lock-detect'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <blockers model='Snowridge-v1'>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='cldemote'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='core-capability'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='gfni'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='movdir64b'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='movdiri'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='mpx'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='split-lock-detect'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <blockers model='Snowridge-v2'>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='cldemote'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='core-capability'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='gfni'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='movdir64b'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='movdiri'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='split-lock-detect'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <blockers model='Snowridge-v3'>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='cldemote'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='core-capability'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='gfni'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='movdir64b'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='movdiri'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='split-lock-detect'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='xsaves'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <blockers model='Snowridge-v4'>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='cldemote'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='erms'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='gfni'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='movdir64b'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='movdiri'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='xsaves'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <blockers model='athlon'>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='3dnow'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='3dnowext'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <blockers model='athlon-v1'>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='3dnow'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='3dnowext'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <blockers model='core2duo'>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='ss'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <blockers model='core2duo-v1'>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='ss'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <blockers model='coreduo'>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='ss'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <blockers model='coreduo-v1'>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='ss'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <blockers model='n270'>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='ss'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <blockers model='n270-v1'>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='ss'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <blockers model='phenom'>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='3dnow'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='3dnowext'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <blockers model='phenom-v1'>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='3dnow'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <feature name='3dnowext'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      </blockers>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:    </mode>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:  </cpu>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:  <memoryBacking supported='yes'>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:    <enum name='sourceType'>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <value>file</value>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <value>anonymous</value>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <value>memfd</value>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:    </enum>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:  </memoryBacking>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:  <devices>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:    <disk supported='yes'>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <enum name='diskDevice'>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <value>disk</value>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <value>cdrom</value>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <value>floppy</value>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <value>lun</value>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      </enum>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <enum name='bus'>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <value>fdc</value>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <value>scsi</value>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <value>virtio</value>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <value>usb</value>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <value>sata</value>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      </enum>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <enum name='model'>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <value>virtio</value>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <value>virtio-transitional</value>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <value>virtio-non-transitional</value>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      </enum>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:    </disk>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:    <graphics supported='yes'>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <enum name='type'>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <value>vnc</value>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <value>egl-headless</value>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <value>dbus</value>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      </enum>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:    </graphics>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:    <video supported='yes'>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <enum name='modelType'>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <value>vga</value>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <value>cirrus</value>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <value>virtio</value>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <value>none</value>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <value>bochs</value>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <value>ramfb</value>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      </enum>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:    </video>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:    <hostdev supported='yes'>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <enum name='mode'>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <value>subsystem</value>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      </enum>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <enum name='startupPolicy'>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <value>default</value>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <value>mandatory</value>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <value>requisite</value>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <value>optional</value>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      </enum>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <enum name='subsysType'>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <value>usb</value>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <value>pci</value>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <value>scsi</value>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      </enum>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <enum name='capsType'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <enum name='pciBackend'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:    </hostdev>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:    <rng supported='yes'>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <enum name='model'>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <value>virtio</value>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <value>virtio-transitional</value>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <value>virtio-non-transitional</value>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      </enum>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <enum name='backendModel'>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <value>random</value>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <value>egd</value>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <value>builtin</value>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      </enum>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:    </rng>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:    <filesystem supported='yes'>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <enum name='driverType'>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <value>path</value>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <value>handle</value>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <value>virtiofs</value>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      </enum>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:    </filesystem>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:    <tpm supported='yes'>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <enum name='model'>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <value>tpm-tis</value>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <value>tpm-crb</value>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      </enum>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <enum name='backendModel'>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <value>emulator</value>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <value>external</value>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      </enum>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <enum name='backendVersion'>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <value>2.0</value>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      </enum>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:    </tpm>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:    <redirdev supported='yes'>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <enum name='bus'>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <value>usb</value>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      </enum>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:    </redirdev>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:    <channel supported='yes'>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <enum name='type'>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <value>pty</value>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <value>unix</value>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      </enum>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:    </channel>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:    <crypto supported='yes'>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <enum name='model'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <enum name='type'>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <value>qemu</value>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      </enum>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <enum name='backendModel'>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <value>builtin</value>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      </enum>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:    </crypto>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:    <interface supported='yes'>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <enum name='backendType'>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <value>default</value>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <value>passt</value>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      </enum>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:    </interface>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:    <panic supported='yes'>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <enum name='model'>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <value>isa</value>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <value>hyperv</value>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      </enum>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:    </panic>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:    <console supported='yes'>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <enum name='type'>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <value>null</value>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <value>vc</value>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <value>pty</value>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <value>dev</value>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <value>file</value>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <value>pipe</value>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <value>stdio</value>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <value>udp</value>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <value>tcp</value>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <value>unix</value>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <value>qemu-vdagent</value>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <value>dbus</value>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      </enum>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:    </console>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:  </devices>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:  <features>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:    <gic supported='no'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:    <vmcoreinfo supported='yes'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:    <genid supported='yes'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:    <backingStoreInput supported='yes'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:    <backup supported='yes'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:    <async-teardown supported='yes'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:    <s390-pv supported='no'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:    <ps2 supported='yes'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:    <tdx supported='no'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:    <sev supported='no'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:    <sgx supported='no'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:    <hyperv supported='yes'>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <enum name='features'>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <value>relaxed</value>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <value>vapic</value>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <value>spinlocks</value>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <value>vpindex</value>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <value>runtime</value>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <value>synic</value>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <value>stimer</value>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <value>reset</value>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <value>vendor_id</value>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <value>frequencies</value>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <value>reenlightenment</value>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <value>tlbflush</value>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <value>ipi</value>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <value>avic</value>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <value>emsr_bitmap</value>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <value>xmm_input</value>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      </enum>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      <defaults>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <spinlocks>4095</spinlocks>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <stimer_direct>on</stimer_direct>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <tlbflush_direct>on</tlbflush_direct>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <tlbflush_extended>on</tlbflush_extended>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:        <vendor_id>Linux KVM Hv</vendor_id>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:      </defaults>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:    </hyperv>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:    <launchSecurity supported='no'/>
Mar  1 05:00:24 np0005634532 nova_compute[255017]:  </features>
Mar  1 05:00:24 np0005634532 nova_compute[255017]: </domainCapabilities>
Mar  1 05:00:24 np0005634532 nova_compute[255017]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m
Mar  1 05:00:24 np0005634532 nova_compute[255017]: 2026-03-01 10:00:23.991 255021 DEBUG nova.virt.libvirt.host [None req-d7d4f2c5-95eb-450c-a544-456b3b4ebee9 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782#033[00m
Mar  1 05:00:24 np0005634532 nova_compute[255017]: 2026-03-01 10:00:23.992 255021 DEBUG nova.virt.libvirt.host [None req-d7d4f2c5-95eb-450c-a544-456b3b4ebee9 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782#033[00m
Mar  1 05:00:24 np0005634532 nova_compute[255017]: 2026-03-01 10:00:23.998 255021 INFO nova.virt.libvirt.host [None req-d7d4f2c5-95eb-450c-a544-456b3b4ebee9 - - - - - -] Secure Boot support detected#033[00m
Mar  1 05:00:24 np0005634532 nova_compute[255017]: 2026-03-01 10:00:24.000 255021 INFO nova.virt.libvirt.driver [None req-d7d4f2c5-95eb-450c-a544-456b3b4ebee9 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.#033[00m
Mar  1 05:00:24 np0005634532 nova_compute[255017]: 2026-03-01 10:00:24.012 255021 DEBUG nova.virt.libvirt.driver [None req-d7d4f2c5-95eb-450c-a544-456b3b4ebee9 - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097#033[00m
Mar  1 05:00:24 np0005634532 nova_compute[255017]: 2026-03-01 10:00:24.047 255021 INFO nova.virt.node [None req-d7d4f2c5-95eb-450c-a544-456b3b4ebee9 - - - - - -] Determined node identity 018d246d-1e01-4168-9128-598c5501111b from /var/lib/nova/compute_id#033[00m
Mar  1 05:00:24 np0005634532 nova_compute[255017]: 2026-03-01 10:00:24.064 255021 WARNING nova.compute.manager [None req-d7d4f2c5-95eb-450c-a544-456b3b4ebee9 - - - - - -] Compute nodes ['018d246d-1e01-4168-9128-598c5501111b'] for host compute-0.ctlplane.example.com were not found in the database. If this is the first time this service is starting on this host, then you can ignore this warning.#033[00m
Mar  1 05:00:24 np0005634532 nova_compute[255017]: 2026-03-01 10:00:24.095 255021 INFO nova.compute.manager [None req-d7d4f2c5-95eb-450c-a544-456b3b4ebee9 - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host#033[00m
Mar  1 05:00:24 np0005634532 nova_compute[255017]: 2026-03-01 10:00:24.132 255021 WARNING nova.compute.manager [None req-d7d4f2c5-95eb-450c-a544-456b3b4ebee9 - - - - - -] No compute node record found for host compute-0.ctlplane.example.com. If this is the first time this service is starting on this host, then you can ignore this warning.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.#033[00m
Mar  1 05:00:24 np0005634532 nova_compute[255017]: 2026-03-01 10:00:24.132 255021 DEBUG oslo_concurrency.lockutils [None req-d7d4f2c5-95eb-450c-a544-456b3b4ebee9 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Mar  1 05:00:24 np0005634532 nova_compute[255017]: 2026-03-01 10:00:24.132 255021 DEBUG oslo_concurrency.lockutils [None req-d7d4f2c5-95eb-450c-a544-456b3b4ebee9 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Mar  1 05:00:24 np0005634532 nova_compute[255017]: 2026-03-01 10:00:24.133 255021 DEBUG oslo_concurrency.lockutils [None req-d7d4f2c5-95eb-450c-a544-456b3b4ebee9 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Mar  1 05:00:24 np0005634532 nova_compute[255017]: 2026-03-01 10:00:24.133 255021 DEBUG nova.compute.resource_tracker [None req-d7d4f2c5-95eb-450c-a544-456b3b4ebee9 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Mar  1 05:00:24 np0005634532 nova_compute[255017]: 2026-03-01 10:00:24.133 255021 DEBUG oslo_concurrency.processutils [None req-d7d4f2c5-95eb-450c-a544-456b3b4ebee9 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Mar  1 05:00:24 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:00:24 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:00:24 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:00:24.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:00:24 np0005634532 python3.9[256433]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Mar  1 05:00:24 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:00:24 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8640040e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:00:24 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Mar  1 05:00:24 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3407671814' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Mar  1 05:00:24 np0005634532 nova_compute[255017]: 2026-03-01 10:00:24.581 255021 DEBUG oslo_concurrency.processutils [None req-d7d4f2c5-95eb-450c-a544-456b3b4ebee9 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Mar  1 05:00:24 np0005634532 systemd[1]: Starting libvirt nodedev daemon...
Mar  1 05:00:24 np0005634532 systemd[1]: Started libvirt nodedev daemon.
Mar  1 05:00:24 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v530: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 B/s wr, 0 op/s
Mar  1 05:00:24 np0005634532 nova_compute[255017]: 2026-03-01 10:00:24.816 255021 WARNING nova.virt.libvirt.driver [None req-d7d4f2c5-95eb-450c-a544-456b3b4ebee9 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Mar  1 05:00:24 np0005634532 nova_compute[255017]: 2026-03-01 10:00:24.817 255021 DEBUG nova.compute.resource_tracker [None req-d7d4f2c5-95eb-450c-a544-456b3b4ebee9 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4887MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Mar  1 05:00:24 np0005634532 nova_compute[255017]: 2026-03-01 10:00:24.818 255021 DEBUG oslo_concurrency.lockutils [None req-d7d4f2c5-95eb-450c-a544-456b3b4ebee9 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Mar  1 05:00:24 np0005634532 nova_compute[255017]: 2026-03-01 10:00:24.818 255021 DEBUG oslo_concurrency.lockutils [None req-d7d4f2c5-95eb-450c-a544-456b3b4ebee9 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Mar  1 05:00:24 np0005634532 nova_compute[255017]: 2026-03-01 10:00:24.855 255021 WARNING nova.compute.resource_tracker [None req-d7d4f2c5-95eb-450c-a544-456b3b4ebee9 - - - - - -] No compute node record for compute-0.ctlplane.example.com:018d246d-1e01-4168-9128-598c5501111b: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host 018d246d-1e01-4168-9128-598c5501111b could not be found.#033[00m
Mar  1 05:00:24 np0005634532 nova_compute[255017]: 2026-03-01 10:00:24.874 255021 INFO nova.compute.resource_tracker [None req-d7d4f2c5-95eb-450c-a544-456b3b4ebee9 - - - - - -] Compute node record created for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com with uuid: 018d246d-1e01-4168-9128-598c5501111b#033[00m
Mar  1 05:00:24 np0005634532 nova_compute[255017]: 2026-03-01 10:00:24.937 255021 DEBUG nova.compute.resource_tracker [None req-d7d4f2c5-95eb-450c-a544-456b3b4ebee9 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Mar  1 05:00:24 np0005634532 nova_compute[255017]: 2026-03-01 10:00:24.938 255021 DEBUG nova.compute.resource_tracker [None req-d7d4f2c5-95eb-450c-a544-456b3b4ebee9 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Mar  1 05:00:25 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:00:25 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:00:25 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:00:25.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:00:25 np0005634532 python3.9[256610]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service.requires follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Mar  1 05:00:25 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:00:25 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff884008f20 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:00:25 np0005634532 nova_compute[255017]: 2026-03-01 10:00:25.810 255021 INFO nova.scheduler.client.report [None req-d7d4f2c5-95eb-450c-a544-456b3b4ebee9 - - - - - -] [req-2d6244b3-6a8d-4b1d-acf1-9fa6f31ff320] Created resource provider record via placement API for resource provider with UUID 018d246d-1e01-4168-9128-598c5501111b and name compute-0.ctlplane.example.com.#033[00m
Mar  1 05:00:25 np0005634532 nova_compute[255017]: 2026-03-01 10:00:25.861 255021 DEBUG oslo_concurrency.processutils [None req-d7d4f2c5-95eb-450c-a544-456b3b4ebee9 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Mar  1 05:00:25 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:00:25 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff878003620 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:00:26 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Mar  1 05:00:26 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/943563312' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Mar  1 05:00:26 np0005634532 python3.9[256786]: ansible-containers.podman.podman_container Invoked with name=nova_nvme_cleaner state=absent executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Mar  1 05:00:26 np0005634532 nova_compute[255017]: 2026-03-01 10:00:26.303 255021 DEBUG oslo_concurrency.processutils [None req-d7d4f2c5-95eb-450c-a544-456b3b4ebee9 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Mar  1 05:00:26 np0005634532 nova_compute[255017]: 2026-03-01 10:00:26.310 255021 DEBUG nova.virt.libvirt.host [None req-d7d4f2c5-95eb-450c-a544-456b3b4ebee9 - - - - - -] /sys/module/kvm_amd/parameters/sev contains [N
Mar  1 05:00:26 np0005634532 rsyslogd[1019]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Mar  1 05:00:26 np0005634532 nova_compute[255017]: ] _kernel_supports_amd_sev /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1803#033[00m
Mar  1 05:00:26 np0005634532 nova_compute[255017]: 2026-03-01 10:00:26.310 255021 INFO nova.virt.libvirt.host [None req-d7d4f2c5-95eb-450c-a544-456b3b4ebee9 - - - - - -] kernel doesn't support AMD SEV#033[00m
Mar  1 05:00:26 np0005634532 nova_compute[255017]: 2026-03-01 10:00:26.311 255021 DEBUG nova.compute.provider_tree [None req-d7d4f2c5-95eb-450c-a544-456b3b4ebee9 - - - - - -] Updating inventory in ProviderTree for provider 018d246d-1e01-4168-9128-598c5501111b with inventory: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Mar  1 05:00:26 np0005634532 nova_compute[255017]: 2026-03-01 10:00:26.311 255021 DEBUG nova.virt.libvirt.driver [None req-d7d4f2c5-95eb-450c-a544-456b3b4ebee9 - - - - - -] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Mar  1 05:00:26 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:00:26 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000024s ======
Mar  1 05:00:26 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:00:26.318 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Mar  1 05:00:26 np0005634532 nova_compute[255017]: 2026-03-01 10:00:26.389 255021 DEBUG nova.scheduler.client.report [None req-d7d4f2c5-95eb-450c-a544-456b3b4ebee9 - - - - - -] Updated inventory for provider 018d246d-1e01-4168-9128-598c5501111b with generation 0 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957#033[00m
Mar  1 05:00:26 np0005634532 nova_compute[255017]: 2026-03-01 10:00:26.389 255021 DEBUG nova.compute.provider_tree [None req-d7d4f2c5-95eb-450c-a544-456b3b4ebee9 - - - - - -] Updating resource provider 018d246d-1e01-4168-9128-598c5501111b generation from 0 to 1 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164#033[00m
Mar  1 05:00:26 np0005634532 nova_compute[255017]: 2026-03-01 10:00:26.390 255021 DEBUG nova.compute.provider_tree [None req-d7d4f2c5-95eb-450c-a544-456b3b4ebee9 - - - - - -] Updating inventory in ProviderTree for provider 018d246d-1e01-4168-9128-598c5501111b with inventory: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Mar  1 05:00:26 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:00:26 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff868002810 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:00:26 np0005634532 nova_compute[255017]: 2026-03-01 10:00:26.483 255021 DEBUG nova.compute.provider_tree [None req-d7d4f2c5-95eb-450c-a544-456b3b4ebee9 - - - - - -] Updating resource provider 018d246d-1e01-4168-9128-598c5501111b generation from 1 to 2 during operation: update_traits _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Mar  1 05:00:26 np0005634532 nova_compute[255017]: 2026-03-01 10:00:26.505 255021 DEBUG nova.compute.resource_tracker [None req-d7d4f2c5-95eb-450c-a544-456b3b4ebee9 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Mar  1 05:00:26 np0005634532 nova_compute[255017]: 2026-03-01 10:00:26.506 255021 DEBUG oslo_concurrency.lockutils [None req-d7d4f2c5-95eb-450c-a544-456b3b4ebee9 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.688s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Mar  1 05:00:26 np0005634532 nova_compute[255017]: 2026-03-01 10:00:26.506 255021 DEBUG nova.service [None req-d7d4f2c5-95eb-450c-a544-456b3b4ebee9 - - - - - -] Creating RPC server for service compute start /usr/lib/python3.9/site-packages/nova/service.py:182
Mar  1 05:00:26 np0005634532 nova_compute[255017]: 2026-03-01 10:00:26.598 255021 DEBUG nova.service [None req-d7d4f2c5-95eb-450c-a544-456b3b4ebee9 - - - - - -] Join ServiceGroup membership for this service compute start /usr/lib/python3.9/site-packages/nova/service.py:199
Mar  1 05:00:26 np0005634532 nova_compute[255017]: 2026-03-01 10:00:26.599 255021 DEBUG nova.servicegroup.drivers.db [None req-d7d4f2c5-95eb-450c-a544-456b3b4ebee9 - - - - - -] DB_Driver: join new ServiceGroup member compute-0.ctlplane.example.com to the compute group, service = <Service: host=compute-0.ctlplane.example.com, binary=nova-compute, manager_class_name=nova.compute.manager.ComputeManager> join /usr/lib/python3.9/site-packages/nova/servicegroup/drivers/db.py:44
Mar  1 05:00:26 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:00:26 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v531: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Mar  1 05:00:27 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: ::ffff:192.168.122.100 - - [01/Mar/2026:10:00:27] "GET /metrics HTTP/1.1" 200 48344 "" "Prometheus/2.51.0"
Mar  1 05:00:27 np0005634532 ceph-mgr[76134]: [prometheus INFO cherrypy.access.140604152982112] ::ffff:192.168.122.100 - - [01/Mar/2026:10:00:27] "GET /metrics HTTP/1.1" 200 48344 "" "Prometheus/2.51.0"
Mar  1 05:00:27 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:00:27.071Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Mar  1 05:00:27 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:00:27.072Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
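Both ceph-dashboard webhook targets fail with i/o timeout / context deadline exceeded, which points at an unreachable or filtered port rather than a refused connection. A quick hypothetical probe to tell the two apart (the address and port come from the error text above; this is not part of the deployment):

    import socket

    # Distinguish "i/o timeout" (filtered port or dead host: no answer at all)
    # from "connection refused" (host up, nothing listening on 8443).
    # compute-2's address would be probed the same way.
    for host in ('192.168.122.101',):
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.settimeout(3)
        try:
            s.connect((host, 8443))
            print(host, 'port 8443 open')
        except socket.timeout:
            print(host, 'timed out (filtered or host down)')
        except ConnectionRefusedError:
            print(host, 'refused (port reachable, service not listening)')
        except OSError as e:
            print(host, e)
        finally:
            s.close()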
Mar  1 05:00:27 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:00:27 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:00:27 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:00:27.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
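The anonymous "HEAD / HTTP/1.0" requests hitting radosgw from 192.168.122.100 and 192.168.122.102 every second or two are almost certainly load-balancer health checks. When eyeballing them gets tedious, the beast access lines parse with an ad-hoc regex (matched against the format visible above; there is no official schema):

    import re

    BEAST = re.compile(
        r'beast: \S+: (?P<ip>\S+) - (?P<user>\S+) \[(?P<ts>[^\]]+)\] '
        r'"(?P<req>[^"]+)" (?P<status>\d+) (?P<bytes>\d+) '
        r'.*latency=(?P<latency>[\d.]+)s'
    )
    line = ('beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous '
            '[01/Mar/2026:10:00:27.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - '
            'latency=0.000000000s')
    m = BEAST.search(line)
    print(m.group('ip'), m.group('req'), m.group('status'), m.group('latency'))
    # 192.168.122.100 HEAD / HTTP/1.0 200 0.000000000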
Mar  1 05:00:27 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:00:27 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8640040e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:00:27 np0005634532 podman[256834]: 2026-03-01 10:00:27.387966421 +0000 UTC m=+0.071227081 container health_status 1df4dec302d8b40bce93714986cfc892519e8a31436399020d0034ae17ac64d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a5baae9e21dffd42c66f546b0b62f95c6af9f409d05e01e7483de42d5dd2a382, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '08b9467e1b7e95537191fb2fa6825d176e65d7d6be048e9a0925db064969bbde-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a5baae9e21dffd42c66f546b0b62f95c6af9f409d05e01e7483de42d5dd2a382', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.43.0, org.label-schema.license=GPLv2)
Mar  1 05:00:27 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:00:27 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff884008f20 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:00:28 np0005634532 python3.9[256983]: ansible-ansible.builtin.systemd Invoked with name=edpm_nova_compute.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Mar  1 05:00:28 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:00:28 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:00:28 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:00:28.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:00:28 np0005634532 systemd[1]: Stopping nova_compute container...
Mar  1 05:00:28 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:00:28 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff878003620 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:00:28 np0005634532 nova_compute[255017]: 2026-03-01 10:00:28.665 255021 WARNING amqp [-] Received method (60, 30) during closing channel 1. This method will be ignored
Mar  1 05:00:28 np0005634532 nova_compute[255017]: 2026-03-01 10:00:28.668 255021 DEBUG oslo_concurrency.lockutils [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Mar  1 05:00:28 np0005634532 nova_compute[255017]: 2026-03-01 10:00:28.668 255021 DEBUG oslo_concurrency.lockutils [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Mar  1 05:00:28 np0005634532 nova_compute[255017]: 2026-03-01 10:00:28.668 255021 DEBUG oslo_concurrency.lockutils [None req-783190db-a9a9-4c08-8f22-184cacd2739e - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Mar  1 05:00:28 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v532: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Mar  1 05:00:29 np0005634532 virtqemud[256058]: libvirt version: 11.10.0, package: 4.el9 (builder@centos.org, 2026-01-29-15:25:17, )
Mar  1 05:00:29 np0005634532 virtqemud[256058]: hostname: compute-0
Mar  1 05:00:29 np0005634532 virtqemud[256058]: End of file while reading data: Input/output error
Mar  1 05:00:29 np0005634532 systemd[1]: libpod-f62d2091cb66c1b50b916da035b36356165fa487135d71e10f5fe8390617ca1b.scope: Deactivated successfully.
Mar  1 05:00:29 np0005634532 systemd[1]: libpod-f62d2091cb66c1b50b916da035b36356165fa487135d71e10f5fe8390617ca1b.scope: Consumed 4.176s CPU time.
Mar  1 05:00:29 np0005634532 podman[256988]: 2026-03-01 10:00:29.160350637 +0000 UTC m=+0.808551536 container died f62d2091cb66c1b50b916da035b36356165fa487135d71e10f5fe8390617ca1b (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:c37f1275cfc51fc2227bf56a2be6262158f62fb30ca651f154bed25112d6d355, name=nova_compute, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '08b9467e1b7e95537191fb2fa6825d176e65d7d6be048e9a0925db064969bbde-1a042de936b2d110ee8d2a8cbebfb950a6f3e21b0a41acc6ce59d0ee581b683e-08e38e514bc79f65b494f100befbfbcc9d744711daee2ed7859d469aa8abeea3'}, 'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:c37f1275cfc51fc2227bf56a2be6262158f62fb30ca651f154bed25112d6d355', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'nova', 'volumes': ['/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/nova:/var/lib/kolla/config_files/src:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/src/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, container_name=nova_compute, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=nova_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Mar  1 05:00:29 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:00:29 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:00:29 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:00:29.179 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:00:29 np0005634532 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f62d2091cb66c1b50b916da035b36356165fa487135d71e10f5fe8390617ca1b-userdata-shm.mount: Deactivated successfully.
Mar  1 05:00:29 np0005634532 systemd[1]: var-lib-containers-storage-overlay-551bcfa6bc51857e60a041a254542f85238a033f6f416edd18eee02c5a69fed1-merged.mount: Deactivated successfully.
Mar  1 05:00:29 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:00:29 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff868002810 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:00:29 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:00:29 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8640040e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:00:30 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:00:30 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 05:00:30 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:00:30.324 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 05:00:30 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:00:30 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff884008f20 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:00:30 np0005634532 podman[256988]: 2026-03-01 10:00:30.657329422 +0000 UTC m=+2.305530321 container cleanup f62d2091cb66c1b50b916da035b36356165fa487135d71e10f5fe8390617ca1b (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:c37f1275cfc51fc2227bf56a2be6262158f62fb30ca651f154bed25112d6d355, name=nova_compute, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '08b9467e1b7e95537191fb2fa6825d176e65d7d6be048e9a0925db064969bbde-1a042de936b2d110ee8d2a8cbebfb950a6f3e21b0a41acc6ce59d0ee581b683e-08e38e514bc79f65b494f100befbfbcc9d744711daee2ed7859d469aa8abeea3'}, 'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:c37f1275cfc51fc2227bf56a2be6262158f62fb30ca651f154bed25112d6d355', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'nova', 'volumes': ['/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/nova:/var/lib/kolla/config_files/src:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/src/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=nova_compute, config_id=nova_compute, managed_by=edpm_ansible)
Mar  1 05:00:30 np0005634532 podman[256988]: nova_compute
Mar  1 05:00:30 np0005634532 podman[257018]: nova_compute
Mar  1 05:00:30 np0005634532 systemd[1]: edpm_nova_compute.service: Deactivated successfully.
Mar  1 05:00:30 np0005634532 systemd[1]: Stopped nova_compute container.
Mar  1 05:00:30 np0005634532 systemd[1]: Starting nova_compute container...
Mar  1 05:00:30 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v533: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Mar  1 05:00:30 np0005634532 systemd[1]: Started libcrun container.
Mar  1 05:00:30 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/551bcfa6bc51857e60a041a254542f85238a033f6f416edd18eee02c5a69fed1/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Mar  1 05:00:30 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/551bcfa6bc51857e60a041a254542f85238a033f6f416edd18eee02c5a69fed1/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Mar  1 05:00:30 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/551bcfa6bc51857e60a041a254542f85238a033f6f416edd18eee02c5a69fed1/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Mar  1 05:00:30 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/551bcfa6bc51857e60a041a254542f85238a033f6f416edd18eee02c5a69fed1/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Mar  1 05:00:30 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/551bcfa6bc51857e60a041a254542f85238a033f6f416edd18eee02c5a69fed1/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
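These xfs warnings fire once per bind-mounted path as the nova_compute container comes back up: the filesystem was most likely created without the bigtime feature, so its inode timestamps are 32-bit and stop at 0x7fffffff seconds. That constant is the classic Y2038 limit:

    # 0x7fffffff is the 32-bit signed time_t ceiling from the kernel message.
    from datetime import datetime, timezone
    print(datetime.fromtimestamp(0x7fffffff, tz=timezone.utc))
    # 2038-01-19 03:14:07+00:00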
Mar  1 05:00:30 np0005634532 podman[257032]: 2026-03-01 10:00:30.883117381 +0000 UTC m=+0.124353367 container init f62d2091cb66c1b50b916da035b36356165fa487135d71e10f5fe8390617ca1b (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:c37f1275cfc51fc2227bf56a2be6262158f62fb30ca651f154bed25112d6d355, name=nova_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=nova_compute, container_name=nova_compute, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '08b9467e1b7e95537191fb2fa6825d176e65d7d6be048e9a0925db064969bbde-1a042de936b2d110ee8d2a8cbebfb950a6f3e21b0a41acc6ce59d0ee581b683e-08e38e514bc79f65b494f100befbfbcc9d744711daee2ed7859d469aa8abeea3'}, 'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:c37f1275cfc51fc2227bf56a2be6262158f62fb30ca651f154bed25112d6d355', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'nova', 'volumes': ['/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/nova:/var/lib/kolla/config_files/src:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/src/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, io.buildah.version=1.43.0)
Mar  1 05:00:30 np0005634532 podman[257032]: 2026-03-01 10:00:30.889240522 +0000 UTC m=+0.130476468 container start f62d2091cb66c1b50b916da035b36356165fa487135d71e10f5fe8390617ca1b (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:c37f1275cfc51fc2227bf56a2be6262158f62fb30ca651f154bed25112d6d355, name=nova_compute, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '08b9467e1b7e95537191fb2fa6825d176e65d7d6be048e9a0925db064969bbde-1a042de936b2d110ee8d2a8cbebfb950a6f3e21b0a41acc6ce59d0ee581b683e-08e38e514bc79f65b494f100befbfbcc9d744711daee2ed7859d469aa8abeea3'}, 'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:c37f1275cfc51fc2227bf56a2be6262158f62fb30ca651f154bed25112d6d355', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'nova', 'volumes': ['/var/lib/kolla/config_files/nova_compute.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/nova:/var/lib/kolla/config_files/src:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/src/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, maintainer=OpenStack Kubernetes Operator team, config_id=nova_compute, container_name=nova_compute, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, org.label-schema.build-date=20260223)
Mar  1 05:00:30 np0005634532 podman[257032]: nova_compute
Mar  1 05:00:30 np0005634532 nova_compute[257049]: + sudo -E kolla_set_configs
Mar  1 05:00:30 np0005634532 systemd[1]: Started nova_compute container.
Mar  1 05:00:30 np0005634532 nova_compute[257049]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Mar  1 05:00:30 np0005634532 nova_compute[257049]: INFO:__main__:Validating config file
Mar  1 05:00:30 np0005634532 nova_compute[257049]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Mar  1 05:00:30 np0005634532 nova_compute[257049]: INFO:__main__:Copying service configuration files
Mar  1 05:00:30 np0005634532 nova_compute[257049]: INFO:__main__:Deleting /etc/nova/nova.conf
Mar  1 05:00:30 np0005634532 nova_compute[257049]: INFO:__main__:Copying /var/lib/kolla/config_files/src/nova-blank.conf to /etc/nova/nova.conf
Mar  1 05:00:30 np0005634532 nova_compute[257049]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Mar  1 05:00:30 np0005634532 nova_compute[257049]: INFO:__main__:Deleting /etc/nova/nova.conf.d/01-nova.conf
Mar  1 05:00:30 np0005634532 nova_compute[257049]: INFO:__main__:Copying /var/lib/kolla/config_files/src/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Mar  1 05:00:30 np0005634532 nova_compute[257049]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Mar  1 05:00:30 np0005634532 nova_compute[257049]: INFO:__main__:Deleting /etc/nova/nova.conf.d/03-ceph-nova.conf
Mar  1 05:00:30 np0005634532 nova_compute[257049]: INFO:__main__:Copying /var/lib/kolla/config_files/src/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Mar  1 05:00:30 np0005634532 nova_compute[257049]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Mar  1 05:00:30 np0005634532 nova_compute[257049]: INFO:__main__:Deleting /etc/nova/nova.conf.d/25-nova-extra.conf
Mar  1 05:00:30 np0005634532 nova_compute[257049]: INFO:__main__:Copying /var/lib/kolla/config_files/src/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Mar  1 05:00:30 np0005634532 nova_compute[257049]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Mar  1 05:00:30 np0005634532 nova_compute[257049]: INFO:__main__:Deleting /etc/nova/nova.conf.d/nova-blank.conf
Mar  1 05:00:30 np0005634532 nova_compute[257049]: INFO:__main__:Copying /var/lib/kolla/config_files/src/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Mar  1 05:00:30 np0005634532 nova_compute[257049]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Mar  1 05:00:30 np0005634532 nova_compute[257049]: INFO:__main__:Deleting /etc/nova/nova.conf.d/02-nova-host-specific.conf
Mar  1 05:00:30 np0005634532 nova_compute[257049]: INFO:__main__:Copying /var/lib/kolla/config_files/src/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Mar  1 05:00:30 np0005634532 nova_compute[257049]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Mar  1 05:00:30 np0005634532 nova_compute[257049]: INFO:__main__:Deleting /etc/ceph
Mar  1 05:00:30 np0005634532 nova_compute[257049]: INFO:__main__:Creating directory /etc/ceph
Mar  1 05:00:30 np0005634532 nova_compute[257049]: INFO:__main__:Setting permission for /etc/ceph
Mar  1 05:00:30 np0005634532 nova_compute[257049]: INFO:__main__:Copying /var/lib/kolla/config_files/src/ceph/ceph.conf to /etc/ceph/ceph.conf
Mar  1 05:00:30 np0005634532 nova_compute[257049]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Mar  1 05:00:30 np0005634532 nova_compute[257049]: INFO:__main__:Copying /var/lib/kolla/config_files/src/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Mar  1 05:00:30 np0005634532 nova_compute[257049]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Mar  1 05:00:30 np0005634532 nova_compute[257049]: INFO:__main__:Deleting /var/lib/nova/.ssh/ssh-privatekey
Mar  1 05:00:30 np0005634532 nova_compute[257049]: INFO:__main__:Copying /var/lib/kolla/config_files/src/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Mar  1 05:00:30 np0005634532 nova_compute[257049]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Mar  1 05:00:30 np0005634532 nova_compute[257049]: INFO:__main__:Deleting /var/lib/nova/.ssh/config
Mar  1 05:00:30 np0005634532 nova_compute[257049]: INFO:__main__:Copying /var/lib/kolla/config_files/src/ssh-config to /var/lib/nova/.ssh/config
Mar  1 05:00:30 np0005634532 nova_compute[257049]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Mar  1 05:00:30 np0005634532 nova_compute[257049]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Mar  1 05:00:30 np0005634532 nova_compute[257049]: INFO:__main__:Copying /var/lib/kolla/config_files/src/run-on-host to /usr/sbin/iscsiadm
Mar  1 05:00:30 np0005634532 nova_compute[257049]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Mar  1 05:00:30 np0005634532 nova_compute[257049]: INFO:__main__:Writing out command to execute
Mar  1 05:00:30 np0005634532 nova_compute[257049]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Mar  1 05:00:30 np0005634532 nova_compute[257049]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Mar  1 05:00:30 np0005634532 nova_compute[257049]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Mar  1 05:00:30 np0005634532 nova_compute[257049]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Mar  1 05:00:30 np0005634532 nova_compute[257049]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Mar  1 05:00:30 np0005634532 nova_compute[257049]: ++ cat /run_command
Mar  1 05:00:30 np0005634532 nova_compute[257049]: + CMD=nova-compute
Mar  1 05:00:30 np0005634532 nova_compute[257049]: + ARGS=
Mar  1 05:00:30 np0005634532 nova_compute[257049]: + sudo kolla_copy_cacerts
Mar  1 05:00:31 np0005634532 nova_compute[257049]: + [[ ! -n '' ]]
Mar  1 05:00:31 np0005634532 nova_compute[257049]: + . kolla_extend_start
Mar  1 05:00:31 np0005634532 nova_compute[257049]: + echo 'Running command: '\''nova-compute'\'''
Mar  1 05:00:31 np0005634532 nova_compute[257049]: Running command: 'nova-compute'
Mar  1 05:00:31 np0005634532 nova_compute[257049]: + umask 0022
Mar  1 05:00:31 np0005634532 nova_compute[257049]: + exec nova-compute
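The INFO:__main__ block above is kolla_set_configs executing the COPY_ALWAYS strategy: parse /var/lib/kolla/config_files/config.json, delete each stale destination, copy the source in, reset permissions, and finally write the service command (nova-compute) to /run_command, which kolla_start then cats and execs. A condensed sketch of that loop, assuming the usual config.json shape (a command key plus config_files entries with source/dest/perm); the real tool also handles optional sources, ownership, and directory merges:

    import json, os, shutil

    # Condensed sketch of the COPY_ALWAYS flow traced above; not the real
    # kolla_set_configs, which covers more cases.
    with open('/var/lib/kolla/config_files/config.json') as f:
        cfg = json.load(f)

    for item in cfg.get('config_files', []):
        src, dest = item['source'], item['dest']
        if os.path.lexists(dest):
            print(f'Deleting {dest}')
            if os.path.isdir(dest) and not os.path.islink(dest):
                shutil.rmtree(dest)
            else:
                os.unlink(dest)
        print(f'Copying {src} to {dest}')
        if os.path.isdir(src):
            shutil.copytree(src, dest)
        else:
            shutil.copy2(src, dest)
        print(f'Setting permission for {dest}')
        os.chmod(dest, int(item.get('perm', '0600'), 8))

    # kolla_start later does: CMD=$(cat /run_command); exec $CMD
    with open('/run_command', 'w') as f:
        f.write(cfg['command'])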
Mar  1 05:00:31 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:00:31 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:00:31 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:00:31.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:00:31 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:00:31 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_19] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff878003620 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:00:31 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:00:31 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:00:31 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_15] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff868002810 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:00:32 np0005634532 python3.9[257215]: ansible-containers.podman.podman_container Invoked with name=nova_compute_init state=started executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Mar  1 05:00:32 np0005634532 systemd[1]: Started libpod-conmon-431eacd7efb5d4afbd4933c532d546165bdd0f2965f01e52fbafe131f186cc9f.scope.
Mar  1 05:00:32 np0005634532 systemd[1]: Started libcrun container.
Mar  1 05:00:32 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8a24aa2659f37bfe22ff6e4fe5b8469e7cda8f415cf73caac023b2f72dfc0ab/merged/usr/sbin/nova_statedir_ownership.py supports timestamps until 2038 (0x7fffffff)
Mar  1 05:00:32 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8a24aa2659f37bfe22ff6e4fe5b8469e7cda8f415cf73caac023b2f72dfc0ab/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Mar  1 05:00:32 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8a24aa2659f37bfe22ff6e4fe5b8469e7cda8f415cf73caac023b2f72dfc0ab/merged/var/lib/_nova_secontext supports timestamps until 2038 (0x7fffffff)
Mar  1 05:00:32 np0005634532 podman[257240]: 2026-03-01 10:00:32.195661294 +0000 UTC m=+0.088361253 container init 431eacd7efb5d4afbd4933c532d546165bdd0f2965f01e52fbafe131f186cc9f (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:c37f1275cfc51fc2227bf56a2be6262158f62fb30ca651f154bed25112d6d355, name=nova_compute_init, org.label-schema.vendor=CentOS, config_data={'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False, 'EDPM_CONFIG_HASH': '08e38e514bc79f65b494f100befbfbcc9d744711daee2ed7859d469aa8abeea3'}, 'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:c37f1275cfc51fc2227bf56a2be6262158f62fb30ca651f154bed25112d6d355', 'net': 'none', 'privileged': False, 'restart': 'never', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, managed_by=edpm_ansible, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=nova_compute_init, container_name=nova_compute_init, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260223)
Mar  1 05:00:32 np0005634532 podman[257240]: 2026-03-01 10:00:32.201490207 +0000 UTC m=+0.094190156 container start 431eacd7efb5d4afbd4933c532d546165bdd0f2965f01e52fbafe131f186cc9f (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:c37f1275cfc51fc2227bf56a2be6262158f62fb30ca651f154bed25112d6d355, name=nova_compute_init, tcib_managed=true, config_data={'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False, 'EDPM_CONFIG_HASH': '08e38e514bc79f65b494f100befbfbcc9d744711daee2ed7859d469aa8abeea3'}, 'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:c37f1275cfc51fc2227bf56a2be6262158f62fb30ca651f154bed25112d6d355', 'net': 'none', 'privileged': False, 'restart': 'never', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, config_id=nova_compute_init, container_name=nova_compute_init, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Mar  1 05:00:32 np0005634532 python3.9[257215]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: podman start nova_compute_init
Mar  1 05:00:32 np0005634532 nova_compute_init[257262]: INFO:nova_statedir:Applying nova statedir ownership
Mar  1 05:00:32 np0005634532 nova_compute_init[257262]: INFO:nova_statedir:Target ownership for /var/lib/nova: 42436:42436
Mar  1 05:00:32 np0005634532 nova_compute_init[257262]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/
Mar  1 05:00:32 np0005634532 nova_compute_init[257262]: INFO:nova_statedir:Changing ownership of /var/lib/nova from 1000:1000 to 42436:42436
Mar  1 05:00:32 np0005634532 nova_compute_init[257262]: INFO:nova_statedir:Setting selinux context of /var/lib/nova to system_u:object_r:container_file_t:s0
Mar  1 05:00:32 np0005634532 nova_compute_init[257262]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/instances/
Mar  1 05:00:32 np0005634532 nova_compute_init[257262]: INFO:nova_statedir:Changing ownership of /var/lib/nova/instances from 1000:1000 to 42436:42436
Mar  1 05:00:32 np0005634532 nova_compute_init[257262]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/instances to system_u:object_r:container_file_t:s0
Mar  1 05:00:32 np0005634532 nova_compute_init[257262]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/
Mar  1 05:00:32 np0005634532 nova_compute_init[257262]: INFO:nova_statedir:Ownership of /var/lib/nova/.ssh already 42436:42436
Mar  1 05:00:32 np0005634532 nova_compute_init[257262]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/.ssh to system_u:object_r:container_file_t:s0
Mar  1 05:00:32 np0005634532 nova_compute_init[257262]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/ssh-privatekey
Mar  1 05:00:32 np0005634532 nova_compute_init[257262]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/config
Mar  1 05:00:32 np0005634532 nova_compute_init[257262]: INFO:nova_statedir:Nova statedir ownership complete
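The nova_compute_init trace above is nova_statedir_ownership.py walking /var/lib/nova: anything not owned by the kolla nova uid/gid (42436:42436) gets chowned, the SELinux context is reset to system_u:object_r:container_file_t:s0, and paths listed in NOVA_STATEDIR_OWNERSHIP_SKIP (here /var/lib/nova/compute_id) are left alone. A stripped-down sketch of the walk; the real script also handles symlinks, errors, and the SELinux relabel, which is omitted here:

    import os

    TARGET_UID = TARGET_GID = 42436        # kolla 'nova' user, per the log
    SKIP = {'/var/lib/nova/compute_id'}    # NOVA_STATEDIR_OWNERSHIP_SKIP

    for dirpath, dirnames, filenames in os.walk('/var/lib/nova'):
        for path in [dirpath] + [os.path.join(dirpath, f) for f in filenames]:
            if path in SKIP:
                continue
            st = os.lstat(path)
            if (st.st_uid, st.st_gid) != (TARGET_UID, TARGET_GID):
                print(f'Changing ownership of {path} from '
                      f'{st.st_uid}:{st.st_gid} to {TARGET_UID}:{TARGET_GID}')
                os.lchown(path, TARGET_UID, TARGET_GID)
            else:
                print(f'Ownership of {path} already {TARGET_UID}:{TARGET_GID}')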
Mar  1 05:00:32 np0005634532 systemd[1]: libpod-431eacd7efb5d4afbd4933c532d546165bdd0f2965f01e52fbafe131f186cc9f.scope: Deactivated successfully.
Mar  1 05:00:32 np0005634532 podman[257280]: 2026-03-01 10:00:32.294829151 +0000 UTC m=+0.021286524 container died 431eacd7efb5d4afbd4933c532d546165bdd0f2965f01e52fbafe131f186cc9f (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:c37f1275cfc51fc2227bf56a2be6262158f62fb30ca651f154bed25112d6d355, name=nova_compute_init, config_id=nova_compute_init, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False, 'EDPM_CONFIG_HASH': '08e38e514bc79f65b494f100befbfbcc9d744711daee2ed7859d469aa8abeea3'}, 'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:c37f1275cfc51fc2227bf56a2be6262158f62fb30ca651f154bed25112d6d355', 'net': 'none', 'privileged': False, 'restart': 'never', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, container_name=nova_compute_init, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, tcib_managed=true)
Mar  1 05:00:32 np0005634532 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-431eacd7efb5d4afbd4933c532d546165bdd0f2965f01e52fbafe131f186cc9f-userdata-shm.mount: Deactivated successfully.
Mar  1 05:00:32 np0005634532 systemd[1]: var-lib-containers-storage-overlay-e8a24aa2659f37bfe22ff6e4fe5b8469e7cda8f415cf73caac023b2f72dfc0ab-merged.mount: Deactivated successfully.
Mar  1 05:00:32 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:00:32 np0005634532 podman[257280]: 2026-03-01 10:00:32.328572381 +0000 UTC m=+0.055029764 container cleanup 431eacd7efb5d4afbd4933c532d546165bdd0f2965f01e52fbafe131f186cc9f (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:c37f1275cfc51fc2227bf56a2be6262158f62fb30ca651f154bed25112d6d355, name=nova_compute_init, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_data={'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False, 'EDPM_CONFIG_HASH': '08e38e514bc79f65b494f100befbfbcc9d744711daee2ed7859d469aa8abeea3'}, 'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:c37f1275cfc51fc2227bf56a2be6262158f62fb30ca651f154bed25112d6d355', 'net': 'none', 'privileged': False, 'restart': 'never', 'security_opt': ['label=disable'], 'user': 'root', 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, config_id=nova_compute_init, container_name=nova_compute_init, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Mar  1 05:00:32 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:00:32 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:00:32.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:00:32 np0005634532 systemd[1]: libpod-conmon-431eacd7efb5d4afbd4933c532d546165bdd0f2965f01e52fbafe131f186cc9f.scope: Deactivated successfully.
Mar  1 05:00:32 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:00:32 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_16] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8640040e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:00:32 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Mar  1 05:00:32 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Mar  1 05:00:32 np0005634532 nova_compute[257049]: 2026-03-01 10:00:32.798 257053 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Mar  1 05:00:32 np0005634532 nova_compute[257049]: 2026-03-01 10:00:32.799 257053 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Mar  1 05:00:32 np0005634532 nova_compute[257049]: 2026-03-01 10:00:32.799 257053 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Mar  1 05:00:32 np0005634532 nova_compute[257049]: 2026-03-01 10:00:32.799 257053 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs
Mar  1 05:00:32 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v534: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Mar  1 05:00:32 np0005634532 nova_compute[257049]: 2026-03-01 10:00:32.944 257053 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Mar  1 05:00:32 np0005634532 nova_compute[257049]: 2026-03-01 10:00:32.967 257053 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.024s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Mar  1 05:00:32 np0005634532 nova_compute[257049]: 2026-03-01 10:00:32.968 257053 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473
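The failed grep above is not a fault: os-brick probes the iscsiadm binary (here the run-on-host shim installed earlier) for the string node.session.scan to decide whether manual device scanning is available, and grep's exit status is the answer. The same probe by hand:

    import subprocess

    # Exit status is the probe result: 0 = pattern found (manual scan
    # supported), 1 = not found, >1 = grep itself failed.
    r = subprocess.run(
        ['grep', '-F', 'node.session.scan', '/sbin/iscsiadm'],
        capture_output=True, text=True,
    )
    if r.returncode == 0:
        print('manual scan supported')
    elif r.returncode == 1:
        print('manual scan not supported')
    else:
        print('probe failed:', r.stderr.strip())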
Mar  1 05:00:33 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:00:33 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:00:33 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:00:33.183 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:00:33 np0005634532 systemd[1]: session-54.scope: Deactivated successfully.
Mar  1 05:00:33 np0005634532 systemd[1]: session-54.scope: Consumed 1min 49.764s CPU time.
Mar  1 05:00:33 np0005634532 systemd-logind[832]: Session 54 logged out. Waiting for processes to exit.
Mar  1 05:00:33 np0005634532 systemd-logind[832]: Removed session 54.
Mar  1 05:00:33 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:00:33 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff884008f20 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.425 257053 INFO nova.virt.driver [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.556 257053 INFO nova.compute.provider_config [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.565 257053 DEBUG oslo_concurrency.lockutils [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.565 257053 DEBUG oslo_concurrency.lockutils [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.566 257053 DEBUG oslo_concurrency.lockutils [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.566 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.566 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.566 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.566 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.567 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.567 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.567 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.567 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.567 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.567 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.568 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.568 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.568 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.568 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.568 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.569 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.569 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.569 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.569 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.569 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] console_host                   = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.569 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.570 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.570 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.570 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.570 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.570 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.571 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.571 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.571 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.571 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.572 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.572 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.572 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.572 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.572 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.573 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.573 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.573 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.573 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.573 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.574 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.574 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.574 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.574 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.575 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.575 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.575 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.575 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.575 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.576 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.576 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.576 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.576 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.576 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.577 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.577 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.577 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.577 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.577 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.578 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.578 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.578 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.578 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.578 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.578 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.578 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.579 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.579 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.579 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.579 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.579 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.579 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.580 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.580 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.580 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.580 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.580 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.581 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.581 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.581 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.581 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] my_block_storage_ip            = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.581 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] my_ip                          = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.581 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.582 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.582 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.582 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.582 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.582 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.582 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.582 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.583 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.583 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.583 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.583 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.583 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.583 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.583 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.584 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.584 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.584 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.584 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.584 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.585 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.585 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.585 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.585 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.585 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.585 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.586 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.586 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.586 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.586 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.586 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.586 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.587 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.587 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.587 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.587 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.587 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.587 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.587 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.588 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.588 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.588 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.588 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.588 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.588 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.589 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.589 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.589 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.589 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.589 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.589 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.590 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.590 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.590 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.590 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.590 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.590 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.591 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.591 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.591 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.591 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.591 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.592 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.592 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.592 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.592 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.592 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.593 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.593 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.593 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.593 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.593 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.593 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.594 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.594 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.594 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.594 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.594 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.594 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.594 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.595 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.595 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.595 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.595 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.595 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.595 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.595 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.596 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.596 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.596 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.596 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.596 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.597 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.597 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.597 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.597 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.597 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.598 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.598 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.598 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.598 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.598 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.598 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.599 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.599 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.599 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.599 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.599 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.599 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.600 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.600 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.600 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.600 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.600 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.600 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.600 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.601 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.601 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.601 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.601 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.601 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.601 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.601 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.602 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.602 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.602 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.602 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.602 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.603 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.603 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.603 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.603 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.604 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.604 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.604 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.604 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.604 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.604 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.604 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.605 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.605 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] cinder.os_region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.605 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.605 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.605 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.605 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.605 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.606 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.606 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.606 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.606 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.606 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.606 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.607 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.607 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.607 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.607 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.608 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.608 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.608 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.608 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.608 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.608 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.609 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.609 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.609 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.609 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.609 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.609 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.609 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.610 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.610 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.610 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.610 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.610 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.610 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.610 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.611 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.611 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.611 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.611 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.611 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.611 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.612 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.612 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.612 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.612 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.612 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.612 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.613 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.613 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.613 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.613 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.613 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.613 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.614 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.614 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.614 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.614 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.614 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.614 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.614 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.615 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.615 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.615 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.615 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.615 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.615 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.616 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.616 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.616 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.616 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.616 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.616 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.616 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.617 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.617 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.617 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.617 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.617 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.617 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.618 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.618 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.618 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.618 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.618 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.618 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.618 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.619 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.619 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.619 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.619 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.619 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.619 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.619 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.620 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.620 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.620 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.620 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.620 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.620 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.620 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.621 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.621 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.621 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.621 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.621 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.621 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.622 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.622 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.622 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.622 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.622 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.622 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.622 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.623 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.623 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.623 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.623 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.623 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.623 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.624 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.624 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.624 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.624 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.624 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.624 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.624 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.624 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.625 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.625 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.625 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.625 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.625 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.626 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.626 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.626 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.626 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.626 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.626 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.626 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.627 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.627 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.627 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.627 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.627 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.627 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.627 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.628 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.628 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.628 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.628 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.628 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.628 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.629 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.629 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.629 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.629 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.629 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.629 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.629 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.630 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.630 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.630 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.630 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.630 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.630 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.630 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.631 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.631 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.631 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.631 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.631 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.631 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] barbican.barbican_region_name  = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.631 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.632 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.632 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.632 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.632 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.632 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.632 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.632 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.633 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.633 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.633 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.633 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.633 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.633 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.633 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.634 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.634 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.634 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.634 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.634 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.634 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.634 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.634 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.635 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.635 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.635 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.635 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.635 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.635 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.635 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.636 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.636 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.636 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.636 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.636 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.636 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.636 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
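[Note] The vault.* options above map onto a [vault] section in nova.conf; they are consumed by the key-manager layer (castellan's Vault backend) rather than by nova directly, which is why only generic connection settings appear. A sketch of the section implied by the non-trivial values logged here — an illustrative reconstruction, not this host's actual file:

    [vault]
    kv_mountpoint = secret
    kv_version = 2
    use_ssl = False
    vault_url = http://127.0.0.1:8200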
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.637 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.637 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.637 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.637 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.637 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.637 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.637 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.638 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.638 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.638 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.638 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.638 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.638 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.638 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.639 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.639 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.639 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.639 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.639 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.639 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.640 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.640 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.640 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.640 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.640 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.640 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.640 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.641 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.641 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.641 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.641 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.641 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.641 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.642 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.642 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.642 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.642 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.642 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.643 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.643 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.643 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.643 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.643 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.644 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.644 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.644 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.644 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.644 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.645 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.645 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.645 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.645 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.645 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.645 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.646 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.646 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.646 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.646 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.646 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.646 257053 WARNING oslo_config.cfg [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Mar  1 05:00:33 np0005634532 nova_compute[257049]: live_migration_uri is deprecated for removal in favor of two other options that
Mar  1 05:00:33 np0005634532 nova_compute[257049]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Mar  1 05:00:33 np0005634532 nova_compute[257049]: and ``live_migration_inbound_addr`` respectively.
Mar  1 05:00:33 np0005634532 nova_compute[257049]: ).  Its value may be silently ignored in the future.#033[00m
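[Note] The warning above means this deployment still sets live_migration_uri explicitly; its value, qemu+tls://%s/system, is logged on the next line (%s is substituted with the target host at migration time). Per the deprecation text, the same effect is expressed by splitting scheme and target address into the two replacement options. A sketch of the replacement settings, where the scheme is the assumed equivalent of the qemu+tls URI and the inbound address is a placeholder rather than a value taken from this host:

    [libvirt]
    # Deprecated form still present in this deployment:
    #   live_migration_uri = qemu+tls://%s/system
    # Replacement suggested by the warning (scheme + target address split out):
    live_migration_scheme = tls                                # assumed match for qemu+tls
    live_migration_inbound_addr = <migration-network address>  # placeholder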
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.647 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.647 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.647 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.647 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.647 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.647 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.648 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.648 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.648 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.648 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.648 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.648 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.648 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.649 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.649 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.649 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.649 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.649 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.649 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] libvirt.rbd_secret_uuid        = 437b1e74-f995-5d64-af1d-257ce01d77ab log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.650 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.650 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.650 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.650 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.650 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.650 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.650 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.651 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.651 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.651 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.651 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.651 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.651 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.651 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.652 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.652 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.652 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.652 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.652 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.652 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.653 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.653 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.653 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.653 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.653 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.653 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.653 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.654 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.654 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.654 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.654 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.654 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.654 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
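[Note] Read together, the libvirt.* values above describe a KVM host that boots q35 guests with host-model CPUs from Ceph RBD and uses multipath for attached volumes. Reconstructed as the corresponding nova.conf section, with every value copied from the log lines — this mirrors the running configuration, it does not add to it:

    [libvirt]
    virt_type = kvm
    cpu_mode = host-model
    hw_machine_type = x86_64=q35
    images_type = rbd
    images_rbd_pool = vms
    images_rbd_ceph_conf = /etc/ceph/ceph.conf
    rbd_user = openstack
    rbd_secret_uuid = 437b1e74-f995-5d64-af1d-257ce01d77ab
    volume_use_multipath = True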
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.655 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.655 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.655 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.655 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.655 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.656 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.656 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.656 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.656 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.656 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.656 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.656 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.657 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.657 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.657 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.657 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.657 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.657 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.657 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.658 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.658 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.658 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.658 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.658 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.658 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.658 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.659 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.659 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.659 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.659 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.659 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.659 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.659 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.660 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.660 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.660 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.660 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.660 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.660 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.660 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.661 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.661 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.661 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.661 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.661 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.661 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.661 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.662 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.662 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.662 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.662 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.662 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.662 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.662 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.663 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.663 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.663 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.663 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.663 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.663 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.664 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.664 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.664 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.664 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.664 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.664 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.664 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.664 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.665 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.665 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.665 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.665 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.665 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
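[Note] The placement.* block is a standard keystoneauth1 password-auth section; the non-default values logged above translate one-to-one into nova.conf. The password is masked as **** in the log and stays masked here:

    [placement]
    auth_type = password
    auth_url = https://keystone-internal.openstack.svc:5000
    username = nova
    password = ****
    project_name = service
    project_domain_name = Default
    user_domain_name = Default
    region_name = regionOne
    valid_interfaces = internal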
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.665 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.665 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.666 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.666 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.666 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.666 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.666 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.666 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.667 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.667 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.667 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.667 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.667 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
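Every "group.option = value" line in this dump is emitted by oslo.config's ConfigOpts.log_opt_values() (the cfg.py:2609 frame each line references), which nova-compute calls once at startup to write all registered options at DEBUG level. A minimal sketch of the mechanism, reusing a few of the [quota] defaults shown above; the option registration below is illustrative, not Nova's actual code:

    import logging

    from oslo_config import cfg

    logging.basicConfig(level=logging.DEBUG)
    LOG = logging.getLogger(__name__)

    # A private ConfigOpts instance; Nova uses the module-level cfg.CONF.
    CONF = cfg.ConfigOpts()
    CONF.register_group(cfg.OptGroup('quota'))
    CONF.register_opts(
        [
            # Defaults mirror the quota.* values logged above.
            cfg.IntOpt('cores', default=20),
            cfg.IntOpt('instances', default=10),
            cfg.IntOpt('ram', default=51200),
            cfg.BoolOpt('recheck_quota', default=True),
        ],
        group='quota',
    )

    CONF([])  # parse an empty command line; defaults stay in effect
    # Writes one DEBUG line per registered option, e.g. "quota.cores = 20".
    CONF.log_opt_values(LOG, logging.DEBUG)

Running the sketch prints one padded "quota.<name> = <value>" line per option, which is exactly the format of the dump continuing below.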
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.667 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.668 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.668 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.668 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.668 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.668 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.668 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.668 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.669 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.669 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.669 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.669 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.669 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.669 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.669 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.670 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.670 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.670 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.670 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.670 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.670 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.671 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.671 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.671 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.671 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.671 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.671 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.671 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.672 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.672 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.672 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.672 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.672 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.672 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.673 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.673 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.673 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.673 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.673 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.673 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.673 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.674 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.674 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.674 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.674 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.674 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.674 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.675 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.675 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.675 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.675 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.675 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.675 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.675 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.676 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.676 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.676 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.676 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.676 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.676 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.677 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.677 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.677 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.677 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.677 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.677 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.677 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.678 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.678 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.678 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.678 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.678 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.678 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.679 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.679 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.679 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.679 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.679 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.679 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.679 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.680 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.680 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.680 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.680 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.680 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.680 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.680 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.681 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.681 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.681 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.681 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.681 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.681 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.681 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.682 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.682 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.682 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.682 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.682 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.682 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.682 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.682 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.683 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.683 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.683 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.683 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.683 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.683 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
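The masked value on the vmware.host_password line above is oslo.config's secret handling: options registered with secret=True are rendered as "****" by log_opt_values rather than printed in clear text. An illustrative declaration of such an option (hypothetical, not Nova's actual registration code):

    from oslo_config import cfg

    # secret=True keeps the real value out of log_opt_values output;
    # the dump shows "****" instead, as seen above.
    vmware_opts = [
        cfg.StrOpt('host_password',
                   secret=True,
                   help='Password for connecting to the VMware host.'),
    ]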
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.684 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.684 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.684 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.684 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.684 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.684 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.685 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] vnc.server_proxyclient_address = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.685 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.685 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.685 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.685 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.685 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.686 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.686 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.686 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.686 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.686 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.686 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.687 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.687 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.687 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.687 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.687 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.687 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.688 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.688 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.688 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.688 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.688 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.688 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.689 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.689 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.689 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.689 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.689 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.689 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.689 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.690 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.690 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.690 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.690 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.690 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.690 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.690 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.691 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.691 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.691 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.691 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.691 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.691 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.691 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.692 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.692 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.692 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.692 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.692 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.692 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.693 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.693 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.693 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.693 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.693 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.693 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.693 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.694 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.694 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.694 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.694 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.694 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.694 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.695 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.695 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.695 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.695 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.695 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.695 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.695 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.695 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.696 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.696 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.696 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.696 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.696 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.696 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.697 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.697 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.697 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.697 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.697 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.697 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.697 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.698 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.698 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.698 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.698 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.698 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.698 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.698 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.699 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.699 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.699 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.699 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.699 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.699 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.699 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.700 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.700 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.700 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.700 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.700 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.700 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.700 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.700 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.701 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.701 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.701 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.701 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.701 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.701 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.701 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.702 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.702 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.702 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.702 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.702 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.702 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.702 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.703 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.703 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.703 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.703 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.703 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.703 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.703 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.704 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.704 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.704 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.704 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.704 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.704 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.705 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.705 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.705 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.705 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.705 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.705 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.705 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.706 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.706 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.706 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.706 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.706 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.706 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.706 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.707 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.707 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.707 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.707 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.707 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.707 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.708 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.708 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.708 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.708 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.708 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.708 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.708 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.709 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.709 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.709 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.709 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.709 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.709 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.709 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.709 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.710 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.710 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.710 257053 DEBUG oslo_service.service [None req-e6085070-4fcc-4ccb-bdbf-1a6fd5cbd422 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
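The block above is oslo.config's one-shot dump of every registered option at service start, emitted by ConfigOpts.log_opt_values (the cfg.py:2609 call site in each line); options declared secret, such as transport_url and the oslo_limit password, are masked as ****. A minimal sketch of the same mechanism, re-registering just one of the options shown above and assuming a hypothetical config-file path:

    import logging
    import sys

    from oslo_config import cfg

    logging.basicConfig(level=logging.DEBUG)
    LOG = logging.getLogger(__name__)
    CONF = cfg.CONF

    # Re-register one option seen in the dump; a real service registers hundreds.
    CONF.register_opts(
        [cfg.IntOpt('heartbeat_rate', default=2)],
        group='oslo_messaging_rabbit',
    )

    # Parse argv/config files (the path below is an illustrative assumption),
    # then log every option value at DEBUG -- the exact shape of the lines above.
    CONF(['--config-file', '/etc/nova/nova.conf'], project='nova')
    CONF.log_opt_values(LOG, logging.DEBUG)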
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.711 257053 INFO nova.service [-] Starting compute node (version 27.5.2-0.20260220085704.5cfeecb.el9)#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.725 257053 INFO nova.virt.node [None req-dccfbb97-79ac-418c-ab3a-fd3d119141c0 - - - - - -] Determined node identity 018d246d-1e01-4168-9128-598c5501111b from /var/lib/nova/compute_id#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.726 257053 DEBUG nova.virt.libvirt.host [None req-dccfbb97-79ac-418c-ab3a-fd3d119141c0 - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.727 257053 DEBUG nova.virt.libvirt.host [None req-dccfbb97-79ac-418c-ab3a-fd3d119141c0 - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.727 257053 DEBUG nova.virt.libvirt.host [None req-dccfbb97-79ac-418c-ab3a-fd3d119141c0 - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.727 257053 DEBUG nova.virt.libvirt.host [None req-dccfbb97-79ac-418c-ab3a-fd3d119141c0 - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.738 257053 DEBUG nova.virt.libvirt.host [None req-dccfbb97-79ac-418c-ab3a-fd3d119141c0 - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7f07b1e54af0> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.740 257053 DEBUG nova.virt.libvirt.host [None req-dccfbb97-79ac-418c-ab3a-fd3d119141c0 - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7f07b1e54af0> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.740 257053 INFO nova.virt.libvirt.driver [None req-dccfbb97-79ac-418c-ab3a-fd3d119141c0 - - - - - -] Connection event '1' reason 'None'#033[00m
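The lines above show nova's libvirt Host object bringing up its event machinery before first contact: start the event threads, open qemu:///system, then register for lifecycle and connection events. A minimal standalone sketch of the same pattern with libvirt-python; the callback body is illustrative, not nova's handler:

    import threading

    import libvirt

    # The default event loop must be registered before opening the connection.
    libvirt.virEventRegisterDefaultImpl()

    def _event_loop():
        while True:
            libvirt.virEventRunDefaultImpl()

    threading.Thread(target=_event_loop, daemon=True).start()

    conn = libvirt.open('qemu:///system')

    def lifecycle_cb(conn, dom, event, detail, opaque):
        # Fires on domain start/stop/suspend/resume, the same class of
        # events nova subscribes to above.
        print(dom.name(), event, detail)

    conn.domainEventRegisterAny(
        None, libvirt.VIR_DOMAIN_EVENT_ID_LIFECYCLE, lifecycle_cb, None)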
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.745 257053 INFO nova.virt.libvirt.host [None req-dccfbb97-79ac-418c-ab3a-fd3d119141c0 - - - - - -] Libvirt host capabilities <capabilities>
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 
Mar  1 05:00:33 np0005634532 nova_compute[257049]:  <host>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:    <uuid>6160888c-43c9-4b54-bedd-c53838a90ca3</uuid>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:    <cpu>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <arch>x86_64</arch>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <model>EPYC-Rome-v4</model>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <vendor>AMD</vendor>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <microcode version='16777317'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <signature family='23' model='49' stepping='0'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <maxphysaddr mode='emulate' bits='40'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <feature name='x2apic'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <feature name='tsc-deadline'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <feature name='osxsave'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <feature name='hypervisor'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <feature name='tsc_adjust'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <feature name='spec-ctrl'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <feature name='stibp'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <feature name='arch-capabilities'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <feature name='ssbd'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <feature name='cmp_legacy'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <feature name='topoext'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <feature name='virt-ssbd'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <feature name='lbrv'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <feature name='tsc-scale'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <feature name='vmcb-clean'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <feature name='pause-filter'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <feature name='pfthreshold'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <feature name='svme-addr-chk'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <feature name='rdctl-no'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <feature name='skip-l1dfl-vmentry'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <feature name='mds-no'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <feature name='pschange-mc-no'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <pages unit='KiB' size='4'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <pages unit='KiB' size='2048'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <pages unit='KiB' size='1048576'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:    </cpu>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:    <power_management>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <suspend_mem/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:    </power_management>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:    <iommu support='no'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:    <migration_features>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <live/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <uri_transports>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <uri_transport>tcp</uri_transport>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <uri_transport>rdma</uri_transport>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      </uri_transports>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:    </migration_features>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:    <topology>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <cells num='1'>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <cell id='0'>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:          <memory unit='KiB'>7864280</memory>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:          <pages unit='KiB' size='4'>1966070</pages>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:          <pages unit='KiB' size='2048'>0</pages>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:          <pages unit='KiB' size='1048576'>0</pages>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:          <distances>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:            <sibling id='0' value='10'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:          </distances>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:          <cpus num='8'>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:            <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:            <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:            <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:            <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:            <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:            <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:            <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:            <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:          </cpus>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        </cell>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      </cells>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:    </topology>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:    <cache>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:    </cache>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:    <secmodel>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <model>selinux</model>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <doi>0</doi>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:    </secmodel>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:    <secmodel>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <model>dac</model>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <doi>0</doi>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <baselabel type='kvm'>+107:+107</baselabel>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <baselabel type='qemu'>+107:+107</baselabel>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:    </secmodel>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:  </host>
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 
Mar  1 05:00:33 np0005634532 nova_compute[257049]:  <guest>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:    <os_type>hvm</os_type>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:    <arch name='i686'>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <wordsize>32</wordsize>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <domain type='qemu'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <domain type='kvm'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:    </arch>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:    <features>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <pae/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <nonpae/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <acpi default='on' toggle='yes'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <apic default='on' toggle='no'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <cpuselection/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <deviceboot/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <disksnapshot default='on' toggle='no'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <externalSnapshot/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:    </features>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:  </guest>
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 
Mar  1 05:00:33 np0005634532 nova_compute[257049]:  <guest>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:    <os_type>hvm</os_type>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:    <arch name='x86_64'>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <wordsize>64</wordsize>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <domain type='qemu'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <domain type='kvm'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:    </arch>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:    <features>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <acpi default='on' toggle='yes'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <apic default='on' toggle='no'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <cpuselection/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <deviceboot/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <disksnapshot default='on' toggle='no'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <externalSnapshot/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:    </features>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:  </guest>
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 
Mar  1 05:00:33 np0005634532 nova_compute[257049]: </capabilities>
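The record above is the unmodified return value of virConnectGetCapabilities: one <host> element (CPU model, NUMA topology, cache banks, security models) plus one <guest> element per supported architecture. A short sketch that fetches the same XML and extracts a few of the fields shown; the element paths follow the dump itself:

    import xml.etree.ElementTree as ET

    import libvirt

    conn = libvirt.open('qemu:///system')
    caps = ET.fromstring(conn.getCapabilities())

    print(caps.findtext('./host/cpu/arch'))    # x86_64
    print(caps.findtext('./host/cpu/model'))   # EPYC-Rome-v4
    print(len(caps.findall('./host/topology/cells/cell')))  # 1 NUMA cell
    for pages in caps.findall('./host/cpu/pages'):
        print(pages.get('size'), pages.get('unit'))  # 4 / 2048 / 1048576 KiB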
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.750 257053 DEBUG nova.virt.libvirt.host [None req-dccfbb97-79ac-418c-ab3a-fd3d119141c0 - - - - - -] Getting domain capabilities for i686 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952#033[00m
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.755 257053 DEBUG nova.virt.libvirt.volume.mount [None req-dccfbb97-79ac-418c-ab3a-fd3d119141c0 - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130#033[00m
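Each "hypervisor capabilities" dump that follows is the per-(arch, machine type) result of virConnectGetDomainCapabilities, which nova queries for every machine type found in the host capabilities. A sketch mirroring the i686/q35 query logged above; the argument values are taken from the dump below:

    import xml.etree.ElementTree as ET

    import libvirt

    conn = libvirt.open('qemu:///system')
    xml = conn.getDomainCapabilities(
        '/usr/libexec/qemu-kvm',  # the emulator <path> in the dump below
        'i686', 'q35', 'kvm')
    dom_caps = ET.fromstring(xml)

    print(dom_caps.findtext('machine'))      # pc-q35-rhel9.8.0
    print(dom_caps.find('vcpu').get('max'))  # 4096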
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.755 257053 DEBUG nova.virt.libvirt.host [None req-dccfbb97-79ac-418c-ab3a-fd3d119141c0 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Mar  1 05:00:33 np0005634532 nova_compute[257049]: <domainCapabilities>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:  <path>/usr/libexec/qemu-kvm</path>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:  <domain>kvm</domain>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:  <machine>pc-q35-rhel9.8.0</machine>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:  <arch>i686</arch>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:  <vcpu max='4096'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:  <iothreads supported='yes'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:  <os supported='yes'>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:    <enum name='firmware'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:    <loader supported='yes'>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <enum name='type'>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <value>rom</value>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <value>pflash</value>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      </enum>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <enum name='readonly'>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <value>yes</value>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <value>no</value>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      </enum>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <enum name='secure'>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <value>no</value>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      </enum>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:    </loader>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:  </os>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:  <cpu>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:    <mode name='host-passthrough' supported='yes'>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <enum name='hostPassthroughMigratable'>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <value>on</value>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <value>off</value>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      </enum>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:    </mode>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:    <mode name='maximum' supported='yes'>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <enum name='maximumMigratable'>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <value>on</value>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <value>off</value>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      </enum>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:    </mode>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:    <mode name='host-model' supported='yes'>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <model fallback='forbid'>EPYC-Rome</model>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <vendor>AMD</vendor>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <maxphysaddr mode='passthrough' limit='40'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <feature policy='require' name='x2apic'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <feature policy='require' name='tsc-deadline'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <feature policy='require' name='hypervisor'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <feature policy='require' name='tsc_adjust'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <feature policy='require' name='spec-ctrl'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <feature policy='require' name='stibp'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <feature policy='require' name='ssbd'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <feature policy='require' name='cmp_legacy'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <feature policy='require' name='overflow-recov'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <feature policy='require' name='succor'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <feature policy='require' name='ibrs'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <feature policy='require' name='amd-ssbd'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <feature policy='require' name='virt-ssbd'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <feature policy='require' name='lbrv'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <feature policy='require' name='tsc-scale'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <feature policy='require' name='vmcb-clean'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <feature policy='require' name='flushbyasid'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <feature policy='require' name='pause-filter'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <feature policy='require' name='pfthreshold'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <feature policy='require' name='svme-addr-chk'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <feature policy='require' name='lfence-always-serializing'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <feature policy='disable' name='xsaves'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:    </mode>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:    <mode name='custom' supported='yes'>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <blockers model='Broadwell'>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='erms'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='hle'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='invpcid'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='pcid'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='rtm'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      </blockers>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <blockers model='Broadwell-IBRS'>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='erms'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='hle'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='invpcid'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='pcid'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='rtm'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      </blockers>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <blockers model='Broadwell-noTSX'>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='erms'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='invpcid'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='pcid'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      </blockers>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <blockers model='Broadwell-noTSX-IBRS'>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='erms'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='invpcid'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='pcid'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      </blockers>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <blockers model='Broadwell-v1'>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='erms'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='hle'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='invpcid'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='pcid'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='rtm'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      </blockers>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <blockers model='Broadwell-v2'>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='erms'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='invpcid'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='pcid'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      </blockers>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <blockers model='Broadwell-v3'>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='erms'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='hle'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='invpcid'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='pcid'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='rtm'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      </blockers>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <blockers model='Broadwell-v4'>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='erms'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='invpcid'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='pcid'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      </blockers>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <blockers model='Cascadelake-Server'>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512bw'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512cd'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512dq'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512f'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512vl'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512vnni'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='erms'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='hle'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='invpcid'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='pcid'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='pku'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='rtm'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      </blockers>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <blockers model='Cascadelake-Server-noTSX'>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512bw'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512cd'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512dq'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512f'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512vl'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512vnni'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='erms'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='ibrs-all'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='invpcid'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='pcid'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='pku'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      </blockers>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <blockers model='Cascadelake-Server-v1'>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512bw'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512cd'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512dq'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512f'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512vl'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512vnni'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='erms'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='hle'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='invpcid'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='pcid'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='pku'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='rtm'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      </blockers>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <blockers model='Cascadelake-Server-v2'>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512bw'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512cd'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512dq'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512f'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512vl'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512vnni'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='erms'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='hle'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='ibrs-all'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='invpcid'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='pcid'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='pku'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='rtm'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      </blockers>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <blockers model='Cascadelake-Server-v3'>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512bw'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512cd'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512dq'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512f'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512vl'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512vnni'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='erms'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='ibrs-all'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='invpcid'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='pcid'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='pku'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      </blockers>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <blockers model='Cascadelake-Server-v4'>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512bw'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512cd'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512dq'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512f'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512vl'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512vnni'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='erms'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='ibrs-all'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='invpcid'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='pcid'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='pku'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      </blockers>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <blockers model='Cascadelake-Server-v5'>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512bw'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512cd'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512dq'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512f'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512vl'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512vnni'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='erms'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='ibrs-all'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='invpcid'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='pcid'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='pku'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='xsaves'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      </blockers>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <model usable='no' vendor='Intel' canonical='ClearwaterForest-v1'>ClearwaterForest</model>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <blockers model='ClearwaterForest'>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx-ifma'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx-ne-convert'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx-vnni'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx-vnni-int16'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx-vnni-int8'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='bhi-ctrl'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='bhi-no'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='bus-lock-detect'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='cldemote'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='cmpccxadd'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='ddpd-u'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='erms'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='fbsdp-no'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='fsrm'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='fsrs'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='gfni'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='ibrs-all'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='intel-psfd'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='invpcid'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='ipred-ctrl'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='lam'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='mcdt-no'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='movdir64b'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='movdiri'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='pbrsb-no'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='pcid'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='pku'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='prefetchiti'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='psdp-no'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='rrsba-ctrl'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='sbdr-ssdp-no'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='serialize'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='sha512'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='sm3'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='sm4'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='ss'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='vaes'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='vpclmulqdq'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='xsaves'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      </blockers>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <model usable='no' vendor='Intel'>ClearwaterForest-v1</model>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <blockers model='ClearwaterForest-v1'>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx-ifma'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx-ne-convert'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx-vnni'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx-vnni-int16'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx-vnni-int8'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='bhi-ctrl'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='bhi-no'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='bus-lock-detect'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='cldemote'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='cmpccxadd'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='ddpd-u'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='erms'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='fbsdp-no'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='fsrm'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='fsrs'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='gfni'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='ibrs-all'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='intel-psfd'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='invpcid'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='ipred-ctrl'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='lam'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='mcdt-no'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='movdir64b'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='movdiri'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='pbrsb-no'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='pcid'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='pku'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='prefetchiti'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='psdp-no'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='rrsba-ctrl'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='sbdr-ssdp-no'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='serialize'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='sha512'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='sm3'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='sm4'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='ss'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='vaes'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='vpclmulqdq'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='xsaves'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      </blockers>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <blockers model='Cooperlake'>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512-bf16'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512bw'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512cd'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512dq'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512f'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512vl'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512vnni'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='erms'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='hle'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='ibrs-all'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='invpcid'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='pcid'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='pku'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='rtm'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='taa-no'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      </blockers>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <blockers model='Cooperlake-v1'>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512-bf16'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512bw'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512cd'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512dq'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512f'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512vl'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512vnni'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='erms'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='hle'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='ibrs-all'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='invpcid'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='pcid'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='pku'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='rtm'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='taa-no'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      </blockers>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <blockers model='Cooperlake-v2'>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512-bf16'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512bw'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512cd'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512dq'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512f'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512vl'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512vnni'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='erms'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='hle'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='ibrs-all'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='invpcid'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='pcid'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='pku'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='rtm'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='taa-no'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='xsaves'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      </blockers>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <blockers model='Denverton'>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='erms'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='mpx'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      </blockers>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <blockers model='Denverton-v1'>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='erms'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='mpx'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      </blockers>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <blockers model='Denverton-v2'>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='erms'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      </blockers>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <blockers model='Denverton-v3'>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='erms'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='xsaves'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      </blockers>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <blockers model='Dhyana-v2'>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='xsaves'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      </blockers>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <blockers model='EPYC-Genoa'>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='amd-psfd'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='auto-ibrs'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512-bf16'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512-vpopcntdq'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512bitalg'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512bw'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512cd'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512dq'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512f'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512ifma'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512vbmi'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512vbmi2'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512vl'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512vnni'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='erms'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='fsrm'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='gfni'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='invpcid'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='la57'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='no-nested-data-bp'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='null-sel-clr-base'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='pcid'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='pku'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='stibp-always-on'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='vaes'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='vpclmulqdq'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='xsaves'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      </blockers>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <blockers model='EPYC-Genoa-v1'>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='amd-psfd'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='auto-ibrs'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512-bf16'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512-vpopcntdq'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512bitalg'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512bw'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512cd'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512dq'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512f'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512ifma'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512vbmi'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512vbmi2'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512vl'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512vnni'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='erms'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='fsrm'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='gfni'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='invpcid'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='la57'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='no-nested-data-bp'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='null-sel-clr-base'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='pcid'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='pku'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='stibp-always-on'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='vaes'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='vpclmulqdq'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='xsaves'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      </blockers>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v2</model>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <blockers model='EPYC-Genoa-v2'>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='amd-psfd'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='auto-ibrs'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512-bf16'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512-vpopcntdq'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512bitalg'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512bw'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512cd'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512dq'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512f'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512ifma'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512vbmi'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512vbmi2'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512vl'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512vnni'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='erms'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='fs-gs-base-ns'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='fsrm'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='gfni'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='invpcid'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='la57'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='no-nested-data-bp'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='null-sel-clr-base'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='pcid'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='perfmon-v2'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='pku'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='stibp-always-on'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='vaes'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='vpclmulqdq'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='xsaves'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      </blockers>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <blockers model='EPYC-Milan'>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='erms'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='fsrm'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='invpcid'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='pcid'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='pku'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='xsaves'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      </blockers>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <blockers model='EPYC-Milan-v1'>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='erms'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='fsrm'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='invpcid'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='pcid'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='pku'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='xsaves'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      </blockers>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <blockers model='EPYC-Milan-v2'>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='amd-psfd'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='erms'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='fsrm'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='invpcid'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='no-nested-data-bp'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='null-sel-clr-base'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='pcid'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='pku'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='stibp-always-on'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='vaes'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='vpclmulqdq'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='xsaves'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      </blockers>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <model usable='no' vendor='AMD'>EPYC-Milan-v3</model>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <blockers model='EPYC-Milan-v3'>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='amd-psfd'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='erms'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='fsrm'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='invpcid'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='no-nested-data-bp'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='null-sel-clr-base'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='pcid'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='pku'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='stibp-always-on'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='vaes'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='vpclmulqdq'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='xsaves'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      </blockers>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <blockers model='EPYC-Rome'>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='xsaves'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      </blockers>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <blockers model='EPYC-Rome-v1'>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='xsaves'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      </blockers>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <blockers model='EPYC-Rome-v2'>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='xsaves'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      </blockers>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <blockers model='EPYC-Rome-v3'>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='xsaves'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      </blockers>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v5</model>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <model usable='no' vendor='AMD' canonical='EPYC-Turin-v1'>EPYC-Turin</model>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <blockers model='EPYC-Turin'>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='amd-psfd'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='auto-ibrs'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx-vnni'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512-bf16'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512-vp2intersect'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512-vpopcntdq'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512bitalg'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512bw'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512cd'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512dq'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512f'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512ifma'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512vbmi'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512vbmi2'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512vl'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512vnni'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='erms'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='fs-gs-base-ns'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='fsrm'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='gfni'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='ibpb-brtype'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='invpcid'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='la57'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='movdir64b'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='movdiri'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='no-nested-data-bp'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='null-sel-clr-base'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='pcid'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='perfmon-v2'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='pku'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='prefetchi'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='sbpb'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='srso-user-kernel-no'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='stibp-always-on'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='vaes'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='vpclmulqdq'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='xsaves'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      </blockers>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <model usable='no' vendor='AMD'>EPYC-Turin-v1</model>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <blockers model='EPYC-Turin-v1'>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='amd-psfd'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='auto-ibrs'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx-vnni'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512-bf16'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512-vp2intersect'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512-vpopcntdq'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512bitalg'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512bw'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512cd'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512dq'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512f'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512ifma'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512vbmi'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512vbmi2'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512vl'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512vnni'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='erms'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='fs-gs-base-ns'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='fsrm'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='gfni'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='ibpb-brtype'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='invpcid'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='la57'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='movdir64b'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='movdiri'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='no-nested-data-bp'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='null-sel-clr-base'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='pcid'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='perfmon-v2'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='pku'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='prefetchi'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='sbpb'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='srso-user-kernel-no'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='stibp-always-on'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='vaes'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='vpclmulqdq'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='xsaves'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      </blockers>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <blockers model='EPYC-v3'>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='xsaves'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      </blockers>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <blockers model='EPYC-v4'>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='xsaves'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      </blockers>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <model usable='no' vendor='AMD'>EPYC-v5</model>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <blockers model='EPYC-v5'>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='xsaves'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      </blockers>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <blockers model='GraniteRapids'>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='amx-bf16'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='amx-fp16'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='amx-int8'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='amx-tile'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx-vnni'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512-bf16'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512-fp16'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512-vpopcntdq'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512bitalg'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512bw'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512cd'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512dq'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512f'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512ifma'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512vbmi'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512vbmi2'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512vl'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512vnni'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='bus-lock-detect'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='erms'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='fbsdp-no'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='fsrc'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='fsrm'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='fsrs'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='fzrm'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='gfni'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='hle'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='ibrs-all'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='invpcid'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='la57'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='mcdt-no'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='pbrsb-no'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='pcid'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='pku'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='prefetchiti'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='psdp-no'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='rtm'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='sbdr-ssdp-no'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='serialize'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='taa-no'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='tsx-ldtrk'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='vaes'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='vpclmulqdq'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='xfd'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='xsaves'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      </blockers>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <blockers model='GraniteRapids-v1'>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='amx-bf16'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='amx-fp16'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='amx-int8'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='amx-tile'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx-vnni'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512-bf16'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512-fp16'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512-vpopcntdq'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512bitalg'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512bw'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512cd'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512dq'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512f'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512ifma'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512vbmi'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512vbmi2'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512vl'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512vnni'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='bus-lock-detect'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='erms'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='fbsdp-no'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='fsrc'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='fsrm'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='fsrs'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='fzrm'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='gfni'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='hle'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='ibrs-all'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='invpcid'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='la57'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='mcdt-no'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='pbrsb-no'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='pcid'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='pku'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='prefetchiti'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='psdp-no'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='rtm'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='sbdr-ssdp-no'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='serialize'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='taa-no'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='tsx-ldtrk'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='vaes'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='vpclmulqdq'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='xfd'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='xsaves'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      </blockers>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <blockers model='GraniteRapids-v2'>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='amx-bf16'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='amx-fp16'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='amx-int8'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='amx-tile'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx-vnni'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx10'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx10-128'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx10-256'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx10-512'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512-bf16'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512-fp16'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512-vpopcntdq'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512bitalg'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512bw'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512cd'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512dq'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512f'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512ifma'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512vbmi'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512vbmi2'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512vl'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512vnni'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='bus-lock-detect'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='cldemote'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='erms'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='fbsdp-no'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='fsrc'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='fsrm'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='fsrs'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='fzrm'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='gfni'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='hle'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='ibrs-all'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='invpcid'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='la57'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='mcdt-no'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='movdir64b'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='movdiri'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='pbrsb-no'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='pcid'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='pku'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='prefetchiti'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='psdp-no'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='rtm'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='sbdr-ssdp-no'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='serialize'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='ss'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='taa-no'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='tsx-ldtrk'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='vaes'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='vpclmulqdq'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='xfd'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='xsaves'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      </blockers>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <model usable='no' vendor='Intel'>GraniteRapids-v3</model>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <blockers model='GraniteRapids-v3'>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='amx-bf16'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='amx-fp16'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='amx-int8'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='amx-tile'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx-vnni'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx10'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx10-128'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx10-256'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx10-512'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512-bf16'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512-fp16'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512-vpopcntdq'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512bitalg'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512bw'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512cd'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512dq'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512f'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512ifma'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512vbmi'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512vbmi2'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512vl'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512vnni'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='bus-lock-detect'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='cldemote'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='erms'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='fbsdp-no'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='fsrc'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='fsrm'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='fsrs'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='fzrm'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='gfni'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='hle'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='ibrs-all'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='invpcid'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='la57'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='mcdt-no'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='movdir64b'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='movdiri'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='pbrsb-no'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='pcid'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='pku'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='prefetchiti'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='psdp-no'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='rtm'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='sbdr-ssdp-no'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='serialize'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='ss'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='taa-no'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='tsx-ldtrk'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='vaes'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='vpclmulqdq'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='xfd'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='xsaves'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      </blockers>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <blockers model='Haswell'>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='erms'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='hle'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='invpcid'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='pcid'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='rtm'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      </blockers>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <blockers model='Haswell-IBRS'>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='erms'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='hle'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='invpcid'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='pcid'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='rtm'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      </blockers>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <blockers model='Haswell-noTSX'>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='erms'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='invpcid'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='pcid'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      </blockers>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <blockers model='Haswell-noTSX-IBRS'>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='erms'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='invpcid'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='pcid'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      </blockers>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <blockers model='Haswell-v1'>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='erms'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='hle'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='invpcid'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='pcid'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='rtm'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      </blockers>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <blockers model='Haswell-v2'>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='erms'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='invpcid'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='pcid'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      </blockers>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <blockers model='Haswell-v3'>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='erms'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='hle'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='invpcid'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='pcid'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='rtm'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      </blockers>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <blockers model='Haswell-v4'>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='erms'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='invpcid'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='pcid'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      </blockers>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <blockers model='Icelake-Server'>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512-vpopcntdq'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512bitalg'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512bw'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512cd'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512dq'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512f'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512vbmi'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512vbmi2'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512vl'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512vnni'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='erms'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='gfni'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='hle'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='invpcid'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='la57'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='pcid'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='pku'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='rtm'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='vaes'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='vpclmulqdq'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      </blockers>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <blockers model='Icelake-Server-noTSX'>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512-vpopcntdq'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512bitalg'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512bw'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512cd'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512dq'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512f'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512vbmi'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512vbmi2'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512vl'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512vnni'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='erms'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='gfni'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='invpcid'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='la57'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='pcid'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='pku'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='vaes'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='vpclmulqdq'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      </blockers>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <blockers model='Icelake-Server-v1'>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512-vpopcntdq'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512bitalg'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512bw'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512cd'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512dq'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512f'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512vbmi'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512vbmi2'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512vl'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512vnni'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='erms'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='gfni'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='hle'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='invpcid'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='la57'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='pcid'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='pku'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='rtm'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='vaes'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='vpclmulqdq'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      </blockers>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <blockers model='Icelake-Server-v2'>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512-vpopcntdq'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512bitalg'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512bw'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512cd'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512dq'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512f'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512vbmi'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512vbmi2'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512vl'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512vnni'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='erms'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='gfni'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='invpcid'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='la57'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='pcid'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='pku'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='vaes'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='vpclmulqdq'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      </blockers>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <blockers model='Icelake-Server-v3'>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512-vpopcntdq'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512bitalg'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512bw'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512cd'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512dq'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512f'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512vbmi'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512vbmi2'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512vl'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512vnni'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='erms'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='gfni'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='ibrs-all'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='invpcid'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='la57'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='pcid'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='pku'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='taa-no'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='vaes'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='vpclmulqdq'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      </blockers>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <blockers model='Icelake-Server-v4'>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512-vpopcntdq'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512bitalg'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512bw'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512cd'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512dq'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512f'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512ifma'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512vbmi'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512vbmi2'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512vl'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512vnni'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='erms'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='fsrm'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='gfni'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='ibrs-all'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='invpcid'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='la57'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='pcid'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='pku'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='taa-no'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='vaes'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='vpclmulqdq'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      </blockers>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <blockers model='Icelake-Server-v5'>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512-vpopcntdq'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512bitalg'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512bw'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512cd'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512dq'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512f'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512ifma'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512vbmi'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512vbmi2'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512vl'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512vnni'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='erms'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='fsrm'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='gfni'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='ibrs-all'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='invpcid'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='la57'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='pcid'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='pku'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='taa-no'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='vaes'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='vpclmulqdq'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='xsaves'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      </blockers>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <blockers model='Icelake-Server-v6'>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512-vpopcntdq'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512bitalg'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512bw'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512cd'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512dq'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512f'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512ifma'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512vbmi'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512vbmi2'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512vl'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512vnni'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='erms'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='fsrm'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='gfni'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='ibrs-all'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='invpcid'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='la57'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='pcid'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='pku'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='taa-no'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='vaes'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='vpclmulqdq'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='xsaves'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      </blockers>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <blockers model='Icelake-Server-v7'>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512-vpopcntdq'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512bitalg'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512bw'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512cd'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512dq'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512f'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512ifma'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512vbmi'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512vbmi2'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512vl'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512vnni'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='erms'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='fsrm'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='gfni'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='hle'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='ibrs-all'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='invpcid'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='la57'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='pcid'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='pku'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='rtm'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='taa-no'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='vaes'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='vpclmulqdq'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='xsaves'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      </blockers>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <blockers model='IvyBridge'>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='erms'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      </blockers>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <blockers model='IvyBridge-IBRS'>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='erms'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      </blockers>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <blockers model='IvyBridge-v1'>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='erms'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      </blockers>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <blockers model='IvyBridge-v2'>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='erms'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      </blockers>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <blockers model='KnightsMill'>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512-4fmaps'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512-4vnniw'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512-vpopcntdq'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512cd'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512er'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512f'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512pf'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='erms'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='ss'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      </blockers>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <blockers model='KnightsMill-v1'>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512-4fmaps'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512-4vnniw'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512-vpopcntdq'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512cd'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512er'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512f'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512pf'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='erms'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='ss'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      </blockers>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <blockers model='Opteron_G4'>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='fma4'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='xop'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      </blockers>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <blockers model='Opteron_G4-v1'>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='fma4'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='xop'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      </blockers>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <blockers model='Opteron_G5'>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='fma4'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='tbm'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='xop'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      </blockers>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <blockers model='Opteron_G5-v1'>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='fma4'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='tbm'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='xop'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      </blockers>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <blockers model='SapphireRapids'>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='amx-bf16'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='amx-int8'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='amx-tile'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx-vnni'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512-bf16'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512-fp16'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512-vpopcntdq'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512bitalg'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512bw'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512cd'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512dq'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512f'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512ifma'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512vbmi'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512vbmi2'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512vl'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512vnni'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='bus-lock-detect'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='erms'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='fsrc'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='fsrm'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='fsrs'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='fzrm'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='gfni'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='hle'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='ibrs-all'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='invpcid'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='la57'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='pcid'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='pku'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='rtm'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='serialize'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='taa-no'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='tsx-ldtrk'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='vaes'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='vpclmulqdq'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='xfd'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='xsaves'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      </blockers>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <blockers model='SapphireRapids-v1'>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='amx-bf16'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='amx-int8'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='amx-tile'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx-vnni'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512-bf16'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512-fp16'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512-vpopcntdq'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512bitalg'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512bw'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512cd'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512dq'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512f'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512ifma'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512vbmi'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512vbmi2'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512vl'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512vnni'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='bus-lock-detect'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='erms'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='fsrc'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='fsrm'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='fsrs'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='fzrm'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='gfni'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='hle'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='ibrs-all'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='invpcid'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='la57'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='pcid'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='pku'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='rtm'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='serialize'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='taa-no'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='tsx-ldtrk'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='vaes'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='vpclmulqdq'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='xfd'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='xsaves'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      </blockers>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <blockers model='SapphireRapids-v2'>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='amx-bf16'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='amx-int8'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='amx-tile'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx-vnni'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512-bf16'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512-fp16'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512-vpopcntdq'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512bitalg'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512bw'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512cd'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512dq'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512f'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512ifma'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512vbmi'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512vbmi2'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512vl'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512vnni'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='bus-lock-detect'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='erms'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='fbsdp-no'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='fsrc'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='fsrm'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='fsrs'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='fzrm'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='gfni'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='hle'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='ibrs-all'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='invpcid'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='la57'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='pcid'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='pku'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='psdp-no'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='rtm'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='sbdr-ssdp-no'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='serialize'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='taa-no'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='tsx-ldtrk'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='vaes'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='vpclmulqdq'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='xfd'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='xsaves'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      </blockers>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <blockers model='SapphireRapids-v3'>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='amx-bf16'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='amx-int8'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='amx-tile'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx-vnni'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512-bf16'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512-fp16'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512-vpopcntdq'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512bitalg'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512bw'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512cd'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512dq'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512f'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512ifma'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512vbmi'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512vbmi2'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512vl'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512vnni'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='bus-lock-detect'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='cldemote'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='erms'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='fbsdp-no'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='fsrc'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='fsrm'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='fsrs'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='fzrm'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='gfni'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='hle'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='ibrs-all'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='invpcid'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='la57'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='movdir64b'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='movdiri'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='pcid'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='pku'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='psdp-no'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='rtm'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='sbdr-ssdp-no'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='serialize'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='ss'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='taa-no'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='tsx-ldtrk'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='vaes'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='vpclmulqdq'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='xfd'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='xsaves'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      </blockers>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <model usable='no' vendor='Intel'>SapphireRapids-v4</model>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <blockers model='SapphireRapids-v4'>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='amx-bf16'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='amx-int8'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='amx-tile'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx-vnni'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512-bf16'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512-fp16'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512-vpopcntdq'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512bitalg'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512bw'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512cd'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512dq'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512f'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512ifma'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512vbmi'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512vbmi2'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512vl'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512vnni'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='bus-lock-detect'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='cldemote'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='erms'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='fbsdp-no'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='fsrc'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='fsrm'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='fsrs'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='fzrm'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='gfni'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='hle'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='ibrs-all'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='invpcid'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='la57'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='movdir64b'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='movdiri'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='pcid'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='pku'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='psdp-no'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='rtm'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='sbdr-ssdp-no'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='serialize'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='ss'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='taa-no'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='tsx-ldtrk'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='vaes'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='vpclmulqdq'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='xfd'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='xsaves'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      </blockers>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <blockers model='SierraForest'>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx-ifma'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx-ne-convert'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx-vnni'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx-vnni-int8'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='bus-lock-detect'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='cmpccxadd'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='erms'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='fbsdp-no'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='fsrm'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='fsrs'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='gfni'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='ibrs-all'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='invpcid'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='mcdt-no'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='pbrsb-no'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='pcid'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='pku'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='psdp-no'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='sbdr-ssdp-no'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='serialize'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='vaes'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='vpclmulqdq'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='xsaves'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      </blockers>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <blockers model='SierraForest-v1'>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx-ifma'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx-ne-convert'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx-vnni'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx-vnni-int8'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='bus-lock-detect'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='cmpccxadd'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='erms'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='fbsdp-no'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='fsrm'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='fsrs'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='gfni'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='ibrs-all'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='invpcid'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='mcdt-no'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='pbrsb-no'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='pcid'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='pku'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='psdp-no'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='sbdr-ssdp-no'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='serialize'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='vaes'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='vpclmulqdq'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='xsaves'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      </blockers>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <model usable='no' vendor='Intel'>SierraForest-v2</model>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <blockers model='SierraForest-v2'>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx-ifma'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx-ne-convert'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx-vnni'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx-vnni-int8'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='bhi-ctrl'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='bus-lock-detect'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='cldemote'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='cmpccxadd'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='erms'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='fbsdp-no'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='fsrm'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='fsrs'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='gfni'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='ibrs-all'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='intel-psfd'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='invpcid'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='ipred-ctrl'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='lam'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='mcdt-no'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='movdir64b'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='movdiri'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='pbrsb-no'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='pcid'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='pku'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='psdp-no'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='rrsba-ctrl'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='sbdr-ssdp-no'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='serialize'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='ss'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='vaes'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='vpclmulqdq'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='xsaves'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      </blockers>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <model usable='no' vendor='Intel'>SierraForest-v3</model>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <blockers model='SierraForest-v3'>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx-ifma'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx-ne-convert'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx-vnni'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx-vnni-int8'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='bhi-ctrl'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='bus-lock-detect'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='cldemote'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='cmpccxadd'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='erms'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='fbsdp-no'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='fsrm'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='fsrs'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='gfni'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='ibrs-all'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='intel-psfd'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='invpcid'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='ipred-ctrl'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='lam'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='mcdt-no'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='movdir64b'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='movdiri'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='pbrsb-no'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='pcid'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='pku'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='psdp-no'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='rrsba-ctrl'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='sbdr-ssdp-no'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='serialize'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='ss'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='vaes'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='vpclmulqdq'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='xsaves'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      </blockers>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <blockers model='Skylake-Client'>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='erms'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='hle'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='invpcid'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='pcid'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='rtm'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      </blockers>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <blockers model='Skylake-Client-IBRS'>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='erms'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='hle'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='invpcid'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='pcid'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='rtm'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      </blockers>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='erms'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='invpcid'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='pcid'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      </blockers>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <blockers model='Skylake-Client-v1'>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='erms'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='hle'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='invpcid'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='pcid'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='rtm'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      </blockers>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <blockers model='Skylake-Client-v2'>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='erms'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='hle'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='invpcid'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='pcid'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='rtm'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      </blockers>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <blockers model='Skylake-Client-v3'>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='erms'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='invpcid'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='pcid'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      </blockers>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <blockers model='Skylake-Client-v4'>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='erms'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='invpcid'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='pcid'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='xsaves'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      </blockers>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <blockers model='Skylake-Server'>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512bw'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512cd'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512dq'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512f'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512vl'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='erms'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='hle'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='invpcid'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='pcid'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='pku'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='rtm'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      </blockers>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <blockers model='Skylake-Server-IBRS'>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512bw'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512cd'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512dq'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512f'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512vl'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='erms'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='hle'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='invpcid'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='pcid'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='pku'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='rtm'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      </blockers>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512bw'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512cd'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512dq'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512f'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512vl'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='erms'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='invpcid'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='pcid'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='pku'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      </blockers>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <blockers model='Skylake-Server-v1'>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512bw'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512cd'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512dq'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512f'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512vl'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='erms'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='hle'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='invpcid'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='pcid'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='pku'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='rtm'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      </blockers>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <blockers model='Skylake-Server-v2'>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512bw'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512cd'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512dq'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512f'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512vl'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='erms'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='hle'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='invpcid'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='pcid'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='pku'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='rtm'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      </blockers>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <blockers model='Skylake-Server-v3'>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512bw'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512cd'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512dq'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512f'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512vl'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='erms'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='invpcid'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='pcid'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='pku'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      </blockers>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <blockers model='Skylake-Server-v4'>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512bw'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512cd'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512dq'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512f'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512vl'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='erms'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='invpcid'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='pcid'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='pku'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      </blockers>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <blockers model='Skylake-Server-v5'>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512bw'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512cd'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512dq'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512f'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512vl'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='erms'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='invpcid'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='pcid'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='pku'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='xsaves'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      </blockers>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <blockers model='Snowridge'>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='cldemote'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='core-capability'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='erms'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='gfni'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='movdir64b'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='movdiri'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='mpx'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='split-lock-detect'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      </blockers>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <blockers model='Snowridge-v1'>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='cldemote'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='core-capability'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='erms'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='gfni'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='movdir64b'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='movdiri'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='mpx'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='split-lock-detect'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      </blockers>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <blockers model='Snowridge-v2'>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='cldemote'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='core-capability'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='erms'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='gfni'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='movdir64b'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='movdiri'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='split-lock-detect'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      </blockers>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <blockers model='Snowridge-v3'>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='cldemote'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='core-capability'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='erms'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='gfni'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='movdir64b'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='movdiri'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='split-lock-detect'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='xsaves'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      </blockers>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <blockers model='Snowridge-v4'>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='cldemote'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='erms'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='gfni'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='movdir64b'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='movdiri'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='xsaves'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      </blockers>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <blockers model='athlon'>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='3dnow'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='3dnowext'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      </blockers>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <blockers model='athlon-v1'>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='3dnow'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='3dnowext'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      </blockers>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <blockers model='core2duo'>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='ss'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      </blockers>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <blockers model='core2duo-v1'>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='ss'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      </blockers>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <blockers model='coreduo'>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='ss'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      </blockers>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <blockers model='coreduo-v1'>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='ss'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      </blockers>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <blockers model='n270'>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='ss'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      </blockers>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <blockers model='n270-v1'>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='ss'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      </blockers>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <blockers model='phenom'>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='3dnow'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='3dnowext'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      </blockers>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <blockers model='phenom-v1'>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='3dnow'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='3dnowext'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      </blockers>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:    </mode>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:  </cpu>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:  <memoryBacking supported='yes'>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:    <enum name='sourceType'>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <value>file</value>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <value>anonymous</value>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <value>memfd</value>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:    </enum>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:  </memoryBacking>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:  <devices>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:    <disk supported='yes'>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <enum name='diskDevice'>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <value>disk</value>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <value>cdrom</value>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <value>floppy</value>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <value>lun</value>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      </enum>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <enum name='bus'>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <value>fdc</value>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <value>scsi</value>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <value>virtio</value>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <value>usb</value>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <value>sata</value>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      </enum>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <enum name='model'>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <value>virtio</value>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <value>virtio-transitional</value>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <value>virtio-non-transitional</value>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      </enum>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:    </disk>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:    <graphics supported='yes'>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <enum name='type'>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <value>vnc</value>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <value>egl-headless</value>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <value>dbus</value>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      </enum>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:    </graphics>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:    <video supported='yes'>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <enum name='modelType'>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <value>vga</value>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <value>cirrus</value>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <value>virtio</value>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <value>none</value>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <value>bochs</value>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <value>ramfb</value>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      </enum>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:    </video>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:    <hostdev supported='yes'>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <enum name='mode'>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <value>subsystem</value>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      </enum>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <enum name='startupPolicy'>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <value>default</value>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <value>mandatory</value>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <value>requisite</value>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <value>optional</value>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      </enum>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <enum name='subsysType'>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <value>usb</value>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <value>pci</value>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <value>scsi</value>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      </enum>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <enum name='capsType'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <enum name='pciBackend'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:    </hostdev>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:    <rng supported='yes'>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <enum name='model'>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <value>virtio</value>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <value>virtio-transitional</value>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <value>virtio-non-transitional</value>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      </enum>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <enum name='backendModel'>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <value>random</value>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <value>egd</value>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <value>builtin</value>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      </enum>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:    </rng>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:    <filesystem supported='yes'>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <enum name='driverType'>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <value>path</value>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <value>handle</value>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <value>virtiofs</value>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      </enum>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:    </filesystem>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:    <tpm supported='yes'>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <enum name='model'>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <value>tpm-tis</value>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <value>tpm-crb</value>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      </enum>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <enum name='backendModel'>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <value>emulator</value>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <value>external</value>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      </enum>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <enum name='backendVersion'>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <value>2.0</value>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      </enum>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:    </tpm>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:    <redirdev supported='yes'>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <enum name='bus'>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <value>usb</value>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      </enum>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:    </redirdev>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:    <channel supported='yes'>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <enum name='type'>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <value>pty</value>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <value>unix</value>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      </enum>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:    </channel>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:    <crypto supported='yes'>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <enum name='model'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <enum name='type'>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <value>qemu</value>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      </enum>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <enum name='backendModel'>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <value>builtin</value>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      </enum>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:    </crypto>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:    <interface supported='yes'>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <enum name='backendType'>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <value>default</value>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <value>passt</value>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      </enum>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:    </interface>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:    <panic supported='yes'>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <enum name='model'>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <value>isa</value>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <value>hyperv</value>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      </enum>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:    </panic>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:    <console supported='yes'>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <enum name='type'>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <value>null</value>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <value>vc</value>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <value>pty</value>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <value>dev</value>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <value>file</value>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <value>pipe</value>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <value>stdio</value>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <value>udp</value>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <value>tcp</value>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <value>unix</value>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <value>qemu-vdagent</value>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <value>dbus</value>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      </enum>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:    </console>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:  </devices>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:  <features>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:    <gic supported='no'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:    <vmcoreinfo supported='yes'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:    <genid supported='yes'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:    <backingStoreInput supported='yes'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:    <backup supported='yes'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:    <async-teardown supported='yes'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:    <s390-pv supported='no'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:    <ps2 supported='yes'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:    <tdx supported='no'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:    <sev supported='no'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:    <sgx supported='no'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:    <hyperv supported='yes'>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <enum name='features'>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <value>relaxed</value>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <value>vapic</value>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <value>spinlocks</value>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <value>vpindex</value>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <value>runtime</value>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <value>synic</value>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <value>stimer</value>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <value>reset</value>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <value>vendor_id</value>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <value>frequencies</value>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <value>reenlightenment</value>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <value>tlbflush</value>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <value>ipi</value>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <value>avic</value>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <value>emsr_bitmap</value>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <value>xmm_input</value>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      </enum>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <defaults>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <spinlocks>4095</spinlocks>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <stimer_direct>on</stimer_direct>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <tlbflush_direct>on</tlbflush_direct>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <tlbflush_extended>on</tlbflush_extended>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <vendor_id>Linux KVM Hv</vendor_id>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      </defaults>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:    </hyperv>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:    <launchSecurity supported='no'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:  </features>
Mar  1 05:00:33 np0005634532 nova_compute[257049]: </domainCapabilities>
Mar  1 05:00:33 np0005634532 nova_compute[257049]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Mar  1 05:00:33 np0005634532 nova_compute[257049]: 2026-03-01 10:00:33.762 257053 DEBUG nova.virt.libvirt.host [None req-dccfbb97-79ac-418c-ab3a-fd3d119141c0 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Mar  1 05:00:33 np0005634532 nova_compute[257049]: <domainCapabilities>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:  <path>/usr/libexec/qemu-kvm</path>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:  <domain>kvm</domain>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:  <machine>pc-i440fx-rhel7.6.0</machine>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:  <arch>i686</arch>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:  <vcpu max='240'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:  <iothreads supported='yes'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:  <os supported='yes'>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:    <enum name='firmware'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:    <loader supported='yes'>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <enum name='type'>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <value>rom</value>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <value>pflash</value>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      </enum>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <enum name='readonly'>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <value>yes</value>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <value>no</value>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      </enum>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <enum name='secure'>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <value>no</value>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      </enum>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:    </loader>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:  </os>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:  <cpu>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:    <mode name='host-passthrough' supported='yes'>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <enum name='hostPassthroughMigratable'>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <value>on</value>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <value>off</value>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      </enum>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:    </mode>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:    <mode name='maximum' supported='yes'>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <enum name='maximumMigratable'>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <value>on</value>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <value>off</value>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      </enum>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:    </mode>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:    <mode name='host-model' supported='yes'>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <model fallback='forbid'>EPYC-Rome</model>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <vendor>AMD</vendor>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <maxphysaddr mode='passthrough' limit='40'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <feature policy='require' name='x2apic'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <feature policy='require' name='tsc-deadline'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <feature policy='require' name='hypervisor'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <feature policy='require' name='tsc_adjust'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <feature policy='require' name='spec-ctrl'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <feature policy='require' name='stibp'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <feature policy='require' name='ssbd'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <feature policy='require' name='cmp_legacy'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <feature policy='require' name='overflow-recov'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <feature policy='require' name='succor'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <feature policy='require' name='ibrs'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <feature policy='require' name='amd-ssbd'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <feature policy='require' name='virt-ssbd'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <feature policy='require' name='lbrv'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <feature policy='require' name='tsc-scale'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <feature policy='require' name='vmcb-clean'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <feature policy='require' name='flushbyasid'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <feature policy='require' name='pause-filter'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <feature policy='require' name='pfthreshold'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <feature policy='require' name='svme-addr-chk'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <feature policy='require' name='lfence-always-serializing'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <feature policy='disable' name='xsaves'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:    </mode>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:    <mode name='custom' supported='yes'>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <blockers model='Broadwell'>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='erms'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='hle'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='invpcid'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='pcid'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='rtm'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      </blockers>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <blockers model='Broadwell-IBRS'>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='erms'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='hle'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='invpcid'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='pcid'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='rtm'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      </blockers>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <blockers model='Broadwell-noTSX'>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='erms'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='invpcid'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='pcid'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      </blockers>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <blockers model='Broadwell-noTSX-IBRS'>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='erms'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='invpcid'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='pcid'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      </blockers>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <blockers model='Broadwell-v1'>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='erms'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='hle'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='invpcid'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='pcid'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='rtm'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      </blockers>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <blockers model='Broadwell-v2'>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='erms'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='invpcid'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='pcid'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      </blockers>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <blockers model='Broadwell-v3'>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='erms'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='hle'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='invpcid'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='pcid'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='rtm'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      </blockers>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <blockers model='Broadwell-v4'>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='erms'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='invpcid'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='pcid'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      </blockers>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <blockers model='Cascadelake-Server'>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512bw'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512cd'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512dq'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512f'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512vl'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512vnni'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='erms'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='hle'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='invpcid'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='pcid'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='pku'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='rtm'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      </blockers>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <blockers model='Cascadelake-Server-noTSX'>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512bw'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512cd'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512dq'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512f'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512vl'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512vnni'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='erms'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='ibrs-all'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='invpcid'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='pcid'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='pku'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      </blockers>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <blockers model='Cascadelake-Server-v1'>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512bw'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512cd'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512dq'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512f'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512vl'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512vnni'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='erms'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='hle'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='invpcid'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='pcid'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='pku'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='rtm'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      </blockers>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <blockers model='Cascadelake-Server-v2'>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512bw'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512cd'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512dq'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512f'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512vl'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512vnni'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='erms'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='hle'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='ibrs-all'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='invpcid'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='pcid'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='pku'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='rtm'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      </blockers>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <blockers model='Cascadelake-Server-v3'>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512bw'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512cd'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512dq'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512f'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512vl'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512vnni'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='erms'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='ibrs-all'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='invpcid'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='pcid'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='pku'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      </blockers>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:      <blockers model='Cascadelake-Server-v4'>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512bw'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512cd'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512dq'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512f'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512vl'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='avx512vnni'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='erms'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='ibrs-all'/>
Mar  1 05:00:33 np0005634532 nova_compute[257049]:        <feature name='invpcid'/>
Mar  1 05:03:08 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:03:08 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_29] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff878002490 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:03:08 np0005634532 rsyslogd[1019]: imjournal: 7598 messages lost due to rate-limiting (20000 allowed within 600 seconds)
Mar  1 05:03:08 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:03:08 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:03:08 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:03:08.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:03:08 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:03:08 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_30] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff868004870 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:03:08 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v612: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Mar  1 05:03:09 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:03:09 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:03:09 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:03:09.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:03:09 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:03:09 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff868004870 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:03:10 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:03:10 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff868004870 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:03:10 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:03:10 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:03:10 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:03:10.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:03:10 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:03:10 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff858001230 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:03:10 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v613: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Mar  1 05:03:11 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:03:11.125 167541 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=2, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ba:77:84', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'd2:e0:96:ea:56:89'}, ipsec=False) old=SB_Global(nb_cfg=1) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Mar  1 05:03:11 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:03:11.126 167541 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Mar  1 05:03:11 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:03:11.127 167541 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=90b7dc66-b984-4d8b-9541-ddde79c5f544, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '2'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Mar  1 05:03:11 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:03:11 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 05:03:11 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:03:11.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 05:03:11 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:03:11 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff864002fe0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:03:11 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:03:12 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:03:12 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff868004870 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:03:12 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:03:12 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:03:12 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:03:12.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:03:12 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:03:12 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff868004870 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:03:12 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v614: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Mar  1 05:03:13 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:03:13 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:03:13 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:03:13.349 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:03:13 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:03:13 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff858001230 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:03:14 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:03:14 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff864002fe0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:03:14 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:03:14 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:03:14 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:03:14.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:03:14 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:03:14 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff868004870 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:03:14 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v615: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Mar  1 05:03:15 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:03:15 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:03:15 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:03:15.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:03:15 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:03:15 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff86000bd50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:03:16 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:03:16 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff858001230 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:03:16 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:03:16 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 05:03:16 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:03:16.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 05:03:16 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:03:16 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff864002fe0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:03:16 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:03:16 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v616: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Mar  1 05:03:17 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: ::ffff:192.168.122.100 - - [01/Mar/2026:10:03:17] "GET /metrics HTTP/1.1" 200 48342 "" "Prometheus/2.51.0"
Mar  1 05:03:17 np0005634532 ceph-mgr[76134]: [prometheus INFO cherrypy.access.140604152982112] ::ffff:192.168.122.100 - - [01/Mar/2026:10:03:17] "GET /metrics HTTP/1.1" 200 48342 "" "Prometheus/2.51.0"
Mar  1 05:03:17 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:03:17.177Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Mar  1 05:03:17 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:03:17.177Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Mar  1 05:03:17 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:03:17.177Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Mar  1 05:03:17 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:03:17 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000024s ======
Mar  1 05:03:17 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:03:17.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Mar  1 05:03:17 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:03:17 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff864002fe0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:03:17 np0005634532 ceph-mgr[76134]: [balancer INFO root] Optimize plan auto_2026-03-01_10:03:17
Mar  1 05:03:17 np0005634532 ceph-mgr[76134]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Mar  1 05:03:17 np0005634532 ceph-mgr[76134]: [balancer INFO root] do_upmap
Mar  1 05:03:17 np0005634532 ceph-mgr[76134]: [balancer INFO root] pools ['images', '.nfs', 'default.rgw.log', '.mgr', 'volumes', 'cephfs.cephfs.data', 'default.rgw.control', 'vms', '.rgw.root', 'cephfs.cephfs.meta', 'default.rgw.meta', 'backups']
Mar  1 05:03:17 np0005634532 ceph-mgr[76134]: [balancer INFO root] prepared 0/10 upmap changes
Mar  1 05:03:17 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Mar  1 05:03:17 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Mar  1 05:03:17 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Mar  1 05:03:17 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Mar  1 05:03:17 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Mar  1 05:03:17 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Mar  1 05:03:17 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Mar  1 05:03:17 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Mar  1 05:03:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] _maybe_adjust
Mar  1 05:03:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:03:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Mar  1 05:03:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:03:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Mar  1 05:03:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:03:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Mar  1 05:03:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:03:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Mar  1 05:03:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:03:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Mar  1 05:03:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:03:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Mar  1 05:03:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:03:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Mar  1 05:03:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:03:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Mar  1 05:03:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:03:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Mar  1 05:03:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:03:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Mar  1 05:03:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:03:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Mar  1 05:03:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:03:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Mar  1 05:03:17 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Mar  1 05:03:17 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: vms, start_after=
Mar  1 05:03:17 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Mar  1 05:03:17 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: vms, start_after=
Mar  1 05:03:17 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: volumes, start_after=
Mar  1 05:03:17 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: volumes, start_after=
Mar  1 05:03:17 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: backups, start_after=
Mar  1 05:03:17 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: backups, start_after=
Mar  1 05:03:17 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: images, start_after=
Mar  1 05:03:17 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: images, start_after=
Mar  1 05:03:18 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:03:18 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff86000bd50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:03:18 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:03:18 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:03:18 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:03:18.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:03:18 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:03:18 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff858001230 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:03:18 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v617: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Mar  1 05:03:19 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:03:19 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:03:19 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:03:19.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:03:19 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:03:19 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff868004870 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:03:20 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:03:20 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff864002fe0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:03:20 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:03:20 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000024s ======
Mar  1 05:03:20 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:03:20.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Mar  1 05:03:20 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:03:20 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff86000bd50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:03:20 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v618: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Mar  1 05:03:21 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:03:21 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:03:21 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:03:21.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:03:21 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:03:21 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff858001230 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:03:21 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:03:22 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:03:22 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff868004890 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:03:22 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:03:22 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:03:22 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:03:22.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:03:22 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:03:22 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff864002fe0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:03:22 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v619: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Mar  1 05:03:23 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:03:23 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:03:23 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:03:23.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:03:23 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:03:23 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff86000bd50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:03:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:03:23.875 167541 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Mar  1 05:03:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:03:23.875 167541 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Mar  1 05:03:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:03:23.875 167541 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Mar  1 05:03:24 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:03:24 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8680048b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:03:24 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:03:24 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:03:24 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:03:24.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:03:24 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:03:24 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff858001230 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:03:24 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v620: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Mar  1 05:03:25 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:03:25 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:03:25 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:03:25.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:03:25 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:03:25 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff864002fe0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:03:26 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:03:26 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff86000bd50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:03:26 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:03:26 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:03:26 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:03:26.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:03:26 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:03:26 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8680048d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:03:26 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:03:26 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v621: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Mar  1 05:03:27 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: ::ffff:192.168.122.100 - - [01/Mar/2026:10:03:27] "GET /metrics HTTP/1.1" 200 48343 "" "Prometheus/2.51.0"
Mar  1 05:03:27 np0005634532 ceph-mgr[76134]: [prometheus INFO cherrypy.access.140604152982112] ::ffff:192.168.122.100 - - [01/Mar/2026:10:03:27] "GET /metrics HTTP/1.1" 200 48343 "" "Prometheus/2.51.0"
Mar  1 05:03:27 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:03:27.178Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Mar  1 05:03:27 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:03:27 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:03:27 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:03:27.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:03:27 np0005634532 podman[259615]: 2026-03-01 10:03:27.389545882 +0000 UTC m=+0.082287790 container health_status eba5e7c5bf1cb0694498c3b5d0f9b901dec36f2482ec40aef98fed1e15d9f18d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:bf1f44b6bfa655654f556ae3adca2582892e72550d916a5b0a8dbacb0e210bbe, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '08b9467e1b7e95537191fb2fa6825d176e65d7d6be048e9a0925db064969bbde-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:bf1f44b6bfa655654f556ae3adca2582892e72550d916a5b0a8dbacb0e210bbe', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Mar  1 05:03:27 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:03:27 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff858001230 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:03:28 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:03:28 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff864002fe0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:03:28 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:03:28 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:03:28 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:03:28.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:03:28 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:03:28 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff86000bd50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:03:28 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v622: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Mar  1 05:03:29 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:03:29 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:03:29 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:03:29.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:03:29 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:03:29 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8680048f0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:03:30 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:03:30 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8580013d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:03:30 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:03:30 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:03:30 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:03:30.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:03:30 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:03:30 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff864002fe0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:03:30 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v623: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Mar  1 05:03:31 np0005634532 podman[259646]: 2026-03-01 10:03:31.346295762 +0000 UTC m=+0.043006642 container health_status 1df4dec302d8b40bce93714986cfc892519e8a31436399020d0034ae17ac64d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a5baae9e21dffd42c66f546b0b62f95c6af9f409d05e01e7483de42d5dd2a382, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, managed_by=edpm_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '08b9467e1b7e95537191fb2fa6825d176e65d7d6be048e9a0925db064969bbde-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a5baae9e21dffd42c66f546b0b62f95c6af9f409d05e01e7483de42d5dd2a382', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Mar  1 05:03:31 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:03:31 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:03:31 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:03:31.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:03:31 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:03:31 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff86000bd50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:03:31 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:03:32 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:03:32 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff868004910 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:03:32 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Mar  1 05:03:32 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Mar  1 05:03:32 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:03:32 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:03:32 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:03:32.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:03:32 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:03:32 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8580013d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:03:32 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v624: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Mar  1 05:03:33 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:03:33 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:03:33 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:03:33.369 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:03:33 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:03:33 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff864002fe0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:03:33 np0005634532 radosgw[91037]: INFO: RGWReshardLock::lock found lock on reshard.0000000002 to be held by another RGW process; skipping for now
Mar  1 05:03:33 np0005634532 radosgw[91037]: INFO: RGWReshardLock::lock found lock on reshard.0000000003 to be held by another RGW process; skipping for now
Mar  1 05:03:33 np0005634532 radosgw[91037]: INFO: RGWReshardLock::lock found lock on reshard.0000000005 to be held by another RGW process; skipping for now
Mar  1 05:03:33 np0005634532 radosgw[91037]: INFO: RGWReshardLock::lock found lock on reshard.0000000007 to be held by another RGW process; skipping for now
Mar  1 05:03:33 np0005634532 radosgw[91037]: INFO: RGWReshardLock::lock found lock on reshard.0000000008 to be held by another RGW process; skipping for now
Mar  1 05:03:33 np0005634532 radosgw[91037]: INFO: RGWReshardLock::lock found lock on reshard.0000000010 to be held by another RGW process; skipping for now
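The six INFO lines above are this radosgw instance walking the dynamic-resharding queue and skipping the shards (reshard.0000000002 through reshard.0000000010) whose locks are held by a peer RGW; with several radosgw processes in the deployment this is routine contention, not a failure. A small sketch, purely illustrative and not part of any Ceph tooling, that tallies these skip events per shard from a syslog file like this one:

    # Count "RGWReshardLock ... skipping" events per reshard shard.
    # The log path is an assumption; point it at wherever this log lives.
    import re
    from collections import Counter

    PATTERN = re.compile(r"RGWReshardLock::lock found lock on (reshard\.\d+)")

    def count_reshard_skips(path):
        counts = Counter()
        with open(path, errors="replace") as fh:
            for line in fh:
                match = PATTERN.search(line)
                if match:
                    counts[match.group(1)] += 1
        return counts

    if __name__ == "__main__":
        for shard, hits in sorted(count_reshard_skips("/var/log/messages").items()):
            print(shard, hits)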
Mar  1 05:03:34 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:03:34 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff86000bd50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:03:34 np0005634532 nova_compute[257049]: 2026-03-01 10:03:34.170 257053 DEBUG oslo_service.periodic_task [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Mar  1 05:03:34 np0005634532 nova_compute[257049]: 2026-03-01 10:03:34.171 257053 DEBUG oslo_service.periodic_task [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Mar  1 05:03:34 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:03:34 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:03:34 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:03:34.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:03:34 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:03:34 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff86000bd50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:03:34 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v625: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 0 B/s wr, 2 op/s
Mar  1 05:03:34 np0005634532 nova_compute[257049]: 2026-03-01 10:03:34.973 257053 DEBUG oslo_service.periodic_task [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Mar  1 05:03:34 np0005634532 nova_compute[257049]: 2026-03-01 10:03:34.976 257053 DEBUG oslo_service.periodic_task [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Mar  1 05:03:34 np0005634532 nova_compute[257049]: 2026-03-01 10:03:34.976 257053 DEBUG oslo_service.periodic_task [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Mar  1 05:03:34 np0005634532 nova_compute[257049]: 2026-03-01 10:03:34.976 257053 DEBUG oslo_service.periodic_task [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Mar  1 05:03:34 np0005634532 nova_compute[257049]: 2026-03-01 10:03:34.976 257053 DEBUG nova.compute.manager [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Mar  1 05:03:34 np0005634532 nova_compute[257049]: 2026-03-01 10:03:34.976 257053 DEBUG oslo_service.periodic_task [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Mar  1 05:03:34 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Mar  1 05:03:34 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Mar  1 05:03:35 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 05:03:35 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Mar  1 05:03:35 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 05:03:35 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Mar  1 05:03:35 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 05:03:35 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 05:03:35 np0005634532 nova_compute[257049]: 2026-03-01 10:03:35.104 257053 DEBUG oslo_concurrency.lockutils [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Mar  1 05:03:35 np0005634532 nova_compute[257049]: 2026-03-01 10:03:35.105 257053 DEBUG oslo_concurrency.lockutils [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Mar  1 05:03:35 np0005634532 nova_compute[257049]: 2026-03-01 10:03:35.105 257053 DEBUG oslo_concurrency.lockutils [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Mar  1 05:03:35 np0005634532 nova_compute[257049]: 2026-03-01 10:03:35.105 257053 DEBUG nova.compute.resource_tracker [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Mar  1 05:03:35 np0005634532 nova_compute[257049]: 2026-03-01 10:03:35.105 257053 DEBUG oslo_concurrency.processutils [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Mar  1 05:03:35 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:03:35 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:03:35 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:03:35.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:03:35 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:03:35 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff868004930 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:03:35 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Mar  1 05:03:35 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3032746714' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Mar  1 05:03:35 np0005634532 nova_compute[257049]: 2026-03-01 10:03:35.549 257053 DEBUG oslo_concurrency.processutils [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
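Nova's resource audit shells out to the ceph CLI exactly as logged: spawn `ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf`, wait (~0.44s here), and parse the JSON to size the RBD-backed disk pool. A stand-alone approximation of that probe; the `stats` keys follow the usual `ceph df` JSON layout and are an assumption from memory, not something this log shows:

    # Re-run the same "ceph df" probe nova logged above and summarise capacity.
    import json
    import subprocess

    def ceph_df(conf="/etc/ceph/ceph.conf", client_id="openstack"):
        out = subprocess.check_output(
            ["ceph", "df", "--format=json", "--id", client_id, "--conf", conf]
        )
        return json.loads(out)

    if __name__ == "__main__":
        stats = ceph_df()["stats"]  # assumed keys: total_bytes, total_avail_bytes
        gib = 1024 ** 3
        print("total %.1f GiB / avail %.1f GiB"
              % (stats["total_bytes"] / gib, stats["total_avail_bytes"] / gib))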
Mar  1 05:03:35 np0005634532 nova_compute[257049]: 2026-03-01 10:03:35.686 257053 WARNING nova.virt.libvirt.driver [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Mar  1 05:03:35 np0005634532 nova_compute[257049]: 2026-03-01 10:03:35.687 257053 DEBUG nova.compute.resource_tracker [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4932MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Mar  1 05:03:35 np0005634532 nova_compute[257049]: 2026-03-01 10:03:35.687 257053 DEBUG oslo_concurrency.lockutils [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Mar  1 05:03:35 np0005634532 nova_compute[257049]: 2026-03-01 10:03:35.687 257053 DEBUG oslo_concurrency.lockutils [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Mar  1 05:03:35 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Mar  1 05:03:35 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Mar  1 05:03:35 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Mar  1 05:03:35 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Mar  1 05:03:35 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Mar  1 05:03:35 np0005634532 nova_compute[257049]: 2026-03-01 10:03:35.740 257053 DEBUG nova.compute.resource_tracker [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Mar  1 05:03:35 np0005634532 nova_compute[257049]: 2026-03-01 10:03:35.741 257053 DEBUG nova.compute.resource_tracker [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Mar  1 05:03:35 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 05:03:35 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Mar  1 05:03:35 np0005634532 nova_compute[257049]: 2026-03-01 10:03:35.753 257053 DEBUG oslo_concurrency.processutils [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Mar  1 05:03:35 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 05:03:35 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Mar  1 05:03:35 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Mar  1 05:03:35 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Mar  1 05:03:35 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Mar  1 05:03:35 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Mar  1 05:03:35 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Mar  1 05:03:36 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 05:03:36 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 05:03:36 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 05:03:36 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 05:03:36 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Mar  1 05:03:36 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 05:03:36 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 05:03:36 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Mar  1 05:03:36 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:03:36 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff864003000 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:03:36 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Mar  1 05:03:36 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2641030270' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Mar  1 05:03:36 np0005634532 nova_compute[257049]: 2026-03-01 10:03:36.207 257053 DEBUG oslo_concurrency.processutils [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Mar  1 05:03:36 np0005634532 nova_compute[257049]: 2026-03-01 10:03:36.212 257053 DEBUG nova.compute.provider_tree [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Inventory has not changed in ProviderTree for provider: 018d246d-1e01-4168-9128-598c5501111b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Mar  1 05:03:36 np0005634532 podman[259914]: 2026-03-01 10:03:36.22086428 +0000 UTC m=+0.040723165 container create 107c8771edd4bde43f5980d85952e84bbe95c8da941e2c17b7f5a1cadd6ced03 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_kirch, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Mar  1 05:03:36 np0005634532 nova_compute[257049]: 2026-03-01 10:03:36.228 257053 DEBUG nova.scheduler.client.report [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Inventory has not changed for provider 018d246d-1e01-4168-9128-598c5501111b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
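The inventory dict above is enough to reproduce what the placement service will admit for this node: per resource class, usable capacity is normally (total - reserved) * allocation_ratio (that is placement's usual formula, not something stated in this log). Worked through with the logged numbers:

    # Placement-style capacity from the inventory logged above.
    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 0,   "allocation_ratio": 0.9},
    }

    for rc, inv in inventory.items():
        capacity = int((inv["total"] - inv["reserved"]) * inv["allocation_ratio"])
        print(rc, capacity)
    # VCPU 32, MEMORY_MB 7167, DISK_GB 53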
Mar  1 05:03:36 np0005634532 nova_compute[257049]: 2026-03-01 10:03:36.229 257053 DEBUG nova.compute.resource_tracker [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Mar  1 05:03:36 np0005634532 nova_compute[257049]: 2026-03-01 10:03:36.230 257053 DEBUG oslo_concurrency.lockutils [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.542s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Mar  1 05:03:36 np0005634532 systemd[1]: Started libpod-conmon-107c8771edd4bde43f5980d85952e84bbe95c8da941e2c17b7f5a1cadd6ced03.scope.
Mar  1 05:03:36 np0005634532 systemd[1]: Started libcrun container.
Mar  1 05:03:36 np0005634532 podman[259914]: 2026-03-01 10:03:36.290255551 +0000 UTC m=+0.110114426 container init 107c8771edd4bde43f5980d85952e84bbe95c8da941e2c17b7f5a1cadd6ced03 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_kirch, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Mar  1 05:03:36 np0005634532 podman[259914]: 2026-03-01 10:03:36.295631264 +0000 UTC m=+0.115490149 container start 107c8771edd4bde43f5980d85952e84bbe95c8da941e2c17b7f5a1cadd6ced03 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_kirch, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Mar  1 05:03:36 np0005634532 podman[259914]: 2026-03-01 10:03:36.299201802 +0000 UTC m=+0.119060687 container attach 107c8771edd4bde43f5980d85952e84bbe95c8da941e2c17b7f5a1cadd6ced03 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_kirch, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Mar  1 05:03:36 np0005634532 podman[259914]: 2026-03-01 10:03:36.203910502 +0000 UTC m=+0.023769407 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Mar  1 05:03:36 np0005634532 naughty_kirch[259932]: 167 167
Mar  1 05:03:36 np0005634532 systemd[1]: libpod-107c8771edd4bde43f5980d85952e84bbe95c8da941e2c17b7f5a1cadd6ced03.scope: Deactivated successfully.
Mar  1 05:03:36 np0005634532 conmon[259932]: conmon 107c8771edd4bde43f59 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-107c8771edd4bde43f5980d85952e84bbe95c8da941e2c17b7f5a1cadd6ced03.scope/container/memory.events
Mar  1 05:03:36 np0005634532 podman[259914]: 2026-03-01 10:03:36.303330244 +0000 UTC m=+0.123189139 container died 107c8771edd4bde43f5980d85952e84bbe95c8da941e2c17b7f5a1cadd6ced03 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_kirch, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Mar  1 05:03:36 np0005634532 systemd[1]: var-lib-containers-storage-overlay-80a77a0b16b02e39a3297f119d25fa48f191f1a36151cca064d5cf7d7377ad50-merged.mount: Deactivated successfully.
Mar  1 05:03:36 np0005634532 podman[259914]: 2026-03-01 10:03:36.338572413 +0000 UTC m=+0.158431298 container remove 107c8771edd4bde43f5980d85952e84bbe95c8da941e2c17b7f5a1cadd6ced03 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_kirch, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid)
Mar  1 05:03:36 np0005634532 systemd[1]: libpod-conmon-107c8771edd4bde43f5980d85952e84bbe95c8da941e2c17b7f5a1cadd6ced03.scope: Deactivated successfully.
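The create/init/start/attach/died/remove burst above (about 120 ms end to end) is the cephadm one-shot pattern: the mgr launches a short-lived ceph container under a podman-generated name (naughty_kirch), captures one line of stdout ('167 167', which looks like a uid/gid probe; 167 is the ceph user in these images), and removes the container. A minimal reproduction of the pattern, using the image digest from the log; the probe command is an illustrative guess, not cephadm's exact invocation:

    # One-shot container: run, capture stdout, auto-remove.
    import subprocess

    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec")

    def one_shot(*cmd):
        return subprocess.check_output(
            ["podman", "run", "--rm", IMAGE, *cmd], text=True
        ).strip()

    if __name__ == "__main__":
        print(one_shot("stat", "-c", "%u %g", "/var/lib/ceph"))  # expect e.g.: 167 167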
Mar  1 05:03:36 np0005634532 podman[259957]: 2026-03-01 10:03:36.475228492 +0000 UTC m=+0.034099252 container create e3a62e4c5fc38171e491fa287210d633f2a5c2cab9152a9516d28f405597f49e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_vaughan, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Mar  1 05:03:36 np0005634532 systemd[1]: Started libpod-conmon-e3a62e4c5fc38171e491fa287210d633f2a5c2cab9152a9516d28f405597f49e.scope.
Mar  1 05:03:36 np0005634532 systemd[1]: Started libcrun container.
Mar  1 05:03:36 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a991c63b4d276839f2bc6ae51fb6756fe0412e8c03f8dc048db2fd74ce7156a2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Mar  1 05:03:36 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a991c63b4d276839f2bc6ae51fb6756fe0412e8c03f8dc048db2fd74ce7156a2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Mar  1 05:03:36 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a991c63b4d276839f2bc6ae51fb6756fe0412e8c03f8dc048db2fd74ce7156a2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Mar  1 05:03:36 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a991c63b4d276839f2bc6ae51fb6756fe0412e8c03f8dc048db2fd74ce7156a2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Mar  1 05:03:36 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a991c63b4d276839f2bc6ae51fb6756fe0412e8c03f8dc048db2fd74ce7156a2/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Mar  1 05:03:36 np0005634532 podman[259957]: 2026-03-01 10:03:36.529122981 +0000 UTC m=+0.087993791 container init e3a62e4c5fc38171e491fa287210d633f2a5c2cab9152a9516d28f405597f49e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_vaughan, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Mar  1 05:03:36 np0005634532 podman[259957]: 2026-03-01 10:03:36.537395905 +0000 UTC m=+0.096266665 container start e3a62e4c5fc38171e491fa287210d633f2a5c2cab9152a9516d28f405597f49e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_vaughan, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0)
Mar  1 05:03:36 np0005634532 podman[259957]: 2026-03-01 10:03:36.540771358 +0000 UTC m=+0.099642168 container attach e3a62e4c5fc38171e491fa287210d633f2a5c2cab9152a9516d28f405597f49e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_vaughan, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Mar  1 05:03:36 np0005634532 podman[259957]: 2026-03-01 10:03:36.459829332 +0000 UTC m=+0.018700112 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Mar  1 05:03:36 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:03:36 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:03:36 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:03:36.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:03:36 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:03:36 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8580013d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:03:36 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:03:36 np0005634532 flamboyant_vaughan[259974]: --> passed data devices: 0 physical, 1 LVM
Mar  1 05:03:36 np0005634532 flamboyant_vaughan[259974]: --> All data devices are unavailable
Mar  1 05:03:36 np0005634532 systemd[1]: libpod-e3a62e4c5fc38171e491fa287210d633f2a5c2cab9152a9516d28f405597f49e.scope: Deactivated successfully.
Mar  1 05:03:36 np0005634532 podman[259957]: 2026-03-01 10:03:36.8417656 +0000 UTC m=+0.400636360 container died e3a62e4c5fc38171e491fa287210d633f2a5c2cab9152a9516d28f405597f49e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_vaughan, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Mar  1 05:03:36 np0005634532 systemd[1]: var-lib-containers-storage-overlay-a991c63b4d276839f2bc6ae51fb6756fe0412e8c03f8dc048db2fd74ce7156a2-merged.mount: Deactivated successfully.
Mar  1 05:03:36 np0005634532 podman[259957]: 2026-03-01 10:03:36.882885003 +0000 UTC m=+0.441755763 container remove e3a62e4c5fc38171e491fa287210d633f2a5c2cab9152a9516d28f405597f49e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=flamboyant_vaughan, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/)
Mar  1 05:03:36 np0005634532 systemd[1]: libpod-conmon-e3a62e4c5fc38171e491fa287210d633f2a5c2cab9152a9516d28f405597f49e.scope: Deactivated successfully.
Mar  1 05:03:36 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v626: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.5 KiB/s rd, 0 B/s wr, 2 op/s
Mar  1 05:03:37 np0005634532 ceph-mgr[76134]: [prometheus INFO cherrypy.access.140604152982112] ::ffff:192.168.122.100 - - [01/Mar/2026:10:03:37] "GET /metrics HTTP/1.1" 200 48344 "" "Prometheus/2.51.0"
Mar  1 05:03:37 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: ::ffff:192.168.122.100 - - [01/Mar/2026:10:03:37] "GET /metrics HTTP/1.1" 200 48344 "" "Prometheus/2.51.0"
Mar  1 05:03:37 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:03:37.183Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
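Alertmanager exhausted its retry budget delivering the ceph-dashboard webhook to both peer receivers; "context deadline exceeded" points at an unreachable or unresponsive dashboard endpoint rather than a malformed alert. A quick reachability probe for one of the receivers named in the message (URL copied from the log; the empty JSON body is only a connectivity check, not a valid Prometheus payload):

    # Probe the dashboard webhook receiver that alertmanager timed out on.
    import urllib.error
    import urllib.request

    URL = "http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver"

    request = urllib.request.Request(
        URL, data=b"{}", headers={"Content-Type": "application/json"}
    )
    try:
        with urllib.request.urlopen(request, timeout=5) as resp:
            print("reachable, HTTP", resp.status)
    except urllib.error.HTTPError as exc:
        print("reachable, HTTP", exc.code)  # an HTTP error still proves the port answers
    except (urllib.error.URLError, OSError) as exc:
        print("unreachable:", exc)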
Mar  1 05:03:37 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:03:37 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:03:37 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:03:37.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:03:37 np0005634532 podman[260093]: 2026-03-01 10:03:37.458153458 +0000 UTC m=+0.037593948 container create 1c4147eebd83c364b0e2be5510c2d8534208c6565c0411e3eba3981e6bbc3abb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_chebyshev, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Mar  1 05:03:37 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:03:37 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff86000bd50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:03:37 np0005634532 systemd[1]: Started libpod-conmon-1c4147eebd83c364b0e2be5510c2d8534208c6565c0411e3eba3981e6bbc3abb.scope.
Mar  1 05:03:37 np0005634532 systemd[1]: Started libcrun container.
Mar  1 05:03:37 np0005634532 podman[260093]: 2026-03-01 10:03:37.440232826 +0000 UTC m=+0.019673336 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Mar  1 05:03:37 np0005634532 podman[260093]: 2026-03-01 10:03:37.539726469 +0000 UTC m=+0.119167029 container init 1c4147eebd83c364b0e2be5510c2d8534208c6565c0411e3eba3981e6bbc3abb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_chebyshev, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Mar  1 05:03:37 np0005634532 podman[260093]: 2026-03-01 10:03:37.547981263 +0000 UTC m=+0.127421753 container start 1c4147eebd83c364b0e2be5510c2d8534208c6565c0411e3eba3981e6bbc3abb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_chebyshev, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Mar  1 05:03:37 np0005634532 podman[260093]: 2026-03-01 10:03:37.551160891 +0000 UTC m=+0.130601391 container attach 1c4147eebd83c364b0e2be5510c2d8534208c6565c0411e3eba3981e6bbc3abb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_chebyshev, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Mar  1 05:03:37 np0005634532 competent_chebyshev[260109]: 167 167
Mar  1 05:03:37 np0005634532 systemd[1]: libpod-1c4147eebd83c364b0e2be5510c2d8534208c6565c0411e3eba3981e6bbc3abb.scope: Deactivated successfully.
Mar  1 05:03:37 np0005634532 podman[260093]: 2026-03-01 10:03:37.554306289 +0000 UTC m=+0.133746779 container died 1c4147eebd83c364b0e2be5510c2d8534208c6565c0411e3eba3981e6bbc3abb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_chebyshev, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Mar  1 05:03:37 np0005634532 systemd[1]: var-lib-containers-storage-overlay-0d62070d2dc73734a51e1075dc29ce2a20499ac8851bd93eef2cd3399a761573-merged.mount: Deactivated successfully.
Mar  1 05:03:37 np0005634532 podman[260093]: 2026-03-01 10:03:37.58964621 +0000 UTC m=+0.169086740 container remove 1c4147eebd83c364b0e2be5510c2d8534208c6565c0411e3eba3981e6bbc3abb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_chebyshev, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Mar  1 05:03:37 np0005634532 systemd[1]: libpod-conmon-1c4147eebd83c364b0e2be5510c2d8534208c6565c0411e3eba3981e6bbc3abb.scope: Deactivated successfully.
Mar  1 05:03:37 np0005634532 podman[260136]: 2026-03-01 10:03:37.729372615 +0000 UTC m=+0.036785028 container create 49c5cfade48e107da0d892f985b43f04512d673ef08565b40ffcaf12c1d3a57c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_kare, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Mar  1 05:03:37 np0005634532 systemd[1]: Started libpod-conmon-49c5cfade48e107da0d892f985b43f04512d673ef08565b40ffcaf12c1d3a57c.scope.
Mar  1 05:03:37 np0005634532 systemd[1]: Started libcrun container.
Mar  1 05:03:37 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf8a2ca309f406c9328c7ba8d2f4415f3da48de2d7456adacacf21fed6c20587/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Mar  1 05:03:37 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf8a2ca309f406c9328c7ba8d2f4415f3da48de2d7456adacacf21fed6c20587/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Mar  1 05:03:37 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf8a2ca309f406c9328c7ba8d2f4415f3da48de2d7456adacacf21fed6c20587/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Mar  1 05:03:37 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf8a2ca309f406c9328c7ba8d2f4415f3da48de2d7456adacacf21fed6c20587/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Mar  1 05:03:37 np0005634532 podman[260136]: 2026-03-01 10:03:37.711950456 +0000 UTC m=+0.019362899 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Mar  1 05:03:37 np0005634532 podman[260136]: 2026-03-01 10:03:37.812568666 +0000 UTC m=+0.119981079 container init 49c5cfade48e107da0d892f985b43f04512d673ef08565b40ffcaf12c1d3a57c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_kare, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Mar  1 05:03:37 np0005634532 podman[260136]: 2026-03-01 10:03:37.818117663 +0000 UTC m=+0.125530056 container start 49c5cfade48e107da0d892f985b43f04512d673ef08565b40ffcaf12c1d3a57c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_kare, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2, io.buildah.version=1.40.1)
Mar  1 05:03:37 np0005634532 podman[260136]: 2026-03-01 10:03:37.820662146 +0000 UTC m=+0.128074539 container attach 49c5cfade48e107da0d892f985b43f04512d673ef08565b40ffcaf12c1d3a57c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_kare, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Mar  1 05:03:38 np0005634532 elastic_kare[260152]: {
Mar  1 05:03:38 np0005634532 elastic_kare[260152]:    "0": [
Mar  1 05:03:38 np0005634532 elastic_kare[260152]:        {
Mar  1 05:03:38 np0005634532 elastic_kare[260152]:            "devices": [
Mar  1 05:03:38 np0005634532 elastic_kare[260152]:                "/dev/loop3"
Mar  1 05:03:38 np0005634532 elastic_kare[260152]:            ],
Mar  1 05:03:38 np0005634532 elastic_kare[260152]:            "lv_name": "ceph_lv0",
Mar  1 05:03:38 np0005634532 elastic_kare[260152]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Mar  1 05:03:38 np0005634532 elastic_kare[260152]:            "lv_size": "21470642176",
Mar  1 05:03:38 np0005634532 elastic_kare[260152]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=0RN1y3-WDwf-vmRn-3Uec-9cdU-WGcX-8Z6LEG,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=437b1e74-f995-5d64-af1d-257ce01d77ab,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=e5da778e-73b7-4ea1-8a91-750fe3f6aa68,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Mar  1 05:03:38 np0005634532 elastic_kare[260152]:            "lv_uuid": "0RN1y3-WDwf-vmRn-3Uec-9cdU-WGcX-8Z6LEG",
Mar  1 05:03:38 np0005634532 elastic_kare[260152]:            "name": "ceph_lv0",
Mar  1 05:03:38 np0005634532 elastic_kare[260152]:            "path": "/dev/ceph_vg0/ceph_lv0",
Mar  1 05:03:38 np0005634532 elastic_kare[260152]:            "tags": {
Mar  1 05:03:38 np0005634532 elastic_kare[260152]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Mar  1 05:03:38 np0005634532 elastic_kare[260152]:                "ceph.block_uuid": "0RN1y3-WDwf-vmRn-3Uec-9cdU-WGcX-8Z6LEG",
Mar  1 05:03:38 np0005634532 elastic_kare[260152]:                "ceph.cephx_lockbox_secret": "",
Mar  1 05:03:38 np0005634532 elastic_kare[260152]:                "ceph.cluster_fsid": "437b1e74-f995-5d64-af1d-257ce01d77ab",
Mar  1 05:03:38 np0005634532 elastic_kare[260152]:                "ceph.cluster_name": "ceph",
Mar  1 05:03:38 np0005634532 elastic_kare[260152]:                "ceph.crush_device_class": "",
Mar  1 05:03:38 np0005634532 elastic_kare[260152]:                "ceph.encrypted": "0",
Mar  1 05:03:38 np0005634532 elastic_kare[260152]:                "ceph.osd_fsid": "e5da778e-73b7-4ea1-8a91-750fe3f6aa68",
Mar  1 05:03:38 np0005634532 elastic_kare[260152]:                "ceph.osd_id": "0",
Mar  1 05:03:38 np0005634532 elastic_kare[260152]:                "ceph.osdspec_affinity": "default_drive_group",
Mar  1 05:03:38 np0005634532 elastic_kare[260152]:                "ceph.type": "block",
Mar  1 05:03:38 np0005634532 elastic_kare[260152]:                "ceph.vdo": "0",
Mar  1 05:03:38 np0005634532 elastic_kare[260152]:                "ceph.with_tpm": "0"
Mar  1 05:03:38 np0005634532 elastic_kare[260152]:            },
Mar  1 05:03:38 np0005634532 elastic_kare[260152]:            "type": "block",
Mar  1 05:03:38 np0005634532 elastic_kare[260152]:            "vg_name": "ceph_vg0"
Mar  1 05:03:38 np0005634532 elastic_kare[260152]:        }
Mar  1 05:03:38 np0005634532 elastic_kare[260152]:    ]
Mar  1 05:03:38 np0005634532 elastic_kare[260152]: }
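The JSON block above, emitted by the elastic_kare one-shot container, is a ceph-volume LVM inventory (the shape matches `ceph-volume lvm list --format json`): one block LV, /dev/ceph_vg0/ceph_lv0 on /dev/loop3, tagged as osd_id 0 with osd_fsid e5da778e-73b7-4ea1-8a91-750fe3f6aa68. A short sketch that reads such a report from a file and maps each OSD to its device path:

    # Parse a ceph-volume lvm list -style JSON report (as dumped above)
    # into {osd_id: (lv_path, osd_fsid)}. The input filename is an assumption.
    import json

    def osd_map(path):
        with open(path) as fh:
            report = json.load(fh)
        result = {}
        for osd_id, lvs in report.items():
            for lv in lvs:
                if lv.get("type") == "block":
                    result[osd_id] = (lv["lv_path"], lv["tags"]["ceph.osd_fsid"])
        return result

    if __name__ == "__main__":
        for osd_id, (dev, fsid) in osd_map("lvm-list.json").items():
            print(f"osd.{osd_id}: {dev} ({fsid})")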
Mar  1 05:03:38 np0005634532 systemd[1]: libpod-49c5cfade48e107da0d892f985b43f04512d673ef08565b40ffcaf12c1d3a57c.scope: Deactivated successfully.
Mar  1 05:03:38 np0005634532 podman[260136]: 2026-03-01 10:03:38.10308485 +0000 UTC m=+0.410497273 container died 49c5cfade48e107da0d892f985b43f04512d673ef08565b40ffcaf12c1d3a57c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_kare, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Mar  1 05:03:38 np0005634532 systemd[1]: var-lib-containers-storage-overlay-bf8a2ca309f406c9328c7ba8d2f4415f3da48de2d7456adacacf21fed6c20587-merged.mount: Deactivated successfully.
Mar  1 05:03:38 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:03:38 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff868004950 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:03:38 np0005634532 podman[260136]: 2026-03-01 10:03:38.141174759 +0000 UTC m=+0.448587152 container remove 49c5cfade48e107da0d892f985b43f04512d673ef08565b40ffcaf12c1d3a57c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elastic_kare, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True)
Mar  1 05:03:38 np0005634532 systemd[1]: libpod-conmon-49c5cfade48e107da0d892f985b43f04512d673ef08565b40ffcaf12c1d3a57c.scope: Deactivated successfully.
Mar  1 05:03:38 np0005634532 nova_compute[257049]: 2026-03-01 10:03:38.230 257053 DEBUG oslo_service.periodic_task [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Mar  1 05:03:38 np0005634532 nova_compute[257049]: 2026-03-01 10:03:38.231 257053 DEBUG nova.compute.manager [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Mar  1 05:03:38 np0005634532 nova_compute[257049]: 2026-03-01 10:03:38.231 257053 DEBUG nova.compute.manager [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Mar  1 05:03:38 np0005634532 nova_compute[257049]: 2026-03-01 10:03:38.250 257053 DEBUG nova.compute.manager [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Mar  1 05:03:38 np0005634532 nova_compute[257049]: 2026-03-01 10:03:38.251 257053 DEBUG oslo_service.periodic_task [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
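[annotation] The five nova_compute entries above come from oslo.service's periodic task loop driving ComputeManager._heal_instance_info_cache, which finds no instances to refresh on this host. A minimal sketch of the same mechanism, assuming oslo.config and oslo.service are installed; the Tasks class and the 60 s spacing are illustrative, not Nova's actual values:

    from oslo_config import cfg
    from oslo_service import periodic_task

    class Tasks(periodic_task.PeriodicTasks):
        # run_immediately=True so a single run_periodic_tasks() call fires the task
        @periodic_task.periodic_task(spacing=60, run_immediately=True)
        def _heal_instance_info_cache(self, context):
            # Nova's real task rebuilds each instance's network info cache;
            # this stand-in only logs that it ran.
            print("Rebuilding the list of instances to heal")

    tasks = Tasks(cfg.ConfigOpts())
    tasks.run_periodic_tasks(None)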
Mar  1 05:03:38 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:03:38 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:03:38 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:03:38.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:03:38 np0005634532 podman[260265]: 2026-03-01 10:03:38.657979561 +0000 UTC m=+0.032249816 container create 9459474986f525652fe5fc8373b9754644c16c541741207461c9bd565b17056d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_chatterjee, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Mar  1 05:03:38 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:03:38 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_25] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff864003020 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:03:38 np0005634532 systemd[1]: Started libpod-conmon-9459474986f525652fe5fc8373b9754644c16c541741207461c9bd565b17056d.scope.
Mar  1 05:03:38 np0005634532 systemd[1]: Started libcrun container.
Mar  1 05:03:38 np0005634532 podman[260265]: 2026-03-01 10:03:38.727431894 +0000 UTC m=+0.101702149 container init 9459474986f525652fe5fc8373b9754644c16c541741207461c9bd565b17056d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_chatterjee, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Mar  1 05:03:38 np0005634532 podman[260265]: 2026-03-01 10:03:38.732188101 +0000 UTC m=+0.106458356 container start 9459474986f525652fe5fc8373b9754644c16c541741207461c9bd565b17056d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_chatterjee, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Mar  1 05:03:38 np0005634532 vigorous_chatterjee[260282]: 167 167
Mar  1 05:03:38 np0005634532 systemd[1]: libpod-9459474986f525652fe5fc8373b9754644c16c541741207461c9bd565b17056d.scope: Deactivated successfully.
Mar  1 05:03:38 np0005634532 podman[260265]: 2026-03-01 10:03:38.737331318 +0000 UTC m=+0.111601593 container attach 9459474986f525652fe5fc8373b9754644c16c541741207461c9bd565b17056d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_chatterjee, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Mar  1 05:03:38 np0005634532 podman[260265]: 2026-03-01 10:03:38.737948853 +0000 UTC m=+0.112219108 container died 9459474986f525652fe5fc8373b9754644c16c541741207461c9bd565b17056d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_chatterjee, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Mar  1 05:03:38 np0005634532 podman[260265]: 2026-03-01 10:03:38.643217718 +0000 UTC m=+0.017487993 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Mar  1 05:03:38 np0005634532 systemd[1]: var-lib-containers-storage-overlay-585ec4fe41787a26dcdb6be0d2e308c9e983680f61dfe11e93502f02a25a37bc-merged.mount: Deactivated successfully.
Mar  1 05:03:38 np0005634532 podman[260265]: 2026-03-01 10:03:38.774263949 +0000 UTC m=+0.148534204 container remove 9459474986f525652fe5fc8373b9754644c16c541741207461c9bd565b17056d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_chatterjee, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Mar  1 05:03:38 np0005634532 systemd[1]: libpod-conmon-9459474986f525652fe5fc8373b9754644c16c541741207461c9bd565b17056d.scope: Deactivated successfully.
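[annotation] The podman entries above trace a complete short-lived container lifecycle (create, init, start, attach, died, remove) for 9459474986f5… inside roughly 120 ms; cephadm drives such throwaway containers for host and device checks. A minimal sketch, assuming lines shaped like the podman entries in this log, that pairs create/remove events per container ID to report lifetimes (the file name events.log is hypothetical):

    import re
    from datetime import datetime

    # Matches podman lifecycle lines such as:
    #   podman[260265]: 2026-03-01 10:03:38.657979561 +0000 UTC m=+0.032249816 container create 9459...056d (image=...)
    EVENT = re.compile(
        r"podman\[\d+\]: (?P<ts>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d+) \+0000 UTC "
        r"m=\+\S+ container (?P<event>\w+) (?P<cid>[0-9a-f]{64})"
    )

    def lifetimes(path):
        created = {}
        with open(path) as fh:
            for line in fh:
                m = EVENT.search(line)
                if not m:
                    continue
                ts = datetime.fromisoformat(m["ts"][:26])  # trim ns to us for fromisoformat
                if m["event"] == "create":
                    created[m["cid"]] = ts
                elif m["event"] == "remove" and m["cid"] in created:
                    age = (ts - created.pop(m["cid"])).total_seconds()
                    print(f"{m['cid'][:12]} lived {age:.3f}s")

    lifetimes("events.log")  # hypothetical capture of the journal lines above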
Mar  1 05:03:38 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v627: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 101 KiB/s rd, 0 B/s wr, 168 op/s
Mar  1 05:03:38 np0005634532 podman[260307]: 2026-03-01 10:03:38.898390339 +0000 UTC m=+0.034011169 container create d5b12de4185b323c7b07e0853d6d2b5c0fc3aae5b1982eb89c3b4ea1150cb26f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_edison, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Mar  1 05:03:38 np0005634532 systemd[1]: Started libpod-conmon-d5b12de4185b323c7b07e0853d6d2b5c0fc3aae5b1982eb89c3b4ea1150cb26f.scope.
Mar  1 05:03:38 np0005634532 systemd[1]: Started libcrun container.
Mar  1 05:03:38 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ce7b8457fb320ecd191b9dd033030eb4b7fd46bcf945c64abf79675c42f4000/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Mar  1 05:03:38 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ce7b8457fb320ecd191b9dd033030eb4b7fd46bcf945c64abf79675c42f4000/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Mar  1 05:03:38 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ce7b8457fb320ecd191b9dd033030eb4b7fd46bcf945c64abf79675c42f4000/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Mar  1 05:03:38 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ce7b8457fb320ecd191b9dd033030eb4b7fd46bcf945c64abf79675c42f4000/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Mar  1 05:03:38 np0005634532 podman[260307]: 2026-03-01 10:03:38.972143788 +0000 UTC m=+0.107764648 container init d5b12de4185b323c7b07e0853d6d2b5c0fc3aae5b1982eb89c3b4ea1150cb26f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_edison, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Mar  1 05:03:38 np0005634532 podman[260307]: 2026-03-01 10:03:38.977470349 +0000 UTC m=+0.113091179 container start d5b12de4185b323c7b07e0853d6d2b5c0fc3aae5b1982eb89c3b4ea1150cb26f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_edison, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Mar  1 05:03:38 np0005634532 podman[260307]: 2026-03-01 10:03:38.980422562 +0000 UTC m=+0.116043412 container attach d5b12de4185b323c7b07e0853d6d2b5c0fc3aae5b1982eb89c3b4ea1150cb26f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_edison, io.buildah.version=1.40.1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Mar  1 05:03:38 np0005634532 podman[260307]: 2026-03-01 10:03:38.885137312 +0000 UTC m=+0.020758162 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Mar  1 05:03:39 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:03:39 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:03:39 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:03:39.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:03:39 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:03:39 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8580013d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:03:39 np0005634532 lvm[260399]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Mar  1 05:03:39 np0005634532 lvm[260399]: VG ceph_vg0 finished
Mar  1 05:03:39 np0005634532 cranky_edison[260325]: {}
Mar  1 05:03:39 np0005634532 systemd[1]: libpod-d5b12de4185b323c7b07e0853d6d2b5c0fc3aae5b1982eb89c3b4ea1150cb26f.scope: Deactivated successfully.
Mar  1 05:03:39 np0005634532 podman[260307]: 2026-03-01 10:03:39.661267738 +0000 UTC m=+0.796888578 container died d5b12de4185b323c7b07e0853d6d2b5c0fc3aae5b1982eb89c3b4ea1150cb26f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_edison, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Mar  1 05:03:39 np0005634532 systemd[1]: var-lib-containers-storage-overlay-1ce7b8457fb320ecd191b9dd033030eb4b7fd46bcf945c64abf79675c42f4000-merged.mount: Deactivated successfully.
Mar  1 05:03:39 np0005634532 podman[260307]: 2026-03-01 10:03:39.705697614 +0000 UTC m=+0.841318454 container remove d5b12de4185b323c7b07e0853d6d2b5c0fc3aae5b1982eb89c3b4ea1150cb26f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=cranky_edison, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Mar  1 05:03:39 np0005634532 systemd[1]: libpod-conmon-d5b12de4185b323c7b07e0853d6d2b5c0fc3aae5b1982eb89c3b4ea1150cb26f.scope: Deactivated successfully.
Mar  1 05:03:39 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Mar  1 05:03:39 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 05:03:39 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Mar  1 05:03:39 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 05:03:40 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 05:03:40 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 05:03:40 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:03:40 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_32] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff86000bd50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:03:40 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:03:40 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:03:40 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:03:40.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:03:40 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:03:40 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff868004970 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:03:40 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v628: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 101 KiB/s rd, 0 B/s wr, 168 op/s
Mar  1 05:03:41 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:03:41 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:03:41 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:03:41.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:03:41 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:03:41 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_33] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff864003040 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:03:41 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:03:42 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:03:42 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff878002490 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:03:42 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:03:42 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:03:42 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:03:42.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:03:42 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:03:42 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_34] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff86000bd50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:03:42 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v629: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 101 KiB/s rd, 0 B/s wr, 168 op/s
Mar  1 05:03:43 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:03:43 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:03:43 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:03:43.379 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:03:43 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:03:43 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff884001130 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:03:44 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:03:44 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_33] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff864003060 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:03:44 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:03:44 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:03:44 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:03:44.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:03:44 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:03:44 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff878002490 fd 48 proxy header rest len failed header rlen = % (will set dead)
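[annotation] The ganesha.nfsd svc_vc_recv EVENT above repeats every second or two on fd 48; the trailing "header rlen = %" is apparently a mangled format specifier in the daemon's own log call, and the steady cadence is consistent with a TCP health probe (haproxy's NFS backend checks appear later in this log). A minimal sketch, assuming the ganesha log format shown above, that counts events per svc worker thread (ganesha.log is a hypothetical capture):

    import re
    from collections import Counter

    # e.g. "... ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff868004950 fd 48 ..."
    PAT = re.compile(r"ganesha\.nfsd-\d+\[(?P<thread>svc_\d+)\] rpc :TIRPC :EVENT :svc_vc_recv")

    counts = Counter()
    with open("ganesha.log") as fh:  # hypothetical capture of the lines above
        for line in fh:
            m = PAT.search(line)
            if m:
                counts[m["thread"]] += 1

    for thread, n in counts.most_common():
        print(f"{thread}: {n} svc_vc_recv failures")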
Mar  1 05:03:44 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v630: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 101 KiB/s rd, 0 B/s wr, 168 op/s
Mar  1 05:03:45 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:03:45 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 05:03:45 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:03:45.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 05:03:45 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:03:45 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_34] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff86000bd50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:03:46 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:03:46 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff884001130 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:03:46 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:03:46 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 05:03:46 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:03:46.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 05:03:46 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:03:46 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_33] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff864003080 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:03:46 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:03:46 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v631: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 100 KiB/s rd, 0 B/s wr, 166 op/s
Mar  1 05:03:47 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: ::ffff:192.168.122.100 - - [01/Mar/2026:10:03:47] "GET /metrics HTTP/1.1" 200 48344 "" "Prometheus/2.51.0"
Mar  1 05:03:47 np0005634532 ceph-mgr[76134]: [prometheus INFO cherrypy.access.140604152982112] ::ffff:192.168.122.100 - - [01/Mar/2026:10:03:47] "GET /metrics HTTP/1.1" 200 48344 "" "Prometheus/2.51.0"
Mar  1 05:03:47 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:03:47.184Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
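[annotation] Alertmanager above fails to deliver the ceph-dashboard webhook to compute-1 and compute-2 on port 8443 ("context deadline exceeded"), i.e. the receivers never answer within the retry window. A minimal reachability probe for those two endpoints, using only the hosts and port taken from the error above (the 5 s timeout is an assumption):

    import socket

    targets = [
        ("compute-1.ctlplane.example.com", 8443),
        ("compute-2.ctlplane.example.com", 8443),
    ]

    for host, port in targets:
        try:
            # Plain TCP connect; enough to distinguish "filtered/down" from "listening"
            with socket.create_connection((host, port), timeout=5):
                print(f"{host}:{port} reachable")
        except OSError as exc:
            print(f"{host}:{port} unreachable: {exc}")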
Mar  1 05:03:47 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:03:47 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:03:47 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:03:47.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:03:47 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Mar  1 05:03:47 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Mar  1 05:03:47 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Mar  1 05:03:47 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Mar  1 05:03:47 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Mar  1 05:03:47 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Mar  1 05:03:47 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Mar  1 05:03:47 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Mar  1 05:03:48 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:03:48 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff878002490 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:03:48 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:03:48 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff86000bd50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:03:48 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:03:48 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:03:48 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:03:48.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:03:48 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:03:48 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff884001130 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:03:48 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v632: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 100 KiB/s rd, 0 B/s wr, 166 op/s
Mar  1 05:03:49 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:03:49 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000024s ======
Mar  1 05:03:49 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:03:49.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Mar  1 05:03:49 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:03:49 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_33] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff878002490 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:03:50 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:03:50 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff878002490 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:03:50 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:03:50 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 05:03:50 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:03:50.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 05:03:50 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:03:50 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff86000bd50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:03:50 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v633: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Mar  1 05:03:51 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:03:51 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:03:51 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:03:51.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:03:51 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:03:51 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff884001130 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:03:51 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:03:52 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:03:52 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_33] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff878002490 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:03:52 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:03:52 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:03:52 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:03:52.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:03:52 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:03:52 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_34] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff864003100 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:03:52 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v634: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Mar  1 05:03:53 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:03:53 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:03:53 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:03:53.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:03:53 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:03:53 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff86000bd50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:03:54 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:03:54 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff884001130 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:03:54 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:03:54 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:03:54 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:03:54.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
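[annotation] The radosgw triplets above (starting new request / req done / beast access line) are anonymous "HEAD /" health checks from 192.168.122.100 and .102, all returning 200 in ~0-1 ms. A minimal sketch, assuming the beast access-line format shown above, that extracts the latency field and summarizes it (radosgw.log is a hypothetical capture):

    import re
    from statistics import mean

    # e.g. 'beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [...] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s'
    BEAST = re.compile(
        r'beast: \S+: (?P<ip>\S+) .* "(?P<req>[^"]+)" (?P<status>\d{3}) \d+ .* latency=(?P<lat>[\d.]+)s'
    )

    lats = []
    with open("radosgw.log") as fh:  # hypothetical capture of the beast lines above
        for line in fh:
            m = BEAST.search(line)
            if m:
                lats.append(float(m["lat"]))

    if lats:
        print(f"{len(lats)} requests, mean {mean(lats)*1000:.3f} ms, max {max(lats)*1000:.3f} ms")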
Mar  1 05:03:54 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:03:54 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_33] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff878002490 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:03:54 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v635: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Mar  1 05:03:55 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:03:55 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:03:55 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:03:55.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:03:55 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:03:55 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_34] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff864003120 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:03:56 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:03:56 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff86000bd50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:03:56 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:03:56 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 05:03:56 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:03:56.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 05:03:56 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:03:56 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff884001130 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:03:56 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:03:56 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v636: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Mar  1 05:03:57 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: ::ffff:192.168.122.100 - - [01/Mar/2026:10:03:57] "GET /metrics HTTP/1.1" 200 48341 "" "Prometheus/2.51.0"
Mar  1 05:03:57 np0005634532 ceph-mgr[76134]: [prometheus INFO cherrypy.access.140604152982112] ::ffff:192.168.122.100 - - [01/Mar/2026:10:03:57] "GET /metrics HTTP/1.1" 200 48341 "" "Prometheus/2.51.0"
Mar  1 05:03:57 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:03:57.186Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Mar  1 05:03:57 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:03:57 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:03:57 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:03:57.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:03:57 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:03:57 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_34] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff884001130 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:03:58 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:03:58 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_33] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff864003140 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:03:58 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Mar  1 05:03:58 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3213643412' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Mar  1 05:03:58 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Mar  1 05:03:58 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3213643412' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Mar  1 05:03:58 np0005634532 podman[260490]: 2026-03-01 10:03:58.416596846 +0000 UTC m=+0.094979993 container health_status eba5e7c5bf1cb0694498c3b5d0f9b901dec36f2482ec40aef98fed1e15d9f18d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:bf1f44b6bfa655654f556ae3adca2582892e72550d916a5b0a8dbacb0e210bbe, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '08b9467e1b7e95537191fb2fa6825d176e65d7d6be048e9a0925db064969bbde-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:bf1f44b6bfa655654f556ae3adca2582892e72550d916a5b0a8dbacb0e210bbe', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20260223, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Mar  1 05:03:58 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:03:58 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 05:03:58 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:03:58.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 05:03:58 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:03:58 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff86000bd50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:03:58 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v637: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Mar  1 05:03:59 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:03:59 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:03:59 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:03:59.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:03:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:03:59 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_34] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff86000bd50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:04:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:04:00 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_34] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff884001130 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:04:00 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:04:00 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:04:00 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:04:00.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:04:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:04:00 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_33] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff864003160 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:04:00 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v638: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Mar  1 05:04:01 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:04:01 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000024s ======
Mar  1 05:04:01 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:04:01.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Mar  1 05:04:01 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:04:01 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff86000bd50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:04:01 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:04:02 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:04:02 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff878002490 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:04:02 np0005634532 podman[260523]: 2026-03-01 10:04:02.396973839 +0000 UTC m=+0.081386178 container health_status 1df4dec302d8b40bce93714986cfc892519e8a31436399020d0034ae17ac64d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a5baae9e21dffd42c66f546b0b62f95c6af9f409d05e01e7483de42d5dd2a382, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '08b9467e1b7e95537191fb2fa6825d176e65d7d6be048e9a0925db064969bbde-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a5baae9e21dffd42c66f546b0b62f95c6af9f409d05e01e7483de42d5dd2a382', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_metadata_agent, io.buildah.version=1.43.0, org.label-schema.build-date=20260223)
Mar  1 05:04:02 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Mar  1 05:04:02 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
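[annotation] The audit entries above show the cephadm mgr polling the mon with "osd blocklist ls" about every 15 s. The same query can be issued from this host with the ceph CLI; a minimal sketch, assuming an admin keyring at the default location and that the machine-readable output is wanted:

    import json
    import subprocess

    # Same mon command as in the audit lines above, asked for JSON output.
    out = subprocess.run(
        ["ceph", "osd", "blocklist", "ls", "--format", "json"],
        check=True, capture_output=True, text=True,
    ).stdout
    print(json.loads(out))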
Mar  1 05:04:02 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:04:02 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:04:02 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:04:02.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:04:02 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:04:02 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_34] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff884001130 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:04:02 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v639: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Mar  1 05:04:03 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:04:03 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:04:03 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:04:03.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:04:03 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:04:03 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_33] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff864003180 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:04:04 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:04:04 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff86000bd50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:04:04 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:04:04 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:04:04 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:04:04.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:04:04 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:04:04 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff878002490 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:04:04 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v640: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Mar  1 05:04:05 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:04:05 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:04:05 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:04:05.401 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:04:05 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:04:05 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_34] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff884001150 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:04:06 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:04:06 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_33] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8640031a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:04:06 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:04:06 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:04:06 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:04:06.651 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:04:06 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:04:06 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff86000c670 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:04:06 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:04:06 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v641: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Mar  1 05:04:07 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: ::ffff:192.168.122.100 - - [01/Mar/2026:10:04:07] "GET /metrics HTTP/1.1" 200 48343 "" "Prometheus/2.51.0"
Mar  1 05:04:07 np0005634532 ceph-mgr[76134]: [prometheus INFO cherrypy.access.140604152982112] ::ffff:192.168.122.100 - - [01/Mar/2026:10:04:07] "GET /metrics HTTP/1.1" 200 48343 "" "Prometheus/2.51.0"
Mar  1 05:04:07 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:04:07.186Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Mar  1 05:04:07 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:04:07 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:04:07 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:04:07.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:04:07 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:04:07 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff878002490 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:04:08 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:04:08 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_34] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff884001170 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:04:08 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:04:08 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:04:08 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:04:08.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:04:08 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:04:08 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_33] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8640031c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:04:08 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v642: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Mar  1 05:04:09 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:04:09 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:04:09 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:04:09.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:04:09 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:04:09 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff86000c670 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:04:10 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:04:10 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff878002490 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:04:10 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:04:10 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:04:10 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:04:10.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:04:10 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:04:10 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_34] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff884001190 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:04:10 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v643: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Mar  1 05:04:11 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-haproxy-nfs-cephfs-compute-0-wdbjdw[99156]: [WARNING] 059/100411 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Mar  1 05:04:11 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:04:11 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:04:11 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:04:11.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:04:11 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:04:11 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_33] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8640031e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:04:11 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:04:12 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:04:12 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff86000c670 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:04:12 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-haproxy-nfs-cephfs-compute-0-wdbjdw[99156]: [WARNING] 059/100412 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 1ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
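
Both nfs backends go DOWN on a Layer4 check, which for haproxy is nothing more than a TCP connect attempt; "Connection refused" means nothing was listening during the ganesha restart window. The same check, sketched by hand with placeholder backend addresses:

    import socket

    def l4_check(host: str, port: int, timeout: float = 2.0) -> bool:
        try:
            socket.create_connection((host, port), timeout=timeout).close()
            return True
        except OSError:  # ECONNREFUSED -> haproxy's "Connection refused"
            return False

    for server in ("192.168.122.100", "192.168.122.101"):
        print(server, "UP" if l4_check(server, 2049) else "DOWN")
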
Mar  1 05:04:12 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:04:12 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:04:12 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:04:12.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:04:12 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:04:12 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8680028b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:04:12 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v644: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Mar  1 05:04:13 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:04:13 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:04:13 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:04:13.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:04:13 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:04:13 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8580013d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:04:14 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:04:14 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_36] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8640031e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:04:14 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:04:14 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:04:14 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:04:14.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:04:14 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:04:14 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff86000c670 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:04:14 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v645: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Mar  1 05:04:15 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:04:15 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:04:15 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:04:15.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:04:15 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:04:15 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8680028b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:04:16 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:04:16 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8580013d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:04:16 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:04:16 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:04:16 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:04:16.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:04:16 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:04:16 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_36] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff864003200 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:04:16 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:04:16 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v646: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 op/s
Mar  1 05:04:17 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: ::ffff:192.168.122.100 - - [01/Mar/2026:10:04:17] "GET /metrics HTTP/1.1" 200 48343 "" "Prometheus/2.51.0"
Mar  1 05:04:17 np0005634532 ceph-mgr[76134]: [prometheus INFO cherrypy.access.140604152982112] ::ffff:192.168.122.100 - - [01/Mar/2026:10:04:17] "GET /metrics HTTP/1.1" 200 48343 "" "Prometheus/2.51.0"
Mar  1 05:04:17 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:04:17.187Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Mar  1 05:04:17 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:04:17 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:04:17 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:04:17.412 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:04:17 np0005634532 ceph-mgr[76134]: [balancer INFO root] Optimize plan auto_2026-03-01_10:04:17
Mar  1 05:04:17 np0005634532 ceph-mgr[76134]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Mar  1 05:04:17 np0005634532 ceph-mgr[76134]: [balancer INFO root] do_upmap
Mar  1 05:04:17 np0005634532 ceph-mgr[76134]: [balancer INFO root] pools ['backups', '.nfs', 'cephfs.cephfs.meta', 'default.rgw.control', 'default.rgw.meta', 'vms', '.rgw.root', 'default.rgw.log', '.mgr', 'images', 'cephfs.cephfs.data', 'volumes']
Mar  1 05:04:17 np0005634532 ceph-mgr[76134]: [balancer INFO root] prepared 0/10 upmap changes
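
This balancer pass runs in upmap mode with a 5% misplaced ceiling and prepares no changes, i.e. the PG distribution is already as even as upmap can make it. The same state can be read back via the CLI in JSON mode; a sketch, borrowing the --id/--conf pair that nova uses later in this log (the JSON keys are assumptions):

    import json
    import subprocess

    out = subprocess.run(
        ["ceph", "balancer", "status", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
        check=True, capture_output=True, text=True).stdout
    status = json.loads(out)
    print(status.get("mode"), status.get("active"))
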
Mar  1 05:04:17 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:04:17 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8680028b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:04:17 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Mar  1 05:04:17 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Mar  1 05:04:17 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Mar  1 05:04:17 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Mar  1 05:04:17 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Mar  1 05:04:17 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Mar  1 05:04:17 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Mar  1 05:04:17 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Mar  1 05:04:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] _maybe_adjust
Mar  1 05:04:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:04:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Mar  1 05:04:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:04:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Mar  1 05:04:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:04:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Mar  1 05:04:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:04:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Mar  1 05:04:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:04:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Mar  1 05:04:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:04:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Mar  1 05:04:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:04:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Mar  1 05:04:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:04:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Mar  1 05:04:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:04:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Mar  1 05:04:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:04:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Mar  1 05:04:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:04:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Mar  1 05:04:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:04:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
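
Each pg_autoscaler line above is the same arithmetic: pg target = usage ratio x bias x (OSD count x target PGs per OSD), then quantized. The factor of 300 implied by these numbers matches 3 OSDs at the default mon_target_pg_per_osd of 100; both values are inferred from this log, not read from the cluster's config. A check against the '.mgr' line:

    def pg_target(usage_ratio, bias, osds=3, target_per_osd=100):
        raw = usage_ratio * bias * osds * target_per_osd
        pgs = 1                    # quantize up to the next power of two
        while pgs < raw:
            pgs *= 2
        return raw, pgs

    raw, pgs = pg_target(7.185749983720779e-06, bias=1.0)
    print(raw)   # 0.0021557249951162337, matching the '.mgr' line exactly
    print(pgs)   # 1 -- the real quantizer also folds in per-pool minimums,
                 # which is why 'cephfs.cephfs.meta' lands on 16 rather than 1
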
Mar  1 05:04:17 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Mar  1 05:04:17 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: vms, start_after=
Mar  1 05:04:17 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Mar  1 05:04:17 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: vms, start_after=
Mar  1 05:04:17 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: volumes, start_after=
Mar  1 05:04:17 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: volumes, start_after=
Mar  1 05:04:17 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: backups, start_after=
Mar  1 05:04:17 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: backups, start_after=
Mar  1 05:04:17 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: images, start_after=
Mar  1 05:04:17 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: images, start_after=
Mar  1 05:04:18 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:04:18 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff86000c670 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:04:18 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:04:18 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000024s ======
Mar  1 05:04:18 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:04:18.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Mar  1 05:04:18 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:04:18 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8580013d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:04:18 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v647: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 170 B/s wr, 1 op/s
Mar  1 05:04:19 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:04:19 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Mar  1 05:04:19 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:04:19 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:04:19 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:04:19.414 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:04:19 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:04:19 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_36] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff864003220 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:04:20 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:04:20 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8680028b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:04:20 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:04:20 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:04:20 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:04:20.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:04:20 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:04:20 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff86000c670 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:04:20 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v648: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 170 B/s wr, 1 op/s
Mar  1 05:04:21 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:04:21 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:04:21 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:04:21.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:04:21 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:04:21 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8580013d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:04:21 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:04:22 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:04:22 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_36] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff864003240 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:04:22 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:04:22 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Mar  1 05:04:22 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:04:22 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Mar  1 05:04:22 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:04:22 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Mar  1 05:04:22 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:04:22 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000024s ======
Mar  1 05:04:22 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:04:22.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Mar  1 05:04:22 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:04:22 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8680028b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:04:22 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v649: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 170 B/s wr, 1 op/s
Mar  1 05:04:23 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:04:23 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:04:23 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:04:23.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:04:23 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:04:23 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff86000c670 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:04:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:04:23.877 167541 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Mar  1 05:04:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:04:23.877 167541 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Mar  1 05:04:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:04:23.877 167541 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
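
The Acquiring/acquired/"released" triple above is the standard trace from oslo.concurrency's lock wrapper. A minimal sketch of the pattern that produces it; the lock name is taken from the log, the function body is illustrative:

    from oslo_concurrency import lockutils

    @lockutils.synchronized("_check_child_processes")
    def check_child_processes():
        # runs with the named lock held; with oslo_concurrency.lockutils
        # logging at DEBUG, entry and exit emit the "acquired ... waited"
        # and "released ... held" lines seen above
        pass

    check_child_processes()
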
Mar  1 05:04:24 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:04:24 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8580013d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:04:24 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:04:24 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Mar  1 05:04:24 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:04:24 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:04:24 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:04:24.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:04:24 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:04:24 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_36] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff864003260 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:04:24 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v650: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.9 KiB/s rd, 682 B/s wr, 2 op/s
Mar  1 05:04:25 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:04:25 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:04:25 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:04:25.420 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:04:25 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:04:25 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff864003260 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:04:26 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:04:26 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff86000c670 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:04:26 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:04:26 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:04:26 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:04:26.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:04:26 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:04:26 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8580013d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:04:26 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:04:26 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v651: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 682 B/s wr, 2 op/s
Mar  1 05:04:27 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: ::ffff:192.168.122.100 - - [01/Mar/2026:10:04:27] "GET /metrics HTTP/1.1" 200 48337 "" "Prometheus/2.51.0"
Mar  1 05:04:27 np0005634532 ceph-mgr[76134]: [prometheus INFO cherrypy.access.140604152982112] ::ffff:192.168.122.100 - - [01/Mar/2026:10:04:27] "GET /metrics HTTP/1.1" 200 48337 "" "Prometheus/2.51.0"
Mar  1 05:04:27 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:04:27.188Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Mar  1 05:04:27 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:04:27 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
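
Taken together with the 10:04:19 entry, the grace window closes early: it opens with a nominal 90 s duration and is lifted eight seconds later, as soon as the reaper sees no clients holding reclaimable state ("clid count(0)"). The arithmetic:

    from datetime import datetime

    start = datetime(2026, 3, 1, 10, 4, 19)  # "NFS Server Now IN GRACE, duration 90"
    end = datetime(2026, 3, 1, 10, 4, 27)    # "NFS Server Now NOT IN GRACE"
    print((end - start).total_seconds())      # 8.0 -- well under the 90 s cap
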
Mar  1 05:04:27 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:04:27 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000024s ======
Mar  1 05:04:27 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:04:27.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Mar  1 05:04:27 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:04:27 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8580013d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:04:28 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:04:28 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8580013d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:04:28 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:04:28 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:04:28 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:04:28.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:04:28 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:04:28 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_36] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8880025d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:04:28 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v652: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 3.1 KiB/s rd, 1.1 KiB/s wr, 4 op/s
Mar  1 05:04:29 np0005634532 podman[260596]: 2026-03-01 10:04:29.375973486 +0000 UTC m=+0.070934476 container health_status eba5e7c5bf1cb0694498c3b5d0f9b901dec36f2482ec40aef98fed1e15d9f18d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:bf1f44b6bfa655654f556ae3adca2582892e72550d916a5b0a8dbacb0e210bbe, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '08b9467e1b7e95537191fb2fa6825d176e65d7d6be048e9a0925db064969bbde-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:bf1f44b6bfa655654f556ae3adca2582892e72550d916a5b0a8dbacb0e210bbe', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
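
The podman event above is a healthcheck result: the config_data mounts a healthcheck script into the container and podman records health_status=healthy with a zero failing streak. A sketch that streams the same events from the CLI; the filter value comes from the log line, and the JSON field names are assumptions:

    import json
    import subprocess

    proc = subprocess.Popen(
        ["podman", "events", "--filter", "container=ovn_controller",
         "--format", "json"],
        stdout=subprocess.PIPE, text=True)
    for line in proc.stdout:
        ev = json.loads(line)
        print(ev.get("Time"), ev.get("Name"), ev.get("Status"))
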
Mar  1 05:04:29 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:04:29 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:04:29 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:04:29.424 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:04:29 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:04:29 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff864003310 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:04:30 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:04:30 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff86000c670 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:04:30 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:04:30 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:04:30 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:04:30.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:04:30 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:04:30 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8580013d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:04:30 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v653: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 938 B/s wr, 3 op/s
Mar  1 05:04:31 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-haproxy-nfs-cephfs-compute-0-wdbjdw[99156]: [WARNING] 059/100431 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Mar  1 05:04:31 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:04:31 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:04:31 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:04:31.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:04:31 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:04:31 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8580013d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:04:31 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:04:32 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:04:32 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff864004f50 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:04:32 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-haproxy-nfs-cephfs-compute-0-wdbjdw[99156]: [WARNING] 059/100432 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Mar  1 05:04:32 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Mar  1 05:04:32 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Mar  1 05:04:32 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:04:32 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:04:32 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:04:32.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:04:32 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:04:32 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff86000c670 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:04:32 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v654: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.4 KiB/s rd, 938 B/s wr, 3 op/s
Mar  1 05:04:32 np0005634532 nova_compute[257049]: 2026-03-01 10:04:32.977 257053 DEBUG oslo_service.periodic_task [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Mar  1 05:04:33 np0005634532 podman[260628]: 2026-03-01 10:04:33.368573412 +0000 UTC m=+0.058669614 container health_status 1df4dec302d8b40bce93714986cfc892519e8a31436399020d0034ae17ac64d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a5baae9e21dffd42c66f546b0b62f95c6af9f409d05e01e7483de42d5dd2a382, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '08b9467e1b7e95537191fb2fa6825d176e65d7d6be048e9a0925db064969bbde-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a5baae9e21dffd42c66f546b0b62f95c6af9f409d05e01e7483de42d5dd2a382', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Mar  1 05:04:33 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:04:33 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 05:04:33 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:04:33.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 05:04:33 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:04:33 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_36] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff86000c670 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:04:33 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:04:33 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=404 latency=0.002000049s ======
Mar  1 05:04:33 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:04:33.527 +0000] "GET /info HTTP/1.1" 404 152 - "python-urllib3/1.26.5" - latency=0.002000049s
Mar  1 05:04:33 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:04:33 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.002000050s ======
Mar  1 05:04:33 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - - [01/Mar/2026:10:04:33.540 +0000] "GET /swift/healthcheck HTTP/1.1" 200 0 - "python-urllib3/1.26.5" - latency=0.002000050s
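
The 404 on GET /info followed by a 200 on /swift/healthcheck, both with a python-urllib3 agent, is consistent with a client probing the gateway's Swift endpoint. The same pair of requests, sketched with urllib3; the base URL is an assumption, since the log records only the paths:

    import urllib3

    http = urllib3.PoolManager()
    base = "http://compute-0.ctlplane.example.com:8080"  # assumed RGW endpoint
    for path in ("/info", "/swift/healthcheck"):
        r = http.request("GET", base + path)
        print(path, r.status)
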
Mar  1 05:04:33 np0005634532 nova_compute[257049]: 2026-03-01 10:04:33.977 257053 DEBUG oslo_service.periodic_task [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Mar  1 05:04:34 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:04:34 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_36] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff86000c670 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:04:34 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:04:34 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 05:04:34 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:04:34.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 05:04:34 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:04:34 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_36] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff86000c670 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:04:34 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v655: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 938 B/s wr, 3 op/s
Mar  1 05:04:35 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:04:35 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:04:35 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:04:35.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:04:35 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:04:35 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_36] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff86000c670 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:04:36 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:04:36 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_36] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff86000c670 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:04:36 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:04:36 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:04:36 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:04:36.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:04:36 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:04:36 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_36] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff86000c670 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:04:36 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:04:36 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v656: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 426 B/s wr, 1 op/s
Mar  1 05:04:36 np0005634532 nova_compute[257049]: 2026-03-01 10:04:36.972 257053 DEBUG oslo_service.periodic_task [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Mar  1 05:04:36 np0005634532 nova_compute[257049]: 2026-03-01 10:04:36.976 257053 DEBUG oslo_service.periodic_task [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Mar  1 05:04:36 np0005634532 nova_compute[257049]: 2026-03-01 10:04:36.976 257053 DEBUG nova.compute.manager [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Mar  1 05:04:36 np0005634532 nova_compute[257049]: 2026-03-01 10:04:36.976 257053 DEBUG nova.compute.manager [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Mar  1 05:04:36 np0005634532 nova_compute[257049]: 2026-03-01 10:04:36.989 257053 DEBUG nova.compute.manager [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Mar  1 05:04:36 np0005634532 nova_compute[257049]: 2026-03-01 10:04:36.990 257053 DEBUG oslo_service.periodic_task [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Mar  1 05:04:36 np0005634532 nova_compute[257049]: 2026-03-01 10:04:36.990 257053 DEBUG oslo_service.periodic_task [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Mar  1 05:04:36 np0005634532 nova_compute[257049]: 2026-03-01 10:04:36.991 257053 DEBUG oslo_service.periodic_task [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Mar  1 05:04:36 np0005634532 nova_compute[257049]: 2026-03-01 10:04:36.991 257053 DEBUG nova.compute.manager [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Mar  1 05:04:36 np0005634532 nova_compute[257049]: 2026-03-01 10:04:36.991 257053 DEBUG oslo_service.periodic_task [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
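
Every "Running periodic task ComputeManager._*" line comes from oslo.service's periodic task runner iterating the decorated methods of the compute manager. A standalone sketch of that machinery; the task name is borrowed from the log, and the 10 s spacing is arbitrary:

    from oslo_config import cfg
    from oslo_service import periodic_task

    class Manager(periodic_task.PeriodicTasks):
        def __init__(self):
            super().__init__(cfg.CONF)

        @periodic_task.periodic_task(spacing=10, run_immediately=True)
        def _heal_instance_info_cache(self, context):
            print("healing info cache")

    # each call runs whatever tasks are due and returns the idle time
    Manager().run_periodic_tasks(context=None)
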
Mar  1 05:04:37 np0005634532 nova_compute[257049]: 2026-03-01 10:04:37.008 257053 DEBUG oslo_concurrency.lockutils [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Mar  1 05:04:37 np0005634532 nova_compute[257049]: 2026-03-01 10:04:37.009 257053 DEBUG oslo_concurrency.lockutils [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Mar  1 05:04:37 np0005634532 nova_compute[257049]: 2026-03-01 10:04:37.009 257053 DEBUG oslo_concurrency.lockutils [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Mar  1 05:04:37 np0005634532 nova_compute[257049]: 2026-03-01 10:04:37.009 257053 DEBUG nova.compute.resource_tracker [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Mar  1 05:04:37 np0005634532 nova_compute[257049]: 2026-03-01 10:04:37.009 257053 DEBUG oslo_concurrency.processutils [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Mar  1 05:04:37 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: ::ffff:192.168.122.100 - - [01/Mar/2026:10:04:37] "GET /metrics HTTP/1.1" 200 48345 "" "Prometheus/2.51.0"
Mar  1 05:04:37 np0005634532 ceph-mgr[76134]: [prometheus INFO cherrypy.access.140604152982112] ::ffff:192.168.122.100 - - [01/Mar/2026:10:04:37] "GET /metrics HTTP/1.1" 200 48345 "" "Prometheus/2.51.0"
Mar  1 05:04:37 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:04:37.189Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Mar  1 05:04:37 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:04:37.190Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Mar  1 05:04:37 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Mar  1 05:04:37 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1743102131' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Mar  1 05:04:37 np0005634532 nova_compute[257049]: 2026-03-01 10:04:37.410 257053 DEBUG oslo_concurrency.processutils [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.401s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Mar  1 05:04:37 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:04:37 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:04:37 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:04:37.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:04:37 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:04:37 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_36] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff86000c670 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:04:37 np0005634532 nova_compute[257049]: 2026-03-01 10:04:37.548 257053 WARNING nova.virt.libvirt.driver [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Mar  1 05:04:37 np0005634532 nova_compute[257049]: 2026-03-01 10:04:37.549 257053 DEBUG nova.compute.resource_tracker [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4943MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Mar  1 05:04:37 np0005634532 nova_compute[257049]: 2026-03-01 10:04:37.549 257053 DEBUG oslo_concurrency.lockutils [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Mar  1 05:04:37 np0005634532 nova_compute[257049]: 2026-03-01 10:04:37.549 257053 DEBUG oslo_concurrency.lockutils [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Mar  1 05:04:37 np0005634532 nova_compute[257049]: 2026-03-01 10:04:37.601 257053 DEBUG nova.compute.resource_tracker [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Mar  1 05:04:37 np0005634532 nova_compute[257049]: 2026-03-01 10:04:37.602 257053 DEBUG nova.compute.resource_tracker [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Mar  1 05:04:37 np0005634532 nova_compute[257049]: 2026-03-01 10:04:37.623 257053 DEBUG oslo_concurrency.processutils [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Mar  1 05:04:38 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Mar  1 05:04:38 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/511728326' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Mar  1 05:04:38 np0005634532 nova_compute[257049]: 2026-03-01 10:04:38.034 257053 DEBUG oslo_concurrency.processutils [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.411s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Mar  1 05:04:38 np0005634532 nova_compute[257049]: 2026-03-01 10:04:38.039 257053 DEBUG nova.compute.provider_tree [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Inventory has not changed in ProviderTree for provider: 018d246d-1e01-4168-9128-598c5501111b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Mar  1 05:04:38 np0005634532 nova_compute[257049]: 2026-03-01 10:04:38.068 257053 DEBUG nova.scheduler.client.report [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Inventory has not changed for provider 018d246d-1e01-4168-9128-598c5501111b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
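Placement capacity for each resource class in the inventory above follows as (total - reserved) * allocation_ratio. A quick check against the logged values:

    # Schedulable capacity implied by the inventory dict Nova logged above.
    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 0,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, capacity)  # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 53.1

So with nothing running (used_vcpus=0 in the final resource view above), the scheduler still sees 32 VCPU, 7167 MB of RAM and about 53 GB of disk headroom on this node.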
Mar  1 05:04:38 np0005634532 nova_compute[257049]: 2026-03-01 10:04:38.069 257053 DEBUG nova.compute.resource_tracker [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Mar  1 05:04:38 np0005634532 nova_compute[257049]: 2026-03-01 10:04:38.069 257053 DEBUG oslo_concurrency.lockutils [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.520s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
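The acquire/release pair bracketing this update (waited 0.000s, held 0.520s) is oslo.concurrency's in-process lock: everything between the two lockutils lines ran under the "compute_resources" semaphore. A minimal sketch of the same primitive, assuming oslo.concurrency is installed; this is not Nova's actual decorator stack:

    # Minimal use of the oslo.concurrency lock named in the log lines above.
    from oslo_concurrency import lockutils

    @lockutils.synchronized("compute_resources")
    def update_available_resource():
        # Critical section: oslo logs the acquire, the held time and the
        # release around this call, as seen in the log above.
        ...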
Mar  1 05:04:38 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e137 do_prune osdmap full prune enabled
Mar  1 05:04:38 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e138 e138: 3 total, 3 up, 3 in
Mar  1 05:04:38 np0005634532 ceph-mon[75825]: log_channel(cluster) log [DBG] : osdmap e138: 3 total, 3 up, 3 in
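The osdmap advances one epoch at a time in this stretch (e137 through e141 below) while membership stays at 3 total, 3 up, 3 in; each bump follows a do_prune pass, so this appears to be map maintenance rather than OSD state change. The same summary can be fetched on demand; key names below match recent Ceph releases, and the client is assumed to have mon read caps:

    # On-demand version of the "3 total, 3 up, 3 in" summary logged above.
    import json
    import subprocess

    osd_stat = json.loads(subprocess.check_output(
        ["ceph", "osd", "stat", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"]))
    print(osd_stat["epoch"], osd_stat["num_osds"],
          osd_stat["num_up_osds"], osd_stat["num_in_osds"])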
Mar  1 05:04:38 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:04:38 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_28] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff86000c670 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:04:38 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:04:38 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:04:38 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:04:38.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:04:38 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:04:38 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff86000c670 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:04:38 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v658: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 307 B/s rd, 102 B/s wr, 0 op/s
Mar  1 05:04:39 np0005634532 nova_compute[257049]: 2026-03-01 10:04:39.056 257053 DEBUG oslo_service.periodic_task [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Mar  1 05:04:39 np0005634532 nova_compute[257049]: 2026-03-01 10:04:39.070 257053 DEBUG oslo_service.periodic_task [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Mar  1 05:04:39 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e138 do_prune osdmap full prune enabled
Mar  1 05:04:39 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e139 e139: 3 total, 3 up, 3 in
Mar  1 05:04:39 np0005634532 ceph-mon[75825]: log_channel(cluster) log [DBG] : osdmap e139: 3 total, 3 up, 3 in
Mar  1 05:04:39 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:04:39 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 05:04:39 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:04:39.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 05:04:39 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:04:39 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff86000c670 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:04:40 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e139 do_prune osdmap full prune enabled
Mar  1 05:04:40 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e140 e140: 3 total, 3 up, 3 in
Mar  1 05:04:40 np0005634532 ceph-mon[75825]: log_channel(cluster) log [DBG] : osdmap e140: 3 total, 3 up, 3 in
Mar  1 05:04:40 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:04:40 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff86000c670 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:04:40 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:04:40 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:04:40 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:04:40.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:04:40 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:04:40 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff858004590 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:04:40 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Mar  1 05:04:40 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Mar  1 05:04:40 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Mar  1 05:04:40 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Mar  1 05:04:40 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Mar  1 05:04:40 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 05:04:40 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Mar  1 05:04:40 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 05:04:40 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Mar  1 05:04:40 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Mar  1 05:04:40 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Mar  1 05:04:40 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Mar  1 05:04:40 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Mar  1 05:04:40 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
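This burst of mon_commands is the cephadm mgr module's periodic reconcile: it regenerates a minimal ceph.conf, re-reads the client.admin and client.bootstrap-osd keys, persists its own state under config-key (osd_remove_queue, spec.nfs.cephfs), and checks for destroyed OSDs. Each has a direct CLI equivalent; for example, assuming an admin keyring is available:

    # CLI equivalents of two of the mgr-issued mon commands audited above.
    import subprocess

    # "config generate-minimal-conf": a [global] stanza with fsid and mon_host.
    print(subprocess.check_output(
        ["ceph", "config", "generate-minimal-conf"]).decode())

    # "osd tree" restricted to destroyed OSDs, as JSON (empty here if none).
    print(subprocess.check_output(
        ["ceph", "osd", "tree", "destroyed", "--format", "json"]).decode())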
Mar  1 05:04:40 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v661: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 170 B/s wr, 0 op/s
Mar  1 05:04:41 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Mar  1 05:04:41 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 05:04:41 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 05:04:41 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Mar  1 05:04:41 np0005634532 podman[260899]: 2026-03-01 10:04:41.324550423 +0000 UTC m=+0.037021702 container create 964586a49a9e9e156f5e8d69b04afd4d275a95016f39cdc5e15a05d2bc5e1c15 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_cannon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Mar  1 05:04:41 np0005634532 systemd[1]: Started libpod-conmon-964586a49a9e9e156f5e8d69b04afd4d275a95016f39cdc5e15a05d2bc5e1c15.scope.
Mar  1 05:04:41 np0005634532 systemd[1]: Started libcrun container.
Mar  1 05:04:41 np0005634532 podman[260899]: 2026-03-01 10:04:41.383045872 +0000 UTC m=+0.095517131 container init 964586a49a9e9e156f5e8d69b04afd4d275a95016f39cdc5e15a05d2bc5e1c15 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_cannon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Mar  1 05:04:41 np0005634532 podman[260899]: 2026-03-01 10:04:41.388709851 +0000 UTC m=+0.101181130 container start 964586a49a9e9e156f5e8d69b04afd4d275a95016f39cdc5e15a05d2bc5e1c15 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_cannon, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Mar  1 05:04:41 np0005634532 podman[260899]: 2026-03-01 10:04:41.39233378 +0000 UTC m=+0.104805119 container attach 964586a49a9e9e156f5e8d69b04afd4d275a95016f39cdc5e15a05d2bc5e1c15 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_cannon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Mar  1 05:04:41 np0005634532 boring_cannon[260915]: 167 167
Mar  1 05:04:41 np0005634532 systemd[1]: libpod-964586a49a9e9e156f5e8d69b04afd4d275a95016f39cdc5e15a05d2bc5e1c15.scope: Deactivated successfully.
Mar  1 05:04:41 np0005634532 conmon[260915]: conmon 964586a49a9e9e156f5e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-964586a49a9e9e156f5e8d69b04afd4d275a95016f39cdc5e15a05d2bc5e1c15.scope/container/memory.events
Mar  1 05:04:41 np0005634532 podman[260899]: 2026-03-01 10:04:41.396146884 +0000 UTC m=+0.108618123 container died 964586a49a9e9e156f5e8d69b04afd4d275a95016f39cdc5e15a05d2bc5e1c15 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_cannon, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Mar  1 05:04:41 np0005634532 podman[260899]: 2026-03-01 10:04:41.308821956 +0000 UTC m=+0.021293215 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Mar  1 05:04:41 np0005634532 systemd[1]: var-lib-containers-storage-overlay-8eb1acbb5b62f33fea9014fe3da65f68217c692bf8af60cb2fbee0aa999ce66d-merged.mount: Deactivated successfully.
Mar  1 05:04:41 np0005634532 podman[260899]: 2026-03-01 10:04:41.431995176 +0000 UTC m=+0.144466415 container remove 964586a49a9e9e156f5e8d69b04afd4d275a95016f39cdc5e15a05d2bc5e1c15 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=boring_cannon, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Mar  1 05:04:41 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:04:41 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:04:41 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:04:41.436 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:04:41 np0005634532 systemd[1]: libpod-conmon-964586a49a9e9e156f5e8d69b04afd4d275a95016f39cdc5e15a05d2bc5e1c15.scope: Deactivated successfully.
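The short-lived, randomly named containers in this stretch (boring_cannon here, then pensive_gates and infallible_greider below) are cephadm one-shots: each is created, prints "167 167", and is torn down within about 100 ms. 167:167 is the ceph user and group inside the image, which cephadm apparently probes before writing daemon files. A hedged reproduction; the exact path cephadm stats is an assumption, since the log does not show the container command line:

    # Reproduce the uid/gid probe; "167 167" above is ceph's uid/gid in-image.
    # The stat'ed path is an assumption; cephadm does not log it here.
    import subprocess

    image = ("quay.io/ceph/ceph@sha256:"
             "7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec")
    print(subprocess.check_output(
        ["podman", "run", "--rm", image,
         "stat", "-c", "%u %g", "/var/lib/ceph"]).decode())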
Mar  1 05:04:41 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:04:41 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_36] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8680028b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:04:41 np0005634532 podman[260941]: 2026-03-01 10:04:41.548904482 +0000 UTC m=+0.044728132 container create 7210bddc2fa325af30e703ecb36ec00f5c02b9312ebadea86ee49603caad9b8f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_keldysh, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default)
Mar  1 05:04:41 np0005634532 systemd[1]: Started libpod-conmon-7210bddc2fa325af30e703ecb36ec00f5c02b9312ebadea86ee49603caad9b8f.scope.
Mar  1 05:04:41 np0005634532 systemd[1]: Started libcrun container.
Mar  1 05:04:41 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/435da73b20b2f643110a2b1f3e8e9585d7908b0b2128d206114e40c7138f8986/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Mar  1 05:04:41 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/435da73b20b2f643110a2b1f3e8e9585d7908b0b2128d206114e40c7138f8986/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Mar  1 05:04:41 np0005634532 podman[260941]: 2026-03-01 10:04:41.52363309 +0000 UTC m=+0.019456730 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Mar  1 05:04:41 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/435da73b20b2f643110a2b1f3e8e9585d7908b0b2128d206114e40c7138f8986/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Mar  1 05:04:41 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/435da73b20b2f643110a2b1f3e8e9585d7908b0b2128d206114e40c7138f8986/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Mar  1 05:04:41 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/435da73b20b2f643110a2b1f3e8e9585d7908b0b2128d206114e40c7138f8986/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Mar  1 05:04:41 np0005634532 podman[260941]: 2026-03-01 10:04:41.648954433 +0000 UTC m=+0.144778083 container init 7210bddc2fa325af30e703ecb36ec00f5c02b9312ebadea86ee49603caad9b8f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_keldysh, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Mar  1 05:04:41 np0005634532 podman[260941]: 2026-03-01 10:04:41.668682398 +0000 UTC m=+0.164506018 container start 7210bddc2fa325af30e703ecb36ec00f5c02b9312ebadea86ee49603caad9b8f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_keldysh, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Mar  1 05:04:41 np0005634532 podman[260941]: 2026-03-01 10:04:41.675791283 +0000 UTC m=+0.171614903 container attach 7210bddc2fa325af30e703ecb36ec00f5c02b9312ebadea86ee49603caad9b8f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_keldysh, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Mar  1 05:04:41 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:04:41 np0005634532 intelligent_keldysh[260957]: --> passed data devices: 0 physical, 1 LVM
Mar  1 05:04:41 np0005634532 intelligent_keldysh[260957]: --> All data devices are unavailable
Mar  1 05:04:41 np0005634532 systemd[1]: libpod-7210bddc2fa325af30e703ecb36ec00f5c02b9312ebadea86ee49603caad9b8f.scope: Deactivated successfully.
Mar  1 05:04:41 np0005634532 podman[260941]: 2026-03-01 10:04:41.999509246 +0000 UTC m=+0.495332886 container died 7210bddc2fa325af30e703ecb36ec00f5c02b9312ebadea86ee49603caad9b8f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_keldysh, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Mar  1 05:04:42 np0005634532 systemd[1]: var-lib-containers-storage-overlay-435da73b20b2f643110a2b1f3e8e9585d7908b0b2128d206114e40c7138f8986-merged.mount: Deactivated successfully.
Mar  1 05:04:42 np0005634532 podman[260941]: 2026-03-01 10:04:42.05412858 +0000 UTC m=+0.549952220 container remove 7210bddc2fa325af30e703ecb36ec00f5c02b9312ebadea86ee49603caad9b8f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_keldysh, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Mar  1 05:04:42 np0005634532 systemd[1]: libpod-conmon-7210bddc2fa325af30e703ecb36ec00f5c02b9312ebadea86ee49603caad9b8f.scope: Deactivated successfully.
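The intelligent_keldysh container was a ceph-volume dry run: "passed data devices: 0 physical, 1 LVM" followed by "All data devices are unavailable" means the only candidate is an LVM volume that is already consumed (the existing OSD LV listed in the JSON further below), so this reconcile pass creates no new OSDs. A sketch of the same report, assuming cephadm is installed and using the LV path from that listing; the exact flags are an assumption:

    # Dry-run report like the one the container above produced; the device
    # path comes from the "ceph-volume lvm list" output later in this log.
    import subprocess

    print(subprocess.check_output(
        ["cephadm", "ceph-volume", "--", "lvm", "batch",
         "--report", "/dev/ceph_vg0/ceph_lv0"]).decode())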
Mar  1 05:04:42 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e140 do_prune osdmap full prune enabled
Mar  1 05:04:42 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e141 e141: 3 total, 3 up, 3 in
Mar  1 05:04:42 np0005634532 ceph-mon[75825]: log_channel(cluster) log [DBG] : osdmap e141: 3 total, 3 up, 3 in
Mar  1 05:04:42 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:04:42 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8880025d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:04:42 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:04:42 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 05:04:42 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:04:42.704 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 05:04:42 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:04:42 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff86000c670 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:04:42 np0005634532 podman[261079]: 2026-03-01 10:04:42.72732949 +0000 UTC m=+0.046571886 container create 8a279d2e4f8c1476e8f63237b701ff77a11d7c7a86a8c3fdca53960b176aea44 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_gates, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Mar  1 05:04:42 np0005634532 systemd[1]: Started libpod-conmon-8a279d2e4f8c1476e8f63237b701ff77a11d7c7a86a8c3fdca53960b176aea44.scope.
Mar  1 05:04:42 np0005634532 systemd[1]: Started libcrun container.
Mar  1 05:04:42 np0005634532 podman[261079]: 2026-03-01 10:04:42.704987401 +0000 UTC m=+0.024229757 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Mar  1 05:04:42 np0005634532 podman[261079]: 2026-03-01 10:04:42.804864778 +0000 UTC m=+0.124107164 container init 8a279d2e4f8c1476e8f63237b701ff77a11d7c7a86a8c3fdca53960b176aea44 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_gates, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Mar  1 05:04:42 np0005634532 podman[261079]: 2026-03-01 10:04:42.813157442 +0000 UTC m=+0.132399768 container start 8a279d2e4f8c1476e8f63237b701ff77a11d7c7a86a8c3fdca53960b176aea44 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_gates, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Mar  1 05:04:42 np0005634532 pensive_gates[261095]: 167 167
Mar  1 05:04:42 np0005634532 podman[261079]: 2026-03-01 10:04:42.81753919 +0000 UTC m=+0.136781486 container attach 8a279d2e4f8c1476e8f63237b701ff77a11d7c7a86a8c3fdca53960b176aea44 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_gates, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Mar  1 05:04:42 np0005634532 systemd[1]: libpod-8a279d2e4f8c1476e8f63237b701ff77a11d7c7a86a8c3fdca53960b176aea44.scope: Deactivated successfully.
Mar  1 05:04:42 np0005634532 podman[261079]: 2026-03-01 10:04:42.818247617 +0000 UTC m=+0.137489903 container died 8a279d2e4f8c1476e8f63237b701ff77a11d7c7a86a8c3fdca53960b176aea44 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_gates, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Mar  1 05:04:42 np0005634532 systemd[1]: var-lib-containers-storage-overlay-81319a7220bf447e57daea6fe3af40e8509da25d7597eff7a006b8b53360246b-merged.mount: Deactivated successfully.
Mar  1 05:04:42 np0005634532 podman[261079]: 2026-03-01 10:04:42.863051229 +0000 UTC m=+0.182293545 container remove 8a279d2e4f8c1476e8f63237b701ff77a11d7c7a86a8c3fdca53960b176aea44 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=pensive_gates, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/)
Mar  1 05:04:42 np0005634532 systemd[1]: libpod-conmon-8a279d2e4f8c1476e8f63237b701ff77a11d7c7a86a8c3fdca53960b176aea44.scope: Deactivated successfully.
Mar  1 05:04:42 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v663: 353 pgs: 353 active+clean; 458 KiB data, 153 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 213 B/s wr, 0 op/s
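Across this window the mgr's pgmap digests (v658, v661, v663) are steady: 353 PGs, all active+clean, 153 MiB used of 60 GiB raw, with only trickle client IO from the health probes and mgr bookkeeping. The same numbers are available on demand; key names below match recent Ceph releases, and the client is assumed to have mon read caps:

    # On-demand version of the pgmap digest the mgr logs above.
    import json
    import subprocess

    status = json.loads(subprocess.check_output(
        ["ceph", "status", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"]))
    pgmap = status["pgmap"]
    print(pgmap["num_pgs"], pgmap["bytes_used"], pgmap["bytes_total"])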
Mar  1 05:04:43 np0005634532 podman[261119]: 2026-03-01 10:04:43.041703204 +0000 UTC m=+0.044301231 container create 18b8f6ebbebb972e4a2a361709d74be229c7878a2d3446506ab7a90f0f392cb3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_cori, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Mar  1 05:04:43 np0005634532 systemd[1]: Started libpod-conmon-18b8f6ebbebb972e4a2a361709d74be229c7878a2d3446506ab7a90f0f392cb3.scope.
Mar  1 05:04:43 np0005634532 systemd[1]: Started libcrun container.
Mar  1 05:04:43 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc42d18064c67b402ca6e5cbeea25c97379a75261b5fd8bdc8b91982b8eb9b5b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Mar  1 05:04:43 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc42d18064c67b402ca6e5cbeea25c97379a75261b5fd8bdc8b91982b8eb9b5b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Mar  1 05:04:43 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc42d18064c67b402ca6e5cbeea25c97379a75261b5fd8bdc8b91982b8eb9b5b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Mar  1 05:04:43 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc42d18064c67b402ca6e5cbeea25c97379a75261b5fd8bdc8b91982b8eb9b5b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Mar  1 05:04:43 np0005634532 podman[261119]: 2026-03-01 10:04:43.01959498 +0000 UTC m=+0.022193037 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Mar  1 05:04:43 np0005634532 podman[261119]: 2026-03-01 10:04:43.124085501 +0000 UTC m=+0.126683538 container init 18b8f6ebbebb972e4a2a361709d74be229c7878a2d3446506ab7a90f0f392cb3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_cori, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Mar  1 05:04:43 np0005634532 podman[261119]: 2026-03-01 10:04:43.130005996 +0000 UTC m=+0.132604033 container start 18b8f6ebbebb972e4a2a361709d74be229c7878a2d3446506ab7a90f0f392cb3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_cori, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Mar  1 05:04:43 np0005634532 podman[261119]: 2026-03-01 10:04:43.133245466 +0000 UTC m=+0.135843503 container attach 18b8f6ebbebb972e4a2a361709d74be229c7878a2d3446506ab7a90f0f392cb3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_cori, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Mar  1 05:04:43 np0005634532 quizzical_cori[261135]: {
Mar  1 05:04:43 np0005634532 quizzical_cori[261135]:    "0": [
Mar  1 05:04:43 np0005634532 quizzical_cori[261135]:        {
Mar  1 05:04:43 np0005634532 quizzical_cori[261135]:            "devices": [
Mar  1 05:04:43 np0005634532 quizzical_cori[261135]:                "/dev/loop3"
Mar  1 05:04:43 np0005634532 quizzical_cori[261135]:            ],
Mar  1 05:04:43 np0005634532 quizzical_cori[261135]:            "lv_name": "ceph_lv0",
Mar  1 05:04:43 np0005634532 quizzical_cori[261135]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Mar  1 05:04:43 np0005634532 quizzical_cori[261135]:            "lv_size": "21470642176",
Mar  1 05:04:43 np0005634532 quizzical_cori[261135]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=0RN1y3-WDwf-vmRn-3Uec-9cdU-WGcX-8Z6LEG,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=437b1e74-f995-5d64-af1d-257ce01d77ab,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=e5da778e-73b7-4ea1-8a91-750fe3f6aa68,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Mar  1 05:04:43 np0005634532 quizzical_cori[261135]:            "lv_uuid": "0RN1y3-WDwf-vmRn-3Uec-9cdU-WGcX-8Z6LEG",
Mar  1 05:04:43 np0005634532 quizzical_cori[261135]:            "name": "ceph_lv0",
Mar  1 05:04:43 np0005634532 quizzical_cori[261135]:            "path": "/dev/ceph_vg0/ceph_lv0",
Mar  1 05:04:43 np0005634532 quizzical_cori[261135]:            "tags": {
Mar  1 05:04:43 np0005634532 quizzical_cori[261135]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Mar  1 05:04:43 np0005634532 quizzical_cori[261135]:                "ceph.block_uuid": "0RN1y3-WDwf-vmRn-3Uec-9cdU-WGcX-8Z6LEG",
Mar  1 05:04:43 np0005634532 quizzical_cori[261135]:                "ceph.cephx_lockbox_secret": "",
Mar  1 05:04:43 np0005634532 quizzical_cori[261135]:                "ceph.cluster_fsid": "437b1e74-f995-5d64-af1d-257ce01d77ab",
Mar  1 05:04:43 np0005634532 quizzical_cori[261135]:                "ceph.cluster_name": "ceph",
Mar  1 05:04:43 np0005634532 quizzical_cori[261135]:                "ceph.crush_device_class": "",
Mar  1 05:04:43 np0005634532 quizzical_cori[261135]:                "ceph.encrypted": "0",
Mar  1 05:04:43 np0005634532 quizzical_cori[261135]:                "ceph.osd_fsid": "e5da778e-73b7-4ea1-8a91-750fe3f6aa68",
Mar  1 05:04:43 np0005634532 quizzical_cori[261135]:                "ceph.osd_id": "0",
Mar  1 05:04:43 np0005634532 quizzical_cori[261135]:                "ceph.osdspec_affinity": "default_drive_group",
Mar  1 05:04:43 np0005634532 quizzical_cori[261135]:                "ceph.type": "block",
Mar  1 05:04:43 np0005634532 quizzical_cori[261135]:                "ceph.vdo": "0",
Mar  1 05:04:43 np0005634532 quizzical_cori[261135]:                "ceph.with_tpm": "0"
Mar  1 05:04:43 np0005634532 quizzical_cori[261135]:            },
Mar  1 05:04:43 np0005634532 quizzical_cori[261135]:            "type": "block",
Mar  1 05:04:43 np0005634532 quizzical_cori[261135]:            "vg_name": "ceph_vg0"
Mar  1 05:04:43 np0005634532 quizzical_cori[261135]:        }
Mar  1 05:04:43 np0005634532 quizzical_cori[261135]:    ]
Mar  1 05:04:43 np0005634532 quizzical_cori[261135]: }
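The JSON block above, printed by the quizzical_cori one-shot, has the shape of `ceph-volume lvm list --format json` output: a map from OSD id to its logical volumes, with the authoritative metadata in the LV tags. It shows a single OSD, id 0, on LV /dev/ceph_vg0/ceph_lv0 backed by /dev/loop3 (lv_size 21470642176 bytes, about 20 GiB), unencrypted, from the default_drive_group spec. A minimal parse; the file name is an assumption, since in the log the JSON went to cephadm over stdout:

    # Pull the operator-relevant fields out of the JSON shown above.
    import json

    with open("lvm_list.json") as f:
        osds = json.load(f)
    for osd_id, lvs in osds.items():
        for lv in lvs:
            print(osd_id, lv["lv_path"], lv["devices"],
                  lv["tags"]["ceph.osd_fsid"])
    # -> 0 /dev/ceph_vg0/ceph_lv0 ['/dev/loop3'] e5da778e-73b7-4ea1-8a91-750fe3f6aa68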
Mar  1 05:04:43 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:04:43 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:04:43 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:04:43.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:04:43 np0005634532 systemd[1]: libpod-18b8f6ebbebb972e4a2a361709d74be229c7878a2d3446506ab7a90f0f392cb3.scope: Deactivated successfully.
Mar  1 05:04:43 np0005634532 podman[261119]: 2026-03-01 10:04:43.446730157 +0000 UTC m=+0.449328174 container died 18b8f6ebbebb972e4a2a361709d74be229c7878a2d3446506ab7a90f0f392cb3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_cori, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Mar  1 05:04:43 np0005634532 systemd[1]: var-lib-containers-storage-overlay-dc42d18064c67b402ca6e5cbeea25c97379a75261b5fd8bdc8b91982b8eb9b5b-merged.mount: Deactivated successfully.
Mar  1 05:04:43 np0005634532 podman[261119]: 2026-03-01 10:04:43.490144985 +0000 UTC m=+0.492743042 container remove 18b8f6ebbebb972e4a2a361709d74be229c7878a2d3446506ab7a90f0f392cb3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=quizzical_cori, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Mar  1 05:04:43 np0005634532 systemd[1]: libpod-conmon-18b8f6ebbebb972e4a2a361709d74be229c7878a2d3446506ab7a90f0f392cb3.scope: Deactivated successfully.
Mar  1 05:04:43 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:04:43 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff858004590 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:04:44 np0005634532 podman[261250]: 2026-03-01 10:04:44.033407488 +0000 UTC m=+0.045303625 container create b6ba60b8c1d6ff4d6cc4e8da3fc3fa3e9f4017367a5e6c900bd3e6f88df82fc9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_greider, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Mar  1 05:04:44 np0005634532 systemd[1]: Started libpod-conmon-b6ba60b8c1d6ff4d6cc4e8da3fc3fa3e9f4017367a5e6c900bd3e6f88df82fc9.scope.
Mar  1 05:04:44 np0005634532 systemd[1]: Started libcrun container.
Mar  1 05:04:44 np0005634532 podman[261250]: 2026-03-01 10:04:44.016836611 +0000 UTC m=+0.028732758 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Mar  1 05:04:44 np0005634532 podman[261250]: 2026-03-01 10:04:44.113392436 +0000 UTC m=+0.125288593 container init b6ba60b8c1d6ff4d6cc4e8da3fc3fa3e9f4017367a5e6c900bd3e6f88df82fc9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_greider, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True)
Mar  1 05:04:44 np0005634532 podman[261250]: 2026-03-01 10:04:44.125407222 +0000 UTC m=+0.137303359 container start b6ba60b8c1d6ff4d6cc4e8da3fc3fa3e9f4017367a5e6c900bd3e6f88df82fc9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_greider, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Mar  1 05:04:44 np0005634532 infallible_greider[261267]: 167 167
Mar  1 05:04:44 np0005634532 podman[261250]: 2026-03-01 10:04:44.129069882 +0000 UTC m=+0.140966019 container attach b6ba60b8c1d6ff4d6cc4e8da3fc3fa3e9f4017367a5e6c900bd3e6f88df82fc9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_greider, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Mar  1 05:04:44 np0005634532 systemd[1]: libpod-b6ba60b8c1d6ff4d6cc4e8da3fc3fa3e9f4017367a5e6c900bd3e6f88df82fc9.scope: Deactivated successfully.
Mar  1 05:04:44 np0005634532 podman[261250]: 2026-03-01 10:04:44.129847161 +0000 UTC m=+0.141743308 container died b6ba60b8c1d6ff4d6cc4e8da3fc3fa3e9f4017367a5e6c900bd3e6f88df82fc9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_greider, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True)
Mar  1 05:04:44 np0005634532 systemd[1]: var-lib-containers-storage-overlay-68845b79b03b94e43e874c208ac3f859fd3645bd539f5ebdcf00c0a36c08f639-merged.mount: Deactivated successfully.
Mar  1 05:04:44 np0005634532 podman[261250]: 2026-03-01 10:04:44.173761861 +0000 UTC m=+0.185657998 container remove b6ba60b8c1d6ff4d6cc4e8da3fc3fa3e9f4017367a5e6c900bd3e6f88df82fc9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_greider, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Mar  1 05:04:44 np0005634532 systemd[1]: libpod-conmon-b6ba60b8c1d6ff4d6cc4e8da3fc3fa3e9f4017367a5e6c900bd3e6f88df82fc9.scope: Deactivated successfully.
Mar  1 05:04:44 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:04:44 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_36] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8680028b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:04:44 np0005634532 podman[261292]: 2026-03-01 10:04:44.362231657 +0000 UTC m=+0.068139987 container create 63886b731522fce953d4fbfd8f8552efe84b52dd3e131a3d7647fd42efdaa026 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_dubinsky, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Mar  1 05:04:44 np0005634532 systemd[1]: Started libpod-conmon-63886b731522fce953d4fbfd8f8552efe84b52dd3e131a3d7647fd42efdaa026.scope.
Mar  1 05:04:44 np0005634532 podman[261292]: 2026-03-01 10:04:44.329298087 +0000 UTC m=+0.035206427 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Mar  1 05:04:44 np0005634532 systemd[1]: Started libcrun container.
Mar  1 05:04:44 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5bbeffba8c08f463675ae8d9202de3964176f8907f728cea9be1a54242063b8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Mar  1 05:04:44 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5bbeffba8c08f463675ae8d9202de3964176f8907f728cea9be1a54242063b8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Mar  1 05:04:44 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5bbeffba8c08f463675ae8d9202de3964176f8907f728cea9be1a54242063b8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Mar  1 05:04:44 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5bbeffba8c08f463675ae8d9202de3964176f8907f728cea9be1a54242063b8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Mar  1 05:04:44 np0005634532 podman[261292]: 2026-03-01 10:04:44.47251209 +0000 UTC m=+0.178420400 container init 63886b731522fce953d4fbfd8f8552efe84b52dd3e131a3d7647fd42efdaa026 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_dubinsky, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Mar  1 05:04:44 np0005634532 podman[261292]: 2026-03-01 10:04:44.485239953 +0000 UTC m=+0.191148243 container start 63886b731522fce953d4fbfd8f8552efe84b52dd3e131a3d7647fd42efdaa026 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_dubinsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Mar  1 05:04:44 np0005634532 podman[261292]: 2026-03-01 10:04:44.489358175 +0000 UTC m=+0.195266465 container attach 63886b731522fce953d4fbfd8f8552efe84b52dd3e131a3d7647fd42efdaa026 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_dubinsky, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Mar  1 05:04:44 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:04:44 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:04:44 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:04:44.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:04:44 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:04:44 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8880025d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:04:44 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v664: 353 pgs: 353 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 44 KiB/s rd, 6.8 MiB/s wr, 63 op/s
Mar  1 05:04:45 np0005634532 lvm[261382]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Mar  1 05:04:45 np0005634532 lvm[261382]: VG ceph_vg0 finished
Mar  1 05:04:45 np0005634532 angry_dubinsky[261308]: {}
Mar  1 05:04:45 np0005634532 systemd[1]: libpod-63886b731522fce953d4fbfd8f8552efe84b52dd3e131a3d7647fd42efdaa026.scope: Deactivated successfully.
Mar  1 05:04:45 np0005634532 podman[261292]: 2026-03-01 10:04:45.232363733 +0000 UTC m=+0.938272063 container died 63886b731522fce953d4fbfd8f8552efe84b52dd3e131a3d7647fd42efdaa026 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_dubinsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Mar  1 05:04:45 np0005634532 systemd[1]: libpod-63886b731522fce953d4fbfd8f8552efe84b52dd3e131a3d7647fd42efdaa026.scope: Consumed 1.082s CPU time.
Mar  1 05:04:45 np0005634532 systemd[1]: var-lib-containers-storage-overlay-d5bbeffba8c08f463675ae8d9202de3964176f8907f728cea9be1a54242063b8-merged.mount: Deactivated successfully.
Mar  1 05:04:45 np0005634532 podman[261292]: 2026-03-01 10:04:45.294894262 +0000 UTC m=+1.000802592 container remove 63886b731522fce953d4fbfd8f8552efe84b52dd3e131a3d7647fd42efdaa026 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_dubinsky, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Mar  1 05:04:45 np0005634532 systemd[1]: libpod-conmon-63886b731522fce953d4fbfd8f8552efe84b52dd3e131a3d7647fd42efdaa026.scope: Deactivated successfully.
Mar  1 05:04:45 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Mar  1 05:04:45 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 05:04:45 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Mar  1 05:04:45 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 05:04:45 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:04:45 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.002000050s ======
Mar  1 05:04:45 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:04:45.439 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000050s
Mar  1 05:04:45 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:04:45 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff86000c670 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:04:46 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:04:46 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff858004590 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:04:46 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 05:04:46 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 05:04:46 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:04:46 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 05:04:46 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:04:46.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 05:04:46 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:04:46 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff858004590 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:04:46 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e141 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:04:46 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e141 do_prune osdmap full prune enabled
Mar  1 05:04:46 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e142 e142: 3 total, 3 up, 3 in
Mar  1 05:04:46 np0005634532 ceph-mon[75825]: log_channel(cluster) log [DBG] : osdmap e142: 3 total, 3 up, 3 in
Mar  1 05:04:46 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v666: 353 pgs: 353 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 6.1 MiB/s wr, 56 op/s
Mar  1 05:04:47 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: ::ffff:192.168.122.100 - - [01/Mar/2026:10:04:47] "GET /metrics HTTP/1.1" 200 48345 "" "Prometheus/2.51.0"
Mar  1 05:04:47 np0005634532 ceph-mgr[76134]: [prometheus INFO cherrypy.access.140604152982112] ::ffff:192.168.122.100 - - [01/Mar/2026:10:04:47] "GET /metrics HTTP/1.1" 200 48345 "" "Prometheus/2.51.0"
Mar  1 05:04:47 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:04:47.191Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Mar  1 05:04:47 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:04:47.191Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Mar  1 05:04:47 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:04:47.191Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Mar  1 05:04:47 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:04:47 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:04:47 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:04:47.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:04:47 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:04:47 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff858004590 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:04:47 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Mar  1 05:04:47 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Mar  1 05:04:47 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Mar  1 05:04:47 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Mar  1 05:04:47 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Mar  1 05:04:47 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Mar  1 05:04:47 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Mar  1 05:04:47 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Mar  1 05:04:48 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:04:48 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff86000c670 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:04:48 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:04:48 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:04:48 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:04:48.713 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:04:48 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:04:48 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff858004590 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:04:48 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v667: 353 pgs: 353 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 5.1 MiB/s wr, 47 op/s
Mar  1 05:04:49 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:04:49 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 05:04:49 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:04:49.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 05:04:49 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:04:49 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff858004590 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:04:50 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:04:50 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff858004590 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:04:50 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:04:50 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 05:04:50 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:04:50.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 05:04:50 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:04:50 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_36] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8680028b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:04:50 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v668: 353 pgs: 353 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 30 KiB/s rd, 4.7 MiB/s wr, 43 op/s
Mar  1 05:04:51 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:04:51 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:04:51 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:04:51.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:04:51 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:04:51 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8880025d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:04:51 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:04:52 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:04:52 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff86000c670 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:04:52 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:04:52 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 05:04:52 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:04:52.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 05:04:52 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:04:52 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff86000c670 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:04:52 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v669: 353 pgs: 353 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 4.1 MiB/s wr, 38 op/s
Mar  1 05:04:53 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:04:53 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:04:53 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:04:53.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:04:53 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:04:53 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff858004590 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:04:54 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:04:54 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_36] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8880025d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:04:54 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:04:54 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 05:04:54 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:04:54.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 05:04:54 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:04:54 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8680028b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:04:54 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v670: 353 pgs: 353 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 307 B/s rd, 0 op/s
Mar  1 05:04:55 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:04:55 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:04:55 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:04:55.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:04:55 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:04:55 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff86000c670 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:04:56 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:04:56 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff858004590 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:04:56 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:04:56 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:04:56 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:04:56.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:04:56 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:04:56 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_36] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8880025d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:04:56 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:04:56 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v671: 353 pgs: 353 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 304 B/s rd, 0 op/s
Mar  1 05:04:57 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: ::ffff:192.168.122.100 - - [01/Mar/2026:10:04:57] "GET /metrics HTTP/1.1" 200 48396 "" "Prometheus/2.51.0"
Mar  1 05:04:57 np0005634532 ceph-mgr[76134]: [prometheus INFO cherrypy.access.140604152982112] ::ffff:192.168.122.100 - - [01/Mar/2026:10:04:57] "GET /metrics HTTP/1.1" 200 48396 "" "Prometheus/2.51.0"
Mar  1 05:04:57 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:04:57.191Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Mar  1 05:04:57 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:04:57.192Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Mar  1 05:04:57 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:04:57.192Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Mar  1 05:04:57 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:04:57 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:04:57 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:04:57.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:04:57 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:04:57 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8680028b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:04:58 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Mar  1 05:04:58 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2369656973' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Mar  1 05:04:58 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Mar  1 05:04:58 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2369656973' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Mar  1 05:04:58 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:04:58 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff86000c670 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:04:58 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:04:58 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 05:04:58 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:04:58.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 05:04:58 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:04:58 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_26] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff858004590 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:04:58 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v672: 353 pgs: 353 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Mar  1 05:04:59 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:04:59 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:04:59 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:04:59.453 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:04:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:04:59 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_36] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8880025d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:05:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:05:00 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff884001240 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:05:00 np0005634532 podman[261467]: 2026-03-01 10:05:00.387701016 +0000 UTC m=+0.074156735 container health_status eba5e7c5bf1cb0694498c3b5d0f9b901dec36f2482ec40aef98fed1e15d9f18d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:bf1f44b6bfa655654f556ae3adca2582892e72550d916a5b0a8dbacb0e210bbe, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '08b9467e1b7e95537191fb2fa6825d176e65d7d6be048e9a0925db064969bbde-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:bf1f44b6bfa655654f556ae3adca2582892e72550d916a5b0a8dbacb0e210bbe', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS)
Mar  1 05:05:00 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:05:00 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000024s ======
Mar  1 05:05:00 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:05:00.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Mar  1 05:05:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:05:00 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_38] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8680028b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:05:00 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v673: 353 pgs: 353 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Mar  1 05:05:01 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:05:01.119 167541 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=3, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ba:77:84', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'd2:e0:96:ea:56:89'}, ipsec=False) old=SB_Global(nb_cfg=2) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Mar  1 05:05:01 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:05:01.120 167541 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Mar  1 05:05:01 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:05:01 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:05:01 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:05:01.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:05:01 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:05:01 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff858004590 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:05:01 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:05:02 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:05:02 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_36] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8880025d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:05:02 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Mar  1 05:05:02 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Mar  1 05:05:02 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:05:02 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:05:02 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:05:02.735 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:05:02 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:05:02 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff884001240 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:05:02 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v674: 353 pgs: 353 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Mar  1 05:05:03 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:05:03.123 167541 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=90b7dc66-b984-4d8b-9541-ddde79c5f544, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '3'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Mar  1 05:05:03 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:05:03 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:05:03 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:05:03.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:05:03 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:05:03 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff884001240 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:05:04 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:05:04 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_38] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff858004590 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:05:04 np0005634532 podman[261498]: 2026-03-01 10:05:04.364225427 +0000 UTC m=+0.050039472 container health_status 1df4dec302d8b40bce93714986cfc892519e8a31436399020d0034ae17ac64d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a5baae9e21dffd42c66f546b0b62f95c6af9f409d05e01e7483de42d5dd2a382, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '08b9467e1b7e95537191fb2fa6825d176e65d7d6be048e9a0925db064969bbde-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a5baae9e21dffd42c66f546b0b62f95c6af9f409d05e01e7483de42d5dd2a382', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Mar  1 05:05:04 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:05:04 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:05:04 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:05:04.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:05:04 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:05:04 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff858004590 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:05:04 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v675: 353 pgs: 353 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Mar  1 05:05:05 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:05:05 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:05:05 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:05:05.459 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:05:05 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:05:05 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_36] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8680028b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:05:06 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:05:06 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff884001240 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:05:06 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:05:06 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:05:06 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:05:06.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:05:06 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:05:06 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_38] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8880025d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:05:06 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:05:06 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v676: 353 pgs: 353 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Mar  1 05:05:07 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: ::ffff:192.168.122.100 - - [01/Mar/2026:10:05:07] "GET /metrics HTTP/1.1" 200 48399 "" "Prometheus/2.51.0"
Mar  1 05:05:07 np0005634532 ceph-mgr[76134]: [prometheus INFO cherrypy.access.140604152982112] ::ffff:192.168.122.100 - - [01/Mar/2026:10:05:07] "GET /metrics HTTP/1.1" 200 48399 "" "Prometheus/2.51.0"
Mar  1 05:05:07 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:05:07.192Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Mar  1 05:05:07 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:05:07.193Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Mar  1 05:05:07 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:05:07 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000024s ======
Mar  1 05:05:07 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:05:07.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Mar  1 05:05:07 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:05:07 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff858004590 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:05:08 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:05:08 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_36] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8680028b0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:05:08 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:05:08 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:05:08 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:05:08.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:05:08 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:05:08 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff884001240 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:05:08 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v677: 353 pgs: 353 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Mar  1 05:05:09 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:05:09 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 05:05:09 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:05:09.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 05:05:09 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:05:09 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_38] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8880025d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:05:10 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:05:10 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff858004590 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:05:10 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:05:10 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:05:10 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:05:10.748 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:05:10 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:05:10 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_36] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff868005150 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:05:10 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v678: 353 pgs: 353 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Mar  1 05:05:11 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:05:11 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:05:11 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:05:11.465 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:05:11 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:05:11 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_38] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff868005150 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:05:11 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:05:12 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:05:12 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_38] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8640013d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:05:12 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:05:12 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:05:12 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:05:12.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:05:12 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:05:12 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_39] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8880025d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:05:12 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v679: 353 pgs: 353 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Mar  1 05:05:13 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:05:13 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:05:13 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:05:13.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:05:13 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:05:13 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff868005150 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:05:14 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:05:14 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_38] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff868005150 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:05:14 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:05:14 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 05:05:14 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:05:14.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 05:05:14 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:05:14 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8640013d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:05:14 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v680: 353 pgs: 353 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Mar  1 05:05:15 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:05:15 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 05:05:15 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:05:15.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 05:05:15 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:05:15 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_39] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8880025d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:05:16 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:05:16 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff884001260 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:05:16 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:05:16 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_38] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff868005150 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:05:16 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:05:16 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:05:16 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:05:16.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:05:16 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:05:16 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v681: 353 pgs: 353 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Mar  1 05:05:17 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: ::ffff:192.168.122.100 - - [01/Mar/2026:10:05:17] "GET /metrics HTTP/1.1" 200 48399 "" "Prometheus/2.51.0"
Mar  1 05:05:17 np0005634532 ceph-mgr[76134]: [prometheus INFO cherrypy.access.140604152982112] ::ffff:192.168.122.100 - - [01/Mar/2026:10:05:17] "GET /metrics HTTP/1.1" 200 48399 "" "Prometheus/2.51.0"
Mar  1 05:05:17 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:05:17.193Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Mar  1 05:05:17 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:05:17 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 05:05:17 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:05:17.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 05:05:17 np0005634532 ceph-mgr[76134]: [balancer INFO root] Optimize plan auto_2026-03-01_10:05:17
Mar  1 05:05:17 np0005634532 ceph-mgr[76134]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Mar  1 05:05:17 np0005634532 ceph-mgr[76134]: [balancer INFO root] do_upmap
Mar  1 05:05:17 np0005634532 ceph-mgr[76134]: [balancer INFO root] pools ['default.rgw.log', '.mgr', 'images', '.nfs', 'vms', 'default.rgw.control', '.rgw.root', 'cephfs.cephfs.data', 'volumes', 'cephfs.cephfs.meta', 'default.rgw.meta', 'backups']
Mar  1 05:05:17 np0005634532 ceph-mgr[76134]: [balancer INFO root] prepared 0/10 upmap changes
Mar  1 05:05:17 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Mar  1 05:05:17 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:05:17 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff864002800 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:05:17 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Mar  1 05:05:17 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Mar  1 05:05:17 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Mar  1 05:05:17 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Mar  1 05:05:17 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Mar  1 05:05:17 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Mar  1 05:05:17 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Mar  1 05:05:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] _maybe_adjust
Mar  1 05:05:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:05:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Mar  1 05:05:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:05:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Mar  1 05:05:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:05:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Mar  1 05:05:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:05:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Mar  1 05:05:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:05:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Mar  1 05:05:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:05:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Mar  1 05:05:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:05:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Mar  1 05:05:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:05:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Mar  1 05:05:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:05:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Mar  1 05:05:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:05:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Mar  1 05:05:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:05:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Mar  1 05:05:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:05:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Mar  1 05:05:17 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Mar  1 05:05:17 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: vms, start_after=
Mar  1 05:05:17 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Mar  1 05:05:17 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: vms, start_after=
Mar  1 05:05:17 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: volumes, start_after=
Mar  1 05:05:17 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: volumes, start_after=
Mar  1 05:05:17 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: backups, start_after=
Mar  1 05:05:17 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: backups, start_after=
Mar  1 05:05:17 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: images, start_after=
Mar  1 05:05:17 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: images, start_after=
Mar  1 05:05:18 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:05:18 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_39] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8880025d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:05:18 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:05:18 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_38] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8880025d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:05:18 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:05:18 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000024s ======
Mar  1 05:05:18 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:05:18.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Mar  1 05:05:18 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v682: 353 pgs: 353 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Mar  1 05:05:19 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:05:19 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:05:19 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:05:19.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:05:19 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:05:19 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff868005150 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:05:19 np0005634532 ceph-mgr[76134]: [devicehealth INFO root] Check health
Mar  1 05:05:20 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:05:20 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff864002800 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:05:20 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:05:20 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_39] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff884001260 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:05:20 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:05:20 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 05:05:20 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:05:20.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 05:05:20 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v683: 353 pgs: 353 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Mar  1 05:05:21 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:05:21 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:05:21 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:05:21.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:05:21 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:05:21 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_38] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8880025d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:05:21 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e142 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:05:22 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:05:22 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff868005150 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:05:22 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:05:22 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff864002800 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:05:22 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:05:22 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:05:22 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:05:22.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:05:22 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v684: 353 pgs: 353 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Mar  1 05:05:23 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:05:23 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:05:23 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:05:23.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:05:23 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:05:23 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_39] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff884001280 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:05:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:05:23.878 167541 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Mar  1 05:05:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:05:23.878 167541 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Mar  1 05:05:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:05:23.879 167541 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Mar  1 05:05:24 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e142 do_prune osdmap full prune enabled
Mar  1 05:05:24 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:05:24 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_38] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8880025d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:05:24 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e143 e143: 3 total, 3 up, 3 in
Mar  1 05:05:24 np0005634532 ceph-mon[75825]: log_channel(cluster) log [DBG] : osdmap e143: 3 total, 3 up, 3 in
Mar  1 05:05:24 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:05:24 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff868005150 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:05:24 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:05:24 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000024s ======
Mar  1 05:05:24 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:05:24.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Mar  1 05:05:24 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v686: 353 pgs: 353 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 8 op/s
Mar  1 05:05:25 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e143 do_prune osdmap full prune enabled
Mar  1 05:05:25 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e144 e144: 3 total, 3 up, 3 in
Mar  1 05:05:25 np0005634532 ceph-mon[75825]: log_channel(cluster) log [DBG] : osdmap e144: 3 total, 3 up, 3 in
Mar  1 05:05:25 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:05:25 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:05:25 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:05:25.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:05:25 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:05:25 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff864002800 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:05:26 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:05:26 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_39] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8840012a0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:05:26 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:05:26 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_38] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8880025d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:05:26 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:05:26 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:05:26 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:05:26.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:05:26 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:05:26 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v688: 353 pgs: 353 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 10 op/s
Mar  1 05:05:27 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: ::ffff:192.168.122.100 - - [01/Mar/2026:10:05:27] "GET /metrics HTTP/1.1" 200 48403 "" "Prometheus/2.51.0"
Mar  1 05:05:27 np0005634532 ceph-mgr[76134]: [prometheus INFO cherrypy.access.140604152982112] ::ffff:192.168.122.100 - - [01/Mar/2026:10:05:27] "GET /metrics HTTP/1.1" 200 48403 "" "Prometheus/2.51.0"
Mar  1 05:05:27 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:05:27.194Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Mar  1 05:05:27 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:05:27.194Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Mar  1 05:05:27 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:05:27 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:05:27 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:05:27.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:05:27 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:05:27 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff868005150 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:05:28 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:05:28 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff864002800 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:05:28 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:05:28 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_39] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8840012c0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:05:28 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:05:28 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:05:28 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:05:28.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:05:28 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v689: 353 pgs: 353 active+clean; 88 MiB data, 215 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 2.7 MiB/s wr, 51 op/s
Mar  1 05:05:29 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:05:29 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:05:29 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:05:29.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:05:29 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:05:29 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_38] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8880025d0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:05:30 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:05:30 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff868005150 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:05:30 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:05:30 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff864002800 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:05:30 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:05:30 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:05:30 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:05:30.776 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:05:30 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v690: 353 pgs: 353 active+clean; 88 MiB data, 215 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 2.7 MiB/s wr, 51 op/s
Mar  1 05:05:31 np0005634532 podman[261573]: 2026-03-01 10:05:31.388480034 +0000 UTC m=+0.077838955 container health_status eba5e7c5bf1cb0694498c3b5d0f9b901dec36f2482ec40aef98fed1e15d9f18d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:bf1f44b6bfa655654f556ae3adca2582892e72550d916a5b0a8dbacb0e210bbe, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '08b9467e1b7e95537191fb2fa6825d176e65d7d6be048e9a0925db064969bbde-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:bf1f44b6bfa655654f556ae3adca2582892e72550d916a5b0a8dbacb0e210bbe', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Mar  1 05:05:31 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:05:31 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:05:31 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:05:31.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:05:31 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:05:31 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_39] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff860001a30 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:05:31 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:05:31 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e144 do_prune osdmap full prune enabled
Mar  1 05:05:31 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e145 e145: 3 total, 3 up, 3 in
Mar  1 05:05:31 np0005634532 ceph-mon[75825]: log_channel(cluster) log [DBG] : osdmap e145: 3 total, 3 up, 3 in
Mar  1 05:05:32 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:05:32 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_40] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff888004d70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:05:32 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Mar  1 05:05:32 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Mar  1 05:05:32 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:05:32 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff868005150 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:05:32 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:05:32 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000024s ======
Mar  1 05:05:32 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:05:32.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Mar  1 05:05:32 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v692: 353 pgs: 353 active+clean; 88 MiB data, 215 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 2.7 MiB/s wr, 40 op/s
Mar  1 05:05:32 np0005634532 nova_compute[257049]: 2026-03-01 10:05:32.977 257053 DEBUG oslo_service.periodic_task [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Mar  1 05:05:32 np0005634532 nova_compute[257049]: 2026-03-01 10:05:32.978 257053 DEBUG oslo_service.periodic_task [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Mar  1 05:05:32 np0005634532 nova_compute[257049]: 2026-03-01 10:05:32.978 257053 DEBUG nova.compute.manager [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Mar  1 05:05:32 np0005634532 nova_compute[257049]: 2026-03-01 10:05:32.992 257053 DEBUG nova.compute.manager [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Mar  1 05:05:32 np0005634532 nova_compute[257049]: 2026-03-01 10:05:32.993 257053 DEBUG oslo_service.periodic_task [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Mar  1 05:05:32 np0005634532 nova_compute[257049]: 2026-03-01 10:05:32.994 257053 DEBUG nova.compute.manager [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Mar  1 05:05:33 np0005634532 nova_compute[257049]: 2026-03-01 10:05:33.013 257053 DEBUG oslo_service.periodic_task [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Mar  1 05:05:33 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:05:33 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:05:33 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:05:33.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:05:33 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:05:33 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff864002800 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:05:34 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:05:34 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_39] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff860001a30 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:05:34 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:05:34 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff860001a30 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:05:34 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:05:34 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:05:34 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:05:34.781 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:05:34 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v693: 353 pgs: 353 active+clean; 88 MiB data, 216 MiB used, 60 GiB / 60 GiB avail; 2.4 MiB/s rd, 2.2 MiB/s wr, 125 op/s
Mar  1 05:05:35 np0005634532 nova_compute[257049]: 2026-03-01 10:05:35.022 257053 DEBUG oslo_service.periodic_task [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Mar  1 05:05:35 np0005634532 ceph-mon[75825]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #45. Immutable memtables: 0.
Mar  1 05:05:35 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-10:05:35.084837) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Mar  1 05:05:35 np0005634532 ceph-mon[75825]: rocksdb: [db/flush_job.cc:856] [default] [JOB 21] Flushing memtable with next log file: 45
Mar  1 05:05:35 np0005634532 ceph-mon[75825]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772359535084885, "job": 21, "event": "flush_started", "num_memtables": 1, "num_entries": 2152, "num_deletes": 251, "total_data_size": 4196247, "memory_usage": 4264832, "flush_reason": "Manual Compaction"}
Mar  1 05:05:35 np0005634532 ceph-mon[75825]: rocksdb: [db/flush_job.cc:885] [default] [JOB 21] Level-0 flush table #46: started
Mar  1 05:05:35 np0005634532 ceph-mon[75825]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772359535102335, "cf_name": "default", "job": 21, "event": "table_file_creation", "file_number": 46, "file_size": 4093471, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 19949, "largest_seqno": 22100, "table_properties": {"data_size": 4083736, "index_size": 6165, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2501, "raw_key_size": 19905, "raw_average_key_size": 20, "raw_value_size": 4064249, "raw_average_value_size": 4147, "num_data_blocks": 269, "num_entries": 980, "num_filter_entries": 980, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772359330, "oldest_key_time": 1772359330, "file_creation_time": 1772359535, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d85a3bc5-3dc5-432f-9fab-fa926ce32d3d", "db_session_id": "FJWJGIYC2V5ZEQGX709M", "orig_file_number": 46, "seqno_to_time_mapping": "N/A"}}
Mar  1 05:05:35 np0005634532 ceph-mon[75825]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 21] Flush lasted 17552 microseconds, and 7366 cpu microseconds.
Mar  1 05:05:35 np0005634532 ceph-mon[75825]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Mar  1 05:05:35 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-10:05:35.102387) [db/flush_job.cc:967] [default] [JOB 21] Level-0 flush table #46: 4093471 bytes OK
Mar  1 05:05:35 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-10:05:35.102411) [db/memtable_list.cc:519] [default] Level-0 commit table #46 started
Mar  1 05:05:35 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-10:05:35.103745) [db/memtable_list.cc:722] [default] Level-0 commit table #46: memtable #1 done
Mar  1 05:05:35 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-10:05:35.103758) EVENT_LOG_v1 {"time_micros": 1772359535103753, "job": 21, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Mar  1 05:05:35 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-10:05:35.103774) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Mar  1 05:05:35 np0005634532 ceph-mon[75825]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 21] Try to delete WAL files size 4187544, prev total WAL file size 4187544, number of live WAL files 2.
Mar  1 05:05:35 np0005634532 ceph-mon[75825]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000042.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Mar  1 05:05:35 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-10:05:35.104563) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031353036' seq:72057594037927935, type:22 .. '7061786F730031373538' seq:0, type:0; will stop at (end)
Mar  1 05:05:35 np0005634532 ceph-mon[75825]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 22] Compacting 1@0 + 1@6 files to L6, score -1.00
Mar  1 05:05:35 np0005634532 ceph-mon[75825]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 21 Base level 0, inputs: [46(3997KB)], [44(12MB)]
Mar  1 05:05:35 np0005634532 ceph-mon[75825]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772359535104625, "job": 22, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [46], "files_L6": [44], "score": -1, "input_data_size": 17320966, "oldest_snapshot_seqno": -1}
Mar  1 05:05:35 np0005634532 ceph-mon[75825]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 22] Generated table #47: 5415 keys, 15107348 bytes, temperature: kUnknown
Mar  1 05:05:35 np0005634532 ceph-mon[75825]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772359535188123, "cf_name": "default", "job": 22, "event": "table_file_creation", "file_number": 47, "file_size": 15107348, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 15068848, "index_size": 23862, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13573, "raw_key_size": 136651, "raw_average_key_size": 25, "raw_value_size": 14968505, "raw_average_value_size": 2764, "num_data_blocks": 984, "num_entries": 5415, "num_filter_entries": 5415, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772358052, "oldest_key_time": 0, "file_creation_time": 1772359535, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d85a3bc5-3dc5-432f-9fab-fa926ce32d3d", "db_session_id": "FJWJGIYC2V5ZEQGX709M", "orig_file_number": 47, "seqno_to_time_mapping": "N/A"}}
Mar  1 05:05:35 np0005634532 ceph-mon[75825]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Mar  1 05:05:35 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-10:05:35.188357) [db/compaction/compaction_job.cc:1663] [default] [JOB 22] Compacted 1@0 + 1@6 files to L6 => 15107348 bytes
Mar  1 05:05:35 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-10:05:35.190250) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 207.3 rd, 180.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.9, 12.6 +0.0 blob) out(14.4 +0.0 blob), read-write-amplify(7.9) write-amplify(3.7) OK, records in: 5939, records dropped: 524 output_compression: NoCompression
Mar  1 05:05:35 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-10:05:35.190301) EVENT_LOG_v1 {"time_micros": 1772359535190284, "job": 22, "event": "compaction_finished", "compaction_time_micros": 83563, "compaction_time_cpu_micros": 29723, "output_level": 6, "num_output_files": 1, "total_output_size": 15107348, "num_input_records": 5939, "num_output_records": 5415, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Mar  1 05:05:35 np0005634532 ceph-mon[75825]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000046.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Mar  1 05:05:35 np0005634532 ceph-mon[75825]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772359535190895, "job": 22, "event": "table_file_deletion", "file_number": 46}
Mar  1 05:05:35 np0005634532 ceph-mon[75825]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000044.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Mar  1 05:05:35 np0005634532 ceph-mon[75825]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772359535192197, "job": 22, "event": "table_file_deletion", "file_number": 44}
Mar  1 05:05:35 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-10:05:35.104459) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Mar  1 05:05:35 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-10:05:35.192256) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Mar  1 05:05:35 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-10:05:35.192261) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Mar  1 05:05:35 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-10:05:35.192263) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Mar  1 05:05:35 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-10:05:35.192264) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Mar  1 05:05:35 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-10:05:35.192266) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Mar  1 05:05:35 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-haproxy-nfs-cephfs-compute-0-wdbjdw[99156]: [WARNING] 059/100535 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Mar  1 05:05:35 np0005634532 podman[261606]: 2026-03-01 10:05:35.353662567 +0000 UTC m=+0.047129681 container health_status 1df4dec302d8b40bce93714986cfc892519e8a31436399020d0034ae17ac64d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a5baae9e21dffd42c66f546b0b62f95c6af9f409d05e01e7483de42d5dd2a382, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '08b9467e1b7e95537191fb2fa6825d176e65d7d6be048e9a0925db064969bbde-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a5baae9e21dffd42c66f546b0b62f95c6af9f409d05e01e7483de42d5dd2a382', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223)
Mar  1 05:05:35 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:05:35 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:05:35 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:05:35.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:05:35 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:05:35 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_40] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff868005150 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:05:36 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:05:36 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff864002800 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:05:36 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:05:36 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff864002800 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:05:36 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:05:36 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:05:36 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:05:36.783 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:05:36 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:05:36 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v694: 353 pgs: 353 active+clean; 88 MiB data, 216 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 121 op/s
Mar  1 05:05:36 np0005634532 nova_compute[257049]: 2026-03-01 10:05:36.977 257053 DEBUG oslo_service.periodic_task [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Mar  1 05:05:36 np0005634532 nova_compute[257049]: 2026-03-01 10:05:36.977 257053 DEBUG oslo_service.periodic_task [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Mar  1 05:05:36 np0005634532 nova_compute[257049]: 2026-03-01 10:05:36.977 257053 DEBUG nova.compute.manager [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Mar  1 05:05:37 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: ::ffff:192.168.122.100 - - [01/Mar/2026:10:05:37] "GET /metrics HTTP/1.1" 200 48460 "" "Prometheus/2.51.0"
Mar  1 05:05:37 np0005634532 ceph-mgr[76134]: [prometheus INFO cherrypy.access.140604152982112] ::ffff:192.168.122.100 - - [01/Mar/2026:10:05:37] "GET /metrics HTTP/1.1" 200 48460 "" "Prometheus/2.51.0"
Mar  1 05:05:37 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:05:37.197Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Mar  1 05:05:37 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:05:37 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:05:37 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:05:37.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:05:37 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:05:37 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff860001a30 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:05:37 np0005634532 nova_compute[257049]: 2026-03-01 10:05:37.976 257053 DEBUG oslo_service.periodic_task [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Mar  1 05:05:37 np0005634532 nova_compute[257049]: 2026-03-01 10:05:37.976 257053 DEBUG oslo_service.periodic_task [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Mar  1 05:05:37 np0005634532 nova_compute[257049]: 2026-03-01 10:05:37.998 257053 DEBUG oslo_concurrency.lockutils [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Mar  1 05:05:37 np0005634532 nova_compute[257049]: 2026-03-01 10:05:37.998 257053 DEBUG oslo_concurrency.lockutils [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Mar  1 05:05:37 np0005634532 nova_compute[257049]: 2026-03-01 10:05:37.998 257053 DEBUG oslo_concurrency.lockutils [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Mar  1 05:05:37 np0005634532 nova_compute[257049]: 2026-03-01 10:05:37.998 257053 DEBUG nova.compute.resource_tracker [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Mar  1 05:05:37 np0005634532 nova_compute[257049]: 2026-03-01 10:05:37.999 257053 DEBUG oslo_concurrency.processutils [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Mar  1 05:05:38 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:05:38 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_40] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff868005150 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:05:38 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Mar  1 05:05:38 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2671157203' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Mar  1 05:05:38 np0005634532 nova_compute[257049]: 2026-03-01 10:05:38.435 257053 DEBUG oslo_concurrency.processutils [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Mar  1 05:05:38 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[106394]: logger=cleanup t=2026-03-01T10:05:38.562303967Z level=info msg="Completed cleanup jobs" duration=2.373668ms
Mar  1 05:05:38 np0005634532 nova_compute[257049]: 2026-03-01 10:05:38.584 257053 WARNING nova.virt.libvirt.driver [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Mar  1 05:05:38 np0005634532 nova_compute[257049]: 2026-03-01 10:05:38.586 257053 DEBUG nova.compute.resource_tracker [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4900MB free_disk=59.96738052368164GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Mar  1 05:05:38 np0005634532 nova_compute[257049]: 2026-03-01 10:05:38.586 257053 DEBUG oslo_concurrency.lockutils [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Mar  1 05:05:38 np0005634532 nova_compute[257049]: 2026-03-01 10:05:38.586 257053 DEBUG oslo_concurrency.lockutils [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Mar  1 05:05:38 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[106394]: logger=plugins.update.checker t=2026-03-01T10:05:38.664761798Z level=info msg="Update check succeeded" duration=49.094037ms
Mar  1 05:05:38 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[106394]: logger=grafana.update.checker t=2026-03-01T10:05:38.672316514Z level=info msg="Update check succeeded" duration=42.852885ms
Mar  1 05:05:38 np0005634532 nova_compute[257049]: 2026-03-01 10:05:38.725 257053 DEBUG nova.compute.resource_tracker [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Mar  1 05:05:38 np0005634532 nova_compute[257049]: 2026-03-01 10:05:38.726 257053 DEBUG nova.compute.resource_tracker [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Mar  1 05:05:38 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:05:38 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_39] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff868005150 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:05:38 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:05:38 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:05:38 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:05:38.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
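
The steady drip of anonymous "HEAD / HTTP/1.0" requests that radosgw's beast frontend logs from 192.168.122.100 and .102, one roughly every second per source, is consistent with load-balancer health probes rather than user traffic. A raw-socket reproduction of one probe; the port is an assumption, the log does not record radosgw's listening port:

    # Hand-rolled HTTP/1.0 HEAD probe matching the request line beast logs above.
    # RGW_HOST/RGW_PORT are assumptions, not values taken from the log.
    import socket

    RGW_HOST, RGW_PORT = "192.168.122.100", 8080  # assumed endpoint

    with socket.create_connection((RGW_HOST, RGW_PORT), timeout=2) as s:
        s.sendall(b"HEAD / HTTP/1.0\r\n\r\n")
        print(s.recv(1024).decode(errors="replace").splitlines()[0])
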
Mar  1 05:05:38 np0005634532 nova_compute[257049]: 2026-03-01 10:05:38.793 257053 DEBUG nova.scheduler.client.report [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Refreshing inventories for resource provider 018d246d-1e01-4168-9128-598c5501111b _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Mar  1 05:05:38 np0005634532 nova_compute[257049]: 2026-03-01 10:05:38.844 257053 DEBUG nova.scheduler.client.report [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Updating ProviderTree inventory for provider 018d246d-1e01-4168-9128-598c5501111b from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Mar  1 05:05:38 np0005634532 nova_compute[257049]: 2026-03-01 10:05:38.845 257053 DEBUG nova.compute.provider_tree [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Updating inventory in ProviderTree for provider 018d246d-1e01-4168-9128-598c5501111b with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
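
The inventory dict logged above is what placement uses to size this provider; the effective schedulable capacity per resource class is (total - reserved) * allocation_ratio. Working that through the logged values as a sketch:

    # Effective placement capacity from the inventory logged above:
    #   capacity = int((total - reserved) * allocation_ratio)
    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 59,   'reserved': 0,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        cap = int((inv['total'] - inv['reserved']) * inv['allocation_ratio'])
        print(rc, cap)  # VCPU 32, MEMORY_MB 7167, DISK_GB 53
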
Mar  1 05:05:38 np0005634532 nova_compute[257049]: 2026-03-01 10:05:38.860 257053 DEBUG nova.scheduler.client.report [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Refreshing aggregate associations for resource provider 018d246d-1e01-4168-9128-598c5501111b, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Mar  1 05:05:38 np0005634532 nova_compute[257049]: 2026-03-01 10:05:38.881 257053 DEBUG nova.scheduler.client.report [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Refreshing trait associations for resource provider 018d246d-1e01-4168-9128-598c5501111b, traits: COMPUTE_SECURITY_TPM_1_2,COMPUTE_ACCELERATORS,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_F16C,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_SSE42,HW_CPU_X86_SVM,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_BMI2,HW_CPU_X86_BMI,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_SSSE3,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE2,HW_CPU_X86_CLMUL,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NODE,HW_CPU_X86_SSE,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_FMA3,HW_CPU_X86_AMD_SVM,COMPUTE_RESCUE_BFV,HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_ABM,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_TRUSTED_CERTS,COMPUTE_STORAGE_BUS_SATA,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_AESNI,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_MMX,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_AVX2,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_USB,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SHA,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_SSE41,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_SSE4A,COMPUTE_SECURITY_TPM_2_0,COMPUTE_STORAGE_BUS_IDE,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_PCNET _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Mar  1 05:05:38 np0005634532 nova_compute[257049]: 2026-03-01 10:05:38.899 257053 DEBUG oslo_concurrency.processutils [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
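
The `ceph df --format=json` subprocess launched here is how nova sizes an RBD-backed disk pool; the mon dispatches it at 10:05:39 below and the call returns in 0.443s. A minimal sketch of the same call and of pulling cluster-wide totals out of its JSON; the command line is copied from the log, and treating "stats"/"total_bytes" as the output schema is an assumption about the ceph df JSON format:

    # Re-run the exact command from the log and read cluster-wide stats.
    import json
    import subprocess

    out = subprocess.run(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"],
        check=True, capture_output=True, text=True,
    ).stdout
    stats = json.loads(out)["stats"]  # assumed top-level "stats" block
    print(stats["total_bytes"], stats["total_avail_bytes"])
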
Mar  1 05:05:38 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v695: 353 pgs: 353 active+clean; 88 MiB data, 216 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 15 KiB/s wr, 88 op/s
Mar  1 05:05:39 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Mar  1 05:05:39 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/435414926' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Mar  1 05:05:39 np0005634532 nova_compute[257049]: 2026-03-01 10:05:39.342 257053 DEBUG oslo_concurrency.processutils [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Mar  1 05:05:39 np0005634532 nova_compute[257049]: 2026-03-01 10:05:39.346 257053 DEBUG nova.compute.provider_tree [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Inventory has not changed in ProviderTree for provider: 018d246d-1e01-4168-9128-598c5501111b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Mar  1 05:05:39 np0005634532 nova_compute[257049]: 2026-03-01 10:05:39.374 257053 DEBUG nova.scheduler.client.report [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Inventory has not changed for provider 018d246d-1e01-4168-9128-598c5501111b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Mar  1 05:05:39 np0005634532 nova_compute[257049]: 2026-03-01 10:05:39.377 257053 DEBUG nova.compute.resource_tracker [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Mar  1 05:05:39 np0005634532 nova_compute[257049]: 2026-03-01 10:05:39.377 257053 DEBUG oslo_concurrency.lockutils [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.791s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Mar  1 05:05:39 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:05:39 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:05:39 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:05:39.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:05:39 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:05:39 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_39] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff868005150 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:05:40 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:05:40 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_40] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff868005150 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:05:40 np0005634532 nova_compute[257049]: 2026-03-01 10:05:40.374 257053 DEBUG oslo_service.periodic_task [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Mar  1 05:05:40 np0005634532 nova_compute[257049]: 2026-03-01 10:05:40.375 257053 DEBUG oslo_service.periodic_task [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Mar  1 05:05:40 np0005634532 nova_compute[257049]: 2026-03-01 10:05:40.375 257053 DEBUG nova.compute.manager [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Mar  1 05:05:40 np0005634532 nova_compute[257049]: 2026-03-01 10:05:40.376 257053 DEBUG nova.compute.manager [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Mar  1 05:05:40 np0005634532 nova_compute[257049]: 2026-03-01 10:05:40.400 257053 DEBUG nova.compute.manager [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Mar  1 05:05:40 np0005634532 nova_compute[257049]: 2026-03-01 10:05:40.402 257053 DEBUG oslo_service.periodic_task [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
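
The back-to-back "Running periodic task ComputeManager._*" lines all come from oslo.service's periodic-task machinery: methods decorated with @periodic_task.periodic_task are collected on a manager class and dispatched from run_periodic_tasks, the very call site logged above. A minimal sketch under that assumption; the class name, task body, and spacing value are illustrative, not nova's actual ones:

    # Minimal oslo.service periodic-task setup mirroring the dispatch logged above.
    from oslo_config import cfg
    from oslo_service import periodic_task

    CONF = cfg.CONF

    class Manager(periodic_task.PeriodicTasks):
        def __init__(self):
            super().__init__(CONF)

        @periodic_task.periodic_task(spacing=60)  # spacing is an assumed value
        def _heal_instance_info_cache(self, context):
            pass

    mgr = Manager()
    mgr.run_periodic_tasks(context=None)  # runs any tasks that are due
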
Mar  1 05:05:40 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:05:40 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff868005150 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:05:40 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:05:40 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 05:05:40 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:05:40.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 05:05:40 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v696: 353 pgs: 353 active+clean; 88 MiB data, 216 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 15 KiB/s wr, 88 op/s
Mar  1 05:05:41 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:05:41 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:05:41 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:05:41.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:05:41 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:05:41 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_40] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff868005150 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:05:41 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:05:42 np0005634532 ceph-osd[84309]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Mar  1 05:05:42 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:05:42 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff864002800 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:05:42 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:05:42 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:05:42 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:05:42.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:05:42 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v697: 353 pgs: 353 active+clean; 88 MiB data, 216 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 14 KiB/s wr, 79 op/s
Mar  1 05:05:43 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:05:43 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:05:43 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:05:43.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:05:43 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:05:43 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_39] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff888004d70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:05:43 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:05:43 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8580031e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:05:43 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:05:43 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Mar  1 05:05:44 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:05:44 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff868005150 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:05:44 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:05:44 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 05:05:44 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:05:44.794 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 05:05:44 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v698: 353 pgs: 353 active+clean; 121 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.1 MiB/s wr, 133 op/s
Mar  1 05:05:45 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:05:45 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:05:45 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:05:45.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:05:45 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:05:45 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff864002800 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:05:45 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:05:45 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_39] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8580031e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:05:46 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:05:46 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_39] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff868005150 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:05:46 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Mar  1 05:05:46 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Mar  1 05:05:46 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Mar  1 05:05:46 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Mar  1 05:05:46 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Mar  1 05:05:46 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 05:05:46 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Mar  1 05:05:46 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 05:05:46 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Mar  1 05:05:46 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Mar  1 05:05:46 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Mar  1 05:05:46 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Mar  1 05:05:46 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Mar  1 05:05:46 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
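
Each mon_command dict the leader mon handles above is the wire form of an ordinary `ceph` CLI call: "prefix" is the command name and the remaining keys are its arguments. The same requests re-issued from the shell, as a sketch that assumes admin credentials are available on the node:

    # The mon_command payloads above, re-issued as equivalent ceph CLI calls.
    import subprocess

    for cmd in (
        ["ceph", "config", "generate-minimal-conf"],               # {"prefix": "config generate-minimal-conf"}
        ["ceph", "auth", "get", "client.bootstrap-osd"],           # {"prefix": "auth get", "entity": ...}
        ["ceph", "osd", "tree", "destroyed", "--format", "json"],  # {"prefix": "osd tree", "states": ["destroyed"]}
    ):
        subprocess.run(cmd, check=True)
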
Mar  1 05:05:46 np0005634532 podman[261884]: 2026-03-01 10:05:46.776047521 +0000 UTC m=+0.038541939 container create 4399229df57fc00e042e5afac7d37c81f520c7ab6fc693f73c9afe341fb7bca9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_agnesi, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325)
Mar  1 05:05:46 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:05:46 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 05:05:46 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:05:46.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 05:05:46 np0005634532 systemd[1]: Started libpod-conmon-4399229df57fc00e042e5afac7d37c81f520c7ab6fc693f73c9afe341fb7bca9.scope.
Mar  1 05:05:46 np0005634532 systemd[1]: Started libcrun container.
Mar  1 05:05:46 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:05:46 np0005634532 podman[261884]: 2026-03-01 10:05:46.760692453 +0000 UTC m=+0.023186891 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Mar  1 05:05:46 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v699: 353 pgs: 353 active+clean; 121 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 209 KiB/s rd, 2.1 MiB/s wr, 59 op/s
Mar  1 05:05:47 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:05:47 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Mar  1 05:05:47 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:05:47 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Mar  1 05:05:47 np0005634532 podman[261884]: 2026-03-01 10:05:47.027771153 +0000 UTC m=+0.290265581 container init 4399229df57fc00e042e5afac7d37c81f520c7ab6fc693f73c9afe341fb7bca9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_agnesi, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Mar  1 05:05:47 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Mar  1 05:05:47 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 05:05:47 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 05:05:47 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Mar  1 05:05:47 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:05:47 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Mar  1 05:05:47 np0005634532 podman[261884]: 2026-03-01 10:05:47.03496169 +0000 UTC m=+0.297456098 container start 4399229df57fc00e042e5afac7d37c81f520c7ab6fc693f73c9afe341fb7bca9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_agnesi, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Mar  1 05:05:47 np0005634532 wizardly_agnesi[261900]: 167 167
Mar  1 05:05:47 np0005634532 systemd[1]: libpod-4399229df57fc00e042e5afac7d37c81f520c7ab6fc693f73c9afe341fb7bca9.scope: Deactivated successfully.
Mar  1 05:05:47 np0005634532 podman[261884]: 2026-03-01 10:05:47.041018489 +0000 UTC m=+0.303512897 container attach 4399229df57fc00e042e5afac7d37c81f520c7ab6fc693f73c9afe341fb7bca9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_agnesi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Mar  1 05:05:47 np0005634532 conmon[261900]: conmon 4399229df57fc00e042e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-4399229df57fc00e042e5afac7d37c81f520c7ab6fc693f73c9afe341fb7bca9.scope/container/memory.events
Mar  1 05:05:47 np0005634532 podman[261884]: 2026-03-01 10:05:47.042242679 +0000 UTC m=+0.304737087 container died 4399229df57fc00e042e5afac7d37c81f520c7ab6fc693f73c9afe341fb7bca9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_agnesi, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Mar  1 05:05:47 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: ::ffff:192.168.122.100 - - [01/Mar/2026:10:05:47] "GET /metrics HTTP/1.1" 200 48460 "" "Prometheus/2.51.0"
Mar  1 05:05:47 np0005634532 ceph-mgr[76134]: [prometheus INFO cherrypy.access.140604152982112] ::ffff:192.168.122.100 - - [01/Mar/2026:10:05:47] "GET /metrics HTTP/1.1" 200 48460 "" "Prometheus/2.51.0"
Mar  1 05:05:47 np0005634532 systemd[1]: var-lib-containers-storage-overlay-056065ed11f0d12e963039c0e749739abf43cba5802dbc7061cad6a09aad7cd4-merged.mount: Deactivated successfully.
Mar  1 05:05:47 np0005634532 podman[261884]: 2026-03-01 10:05:47.077478726 +0000 UTC m=+0.339973154 container remove 4399229df57fc00e042e5afac7d37c81f520c7ab6fc693f73c9afe341fb7bca9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_agnesi, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Mar  1 05:05:47 np0005634532 systemd[1]: libpod-conmon-4399229df57fc00e042e5afac7d37c81f520c7ab6fc693f73c9afe341fb7bca9.scope: Deactivated successfully.
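
The create, init, start, attach, died, remove sequence for wizardly_agnesi (repeated below for sharp_wescoff, intelligent_hermann, and vigilant_archimedes) is the footprint of a short-lived one-shot container: the orchestrator, apparently cephadm here, runs a single command in the ceph image and the container is removed as soon as it exits. A sketch of an equivalent invocation; the image digest is the one from the log, but the inner command is an assumption:

    # One-shot container run producing the same create/start/died/remove lifecycle.
    # The inner command ("ceph --version") is illustrative, not what cephadm ran.
    import subprocess

    IMAGE = ("quay.io/ceph/ceph@sha256:"
             "7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec")
    subprocess.run(["podman", "run", "--rm", IMAGE, "ceph", "--version"], check=True)
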
Mar  1 05:05:47 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:05:47.198Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Mar  1 05:05:47 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:05:47.198Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Mar  1 05:05:47 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:05:47.198Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
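
Both dashboard webhook deliveries above fail with "dial tcp ...:8443: i/o timeout", meaning the TCP connect itself never completed, so the receivers on compute-1 and compute-2 are unreachable rather than returning an HTTP error. A quick reachability check against the same endpoints, hosts and port copied from the log:

    # TCP reachability check for the alertmanager webhook receivers that timed out.
    import socket

    for host in ("192.168.122.101", "192.168.122.102"):  # compute-1 / compute-2
        try:
            socket.create_connection((host, 8443), timeout=3).close()
            print(host, "reachable")
        except OSError as exc:
            print(host, "unreachable:", exc)
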
Mar  1 05:05:47 np0005634532 podman[261926]: 2026-03-01 10:05:47.239397939 +0000 UTC m=+0.089745369 container create 2012cc5a3ddade5835b7a0b5d342b911b75dfca7b783dda9d391f056aa0f9c72 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_wescoff, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Mar  1 05:05:47 np0005634532 podman[261926]: 2026-03-01 10:05:47.169269364 +0000 UTC m=+0.019616804 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Mar  1 05:05:47 np0005634532 systemd[1]: Started libpod-conmon-2012cc5a3ddade5835b7a0b5d342b911b75dfca7b783dda9d391f056aa0f9c72.scope.
Mar  1 05:05:47 np0005634532 systemd[1]: Started libcrun container.
Mar  1 05:05:47 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2bbc0b1f66569ec08cbc9c4326e540b6d3cde17f2599fddfdfed0378d73dc1a7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Mar  1 05:05:47 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2bbc0b1f66569ec08cbc9c4326e540b6d3cde17f2599fddfdfed0378d73dc1a7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Mar  1 05:05:47 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2bbc0b1f66569ec08cbc9c4326e540b6d3cde17f2599fddfdfed0378d73dc1a7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Mar  1 05:05:47 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2bbc0b1f66569ec08cbc9c4326e540b6d3cde17f2599fddfdfed0378d73dc1a7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Mar  1 05:05:47 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2bbc0b1f66569ec08cbc9c4326e540b6d3cde17f2599fddfdfed0378d73dc1a7/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Mar  1 05:05:47 np0005634532 podman[261926]: 2026-03-01 10:05:47.332414587 +0000 UTC m=+0.182762057 container init 2012cc5a3ddade5835b7a0b5d342b911b75dfca7b783dda9d391f056aa0f9c72 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_wescoff, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Mar  1 05:05:47 np0005634532 podman[261926]: 2026-03-01 10:05:47.339977893 +0000 UTC m=+0.190325323 container start 2012cc5a3ddade5835b7a0b5d342b911b75dfca7b783dda9d391f056aa0f9c72 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_wescoff, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Mar  1 05:05:47 np0005634532 podman[261926]: 2026-03-01 10:05:47.34431816 +0000 UTC m=+0.194665600 container attach 2012cc5a3ddade5835b7a0b5d342b911b75dfca7b783dda9d391f056aa0f9c72 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_wescoff, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Mar  1 05:05:47 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:05:47 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:05:47 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:05:47.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:05:47 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Mar  1 05:05:47 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Mar  1 05:05:47 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:05:47 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff888004d70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:05:47 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Mar  1 05:05:47 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Mar  1 05:05:47 np0005634532 sharp_wescoff[261943]: --> passed data devices: 0 physical, 1 LVM
Mar  1 05:05:47 np0005634532 sharp_wescoff[261943]: --> All data devices are unavailable
Mar  1 05:05:47 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Mar  1 05:05:47 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Mar  1 05:05:47 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Mar  1 05:05:47 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Mar  1 05:05:47 np0005634532 systemd[1]: libpod-2012cc5a3ddade5835b7a0b5d342b911b75dfca7b783dda9d391f056aa0f9c72.scope: Deactivated successfully.
Mar  1 05:05:47 np0005634532 podman[261926]: 2026-03-01 10:05:47.682955661 +0000 UTC m=+0.533303281 container died 2012cc5a3ddade5835b7a0b5d342b911b75dfca7b783dda9d391f056aa0f9c72 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_wescoff, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Mar  1 05:05:47 np0005634532 systemd[1]: var-lib-containers-storage-overlay-2bbc0b1f66569ec08cbc9c4326e540b6d3cde17f2599fddfdfed0378d73dc1a7-merged.mount: Deactivated successfully.
Mar  1 05:05:47 np0005634532 podman[261926]: 2026-03-01 10:05:47.731156846 +0000 UTC m=+0.581504296 container remove 2012cc5a3ddade5835b7a0b5d342b911b75dfca7b783dda9d391f056aa0f9c72 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_wescoff, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Mar  1 05:05:47 np0005634532 systemd[1]: libpod-conmon-2012cc5a3ddade5835b7a0b5d342b911b75dfca7b783dda9d391f056aa0f9c72.scope: Deactivated successfully.
Mar  1 05:05:47 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:05:47 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8580031e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:05:48 np0005634532 podman[262060]: 2026-03-01 10:05:48.220065172 +0000 UTC m=+0.041893201 container create 05b084f944a63651c8f8c46ef2928fb28e69f9d22d8c40d783c4f72d4bb500ed (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_hermann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Mar  1 05:05:48 np0005634532 systemd[1]: Started libpod-conmon-05b084f944a63651c8f8c46ef2928fb28e69f9d22d8c40d783c4f72d4bb500ed.scope.
Mar  1 05:05:48 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:05:48 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_41] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff864002800 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:05:48 np0005634532 systemd[1]: Started libcrun container.
Mar  1 05:05:48 np0005634532 podman[262060]: 2026-03-01 10:05:48.297924857 +0000 UTC m=+0.119752886 container init 05b084f944a63651c8f8c46ef2928fb28e69f9d22d8c40d783c4f72d4bb500ed (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_hermann, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Mar  1 05:05:48 np0005634532 podman[262060]: 2026-03-01 10:05:48.205491614 +0000 UTC m=+0.027319623 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Mar  1 05:05:48 np0005634532 podman[262060]: 2026-03-01 10:05:48.303816112 +0000 UTC m=+0.125644091 container start 05b084f944a63651c8f8c46ef2928fb28e69f9d22d8c40d783c4f72d4bb500ed (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_hermann, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Mar  1 05:05:48 np0005634532 podman[262060]: 2026-03-01 10:05:48.306729004 +0000 UTC m=+0.128557053 container attach 05b084f944a63651c8f8c46ef2928fb28e69f9d22d8c40d783c4f72d4bb500ed (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_hermann, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Mar  1 05:05:48 np0005634532 intelligent_hermann[262076]: 167 167
Mar  1 05:05:48 np0005634532 systemd[1]: libpod-05b084f944a63651c8f8c46ef2928fb28e69f9d22d8c40d783c4f72d4bb500ed.scope: Deactivated successfully.
Mar  1 05:05:48 np0005634532 podman[262060]: 2026-03-01 10:05:48.309063521 +0000 UTC m=+0.130891540 container died 05b084f944a63651c8f8c46ef2928fb28e69f9d22d8c40d783c4f72d4bb500ed (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_hermann, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid)
Mar  1 05:05:48 np0005634532 systemd[1]: var-lib-containers-storage-overlay-9d2b8b7e3a7bb34b6f8e36043465822ea31535fdfb096c77b8ff7ace712d3375-merged.mount: Deactivated successfully.
Mar  1 05:05:48 np0005634532 podman[262060]: 2026-03-01 10:05:48.340140966 +0000 UTC m=+0.161968985 container remove 05b084f944a63651c8f8c46ef2928fb28e69f9d22d8c40d783c4f72d4bb500ed (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_hermann, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Mar  1 05:05:48 np0005634532 systemd[1]: libpod-conmon-05b084f944a63651c8f8c46ef2928fb28e69f9d22d8c40d783c4f72d4bb500ed.scope: Deactivated successfully.
Mar  1 05:05:48 np0005634532 podman[262101]: 2026-03-01 10:05:48.475718231 +0000 UTC m=+0.040863196 container create 2bd82e8b52396c9422c3fec6627de6b1568a9537d844a4f769bba3c4a2f47255 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_archimedes, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Mar  1 05:05:48 np0005634532 systemd[1]: Started libpod-conmon-2bd82e8b52396c9422c3fec6627de6b1568a9537d844a4f769bba3c4a2f47255.scope.
Mar  1 05:05:48 np0005634532 systemd[1]: Started libcrun container.
Mar  1 05:05:48 np0005634532 podman[262101]: 2026-03-01 10:05:48.457272957 +0000 UTC m=+0.022417962 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Mar  1 05:05:48 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be5e5717f866d515f87075651ab7f5e1a94b938eeb8534d4015565829b920d03/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Mar  1 05:05:48 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be5e5717f866d515f87075651ab7f5e1a94b938eeb8534d4015565829b920d03/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Mar  1 05:05:48 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be5e5717f866d515f87075651ab7f5e1a94b938eeb8534d4015565829b920d03/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Mar  1 05:05:48 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/be5e5717f866d515f87075651ab7f5e1a94b938eeb8534d4015565829b920d03/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Mar  1 05:05:48 np0005634532 podman[262101]: 2026-03-01 10:05:48.57037174 +0000 UTC m=+0.135516755 container init 2bd82e8b52396c9422c3fec6627de6b1568a9537d844a4f769bba3c4a2f47255 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_archimedes, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325)
Mar  1 05:05:48 np0005634532 podman[262101]: 2026-03-01 10:05:48.575065255 +0000 UTC m=+0.140210210 container start 2bd82e8b52396c9422c3fec6627de6b1568a9537d844a4f769bba3c4a2f47255 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_archimedes, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Mar  1 05:05:48 np0005634532 podman[262101]: 2026-03-01 10:05:48.578397367 +0000 UTC m=+0.143542362 container attach 2bd82e8b52396c9422c3fec6627de6b1568a9537d844a4f769bba3c4a2f47255 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_archimedes, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS)
Mar  1 05:05:48 np0005634532 vigilant_archimedes[262118]: {
Mar  1 05:05:48 np0005634532 vigilant_archimedes[262118]:    "0": [
Mar  1 05:05:48 np0005634532 vigilant_archimedes[262118]:        {
Mar  1 05:05:48 np0005634532 vigilant_archimedes[262118]:            "devices": [
Mar  1 05:05:48 np0005634532 vigilant_archimedes[262118]:                "/dev/loop3"
Mar  1 05:05:48 np0005634532 vigilant_archimedes[262118]:            ],
Mar  1 05:05:48 np0005634532 vigilant_archimedes[262118]:            "lv_name": "ceph_lv0",
Mar  1 05:05:48 np0005634532 vigilant_archimedes[262118]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Mar  1 05:05:48 np0005634532 vigilant_archimedes[262118]:            "lv_size": "21470642176",
Mar  1 05:05:48 np0005634532 vigilant_archimedes[262118]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=0RN1y3-WDwf-vmRn-3Uec-9cdU-WGcX-8Z6LEG,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=437b1e74-f995-5d64-af1d-257ce01d77ab,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=e5da778e-73b7-4ea1-8a91-750fe3f6aa68,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Mar  1 05:05:48 np0005634532 vigilant_archimedes[262118]:            "lv_uuid": "0RN1y3-WDwf-vmRn-3Uec-9cdU-WGcX-8Z6LEG",
Mar  1 05:05:48 np0005634532 vigilant_archimedes[262118]:            "name": "ceph_lv0",
Mar  1 05:05:48 np0005634532 vigilant_archimedes[262118]:            "path": "/dev/ceph_vg0/ceph_lv0",
Mar  1 05:05:48 np0005634532 vigilant_archimedes[262118]:            "tags": {
Mar  1 05:05:48 np0005634532 vigilant_archimedes[262118]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Mar  1 05:05:48 np0005634532 vigilant_archimedes[262118]:                "ceph.block_uuid": "0RN1y3-WDwf-vmRn-3Uec-9cdU-WGcX-8Z6LEG",
Mar  1 05:05:48 np0005634532 vigilant_archimedes[262118]:                "ceph.cephx_lockbox_secret": "",
Mar  1 05:05:48 np0005634532 vigilant_archimedes[262118]:                "ceph.cluster_fsid": "437b1e74-f995-5d64-af1d-257ce01d77ab",
Mar  1 05:05:48 np0005634532 vigilant_archimedes[262118]:                "ceph.cluster_name": "ceph",
Mar  1 05:05:48 np0005634532 vigilant_archimedes[262118]:                "ceph.crush_device_class": "",
Mar  1 05:05:48 np0005634532 vigilant_archimedes[262118]:                "ceph.encrypted": "0",
Mar  1 05:05:48 np0005634532 vigilant_archimedes[262118]:                "ceph.osd_fsid": "e5da778e-73b7-4ea1-8a91-750fe3f6aa68",
Mar  1 05:05:48 np0005634532 vigilant_archimedes[262118]:                "ceph.osd_id": "0",
Mar  1 05:05:48 np0005634532 vigilant_archimedes[262118]:                "ceph.osdspec_affinity": "default_drive_group",
Mar  1 05:05:48 np0005634532 vigilant_archimedes[262118]:                "ceph.type": "block",
Mar  1 05:05:48 np0005634532 vigilant_archimedes[262118]:                "ceph.vdo": "0",
Mar  1 05:05:48 np0005634532 vigilant_archimedes[262118]:                "ceph.with_tpm": "0"
Mar  1 05:05:48 np0005634532 vigilant_archimedes[262118]:            },
Mar  1 05:05:48 np0005634532 vigilant_archimedes[262118]:            "type": "block",
Mar  1 05:05:48 np0005634532 vigilant_archimedes[262118]:            "vg_name": "ceph_vg0"
Mar  1 05:05:48 np0005634532 vigilant_archimedes[262118]:        }
Mar  1 05:05:48 np0005634532 vigilant_archimedes[262118]:    ]
Mar  1 05:05:48 np0005634532 vigilant_archimedes[262118]: }
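
The JSON block above has the output shape of "ceph-volume lvm list --format json" (a map keyed by OSD id, one record per logical volume, with the ceph.* metadata duplicated in lv_tags and tags); that the short-lived vigilant_archimedes container is running that command is an inference from the output, not stated in the log. A minimal Python sketch for pulling the OSD-to-device mapping out of such a payload:

    import json
    import subprocess

    # Assumption: the container above is running something equivalent to
    # `ceph-volume lvm list --format json`; this parses that output shape.
    raw = subprocess.check_output(
        ["ceph-volume", "lvm", "list", "--format", "json"])
    listing = json.loads(raw)

    for osd_id, lvs in listing.items():
        for lv in lvs:
            # Each record names the backing devices and the block LV path,
            # e.g. OSD 0 -> /dev/loop3 via /dev/ceph_vg0/ceph_lv0 above.
            print(osd_id, lv["type"], lv["lv_path"], ",".join(lv["devices"]))
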
Mar  1 05:05:48 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:05:48 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:05:48 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:05:48.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:05:48 np0005634532 systemd[1]: libpod-2bd82e8b52396c9422c3fec6627de6b1568a9537d844a4f769bba3c4a2f47255.scope: Deactivated successfully.
Mar  1 05:05:48 np0005634532 podman[262101]: 2026-03-01 10:05:48.824297566 +0000 UTC m=+0.389442531 container died 2bd82e8b52396c9422c3fec6627de6b1568a9537d844a4f769bba3c4a2f47255 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_archimedes, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Mar  1 05:05:48 np0005634532 systemd[1]: var-lib-containers-storage-overlay-be5e5717f866d515f87075651ab7f5e1a94b938eeb8534d4015565829b920d03-merged.mount: Deactivated successfully.
Mar  1 05:05:48 np0005634532 podman[262101]: 2026-03-01 10:05:48.867265353 +0000 UTC m=+0.432410308 container remove 2bd82e8b52396c9422c3fec6627de6b1568a9537d844a4f769bba3c4a2f47255 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_archimedes, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Mar  1 05:05:48 np0005634532 systemd[1]: libpod-conmon-2bd82e8b52396c9422c3fec6627de6b1568a9537d844a4f769bba3c4a2f47255.scope: Deactivated successfully.
Mar  1 05:05:48 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v700: 353 pgs: 353 active+clean; 121 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 211 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Mar  1 05:05:49 np0005634532 podman[262231]: 2026-03-01 10:05:49.37162467 +0000 UTC m=+0.037105294 container create d4360c093281761f1ba5d93c2d7478525619d645acaf966ff04a3f96b76889a1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_benz, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True)
Mar  1 05:05:49 np0005634532 systemd[1]: Started libpod-conmon-d4360c093281761f1ba5d93c2d7478525619d645acaf966ff04a3f96b76889a1.scope.
Mar  1 05:05:49 np0005634532 systemd[1]: Started libcrun container.
Mar  1 05:05:49 np0005634532 podman[262231]: 2026-03-01 10:05:49.355168815 +0000 UTC m=+0.020649489 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Mar  1 05:05:49 np0005634532 podman[262231]: 2026-03-01 10:05:49.455246267 +0000 UTC m=+0.120726921 container init d4360c093281761f1ba5d93c2d7478525619d645acaf966ff04a3f96b76889a1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_benz, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Mar  1 05:05:49 np0005634532 podman[262231]: 2026-03-01 10:05:49.462644239 +0000 UTC m=+0.128124853 container start d4360c093281761f1ba5d93c2d7478525619d645acaf966ff04a3f96b76889a1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_benz, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Mar  1 05:05:49 np0005634532 serene_benz[262247]: 167 167
Mar  1 05:05:49 np0005634532 systemd[1]: libpod-d4360c093281761f1ba5d93c2d7478525619d645acaf966ff04a3f96b76889a1.scope: Deactivated successfully.
Mar  1 05:05:49 np0005634532 podman[262231]: 2026-03-01 10:05:49.466239028 +0000 UTC m=+0.131719692 container attach d4360c093281761f1ba5d93c2d7478525619d645acaf966ff04a3f96b76889a1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_benz, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Mar  1 05:05:49 np0005634532 podman[262231]: 2026-03-01 10:05:49.466448233 +0000 UTC m=+0.131928857 container died d4360c093281761f1ba5d93c2d7478525619d645acaf966ff04a3f96b76889a1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_benz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Mar  1 05:05:49 np0005634532 systemd[1]: var-lib-containers-storage-overlay-811b2b0d3a3313c3c36e5975e9cedb3765bdd46a60b11011390c50dde99f85af-merged.mount: Deactivated successfully.
Mar  1 05:05:49 np0005634532 podman[262231]: 2026-03-01 10:05:49.500101691 +0000 UTC m=+0.165582315 container remove d4360c093281761f1ba5d93c2d7478525619d645acaf966ff04a3f96b76889a1 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=serene_benz, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Mar  1 05:05:49 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:05:49 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:05:49 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:05:49.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:05:49 np0005634532 systemd[1]: libpod-conmon-d4360c093281761f1ba5d93c2d7478525619d645acaf966ff04a3f96b76889a1.scope: Deactivated successfully.
Mar  1 05:05:49 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:05:49 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff864002800 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:05:49 np0005634532 podman[262270]: 2026-03-01 10:05:49.64967825 +0000 UTC m=+0.054884521 container create acfa29b9d7b12cd148211b7a08eb9103dfdebc7c14b45824cf303ead1cf81b34 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_curran, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Mar  1 05:05:49 np0005634532 systemd[1]: Started libpod-conmon-acfa29b9d7b12cd148211b7a08eb9103dfdebc7c14b45824cf303ead1cf81b34.scope.
Mar  1 05:05:49 np0005634532 podman[262270]: 2026-03-01 10:05:49.619564599 +0000 UTC m=+0.024770940 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Mar  1 05:05:49 np0005634532 systemd[1]: Started libcrun container.
Mar  1 05:05:49 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/614eab7aa51a2eded73b4d7362ac0b3ad84e34110f5d8bbf23a3f38dff7b1b7f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Mar  1 05:05:49 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/614eab7aa51a2eded73b4d7362ac0b3ad84e34110f5d8bbf23a3f38dff7b1b7f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Mar  1 05:05:49 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/614eab7aa51a2eded73b4d7362ac0b3ad84e34110f5d8bbf23a3f38dff7b1b7f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Mar  1 05:05:49 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/614eab7aa51a2eded73b4d7362ac0b3ad84e34110f5d8bbf23a3f38dff7b1b7f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Mar  1 05:05:49 np0005634532 podman[262270]: 2026-03-01 10:05:49.746628955 +0000 UTC m=+0.151835236 container init acfa29b9d7b12cd148211b7a08eb9103dfdebc7c14b45824cf303ead1cf81b34 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_curran, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Mar  1 05:05:49 np0005634532 podman[262270]: 2026-03-01 10:05:49.751835403 +0000 UTC m=+0.157041704 container start acfa29b9d7b12cd148211b7a08eb9103dfdebc7c14b45824cf303ead1cf81b34 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_curran, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Mar  1 05:05:49 np0005634532 podman[262270]: 2026-03-01 10:05:49.755537794 +0000 UTC m=+0.160744085 container attach acfa29b9d7b12cd148211b7a08eb9103dfdebc7c14b45824cf303ead1cf81b34 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_curran, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Mar  1 05:05:49 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:05:49 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_39] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff888004d70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:05:50 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:05:50 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Mar  1 05:05:50 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:05:50 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8580031e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:05:50 np0005634532 lvm[262363]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Mar  1 05:05:50 np0005634532 lvm[262363]: VG ceph_vg0 finished
Mar  1 05:05:50 np0005634532 charming_curran[262287]: {}
Mar  1 05:05:50 np0005634532 systemd[1]: libpod-acfa29b9d7b12cd148211b7a08eb9103dfdebc7c14b45824cf303ead1cf81b34.scope: Deactivated successfully.
Mar  1 05:05:50 np0005634532 podman[262270]: 2026-03-01 10:05:50.383483611 +0000 UTC m=+0.788689872 container died acfa29b9d7b12cd148211b7a08eb9103dfdebc7c14b45824cf303ead1cf81b34 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_curran, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Mar  1 05:05:50 np0005634532 systemd[1]: var-lib-containers-storage-overlay-614eab7aa51a2eded73b4d7362ac0b3ad84e34110f5d8bbf23a3f38dff7b1b7f-merged.mount: Deactivated successfully.
Mar  1 05:05:50 np0005634532 podman[262270]: 2026-03-01 10:05:50.426978571 +0000 UTC m=+0.832184832 container remove acfa29b9d7b12cd148211b7a08eb9103dfdebc7c14b45824cf303ead1cf81b34 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=charming_curran, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Mar  1 05:05:50 np0005634532 systemd[1]: libpod-conmon-acfa29b9d7b12cd148211b7a08eb9103dfdebc7c14b45824cf303ead1cf81b34.scope: Deactivated successfully.
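
This is the third short-lived helper container in under two seconds, each tracing the same podman event sequence (create, init, start, attach, died, remove) with a matching libpod/libpod-conmon scope pair from systemd. A sketch (not any shipped tool) that groups such journal lines into per-container lifecycles; the regex and the input file name are assumptions for illustration, the event verbs are verbatim from the log:

    import re
    from collections import defaultdict

    # Match "container <event> <64-hex container id>" in podman journal lines.
    EVENT = re.compile(
        r"container (create|init|start|attach|died|remove) ([0-9a-f]{64})")

    lifecycles = defaultdict(list)
    with open("messages") as log:   # hypothetical extract of this journal
        for line in log:
            m = EVENT.search(line)
            if m:
                lifecycles[m.group(2)].append(m.group(1))

    for cid, events in lifecycles.items():
        # e.g. d4360c093281 -> create init start attach died remove
        print(cid[:12], "->", " ".join(events))
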
Mar  1 05:05:50 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Mar  1 05:05:50 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 05:05:50 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Mar  1 05:05:50 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 05:05:50 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:05:50 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:05:50 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:05:50.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:05:50 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v701: 353 pgs: 353 active+clean; 121 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 211 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Mar  1 05:05:51 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 05:05:51 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 05:05:51 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:05:51 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:05:51 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:05:51.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:05:51 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:05:51 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff868005150 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:05:51 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:05:51 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:05:51 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff864002800 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:05:52 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:05:52 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_39] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8580031e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:05:52 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:05:52 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 05:05:52 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:05:52.806 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 05:05:52 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v702: 353 pgs: 353 active+clean; 121 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 211 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Mar  1 05:05:53 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:05:53 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:05:53 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:05:53.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:05:53 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:05:53 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_41] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff888004d70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:05:53 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:05:53 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff868005150 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:05:54 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:05:54 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff864002800 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:05:54 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:05:54 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:05:54 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:05:54.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:05:54 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v703: 353 pgs: 353 active+clean; 121 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 217 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Mar  1 05:05:55 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:05:55 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 05:05:55 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:05:55.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 05:05:55 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:05:55 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_39] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8580031e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:05:55 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-haproxy-nfs-cephfs-compute-0-wdbjdw[99156]: [WARNING] 059/100555 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 1ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Mar  1 05:05:55 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:05:55 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_41] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff888004d70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:05:56 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:05:56 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff888004d70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:05:56 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:05:56 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:05:56 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:05:56.814 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:05:56 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:05:56 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v704: 353 pgs: 353 active+clean; 121 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 7.5 KiB/s rd, 9.9 KiB/s wr, 3 op/s
Mar  1 05:05:57 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: ::ffff:192.168.122.100 - - [01/Mar/2026:10:05:57] "GET /metrics HTTP/1.1" 200 48458 "" "Prometheus/2.51.0"
Mar  1 05:05:57 np0005634532 ceph-mgr[76134]: [prometheus INFO cherrypy.access.140604152982112] ::ffff:192.168.122.100 - - [01/Mar/2026:10:05:57] "GET /metrics HTTP/1.1" 200 48458 "" "Prometheus/2.51.0"
Mar  1 05:05:57 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:05:57.200Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Mar  1 05:05:57 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:05:57 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 05:05:57 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:05:57.510 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 05:05:57 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:05:57 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff864002800 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:05:57 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:05:57 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_39] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8580031e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:05:58 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Mar  1 05:05:58 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1813963829' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Mar  1 05:05:58 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Mar  1 05:05:58 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1813963829' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
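
The two mon_commands audited above from client.openstack correspond to the CLI forms "ceph df --format json" and "ceph osd pool get-quota volumes --format json" (nova_compute runs the former itself later in this log, with --id openstack). A minimal sketch issuing the same pair, assuming the same client.openstack keyring:

    import json
    import subprocess

    def ceph(*args):
        # Global options first, subcommand after, JSON output - matching
        # the invocation style nova logs below.
        out = subprocess.check_output(
            ["ceph", "--id", "openstack", "--conf", "/etc/ceph/ceph.conf",
             *args, "--format", "json"])
        return json.loads(out)

    df = ceph("df")                                      # {"prefix":"df"}
    quota = ceph("osd", "pool", "get-quota", "volumes")  # {"prefix":"osd pool get-quota"}
    print(df["stats"]["total_avail_bytes"], quota.get("quota_max_bytes"))
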
Mar  1 05:05:58 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:05:58 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_41] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff888004d70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:05:58 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:05:58 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 05:05:58 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:05:58.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 05:05:58 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v705: 353 pgs: 353 active+clean; 121 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 7.5 KiB/s rd, 13 KiB/s wr, 4 op/s
Mar  1 05:05:59 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:05:59 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:05:59 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:05:59.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:05:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:05:59 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff888004d70 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:05:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:05:59 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff864002800 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:06:00 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:06:00 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_39] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8580031e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:06:00 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:06:00 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:06:00 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:06:00.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:06:00 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v706: 353 pgs: 353 active+clean; 121 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 5.6 KiB/s rd, 3.4 KiB/s wr, 1 op/s
Mar  1 05:06:01 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:06:01 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 05:06:01 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:06:01.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 05:06:01 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:06:01 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_37] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8580031e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:06:01 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:06:01 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:06:01 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_39] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8580031e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:06:02 np0005634532 nova_compute[257049]: 2026-03-01 10:06:02.085 257053 DEBUG oslo_concurrency.lockutils [None req-a8e37f35-5bc6-4417-8516-e015218e6a3d 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Acquiring lock "40dfeea3-c0b1-49c0-959b-7a08ceb7035c" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Mar  1 05:06:02 np0005634532 nova_compute[257049]: 2026-03-01 10:06:02.085 257053 DEBUG oslo_concurrency.lockutils [None req-a8e37f35-5bc6-4417-8516-e015218e6a3d 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Lock "40dfeea3-c0b1-49c0-959b-7a08ceb7035c" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Mar  1 05:06:02 np0005634532 nova_compute[257049]: 2026-03-01 10:06:02.103 257053 DEBUG nova.compute.manager [None req-a8e37f35-5bc6-4417-8516-e015218e6a3d 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] [instance: 40dfeea3-c0b1-49c0-959b-7a08ceb7035c] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Mar  1 05:06:02 np0005634532 nova_compute[257049]: 2026-03-01 10:06:02.180 257053 DEBUG oslo_concurrency.lockutils [None req-a8e37f35-5bc6-4417-8516-e015218e6a3d 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Mar  1 05:06:02 np0005634532 nova_compute[257049]: 2026-03-01 10:06:02.180 257053 DEBUG oslo_concurrency.lockutils [None req-a8e37f35-5bc6-4417-8516-e015218e6a3d 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Mar  1 05:06:02 np0005634532 nova_compute[257049]: 2026-03-01 10:06:02.187 257053 DEBUG nova.virt.hardware [None req-a8e37f35-5bc6-4417-8516-e015218e6a3d 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Mar  1 05:06:02 np0005634532 nova_compute[257049]: 2026-03-01 10:06:02.187 257053 INFO nova.compute.claims [None req-a8e37f35-5bc6-4417-8516-e015218e6a3d 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] [instance: 40dfeea3-c0b1-49c0-959b-7a08ceb7035c] Claim successful on node compute-0.ctlplane.example.com#033[00m
Mar  1 05:06:02 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:06:02 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_41] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8840012e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:06:02 np0005634532 nova_compute[257049]: 2026-03-01 10:06:02.293 257053 DEBUG oslo_concurrency.processutils [None req-a8e37f35-5bc6-4417-8516-e015218e6a3d 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Mar  1 05:06:02 np0005634532 podman[262442]: 2026-03-01 10:06:02.436778155 +0000 UTC m=+0.117259835 container health_status eba5e7c5bf1cb0694498c3b5d0f9b901dec36f2482ec40aef98fed1e15d9f18d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:bf1f44b6bfa655654f556ae3adca2582892e72550d916a5b0a8dbacb0e210bbe, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '08b9467e1b7e95537191fb2fa6825d176e65d7d6be048e9a0925db064969bbde-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:bf1f44b6bfa655654f556ae3adca2582892e72550d916a5b0a8dbacb0e210bbe', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.43.0)
Mar  1 05:06:02 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Mar  1 05:06:02 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Mar  1 05:06:02 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Mar  1 05:06:02 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2537195344' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Mar  1 05:06:02 np0005634532 nova_compute[257049]: 2026-03-01 10:06:02.714 257053 DEBUG oslo_concurrency.processutils [None req-a8e37f35-5bc6-4417-8516-e015218e6a3d 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.421s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Mar  1 05:06:02 np0005634532 nova_compute[257049]: 2026-03-01 10:06:02.721 257053 DEBUG nova.compute.provider_tree [None req-a8e37f35-5bc6-4417-8516-e015218e6a3d 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Inventory has not changed in ProviderTree for provider: 018d246d-1e01-4168-9128-598c5501111b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Mar  1 05:06:02 np0005634532 nova_compute[257049]: 2026-03-01 10:06:02.740 257053 DEBUG nova.scheduler.client.report [None req-a8e37f35-5bc6-4417-8516-e015218e6a3d 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Inventory has not changed for provider 018d246d-1e01-4168-9128-598c5501111b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
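
From the inventory dict just logged, placement derives schedulable capacity per resource class as (total - reserved) * allocation_ratio; a worked sketch with the logged numbers (exact rounding is placement's concern):

    # Inventory values copied from the log line above.
    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 0,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        cap = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, int(cap))   # VCPU 32, MEMORY_MB 7167, DISK_GB 53
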
Mar  1 05:06:02 np0005634532 nova_compute[257049]: 2026-03-01 10:06:02.765 257053 DEBUG oslo_concurrency.lockutils [None req-a8e37f35-5bc6-4417-8516-e015218e6a3d 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.585s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Mar  1 05:06:02 np0005634532 nova_compute[257049]: 2026-03-01 10:06:02.766 257053 DEBUG nova.compute.manager [None req-a8e37f35-5bc6-4417-8516-e015218e6a3d 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] [instance: 40dfeea3-c0b1-49c0-959b-7a08ceb7035c] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Mar  1 05:06:02 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:06:02 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 05:06:02 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:06:02.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 05:06:02 np0005634532 nova_compute[257049]: 2026-03-01 10:06:02.827 257053 DEBUG nova.compute.manager [None req-a8e37f35-5bc6-4417-8516-e015218e6a3d 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] [instance: 40dfeea3-c0b1-49c0-959b-7a08ceb7035c] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Mar  1 05:06:02 np0005634532 nova_compute[257049]: 2026-03-01 10:06:02.828 257053 DEBUG nova.network.neutron [None req-a8e37f35-5bc6-4417-8516-e015218e6a3d 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] [instance: 40dfeea3-c0b1-49c0-959b-7a08ceb7035c] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Mar  1 05:06:02 np0005634532 nova_compute[257049]: 2026-03-01 10:06:02.856 257053 INFO nova.virt.libvirt.driver [None req-a8e37f35-5bc6-4417-8516-e015218e6a3d 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] [instance: 40dfeea3-c0b1-49c0-959b-7a08ceb7035c] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Mar  1 05:06:02 np0005634532 nova_compute[257049]: 2026-03-01 10:06:02.887 257053 DEBUG nova.compute.manager [None req-a8e37f35-5bc6-4417-8516-e015218e6a3d 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] [instance: 40dfeea3-c0b1-49c0-959b-7a08ceb7035c] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Mar  1 05:06:02 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v707: 353 pgs: 353 active+clean; 121 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 5.6 KiB/s rd, 3.4 KiB/s wr, 1 op/s
Mar  1 05:06:02 np0005634532 nova_compute[257049]: 2026-03-01 10:06:02.983 257053 DEBUG nova.compute.manager [None req-a8e37f35-5bc6-4417-8516-e015218e6a3d 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] [instance: 40dfeea3-c0b1-49c0-959b-7a08ceb7035c] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Mar  1 05:06:02 np0005634532 nova_compute[257049]: 2026-03-01 10:06:02.986 257053 DEBUG nova.virt.libvirt.driver [None req-a8e37f35-5bc6-4417-8516-e015218e6a3d 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] [instance: 40dfeea3-c0b1-49c0-959b-7a08ceb7035c] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Mar  1 05:06:02 np0005634532 nova_compute[257049]: 2026-03-01 10:06:02.986 257053 INFO nova.virt.libvirt.driver [None req-a8e37f35-5bc6-4417-8516-e015218e6a3d 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] [instance: 40dfeea3-c0b1-49c0-959b-7a08ceb7035c] Creating image(s)#033[00m
Mar  1 05:06:03 np0005634532 nova_compute[257049]: 2026-03-01 10:06:03.015 257053 DEBUG nova.storage.rbd_utils [None req-a8e37f35-5bc6-4417-8516-e015218e6a3d 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] rbd image 40dfeea3-c0b1-49c0-959b-7a08ceb7035c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Mar  1 05:06:03 np0005634532 nova_compute[257049]: 2026-03-01 10:06:03.041 257053 DEBUG nova.storage.rbd_utils [None req-a8e37f35-5bc6-4417-8516-e015218e6a3d 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] rbd image 40dfeea3-c0b1-49c0-959b-7a08ceb7035c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Mar  1 05:06:03 np0005634532 nova_compute[257049]: 2026-03-01 10:06:03.070 257053 DEBUG nova.storage.rbd_utils [None req-a8e37f35-5bc6-4417-8516-e015218e6a3d 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] rbd image 40dfeea3-c0b1-49c0-959b-7a08ceb7035c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Mar  1 05:06:03 np0005634532 nova_compute[257049]: 2026-03-01 10:06:03.073 257053 DEBUG oslo_concurrency.lockutils [None req-a8e37f35-5bc6-4417-8516-e015218e6a3d 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Acquiring lock "d41046c43044bf8997bc5f9ade85627ba841861d" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Mar  1 05:06:03 np0005634532 nova_compute[257049]: 2026-03-01 10:06:03.074 257053 DEBUG oslo_concurrency.lockutils [None req-a8e37f35-5bc6-4417-8516-e015218e6a3d 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Lock "d41046c43044bf8997bc5f9ade85627ba841861d" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Mar  1 05:06:03 np0005634532 nova_compute[257049]: 2026-03-01 10:06:03.320 257053 WARNING oslo_policy.policy [None req-a8e37f35-5bc6-4417-8516-e015218e6a3d 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] JSON formatted policy_file support is deprecated since Victoria release. You need to use YAML format which will be default in future. You can use ``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted policy file to YAML-formatted in backward compatible way: https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.#033[00m
Mar  1 05:06:03 np0005634532 nova_compute[257049]: 2026-03-01 10:06:03.321 257053 WARNING oslo_policy.policy [None req-a8e37f35-5bc6-4417-8516-e015218e6a3d 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] JSON formatted policy_file support is deprecated since Victoria release. You need to use YAML format which will be default in future. You can use ``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted policy file to YAML-formatted in backward compatible way: https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.#033[00m
Mar  1 05:06:03 np0005634532 nova_compute[257049]: 2026-03-01 10:06:03.323 257053 DEBUG nova.policy [None req-a8e37f35-5bc6-4417-8516-e015218e6a3d 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '054b4e3fa290475c906614f7e45d128f', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'aa1916e2334f470ea8eeda213ef84cc5', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Mar  1 05:06:03 np0005634532 nova_compute[257049]: 2026-03-01 10:06:03.369 257053 DEBUG nova.virt.libvirt.imagebackend [None req-a8e37f35-5bc6-4417-8516-e015218e6a3d 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Image locations are: [{'url': 'rbd://437b1e74-f995-5d64-af1d-257ce01d77ab/images/07f64171-cfd1-4482-a545-07063cf7c3f2/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://437b1e74-f995-5d64-af1d-257ce01d77ab/images/07f64171-cfd1-4482-a545-07063cf7c3f2/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085#033[00m
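
The Glance locations above use the rbd URL scheme rbd://<cluster fsid>/<pool>/<image id>/<snapshot>; nova can clone directly from that snapshot only for raw images, which is why the qcow2 fallback path (download, convert, import) follows below. A tiny parser for the URL form, using the exact string from the log:

    # rbd://fsid/pool/image/snapshot, split into its four components.
    url = ("rbd://437b1e74-f995-5d64-af1d-257ce01d77ab/images/"
           "07f64171-cfd1-4482-a545-07063cf7c3f2/snap")
    fsid, pool, image, snap = url[len("rbd://"):].split("/")
    print(pool, image, snap)   # images 07f64171-cfd1-4482-a545-07063cf7c3f2 snap
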
Mar  1 05:06:03 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:06:03 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:06:03 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:06:03.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:06:03 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:06:03 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_43] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff878002490 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:06:03 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:06:03 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_42] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff864002800 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:06:04 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:06:04 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_39] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8580031e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:06:04 np0005634532 nova_compute[257049]: 2026-03-01 10:06:04.404 257053 DEBUG oslo_concurrency.processutils [None req-a8e37f35-5bc6-4417-8516-e015218e6a3d 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/d41046c43044bf8997bc5f9ade85627ba841861d.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Mar  1 05:06:04 np0005634532 nova_compute[257049]: 2026-03-01 10:06:04.451 257053 DEBUG oslo_concurrency.processutils [None req-a8e37f35-5bc6-4417-8516-e015218e6a3d 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/d41046c43044bf8997bc5f9ade85627ba841861d.part --force-share --output=json" returned: 0 in 0.046s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Mar  1 05:06:04 np0005634532 nova_compute[257049]: 2026-03-01 10:06:04.452 257053 DEBUG nova.virt.images [None req-a8e37f35-5bc6-4417-8516-e015218e6a3d 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] 07f64171-cfd1-4482-a545-07063cf7c3f2 was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242#033[00m
Mar  1 05:06:04 np0005634532 nova_compute[257049]: 2026-03-01 10:06:04.453 257053 DEBUG nova.privsep.utils [None req-a8e37f35-5bc6-4417-8516-e015218e6a3d 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63#033[00m
Mar  1 05:06:04 np0005634532 nova_compute[257049]: 2026-03-01 10:06:04.453 257053 DEBUG oslo_concurrency.processutils [None req-a8e37f35-5bc6-4417-8516-e015218e6a3d 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/d41046c43044bf8997bc5f9ade85627ba841861d.part /var/lib/nova/instances/_base/d41046c43044bf8997bc5f9ade85627ba841861d.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Mar  1 05:06:04 np0005634532 nova_compute[257049]: 2026-03-01 10:06:04.605 257053 DEBUG oslo_concurrency.processutils [None req-a8e37f35-5bc6-4417-8516-e015218e6a3d 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/d41046c43044bf8997bc5f9ade85627ba841861d.part /var/lib/nova/instances/_base/d41046c43044bf8997bc5f9ade85627ba841861d.converted" returned: 0 in 0.152s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Mar  1 05:06:04 np0005634532 nova_compute[257049]: 2026-03-01 10:06:04.609 257053 DEBUG oslo_concurrency.processutils [None req-a8e37f35-5bc6-4417-8516-e015218e6a3d 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/d41046c43044bf8997bc5f9ade85627ba841861d.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Mar  1 05:06:04 np0005634532 nova_compute[257049]: 2026-03-01 10:06:04.648 257053 DEBUG nova.network.neutron [None req-a8e37f35-5bc6-4417-8516-e015218e6a3d 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] [instance: 40dfeea3-c0b1-49c0-959b-7a08ceb7035c] Successfully created port: 18710daa-8d5e-46b6-b666-18b4e461fca4 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Mar  1 05:06:04 np0005634532 nova_compute[257049]: 2026-03-01 10:06:04.655 257053 DEBUG oslo_concurrency.processutils [None req-a8e37f35-5bc6-4417-8516-e015218e6a3d 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/d41046c43044bf8997bc5f9ade85627ba841861d.converted --force-share --output=json" returned: 0 in 0.046s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Mar  1 05:06:04 np0005634532 nova_compute[257049]: 2026-03-01 10:06:04.656 257053 DEBUG oslo_concurrency.lockutils [None req-a8e37f35-5bc6-4417-8516-e015218e6a3d 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Lock "d41046c43044bf8997bc5f9ade85627ba841861d" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 1.582s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
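The acquire/release pair around that fetch is oslo.concurrency's named external lock, keyed on the base-image SHA-1 so that concurrent boots of the same image fill the cache only once. A minimal sketch (the lock_path value here is an assumption; nova takes it from configuration):

    from oslo_concurrency import lockutils

    name = "d41046c43044bf8997bc5f9ade85627ba841861d"  # base image SHA-1
    # external=True makes this a file lock, so every worker on the host
    # serializes on it; lock_path is illustrative, not the configured value.
    with lockutils.lock(name, external=True, lock_path="/var/lib/nova/tmp"):
        pass  # fetch + qemu-img convert happen here exactly once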
Mar  1 05:06:04 np0005634532 nova_compute[257049]: 2026-03-01 10:06:04.681 257053 DEBUG nova.storage.rbd_utils [None req-a8e37f35-5bc6-4417-8516-e015218e6a3d 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] rbd image 40dfeea3-c0b1-49c0-959b-7a08ceb7035c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Mar  1 05:06:04 np0005634532 nova_compute[257049]: 2026-03-01 10:06:04.684 257053 DEBUG oslo_concurrency.processutils [None req-a8e37f35-5bc6-4417-8516-e015218e6a3d 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/d41046c43044bf8997bc5f9ade85627ba841861d 40dfeea3-c0b1-49c0-959b-7a08ceb7035c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Mar  1 05:06:04 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:06:04 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:06:04 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:06:04.825 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:06:04 np0005634532 nova_compute[257049]: 2026-03-01 10:06:04.935 257053 DEBUG oslo_concurrency.processutils [None req-a8e37f35-5bc6-4417-8516-e015218e6a3d 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/d41046c43044bf8997bc5f9ade85627ba841861d 40dfeea3-c0b1-49c0-959b-7a08ceb7035c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.251s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
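The import above pushes the raw base file into the Ceph `vms` pool under the instance's disk name. The equivalent standalone invocation, reusing the exact flags from the log:

    # Re-running the rbd import from the log as a standalone subprocess.
    import subprocess

    subprocess.run(
        ["rbd", "import", "--pool", "vms",
         "/var/lib/nova/instances/_base/d41046c43044bf8997bc5f9ade85627ba841861d",
         "40dfeea3-c0b1-49c0-959b-7a08ceb7035c_disk",
         "--image-format=2", "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
        check=True)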
Mar  1 05:06:04 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v708: 353 pgs: 353 active+clean; 121 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 8.4 KiB/s rd, 3.4 KiB/s wr, 4 op/s
Mar  1 05:06:05 np0005634532 nova_compute[257049]: 2026-03-01 10:06:05.015 257053 DEBUG nova.storage.rbd_utils [None req-a8e37f35-5bc6-4417-8516-e015218e6a3d 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] resizing rbd image 40dfeea3-c0b1-49c0-959b-7a08ceb7035c_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
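nova then grows the image to the flavor's 1 GiB root disk through the python-rbd binding rather than the CLI. A minimal equivalent, assuming the `openstack` client keyring is readable by the caller:

    import rados
    import rbd

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf", rados_id="openstack")
    cluster.connect()
    try:
        ioctx = cluster.open_ioctx("vms")
        try:
            with rbd.Image(ioctx, "40dfeea3-c0b1-49c0-959b-7a08ceb7035c_disk") as image:
                image.resize(1073741824)  # flavor root_gb=1 -> bytes, as logged
        finally:
            ioctx.close()
    finally:
        cluster.shutdown()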
Mar  1 05:06:05 np0005634532 nova_compute[257049]: 2026-03-01 10:06:05.123 257053 DEBUG nova.objects.instance [None req-a8e37f35-5bc6-4417-8516-e015218e6a3d 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Lazy-loading 'migration_context' on Instance uuid 40dfeea3-c0b1-49c0-959b-7a08ceb7035c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Mar  1 05:06:05 np0005634532 nova_compute[257049]: 2026-03-01 10:06:05.140 257053 DEBUG nova.virt.libvirt.driver [None req-a8e37f35-5bc6-4417-8516-e015218e6a3d 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] [instance: 40dfeea3-c0b1-49c0-959b-7a08ceb7035c] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Mar  1 05:06:05 np0005634532 nova_compute[257049]: 2026-03-01 10:06:05.140 257053 DEBUG nova.virt.libvirt.driver [None req-a8e37f35-5bc6-4417-8516-e015218e6a3d 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] [instance: 40dfeea3-c0b1-49c0-959b-7a08ceb7035c] Ensure instance console log exists: /var/lib/nova/instances/40dfeea3-c0b1-49c0-959b-7a08ceb7035c/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Mar  1 05:06:05 np0005634532 nova_compute[257049]: 2026-03-01 10:06:05.141 257053 DEBUG oslo_concurrency.lockutils [None req-a8e37f35-5bc6-4417-8516-e015218e6a3d 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Mar  1 05:06:05 np0005634532 nova_compute[257049]: 2026-03-01 10:06:05.141 257053 DEBUG oslo_concurrency.lockutils [None req-a8e37f35-5bc6-4417-8516-e015218e6a3d 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Mar  1 05:06:05 np0005634532 nova_compute[257049]: 2026-03-01 10:06:05.141 257053 DEBUG oslo_concurrency.lockutils [None req-a8e37f35-5bc6-4417-8516-e015218e6a3d 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Mar  1 05:06:05 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:06:05 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:06:05 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:06:05.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
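The recurring anonymous `HEAD / HTTP/1.0` entries from radosgw are load-balancer health probes against the beast frontend, one per controller address. The probe is trivially reproducible (the gateway port is an assumption; the log does not record it):

    import http.client

    conn = http.client.HTTPConnection("192.168.122.100", 8080, timeout=5)  # port assumed
    conn.request("HEAD", "/")
    print(conn.getresponse().status)  # 200, matching the beast log lines
    conn.close()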
Mar  1 05:06:05 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:06:05 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_43] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8580031e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:06:05 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:06:05 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_43] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff878002490 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:06:06 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:06:06 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_42] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff864002800 fd 48 proxy header rest len failed header rlen = % (will set dead)
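The repeating svc_vc_recv events suggest this ganesha listener expects the HAProxy PROXY protocol and kills connections whose first bytes are not a valid PROXY header, which is what a bare TCP health probe looks like to it. A proxied client would prepend the v1 preamble before any RPC traffic; a sketch with placeholder addresses:

    import socket

    # Addresses and ports below are placeholders for illustration only.
    s = socket.create_connection(("compute-0.ctlplane.example.com", 2049), timeout=5)
    s.sendall(b"PROXY TCP4 192.168.122.100 192.168.122.102 40000 2049\r\n")
    # ... RPC traffic would follow; a probe that omits this preamble is the
    # kind of connection the log shows being marked "(will set dead)".
    s.close()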
Mar  1 05:06:06 np0005634532 podman[262671]: 2026-03-01 10:06:06.342629496 +0000 UTC m=+0.039725009 container health_status 1df4dec302d8b40bce93714986cfc892519e8a31436399020d0034ae17ac64d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a5baae9e21dffd42c66f546b0b62f95c6af9f409d05e01e7483de42d5dd2a382, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20260223, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '08b9467e1b7e95537191fb2fa6825d176e65d7d6be048e9a0925db064969bbde-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a5baae9e21dffd42c66f546b0b62f95c6af9f409d05e01e7483de42d5dd2a382', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Mar  1 05:06:06 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:06:06.362 167541 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=4, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ba:77:84', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'd2:e0:96:ea:56:89'}, ipsec=False) old=SB_Global(nb_cfg=3) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Mar  1 05:06:06 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:06:06.363 167541 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Mar  1 05:06:06 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:06:06 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:06:06 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:06:06.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:06:06 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:06:06 np0005634532 nova_compute[257049]: 2026-03-01 10:06:06.897 257053 DEBUG nova.network.neutron [None req-a8e37f35-5bc6-4417-8516-e015218e6a3d 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] [instance: 40dfeea3-c0b1-49c0-959b-7a08ceb7035c] Successfully updated port: 18710daa-8d5e-46b6-b666-18b4e461fca4 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Mar  1 05:06:06 np0005634532 nova_compute[257049]: 2026-03-01 10:06:06.911 257053 DEBUG oslo_concurrency.lockutils [None req-a8e37f35-5bc6-4417-8516-e015218e6a3d 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Acquiring lock "refresh_cache-40dfeea3-c0b1-49c0-959b-7a08ceb7035c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Mar  1 05:06:06 np0005634532 nova_compute[257049]: 2026-03-01 10:06:06.912 257053 DEBUG oslo_concurrency.lockutils [None req-a8e37f35-5bc6-4417-8516-e015218e6a3d 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Acquired lock "refresh_cache-40dfeea3-c0b1-49c0-959b-7a08ceb7035c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Mar  1 05:06:06 np0005634532 nova_compute[257049]: 2026-03-01 10:06:06.912 257053 DEBUG nova.network.neutron [None req-a8e37f35-5bc6-4417-8516-e015218e6a3d 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] [instance: 40dfeea3-c0b1-49c0-959b-7a08ceb7035c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Mar  1 05:06:06 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v709: 353 pgs: 353 active+clean; 121 MiB data, 280 MiB used, 60 GiB / 60 GiB avail; 2.8 KiB/s rd, 3.3 KiB/s wr, 3 op/s
Mar  1 05:06:06 np0005634532 nova_compute[257049]: 2026-03-01 10:06:06.992 257053 DEBUG nova.compute.manager [req-b0b6b7a2-8887-4e69-97f2-d7876d6b751d req-27446146-85d9-44b7-ba1f-18165746a286 4172bfcb2ac44ccc905f3929f41a6ec0 b5baddd982154c5d8ca5431b102a7ecd - - default default] [instance: 40dfeea3-c0b1-49c0-959b-7a08ceb7035c] Received event network-changed-18710daa-8d5e-46b6-b666-18b4e461fca4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Mar  1 05:06:06 np0005634532 nova_compute[257049]: 2026-03-01 10:06:06.993 257053 DEBUG nova.compute.manager [req-b0b6b7a2-8887-4e69-97f2-d7876d6b751d req-27446146-85d9-44b7-ba1f-18165746a286 4172bfcb2ac44ccc905f3929f41a6ec0 b5baddd982154c5d8ca5431b102a7ecd - - default default] [instance: 40dfeea3-c0b1-49c0-959b-7a08ceb7035c] Refreshing instance network info cache due to event network-changed-18710daa-8d5e-46b6-b666-18b4e461fca4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Mar  1 05:06:06 np0005634532 nova_compute[257049]: 2026-03-01 10:06:06.993 257053 DEBUG oslo_concurrency.lockutils [req-b0b6b7a2-8887-4e69-97f2-d7876d6b751d req-27446146-85d9-44b7-ba1f-18165746a286 4172bfcb2ac44ccc905f3929f41a6ec0 b5baddd982154c5d8ca5431b102a7ecd - - default default] Acquiring lock "refresh_cache-40dfeea3-c0b1-49c0-959b-7a08ceb7035c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Mar  1 05:06:07 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: ::ffff:192.168.122.100 - - [01/Mar/2026:10:06:07] "GET /metrics HTTP/1.1" 200 48460 "" "Prometheus/2.51.0"
Mar  1 05:06:07 np0005634532 ceph-mgr[76134]: [prometheus INFO cherrypy.access.140604152982112] ::ffff:192.168.122.100 - - [01/Mar/2026:10:06:07] "GET /metrics HTTP/1.1" 200 48460 "" "Prometheus/2.51.0"
Mar  1 05:06:07 np0005634532 nova_compute[257049]: 2026-03-01 10:06:07.068 257053 DEBUG nova.network.neutron [None req-a8e37f35-5bc6-4417-8516-e015218e6a3d 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] [instance: 40dfeea3-c0b1-49c0-959b-7a08ceb7035c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Mar  1 05:06:07 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:06:07.201Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Mar  1 05:06:07 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:06:07.201Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Mar  1 05:06:07 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:06:07.365 167541 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=90b7dc66-b984-4d8b-9541-ddde79c5f544, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '4'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
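That transaction is the metadata agent acknowledging nb_cfg=4 by writing it back into Chassis_Private.external_ids. With ovsdbapp the same write is a single db_set; a sketch, where `api` stands for an already-connected southbound backend:

    def ack_nb_cfg(api, chassis_uuid, nb_cfg):
        # `api` is assumed to be a connected ovsdbapp OVN southbound backend;
        # the agent's actual DbSetCommand also passes if_exists=True.
        api.db_set(
            "Chassis_Private", chassis_uuid,
            ("external_ids", {"neutron:ovn-metadata-sb-cfg": str(nb_cfg)}),
        ).execute(check_error=True)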
Mar  1 05:06:07 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:06:07 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:06:07 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:06:07.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:06:07 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:06:07 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_39] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8580031e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:06:07 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:06:07 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_41] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8840012e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:06:08 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:06:08 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_43] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff878002490 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:06:08 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:06:08 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:06:08 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:06:08.831 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:06:08 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v710: 353 pgs: 353 active+clean; 167 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 34 op/s
Mar  1 05:06:09 np0005634532 nova_compute[257049]: 2026-03-01 10:06:09.193 257053 DEBUG nova.network.neutron [None req-a8e37f35-5bc6-4417-8516-e015218e6a3d 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] [instance: 40dfeea3-c0b1-49c0-959b-7a08ceb7035c] Updating instance_info_cache with network_info: [{"id": "18710daa-8d5e-46b6-b666-18b4e461fca4", "address": "fa:16:3e:9c:b7:e6", "network": {"id": "a5627193-ae81-4a0c-8614-ca8ee1d557da", "bridge": "br-int", "label": "tempest-network-smoke--1604495469", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.25", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aa1916e2334f470ea8eeda213ef84cc5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap18710daa-8d", "ovs_interfaceid": "18710daa-8d5e-46b6-b666-18b4e461fca4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Mar  1 05:06:09 np0005634532 nova_compute[257049]: 2026-03-01 10:06:09.213 257053 DEBUG oslo_concurrency.lockutils [None req-a8e37f35-5bc6-4417-8516-e015218e6a3d 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Releasing lock "refresh_cache-40dfeea3-c0b1-49c0-959b-7a08ceb7035c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Mar  1 05:06:09 np0005634532 nova_compute[257049]: 2026-03-01 10:06:09.214 257053 DEBUG nova.compute.manager [None req-a8e37f35-5bc6-4417-8516-e015218e6a3d 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] [instance: 40dfeea3-c0b1-49c0-959b-7a08ceb7035c] Instance network_info: |[{"id": "18710daa-8d5e-46b6-b666-18b4e461fca4", "address": "fa:16:3e:9c:b7:e6", "network": {"id": "a5627193-ae81-4a0c-8614-ca8ee1d557da", "bridge": "br-int", "label": "tempest-network-smoke--1604495469", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.25", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aa1916e2334f470ea8eeda213ef84cc5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap18710daa-8d", "ovs_interfaceid": "18710daa-8d5e-46b6-b666-18b4e461fca4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
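The network_info blob logged between the |...| delimiters is plain JSON, so the bound addresses and MTU can be pulled out directly (assuming the blob has been saved to a file):

    import json

    with open("network_info.json") as f:  # the |[ ... ]| blob, delimiters stripped
        network_info = json.load(f)
    for vif in network_info:
        ips = [ip["address"]
               for subnet in vif["network"]["subnets"]
               for ip in subnet["ips"]]
        print(vif["id"], vif["address"], ips, vif["network"]["meta"]["mtu"])
    # -> 18710daa-... fa:16:3e:9c:b7:e6 ['10.100.0.25'] 1442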
Mar  1 05:06:09 np0005634532 nova_compute[257049]: 2026-03-01 10:06:09.214 257053 DEBUG oslo_concurrency.lockutils [req-b0b6b7a2-8887-4e69-97f2-d7876d6b751d req-27446146-85d9-44b7-ba1f-18165746a286 4172bfcb2ac44ccc905f3929f41a6ec0 b5baddd982154c5d8ca5431b102a7ecd - - default default] Acquired lock "refresh_cache-40dfeea3-c0b1-49c0-959b-7a08ceb7035c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Mar  1 05:06:09 np0005634532 nova_compute[257049]: 2026-03-01 10:06:09.214 257053 DEBUG nova.network.neutron [req-b0b6b7a2-8887-4e69-97f2-d7876d6b751d req-27446146-85d9-44b7-ba1f-18165746a286 4172bfcb2ac44ccc905f3929f41a6ec0 b5baddd982154c5d8ca5431b102a7ecd - - default default] [instance: 40dfeea3-c0b1-49c0-959b-7a08ceb7035c] Refreshing network info cache for port 18710daa-8d5e-46b6-b666-18b4e461fca4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Mar  1 05:06:09 np0005634532 nova_compute[257049]: 2026-03-01 10:06:09.220 257053 DEBUG nova.virt.libvirt.driver [None req-a8e37f35-5bc6-4417-8516-e015218e6a3d 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] [instance: 40dfeea3-c0b1-49c0-959b-7a08ceb7035c] Start _get_guest_xml network_info=[{"id": "18710daa-8d5e-46b6-b666-18b4e461fca4", "address": "fa:16:3e:9c:b7:e6", "network": {"id": "a5627193-ae81-4a0c-8614-ca8ee1d557da", "bridge": "br-int", "label": "tempest-network-smoke--1604495469", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.25", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aa1916e2334f470ea8eeda213ef84cc5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap18710daa-8d", "ovs_interfaceid": "18710daa-8d5e-46b6-b666-18b4e461fca4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-03-01T10:04:37Z,direct_url=<?>,disk_format='qcow2',id=07f64171-cfd1-4482-a545-07063cf7c3f2,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4d09211c005246538db05e74184b7e61',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-03-01T10:04:40Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'boot_index': 0, 'guest_format': None, 'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'encryption_format': None, 'image_id': '07f64171-cfd1-4482-a545-07063cf7c3f2'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Mar  1 05:06:09 np0005634532 nova_compute[257049]: 2026-03-01 10:06:09.224 257053 WARNING nova.virt.libvirt.driver [None req-a8e37f35-5bc6-4417-8516-e015218e6a3d 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Mar  1 05:06:09 np0005634532 nova_compute[257049]: 2026-03-01 10:06:09.231 257053 DEBUG nova.virt.libvirt.host [None req-a8e37f35-5bc6-4417-8516-e015218e6a3d 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Mar  1 05:06:09 np0005634532 nova_compute[257049]: 2026-03-01 10:06:09.231 257053 DEBUG nova.virt.libvirt.host [None req-a8e37f35-5bc6-4417-8516-e015218e6a3d 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Mar  1 05:06:09 np0005634532 nova_compute[257049]: 2026-03-01 10:06:09.235 257053 DEBUG nova.virt.libvirt.host [None req-a8e37f35-5bc6-4417-8516-e015218e6a3d 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Mar  1 05:06:09 np0005634532 nova_compute[257049]: 2026-03-01 10:06:09.235 257053 DEBUG nova.virt.libvirt.host [None req-a8e37f35-5bc6-4417-8516-e015218e6a3d 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
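The two probes above look for a CPU controller first under cgroups v1 and then v2; on this host only the v2 check succeeds. The v2 half reduces to reading one file:

    def has_cgroupsv2_cpu_controller(root="/sys/fs/cgroup"):
        # On a unified-hierarchy host the enabled controllers sit in one file.
        try:
            with open(root + "/cgroup.controllers") as f:
                return "cpu" in f.read().split()
        except FileNotFoundError:
            return False  # no v2 unified hierarchy mounted

    print(has_cgroupsv2_cpu_controller())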
Mar  1 05:06:09 np0005634532 nova_compute[257049]: 2026-03-01 10:06:09.236 257053 DEBUG nova.virt.libvirt.driver [None req-a8e37f35-5bc6-4417-8516-e015218e6a3d 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Mar  1 05:06:09 np0005634532 nova_compute[257049]: 2026-03-01 10:06:09.236 257053 DEBUG nova.virt.hardware [None req-a8e37f35-5bc6-4417-8516-e015218e6a3d 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-03-01T10:04:35Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='47cd4c38-4c43-414c-bd62-23cc1dc66486',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-03-01T10:04:37Z,direct_url=<?>,disk_format='qcow2',id=07f64171-cfd1-4482-a545-07063cf7c3f2,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4d09211c005246538db05e74184b7e61',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-03-01T10:04:40Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Mar  1 05:06:09 np0005634532 nova_compute[257049]: 2026-03-01 10:06:09.237 257053 DEBUG nova.virt.hardware [None req-a8e37f35-5bc6-4417-8516-e015218e6a3d 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Mar  1 05:06:09 np0005634532 nova_compute[257049]: 2026-03-01 10:06:09.237 257053 DEBUG nova.virt.hardware [None req-a8e37f35-5bc6-4417-8516-e015218e6a3d 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Mar  1 05:06:09 np0005634532 nova_compute[257049]: 2026-03-01 10:06:09.237 257053 DEBUG nova.virt.hardware [None req-a8e37f35-5bc6-4417-8516-e015218e6a3d 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Mar  1 05:06:09 np0005634532 nova_compute[257049]: 2026-03-01 10:06:09.238 257053 DEBUG nova.virt.hardware [None req-a8e37f35-5bc6-4417-8516-e015218e6a3d 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Mar  1 05:06:09 np0005634532 nova_compute[257049]: 2026-03-01 10:06:09.238 257053 DEBUG nova.virt.hardware [None req-a8e37f35-5bc6-4417-8516-e015218e6a3d 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Mar  1 05:06:09 np0005634532 nova_compute[257049]: 2026-03-01 10:06:09.238 257053 DEBUG nova.virt.hardware [None req-a8e37f35-5bc6-4417-8516-e015218e6a3d 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Mar  1 05:06:09 np0005634532 nova_compute[257049]: 2026-03-01 10:06:09.239 257053 DEBUG nova.virt.hardware [None req-a8e37f35-5bc6-4417-8516-e015218e6a3d 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Mar  1 05:06:09 np0005634532 nova_compute[257049]: 2026-03-01 10:06:09.239 257053 DEBUG nova.virt.hardware [None req-a8e37f35-5bc6-4417-8516-e015218e6a3d 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Mar  1 05:06:09 np0005634532 nova_compute[257049]: 2026-03-01 10:06:09.239 257053 DEBUG nova.virt.hardware [None req-a8e37f35-5bc6-4417-8516-e015218e6a3d 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Mar  1 05:06:09 np0005634532 nova_compute[257049]: 2026-03-01 10:06:09.240 257053 DEBUG nova.virt.hardware [None req-a8e37f35-5bc6-4417-8516-e015218e6a3d 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
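With all flavor and image preferences at 0:0:0, the topology search is just an enumeration of (sockets, cores, threads) factorizations of the vCPU count within the 65536 limits; for 1 vCPU that leaves only 1:1:1. A simplified sketch of the enumeration (nova's real ordering logic is more involved):

    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536,
                            max_threads=65536):
        # Every factorization of the vCPU count within the limits qualifies.
        for s in range(1, min(vcpus, max_sockets) + 1):
            for c in range(1, min(vcpus, max_cores) + 1):
                for t in range(1, min(vcpus, max_threads) + 1):
                    if s * c * t == vcpus:
                        yield (s, c, t)

    print(list(possible_topologies(1)))  # [(1, 1, 1)]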
Mar  1 05:06:09 np0005634532 nova_compute[257049]: 2026-03-01 10:06:09.245 257053 DEBUG nova.privsep.utils [None req-a8e37f35-5bc6-4417-8516-e015218e6a3d 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63#033[00m
Mar  1 05:06:09 np0005634532 nova_compute[257049]: 2026-03-01 10:06:09.246 257053 DEBUG oslo_concurrency.processutils [None req-a8e37f35-5bc6-4417-8516-e015218e6a3d 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Mar  1 05:06:09 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:06:09 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000024s ======
Mar  1 05:06:09 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:06:09.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Mar  1 05:06:09 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:06:09 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_42] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff864002800 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:06:09 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Mar  1 05:06:09 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3818330970' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Mar  1 05:06:09 np0005634532 nova_compute[257049]: 2026-03-01 10:06:09.673 257053 DEBUG oslo_concurrency.processutils [None req-a8e37f35-5bc6-4417-8516-e015218e6a3d 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.427s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Mar  1 05:06:09 np0005634532 nova_compute[257049]: 2026-03-01 10:06:09.703 257053 DEBUG nova.storage.rbd_utils [None req-a8e37f35-5bc6-4417-8516-e015218e6a3d 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] rbd image 40dfeea3-c0b1-49c0-959b-7a08ceb7035c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Mar  1 05:06:09 np0005634532 nova_compute[257049]: 2026-03-01 10:06:09.707 257053 DEBUG oslo_concurrency.processutils [None req-a8e37f35-5bc6-4417-8516-e015218e6a3d 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Mar  1 05:06:09 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:06:09 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_39] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8580031e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:06:10 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Mar  1 05:06:10 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2036484175' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Mar  1 05:06:10 np0005634532 nova_compute[257049]: 2026-03-01 10:06:10.127 257053 DEBUG oslo_concurrency.processutils [None req-a8e37f35-5bc6-4417-8516-e015218e6a3d 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.420s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
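The `ceph mon dump` calls feed the monitor list that shows up as the three <host> entries in the guest XML below. Parsing the same output:

    import json
    import subprocess

    out = subprocess.run(
        ["ceph", "mon", "dump", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
        check=True, capture_output=True, text=True).stdout
    for mon in json.loads(out)["mons"]:
        print(mon["name"], mon["public_addr"])  # e.g. 192.168.122.100:6789/0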
Mar  1 05:06:10 np0005634532 nova_compute[257049]: 2026-03-01 10:06:10.128 257053 DEBUG nova.virt.libvirt.vif [None req-a8e37f35-5bc6-4417-8516-e015218e6a3d 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-03-01T10:06:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1526607679',display_name='tempest-TestNetworkBasicOps-server-1526607679',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1526607679',id=2,image_ref='07f64171-cfd1-4482-a545-07063cf7c3f2',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPOo6IiOdgl08MDX0YTwpAsCaTPrYIkzkU1Ftv4CN2J5/2ENMci/xJ9cEgaU2o/8KJxbYBsQwJafBOlW5S2iIz7UCJ7gVSyLn/I+QptJTMWQZLaNk8wlBiSePC39pcVr5w==',key_name='tempest-TestNetworkBasicOps-1335120467',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='aa1916e2334f470ea8eeda213ef84cc5',ramdisk_id='',reservation_id='r-9w4oienf',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='07f64171-cfd1-4482-a545-07063cf7c3f2',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1700707940',owner_user_name='tempest-TestNetworkBasicOps-1700707940-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-03-01T10:06:02Z,user_data=None,user_id='054b4e3fa290475c906614f7e45d128f',uuid=40dfeea3-c0b1-49c0-959b-7a08ceb7035c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "18710daa-8d5e-46b6-b666-18b4e461fca4", "address": "fa:16:3e:9c:b7:e6", "network": {"id": "a5627193-ae81-4a0c-8614-ca8ee1d557da", "bridge": "br-int", "label": "tempest-network-smoke--1604495469", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.25", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aa1916e2334f470ea8eeda213ef84cc5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap18710daa-8d", "ovs_interfaceid": "18710daa-8d5e-46b6-b666-18b4e461fca4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Mar  1 05:06:10 np0005634532 nova_compute[257049]: 2026-03-01 10:06:10.129 257053 DEBUG nova.network.os_vif_util [None req-a8e37f35-5bc6-4417-8516-e015218e6a3d 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Converting VIF {"id": "18710daa-8d5e-46b6-b666-18b4e461fca4", "address": "fa:16:3e:9c:b7:e6", "network": {"id": "a5627193-ae81-4a0c-8614-ca8ee1d557da", "bridge": "br-int", "label": "tempest-network-smoke--1604495469", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.25", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aa1916e2334f470ea8eeda213ef84cc5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap18710daa-8d", "ovs_interfaceid": "18710daa-8d5e-46b6-b666-18b4e461fca4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Mar  1 05:06:10 np0005634532 nova_compute[257049]: 2026-03-01 10:06:10.130 257053 DEBUG nova.network.os_vif_util [None req-a8e37f35-5bc6-4417-8516-e015218e6a3d 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9c:b7:e6,bridge_name='br-int',has_traffic_filtering=True,id=18710daa-8d5e-46b6-b666-18b4e461fca4,network=Network(a5627193-ae81-4a0c-8614-ca8ee1d557da),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap18710daa-8d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Mar  1 05:06:10 np0005634532 nova_compute[257049]: 2026-03-01 10:06:10.131 257053 DEBUG nova.objects.instance [None req-a8e37f35-5bc6-4417-8516-e015218e6a3d 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Lazy-loading 'pci_devices' on Instance uuid 40dfeea3-c0b1-49c0-959b-7a08ceb7035c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Mar  1 05:06:10 np0005634532 nova_compute[257049]: 2026-03-01 10:06:10.145 257053 DEBUG nova.virt.libvirt.driver [None req-a8e37f35-5bc6-4417-8516-e015218e6a3d 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] [instance: 40dfeea3-c0b1-49c0-959b-7a08ceb7035c] End _get_guest_xml xml=<domain type="kvm">
Mar  1 05:06:10 np0005634532 nova_compute[257049]:  <uuid>40dfeea3-c0b1-49c0-959b-7a08ceb7035c</uuid>
Mar  1 05:06:10 np0005634532 nova_compute[257049]:  <name>instance-00000002</name>
Mar  1 05:06:10 np0005634532 nova_compute[257049]:  <memory>131072</memory>
Mar  1 05:06:10 np0005634532 nova_compute[257049]:  <vcpu>1</vcpu>
Mar  1 05:06:10 np0005634532 nova_compute[257049]:  <metadata>
Mar  1 05:06:10 np0005634532 nova_compute[257049]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Mar  1 05:06:10 np0005634532 nova_compute[257049]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Mar  1 05:06:10 np0005634532 nova_compute[257049]:      <nova:name>tempest-TestNetworkBasicOps-server-1526607679</nova:name>
Mar  1 05:06:10 np0005634532 nova_compute[257049]:      <nova:creationTime>2026-03-01 10:06:09</nova:creationTime>
Mar  1 05:06:10 np0005634532 nova_compute[257049]:      <nova:flavor name="m1.nano">
Mar  1 05:06:10 np0005634532 nova_compute[257049]:        <nova:memory>128</nova:memory>
Mar  1 05:06:10 np0005634532 nova_compute[257049]:        <nova:disk>1</nova:disk>
Mar  1 05:06:10 np0005634532 nova_compute[257049]:        <nova:swap>0</nova:swap>
Mar  1 05:06:10 np0005634532 nova_compute[257049]:        <nova:ephemeral>0</nova:ephemeral>
Mar  1 05:06:10 np0005634532 nova_compute[257049]:        <nova:vcpus>1</nova:vcpus>
Mar  1 05:06:10 np0005634532 nova_compute[257049]:      </nova:flavor>
Mar  1 05:06:10 np0005634532 nova_compute[257049]:      <nova:owner>
Mar  1 05:06:10 np0005634532 nova_compute[257049]:        <nova:user uuid="054b4e3fa290475c906614f7e45d128f">tempest-TestNetworkBasicOps-1700707940-project-member</nova:user>
Mar  1 05:06:10 np0005634532 nova_compute[257049]:        <nova:project uuid="aa1916e2334f470ea8eeda213ef84cc5">tempest-TestNetworkBasicOps-1700707940</nova:project>
Mar  1 05:06:10 np0005634532 nova_compute[257049]:      </nova:owner>
Mar  1 05:06:10 np0005634532 nova_compute[257049]:      <nova:root type="image" uuid="07f64171-cfd1-4482-a545-07063cf7c3f2"/>
Mar  1 05:06:10 np0005634532 nova_compute[257049]:      <nova:ports>
Mar  1 05:06:10 np0005634532 nova_compute[257049]:        <nova:port uuid="18710daa-8d5e-46b6-b666-18b4e461fca4">
Mar  1 05:06:10 np0005634532 nova_compute[257049]:          <nova:ip type="fixed" address="10.100.0.25" ipVersion="4"/>
Mar  1 05:06:10 np0005634532 nova_compute[257049]:        </nova:port>
Mar  1 05:06:10 np0005634532 nova_compute[257049]:      </nova:ports>
Mar  1 05:06:10 np0005634532 nova_compute[257049]:    </nova:instance>
Mar  1 05:06:10 np0005634532 nova_compute[257049]:  </metadata>
Mar  1 05:06:10 np0005634532 nova_compute[257049]:  <sysinfo type="smbios">
Mar  1 05:06:10 np0005634532 nova_compute[257049]:    <system>
Mar  1 05:06:10 np0005634532 nova_compute[257049]:      <entry name="manufacturer">RDO</entry>
Mar  1 05:06:10 np0005634532 nova_compute[257049]:      <entry name="product">OpenStack Compute</entry>
Mar  1 05:06:10 np0005634532 nova_compute[257049]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Mar  1 05:06:10 np0005634532 nova_compute[257049]:      <entry name="serial">40dfeea3-c0b1-49c0-959b-7a08ceb7035c</entry>
Mar  1 05:06:10 np0005634532 nova_compute[257049]:      <entry name="uuid">40dfeea3-c0b1-49c0-959b-7a08ceb7035c</entry>
Mar  1 05:06:10 np0005634532 nova_compute[257049]:      <entry name="family">Virtual Machine</entry>
Mar  1 05:06:10 np0005634532 nova_compute[257049]:    </system>
Mar  1 05:06:10 np0005634532 nova_compute[257049]:  </sysinfo>
Mar  1 05:06:10 np0005634532 nova_compute[257049]:  <os>
Mar  1 05:06:10 np0005634532 nova_compute[257049]:    <type arch="x86_64" machine="q35">hvm</type>
Mar  1 05:06:10 np0005634532 nova_compute[257049]:    <boot dev="hd"/>
Mar  1 05:06:10 np0005634532 nova_compute[257049]:    <smbios mode="sysinfo"/>
Mar  1 05:06:10 np0005634532 nova_compute[257049]:  </os>
Mar  1 05:06:10 np0005634532 nova_compute[257049]:  <features>
Mar  1 05:06:10 np0005634532 nova_compute[257049]:    <acpi/>
Mar  1 05:06:10 np0005634532 nova_compute[257049]:    <apic/>
Mar  1 05:06:10 np0005634532 nova_compute[257049]:    <vmcoreinfo/>
Mar  1 05:06:10 np0005634532 nova_compute[257049]:  </features>
Mar  1 05:06:10 np0005634532 nova_compute[257049]:  <clock offset="utc">
Mar  1 05:06:10 np0005634532 nova_compute[257049]:    <timer name="pit" tickpolicy="delay"/>
Mar  1 05:06:10 np0005634532 nova_compute[257049]:    <timer name="rtc" tickpolicy="catchup"/>
Mar  1 05:06:10 np0005634532 nova_compute[257049]:    <timer name="hpet" present="no"/>
Mar  1 05:06:10 np0005634532 nova_compute[257049]:  </clock>
Mar  1 05:06:10 np0005634532 nova_compute[257049]:  <cpu mode="host-model" match="exact">
Mar  1 05:06:10 np0005634532 nova_compute[257049]:    <topology sockets="1" cores="1" threads="1"/>
Mar  1 05:06:10 np0005634532 nova_compute[257049]:  </cpu>
Mar  1 05:06:10 np0005634532 nova_compute[257049]:  <devices>
Mar  1 05:06:10 np0005634532 nova_compute[257049]:    <disk type="network" device="disk">
Mar  1 05:06:10 np0005634532 nova_compute[257049]:      <driver type="raw" cache="none"/>
Mar  1 05:06:10 np0005634532 nova_compute[257049]:      <source protocol="rbd" name="vms/40dfeea3-c0b1-49c0-959b-7a08ceb7035c_disk">
Mar  1 05:06:10 np0005634532 nova_compute[257049]:        <host name="192.168.122.100" port="6789"/>
Mar  1 05:06:10 np0005634532 nova_compute[257049]:        <host name="192.168.122.102" port="6789"/>
Mar  1 05:06:10 np0005634532 nova_compute[257049]:        <host name="192.168.122.101" port="6789"/>
Mar  1 05:06:10 np0005634532 nova_compute[257049]:      </source>
Mar  1 05:06:10 np0005634532 nova_compute[257049]:      <auth username="openstack">
Mar  1 05:06:10 np0005634532 nova_compute[257049]:        <secret type="ceph" uuid="437b1e74-f995-5d64-af1d-257ce01d77ab"/>
Mar  1 05:06:10 np0005634532 nova_compute[257049]:      </auth>
Mar  1 05:06:10 np0005634532 nova_compute[257049]:      <target dev="vda" bus="virtio"/>
Mar  1 05:06:10 np0005634532 nova_compute[257049]:    </disk>
Mar  1 05:06:10 np0005634532 nova_compute[257049]:    <disk type="network" device="cdrom">
Mar  1 05:06:10 np0005634532 nova_compute[257049]:      <driver type="raw" cache="none"/>
Mar  1 05:06:10 np0005634532 nova_compute[257049]:      <source protocol="rbd" name="vms/40dfeea3-c0b1-49c0-959b-7a08ceb7035c_disk.config">
Mar  1 05:06:10 np0005634532 nova_compute[257049]:        <host name="192.168.122.100" port="6789"/>
Mar  1 05:06:10 np0005634532 nova_compute[257049]:        <host name="192.168.122.102" port="6789"/>
Mar  1 05:06:10 np0005634532 nova_compute[257049]:        <host name="192.168.122.101" port="6789"/>
Mar  1 05:06:10 np0005634532 nova_compute[257049]:      </source>
Mar  1 05:06:10 np0005634532 nova_compute[257049]:      <auth username="openstack">
Mar  1 05:06:10 np0005634532 nova_compute[257049]:        <secret type="ceph" uuid="437b1e74-f995-5d64-af1d-257ce01d77ab"/>
Mar  1 05:06:10 np0005634532 nova_compute[257049]:      </auth>
Mar  1 05:06:10 np0005634532 nova_compute[257049]:      <target dev="sda" bus="sata"/>
Mar  1 05:06:10 np0005634532 nova_compute[257049]:    </disk>
Mar  1 05:06:10 np0005634532 nova_compute[257049]:    <interface type="ethernet">
Mar  1 05:06:10 np0005634532 nova_compute[257049]:      <mac address="fa:16:3e:9c:b7:e6"/>
Mar  1 05:06:10 np0005634532 nova_compute[257049]:      <model type="virtio"/>
Mar  1 05:06:10 np0005634532 nova_compute[257049]:      <driver name="vhost" rx_queue_size="512"/>
Mar  1 05:06:10 np0005634532 nova_compute[257049]:      <mtu size="1442"/>
Mar  1 05:06:10 np0005634532 nova_compute[257049]:      <target dev="tap18710daa-8d"/>
Mar  1 05:06:10 np0005634532 nova_compute[257049]:    </interface>
Mar  1 05:06:10 np0005634532 nova_compute[257049]:    <serial type="pty">
Mar  1 05:06:10 np0005634532 nova_compute[257049]:      <log file="/var/lib/nova/instances/40dfeea3-c0b1-49c0-959b-7a08ceb7035c/console.log" append="off"/>
Mar  1 05:06:10 np0005634532 nova_compute[257049]:    </serial>
Mar  1 05:06:10 np0005634532 nova_compute[257049]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Mar  1 05:06:10 np0005634532 nova_compute[257049]:    <video>
Mar  1 05:06:10 np0005634532 nova_compute[257049]:      <model type="virtio"/>
Mar  1 05:06:10 np0005634532 nova_compute[257049]:    </video>
Mar  1 05:06:10 np0005634532 nova_compute[257049]:    <input type="tablet" bus="usb"/>
Mar  1 05:06:10 np0005634532 nova_compute[257049]:    <rng model="virtio">
Mar  1 05:06:10 np0005634532 nova_compute[257049]:      <backend model="random">/dev/urandom</backend>
Mar  1 05:06:10 np0005634532 nova_compute[257049]:    </rng>
Mar  1 05:06:10 np0005634532 nova_compute[257049]:    <controller type="pci" model="pcie-root"/>
Mar  1 05:06:10 np0005634532 nova_compute[257049]:    <controller type="pci" model="pcie-root-port"/>
Mar  1 05:06:10 np0005634532 nova_compute[257049]:    <controller type="pci" model="pcie-root-port"/>
Mar  1 05:06:10 np0005634532 nova_compute[257049]:    <controller type="pci" model="pcie-root-port"/>
Mar  1 05:06:10 np0005634532 nova_compute[257049]:    <controller type="pci" model="pcie-root-port"/>
Mar  1 05:06:10 np0005634532 nova_compute[257049]:    <controller type="pci" model="pcie-root-port"/>
Mar  1 05:06:10 np0005634532 nova_compute[257049]:    <controller type="pci" model="pcie-root-port"/>
Mar  1 05:06:10 np0005634532 nova_compute[257049]:    <controller type="pci" model="pcie-root-port"/>
Mar  1 05:06:10 np0005634532 nova_compute[257049]:    <controller type="pci" model="pcie-root-port"/>
Mar  1 05:06:10 np0005634532 nova_compute[257049]:    <controller type="pci" model="pcie-root-port"/>
Mar  1 05:06:10 np0005634532 nova_compute[257049]:    <controller type="pci" model="pcie-root-port"/>
Mar  1 05:06:10 np0005634532 nova_compute[257049]:    <controller type="pci" model="pcie-root-port"/>
Mar  1 05:06:10 np0005634532 nova_compute[257049]:    <controller type="pci" model="pcie-root-port"/>
Mar  1 05:06:10 np0005634532 nova_compute[257049]:    <controller type="pci" model="pcie-root-port"/>
Mar  1 05:06:10 np0005634532 nova_compute[257049]:    <controller type="pci" model="pcie-root-port"/>
Mar  1 05:06:10 np0005634532 nova_compute[257049]:    <controller type="pci" model="pcie-root-port"/>
Mar  1 05:06:10 np0005634532 nova_compute[257049]:    <controller type="pci" model="pcie-root-port"/>
Mar  1 05:06:10 np0005634532 nova_compute[257049]:    <controller type="pci" model="pcie-root-port"/>
Mar  1 05:06:10 np0005634532 nova_compute[257049]:    <controller type="pci" model="pcie-root-port"/>
Mar  1 05:06:10 np0005634532 nova_compute[257049]:    <controller type="pci" model="pcie-root-port"/>
Mar  1 05:06:10 np0005634532 nova_compute[257049]:    <controller type="pci" model="pcie-root-port"/>
Mar  1 05:06:10 np0005634532 nova_compute[257049]:    <controller type="pci" model="pcie-root-port"/>
Mar  1 05:06:10 np0005634532 nova_compute[257049]:    <controller type="pci" model="pcie-root-port"/>
Mar  1 05:06:10 np0005634532 nova_compute[257049]:    <controller type="pci" model="pcie-root-port"/>
Mar  1 05:06:10 np0005634532 nova_compute[257049]:    <controller type="pci" model="pcie-root-port"/>
Mar  1 05:06:10 np0005634532 nova_compute[257049]:    <controller type="usb" index="0"/>
Mar  1 05:06:10 np0005634532 nova_compute[257049]:    <memballoon model="virtio">
Mar  1 05:06:10 np0005634532 nova_compute[257049]:      <stats period="10"/>
Mar  1 05:06:10 np0005634532 nova_compute[257049]:    </memballoon>
Mar  1 05:06:10 np0005634532 nova_compute[257049]:  </devices>
Mar  1 05:06:10 np0005634532 nova_compute[257049]: </domain>
Mar  1 05:06:10 np0005634532 nova_compute[257049]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
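The emitted domain XML can be sanity-checked offline; for example, listing each RBD-backed disk and its monitor hosts with the standard library (assumes the XML was saved to a file):

    import xml.etree.ElementTree as ET

    dom = ET.parse("instance-00000002.xml").getroot()
    for disk in dom.findall("./devices/disk"):
        src = disk.find("source")
        if src is not None and src.get("protocol") == "rbd":
            hosts = [h.get("name") + ":" + h.get("port")
                     for h in src.findall("host")]
            print(src.get("name"), hosts)
    # -> vms/40dfeea3-..._disk ['192.168.122.100:6789', ...]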
Mar  1 05:06:10 np0005634532 nova_compute[257049]: 2026-03-01 10:06:10.146 257053 DEBUG nova.compute.manager [None req-a8e37f35-5bc6-4417-8516-e015218e6a3d 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] [instance: 40dfeea3-c0b1-49c0-959b-7a08ceb7035c] Preparing to wait for external event network-vif-plugged-18710daa-8d5e-46b6-b666-18b4e461fca4 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Mar  1 05:06:10 np0005634532 nova_compute[257049]: 2026-03-01 10:06:10.146 257053 DEBUG oslo_concurrency.lockutils [None req-a8e37f35-5bc6-4417-8516-e015218e6a3d 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Acquiring lock "40dfeea3-c0b1-49c0-959b-7a08ceb7035c-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Mar  1 05:06:10 np0005634532 nova_compute[257049]: 2026-03-01 10:06:10.147 257053 DEBUG oslo_concurrency.lockutils [None req-a8e37f35-5bc6-4417-8516-e015218e6a3d 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Lock "40dfeea3-c0b1-49c0-959b-7a08ceb7035c-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Mar  1 05:06:10 np0005634532 nova_compute[257049]: 2026-03-01 10:06:10.147 257053 DEBUG oslo_concurrency.lockutils [None req-a8e37f35-5bc6-4417-8516-e015218e6a3d 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Lock "40dfeea3-c0b1-49c0-959b-7a08ceb7035c-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
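The lock/prepare dance above is the register-before-trigger pattern: the network-vif-plugged waiter is created before the VIF is plugged, so Neutron's notification cannot be lost to a race. nova does this with eventlet; the same shape with the standard library:

    import threading

    events = {}
    lock = threading.Lock()

    def prepare(tag):
        with lock:  # the "<uuid>-events" lock in the log
            events[tag] = threading.Event()

    def deliver(tag):  # called when the external event arrives
        with lock:
            events[tag].set()

    tag = "network-vif-plugged-18710daa-8d5e-46b6-b666-18b4e461fca4"
    prepare(tag)
    # ... plug VIF / define domain here ...
    deliver(tag)                       # normally done by the Neutron event
    assert events[tag].wait(timeout=300)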
Mar  1 05:06:10 np0005634532 nova_compute[257049]: 2026-03-01 10:06:10.147 257053 DEBUG nova.virt.libvirt.vif [None req-a8e37f35-5bc6-4417-8516-e015218e6a3d 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-03-01T10:06:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1526607679',display_name='tempest-TestNetworkBasicOps-server-1526607679',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1526607679',id=2,image_ref='07f64171-cfd1-4482-a545-07063cf7c3f2',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPOo6IiOdgl08MDX0YTwpAsCaTPrYIkzkU1Ftv4CN2J5/2ENMci/xJ9cEgaU2o/8KJxbYBsQwJafBOlW5S2iIz7UCJ7gVSyLn/I+QptJTMWQZLaNk8wlBiSePC39pcVr5w==',key_name='tempest-TestNetworkBasicOps-1335120467',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='aa1916e2334f470ea8eeda213ef84cc5',ramdisk_id='',reservation_id='r-9w4oienf',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='07f64171-cfd1-4482-a545-07063cf7c3f2',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1700707940',owner_user_name='tempest-TestNetworkBasicOps-1700707940-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-03-01T10:06:02Z,user_data=None,user_id='054b4e3fa290475c906614f7e45d128f',uuid=40dfeea3-c0b1-49c0-959b-7a08ceb7035c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "18710daa-8d5e-46b6-b666-18b4e461fca4", "address": "fa:16:3e:9c:b7:e6", "network": {"id": "a5627193-ae81-4a0c-8614-ca8ee1d557da", "bridge": "br-int", "label": "tempest-network-smoke--1604495469", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.25", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aa1916e2334f470ea8eeda213ef84cc5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap18710daa-8d", "ovs_interfaceid": "18710daa-8d5e-46b6-b666-18b4e461fca4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Mar  1 05:06:10 np0005634532 nova_compute[257049]: 2026-03-01 10:06:10.148 257053 DEBUG nova.network.os_vif_util [None req-a8e37f35-5bc6-4417-8516-e015218e6a3d 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Converting VIF {"id": "18710daa-8d5e-46b6-b666-18b4e461fca4", "address": "fa:16:3e:9c:b7:e6", "network": {"id": "a5627193-ae81-4a0c-8614-ca8ee1d557da", "bridge": "br-int", "label": "tempest-network-smoke--1604495469", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.25", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aa1916e2334f470ea8eeda213ef84cc5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap18710daa-8d", "ovs_interfaceid": "18710daa-8d5e-46b6-b666-18b4e461fca4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Mar  1 05:06:10 np0005634532 nova_compute[257049]: 2026-03-01 10:06:10.148 257053 DEBUG nova.network.os_vif_util [None req-a8e37f35-5bc6-4417-8516-e015218e6a3d 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9c:b7:e6,bridge_name='br-int',has_traffic_filtering=True,id=18710daa-8d5e-46b6-b666-18b4e461fca4,network=Network(a5627193-ae81-4a0c-8614-ca8ee1d557da),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap18710daa-8d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Mar  1 05:06:10 np0005634532 nova_compute[257049]: 2026-03-01 10:06:10.148 257053 DEBUG os_vif [None req-a8e37f35-5bc6-4417-8516-e015218e6a3d 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:9c:b7:e6,bridge_name='br-int',has_traffic_filtering=True,id=18710daa-8d5e-46b6-b666-18b4e461fca4,network=Network(a5627193-ae81-4a0c-8614-ca8ee1d557da),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap18710daa-8d') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
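At this point Nova has converted its VIF model into an os-vif VIFOpenVSwitch and called os_vif.plug(). A hedged sketch of that call, with field names taken from the object repr in the log (exact constructor arguments are indicative, not authoritative):

```python
# Illustrative os-vif plug call; field names mirror the VIFOpenVSwitch
# repr logged above.
import os_vif
from os_vif.objects import instance_info, network, vif

os_vif.initialize()

net = network.Network(id="a5627193-ae81-4a0c-8614-ca8ee1d557da",
                      bridge="br-int")
ovs_vif = vif.VIFOpenVSwitch(
    id="18710daa-8d5e-46b6-b666-18b4e461fca4",
    address="fa:16:3e:9c:b7:e6",
    bridge_name="br-int",
    vif_name="tap18710daa-8d",
    has_traffic_filtering=True,
    preserve_on_delete=False,
    network=net,
    plugin="ovs",
)
info = instance_info.InstanceInfo(
    uuid="40dfeea3-c0b1-49c0-959b-7a08ceb7035c",
    name="instance-00000002")

os_vif.plug(ovs_vif, info)  # success -> "Successfully plugged vif ..." below
```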
Mar  1 05:06:10 np0005634532 nova_compute[257049]: 2026-03-01 10:06:10.176 257053 DEBUG ovsdbapp.backend.ovs_idl [None req-a8e37f35-5bc6-4417-8516-e015218e6a3d 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Mar  1 05:06:10 np0005634532 nova_compute[257049]: 2026-03-01 10:06:10.177 257053 DEBUG ovsdbapp.backend.ovs_idl [None req-a8e37f35-5bc6-4417-8516-e015218e6a3d 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Mar  1 05:06:10 np0005634532 nova_compute[257049]: 2026-03-01 10:06:10.177 257053 DEBUG ovsdbapp.backend.ovs_idl [None req-a8e37f35-5bc6-4417-8516-e015218e6a3d 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Mar  1 05:06:10 np0005634532 nova_compute[257049]: 2026-03-01 10:06:10.177 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-a8e37f35-5bc6-4417-8516-e015218e6a3d 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] tcp:127.0.0.1:6640: entering CONNECTING _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Mar  1 05:06:10 np0005634532 nova_compute[257049]: 2026-03-01 10:06:10.178 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-a8e37f35-5bc6-4417-8516-e015218e6a3d 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] [POLLOUT] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:06:10 np0005634532 nova_compute[257049]: 2026-03-01 10:06:10.178 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-a8e37f35-5bc6-4417-8516-e015218e6a3d 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Mar  1 05:06:10 np0005634532 nova_compute[257049]: 2026-03-01 10:06:10.179 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-a8e37f35-5bc6-4417-8516-e015218e6a3d 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:06:10 np0005634532 nova_compute[257049]: 2026-03-01 10:06:10.180 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-a8e37f35-5bc6-4417-8516-e015218e6a3d 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:06:10 np0005634532 nova_compute[257049]: 2026-03-01 10:06:10.182 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-a8e37f35-5bc6-4417-8516-e015218e6a3d 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:06:10 np0005634532 nova_compute[257049]: 2026-03-01 10:06:10.191 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:06:10 np0005634532 nova_compute[257049]: 2026-03-01 10:06:10.192 257053 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Mar  1 05:06:10 np0005634532 nova_compute[257049]: 2026-03-01 10:06:10.192 257053 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Mar  1 05:06:10 np0005634532 nova_compute[257049]: 2026-03-01 10:06:10.193 257053 INFO oslo.privsep.daemon [None req-a8e37f35-5bc6-4417-8516-e015218e6a3d 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'vif_plug_ovs.privsep.vif_plug', '--privsep_sock_path', '/tmp/tmp35zrnq8p/privsep.sock']#033[00m
Mar  1 05:06:10 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:06:10 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_41] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8840012e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:06:10 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:06:10 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:06:10 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:06:10.833 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:06:10 np0005634532 nova_compute[257049]: 2026-03-01 10:06:10.846 257053 INFO oslo.privsep.daemon [None req-a8e37f35-5bc6-4417-8516-e015218e6a3d 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Spawned new privsep daemon via rootwrap#033[00m
Mar  1 05:06:10 np0005634532 nova_compute[257049]: 2026-03-01 10:06:10.754 262760 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Mar  1 05:06:10 np0005634532 nova_compute[257049]: 2026-03-01 10:06:10.759 262760 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Mar  1 05:06:10 np0005634532 nova_compute[257049]: 2026-03-01 10:06:10.762 262760 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_NET_ADMIN/CAP_DAC_OVERRIDE|CAP_NET_ADMIN/none#033[00m
Mar  1 05:06:10 np0005634532 nova_compute[257049]: 2026-03-01 10:06:10.762 262760 INFO oslo.privsep.daemon [-] privsep daemon running as pid 262760#033[00m
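The burst above is a privsep daemon being forked via sudo/rootwrap for the vif_plug_ovs.privsep.vif_plug context. Such a context is declared roughly as follows with oslo.privsep (the entrypoint shown is a hypothetical example; treat details as indicative):

```python
# How a privsep context of this shape is declared; the capability pair
# matches the "CAP_DAC_OVERRIDE|CAP_NET_ADMIN" line logged above.
from oslo_privsep import capabilities, priv_context

vif_plug = priv_context.PrivContext(
    "vif_plug_ovs",
    cfg_section="vif_plug_ovs_privileged",
    pypath=__name__ + ".vif_plug",
    capabilities=[capabilities.CAP_NET_ADMIN,
                  capabilities.CAP_DAC_OVERRIDE],
)

@vif_plug.entrypoint
def set_device_mtu(ifname, mtu):
    # body runs inside the privsep daemon (uid/gid 0/0, capabilities above);
    # the first such call is what triggers the "Running privsep helper" fork
    ...
```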
Mar  1 05:06:10 np0005634532 nova_compute[257049]: 2026-03-01 10:06:10.936 257053 DEBUG nova.network.neutron [req-b0b6b7a2-8887-4e69-97f2-d7876d6b751d req-27446146-85d9-44b7-ba1f-18165746a286 4172bfcb2ac44ccc905f3929f41a6ec0 b5baddd982154c5d8ca5431b102a7ecd - - default default] [instance: 40dfeea3-c0b1-49c0-959b-7a08ceb7035c] Updated VIF entry in instance network info cache for port 18710daa-8d5e-46b6-b666-18b4e461fca4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Mar  1 05:06:10 np0005634532 nova_compute[257049]: 2026-03-01 10:06:10.937 257053 DEBUG nova.network.neutron [req-b0b6b7a2-8887-4e69-97f2-d7876d6b751d req-27446146-85d9-44b7-ba1f-18165746a286 4172bfcb2ac44ccc905f3929f41a6ec0 b5baddd982154c5d8ca5431b102a7ecd - - default default] [instance: 40dfeea3-c0b1-49c0-959b-7a08ceb7035c] Updating instance_info_cache with network_info: [{"id": "18710daa-8d5e-46b6-b666-18b4e461fca4", "address": "fa:16:3e:9c:b7:e6", "network": {"id": "a5627193-ae81-4a0c-8614-ca8ee1d557da", "bridge": "br-int", "label": "tempest-network-smoke--1604495469", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.25", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aa1916e2334f470ea8eeda213ef84cc5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap18710daa-8d", "ovs_interfaceid": "18710daa-8d5e-46b6-b666-18b4e461fca4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Mar  1 05:06:10 np0005634532 nova_compute[257049]: 2026-03-01 10:06:10.954 257053 DEBUG oslo_concurrency.lockutils [req-b0b6b7a2-8887-4e69-97f2-d7876d6b751d req-27446146-85d9-44b7-ba1f-18165746a286 4172bfcb2ac44ccc905f3929f41a6ec0 b5baddd982154c5d8ca5431b102a7ecd - - default default] Releasing lock "refresh_cache-40dfeea3-c0b1-49c0-959b-7a08ceb7035c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Mar  1 05:06:10 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v711: 353 pgs: 353 active+clean; 167 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 34 op/s
Mar  1 05:06:11 np0005634532 nova_compute[257049]: 2026-03-01 10:06:11.147 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:06:11 np0005634532 nova_compute[257049]: 2026-03-01 10:06:11.148 257053 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap18710daa-8d, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Mar  1 05:06:11 np0005634532 nova_compute[257049]: 2026-03-01 10:06:11.149 257053 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap18710daa-8d, col_values=(('external_ids', {'iface-id': '18710daa-8d5e-46b6-b666-18b4e461fca4', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:9c:b7:e6', 'vm-uuid': '40dfeea3-c0b1-49c0-959b-7a08ceb7035c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
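The transaction above adds the tap device to br-int and stamps the Interface with the external_ids that ovn-controller keys on (iface-id in particular). The same two commands can be issued directly with ovsdbapp against the local switch (a sketch; os-vif drives them through its own connection):

```python
# Sketch: replay of the AddPortCommand + DbSetCommand pair above using
# ovsdbapp against the local ovsdb endpoint seen earlier (tcp:127.0.0.1:6640).
from ovsdbapp.backend.ovs_idl import connection
from ovsdbapp.schema.open_vswitch import impl_idl

idl = connection.OvsdbIdl.from_server("tcp:127.0.0.1:6640", "Open_vSwitch")
api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))

with api.transaction(check_error=True) as txn:
    txn.add(api.add_port("br-int", "tap18710daa-8d", may_exist=True))
    txn.add(api.db_set(
        "Interface", "tap18710daa-8d",
        ("external_ids", {
            "iface-id": "18710daa-8d5e-46b6-b666-18b4e461fca4",  # OVN lport
            "iface-status": "active",
            "attached-mac": "fa:16:3e:9c:b7:e6",
            "vm-uuid": "40dfeea3-c0b1-49c0-959b-7a08ceb7035c",
        })))
# ovn-controller matches iface-id against Port_Binding.logical_port and
# then claims the port ("Claiming lport ..." below).
```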
Mar  1 05:06:11 np0005634532 nova_compute[257049]: 2026-03-01 10:06:11.184 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:06:11 np0005634532 NetworkManager[49996]: <info>  [1772359571.1854] manager: (tap18710daa-8d): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/23)
Mar  1 05:06:11 np0005634532 nova_compute[257049]: 2026-03-01 10:06:11.187 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Mar  1 05:06:11 np0005634532 nova_compute[257049]: 2026-03-01 10:06:11.190 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:06:11 np0005634532 nova_compute[257049]: 2026-03-01 10:06:11.191 257053 INFO os_vif [None req-a8e37f35-5bc6-4417-8516-e015218e6a3d 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:9c:b7:e6,bridge_name='br-int',has_traffic_filtering=True,id=18710daa-8d5e-46b6-b666-18b4e461fca4,network=Network(a5627193-ae81-4a0c-8614-ca8ee1d557da),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap18710daa-8d')#033[00m
Mar  1 05:06:11 np0005634532 nova_compute[257049]: 2026-03-01 10:06:11.229 257053 DEBUG nova.virt.libvirt.driver [None req-a8e37f35-5bc6-4417-8516-e015218e6a3d 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Mar  1 05:06:11 np0005634532 nova_compute[257049]: 2026-03-01 10:06:11.230 257053 DEBUG nova.virt.libvirt.driver [None req-a8e37f35-5bc6-4417-8516-e015218e6a3d 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Mar  1 05:06:11 np0005634532 nova_compute[257049]: 2026-03-01 10:06:11.230 257053 DEBUG nova.virt.libvirt.driver [None req-a8e37f35-5bc6-4417-8516-e015218e6a3d 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] No VIF found with MAC fa:16:3e:9c:b7:e6, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Mar  1 05:06:11 np0005634532 nova_compute[257049]: 2026-03-01 10:06:11.231 257053 INFO nova.virt.libvirt.driver [None req-a8e37f35-5bc6-4417-8516-e015218e6a3d 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] [instance: 40dfeea3-c0b1-49c0-959b-7a08ceb7035c] Using config drive#033[00m
Mar  1 05:06:11 np0005634532 nova_compute[257049]: 2026-03-01 10:06:11.261 257053 DEBUG nova.storage.rbd_utils [None req-a8e37f35-5bc6-4417-8516-e015218e6a3d 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] rbd image 40dfeea3-c0b1-49c0-959b-7a08ceb7035c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Mar  1 05:06:11 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:06:11 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:06:11 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:06:11.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:06:11 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:06:11 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_43] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff878002490 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:06:11 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:06:11 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:06:11 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_42] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff864002800 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:06:12 np0005634532 nova_compute[257049]: 2026-03-01 10:06:12.102 257053 INFO nova.virt.libvirt.driver [None req-a8e37f35-5bc6-4417-8516-e015218e6a3d 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] [instance: 40dfeea3-c0b1-49c0-959b-7a08ceb7035c] Creating config drive at /var/lib/nova/instances/40dfeea3-c0b1-49c0-959b-7a08ceb7035c/disk.config#033[00m
Mar  1 05:06:12 np0005634532 nova_compute[257049]: 2026-03-01 10:06:12.106 257053 DEBUG oslo_concurrency.processutils [None req-a8e37f35-5bc6-4417-8516-e015218e6a3d 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/40dfeea3-c0b1-49c0-959b-7a08ceb7035c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpf5adr049 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Mar  1 05:06:12 np0005634532 nova_compute[257049]: 2026-03-01 10:06:12.230 257053 DEBUG oslo_concurrency.processutils [None req-a8e37f35-5bc6-4417-8516-e015218e6a3d 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/40dfeea3-c0b1-49c0-959b-7a08ceb7035c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpf5adr049" returned: 0 in 0.124s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Mar  1 05:06:12 np0005634532 nova_compute[257049]: 2026-03-01 10:06:12.265 257053 DEBUG nova.storage.rbd_utils [None req-a8e37f35-5bc6-4417-8516-e015218e6a3d 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] rbd image 40dfeea3-c0b1-49c0-959b-7a08ceb7035c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Mar  1 05:06:12 np0005634532 nova_compute[257049]: 2026-03-01 10:06:12.268 257053 DEBUG oslo_concurrency.processutils [None req-a8e37f35-5bc6-4417-8516-e015218e6a3d 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/40dfeea3-c0b1-49c0-959b-7a08ceb7035c/disk.config 40dfeea3-c0b1-49c0-959b-7a08ceb7035c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Mar  1 05:06:12 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:06:12 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_39] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8580031e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:06:12 np0005634532 nova_compute[257049]: 2026-03-01 10:06:12.418 257053 DEBUG oslo_concurrency.processutils [None req-a8e37f35-5bc6-4417-8516-e015218e6a3d 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/40dfeea3-c0b1-49c0-959b-7a08ceb7035c/disk.config 40dfeea3-c0b1-49c0-959b-7a08ceb7035c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.150s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Mar  1 05:06:12 np0005634532 nova_compute[257049]: 2026-03-01 10:06:12.419 257053 INFO nova.virt.libvirt.driver [None req-a8e37f35-5bc6-4417-8516-e015218e6a3d 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] [instance: 40dfeea3-c0b1-49c0-959b-7a08ceb7035c] Deleting local config drive /var/lib/nova/instances/40dfeea3-c0b1-49c0-959b-7a08ceb7035c/disk.config because it was imported into RBD.#033[00m
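The config-drive step above is two subprocess calls: mkisofs builds a config-2 ISO from a temp directory, and rbd import pushes it into the vms pool so the local file can be removed. Reconstructed as argv lists (the oslo log prints them space-joined, which is why -publisher's multi-word value looks unquoted above):

```python
# Sketch of the two commands logged above, as the argv lists actually run.
import subprocess

iso = "/var/lib/nova/instances/40dfeea3-c0b1-49c0-959b-7a08ceb7035c/disk.config"

subprocess.run(
    ["/usr/bin/mkisofs", "-o", iso,
     "-ldots", "-allow-lowercase", "-allow-multidot", "-l",
     "-publisher", "OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9",
     "-quiet", "-J", "-r", "-V", "config-2",
     "/tmp/tmpf5adr049"],          # temp dir holding the metadata tree
    check=True)

subprocess.run(
    ["rbd", "import", "--pool", "vms", iso,
     "40dfeea3-c0b1-49c0-959b-7a08ceb7035c_disk.config",
     "--image-format=2", "--id", "openstack",
     "--conf", "/etc/ceph/ceph.conf"],
    check=True)
```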
Mar  1 05:06:12 np0005634532 systemd[1]: Starting libvirt secret daemon...
Mar  1 05:06:12 np0005634532 systemd[1]: Started libvirt secret daemon.
Mar  1 05:06:12 np0005634532 kernel: tun: Universal TUN/TAP device driver, 1.6
Mar  1 05:06:12 np0005634532 NetworkManager[49996]: <info>  [1772359572.5211] manager: (tap18710daa-8d): new Tun device (/org/freedesktop/NetworkManager/Devices/24)
Mar  1 05:06:12 np0005634532 kernel: tap18710daa-8d: entered promiscuous mode
Mar  1 05:06:12 np0005634532 ovn_controller[157082]: 2026-03-01T10:06:12Z|00027|binding|INFO|Claiming lport 18710daa-8d5e-46b6-b666-18b4e461fca4 for this chassis.
Mar  1 05:06:12 np0005634532 ovn_controller[157082]: 2026-03-01T10:06:12Z|00028|binding|INFO|18710daa-8d5e-46b6-b666-18b4e461fca4: Claiming fa:16:3e:9c:b7:e6 10.100.0.25
Mar  1 05:06:12 np0005634532 nova_compute[257049]: 2026-03-01 10:06:12.523 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:06:12 np0005634532 nova_compute[257049]: 2026-03-01 10:06:12.526 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:06:12 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:06:12.535 167541 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9c:b7:e6 10.100.0.25'], port_security=['fa:16:3e:9c:b7:e6 10.100.0.25'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.25/28', 'neutron:device_id': '40dfeea3-c0b1-49c0-959b-7a08ceb7035c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a5627193-ae81-4a0c-8614-ca8ee1d557da', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'aa1916e2334f470ea8eeda213ef84cc5', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'ebfe9084-f9e3-42d2-aab8-330ac8777edd', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=10c931ba-9a67-46c6-85b5-09252a69e0b7, chassis=[<ovs.db.idl.Row object at 0x7f611def4670>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f611def4670>], logical_port=18710daa-8d5e-46b6-b666-18b4e461fca4) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Mar  1 05:06:12 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:06:12.536 167541 INFO neutron.agent.ovn.metadata.agent [-] Port 18710daa-8d5e-46b6-b666-18b4e461fca4 in datapath a5627193-ae81-4a0c-8614-ca8ee1d557da bound to our chassis#033[00m
Mar  1 05:06:12 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:06:12.538 167541 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network a5627193-ae81-4a0c-8614-ca8ee1d557da#033[00m
Mar  1 05:06:12 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:06:12.539 167541 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.default', '--privsep_sock_path', '/tmp/tmpgbomr2yp/privsep.sock']#033[00m
Mar  1 05:06:12 np0005634532 nova_compute[257049]: 2026-03-01 10:06:12.549 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:06:12 np0005634532 ovn_controller[157082]: 2026-03-01T10:06:12Z|00029|binding|INFO|Setting lport 18710daa-8d5e-46b6-b666-18b4e461fca4 ovn-installed in OVS
Mar  1 05:06:12 np0005634532 ovn_controller[157082]: 2026-03-01T10:06:12Z|00030|binding|INFO|Setting lport 18710daa-8d5e-46b6-b666-18b4e461fca4 up in Southbound
Mar  1 05:06:12 np0005634532 nova_compute[257049]: 2026-03-01 10:06:12.553 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:06:12 np0005634532 systemd-udevd[262861]: Network interface NamePolicy= disabled on kernel command line.
Mar  1 05:06:12 np0005634532 systemd-machined[221390]: New machine qemu-1-instance-00000002.
Mar  1 05:06:12 np0005634532 NetworkManager[49996]: <info>  [1772359572.5699] device (tap18710daa-8d): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Mar  1 05:06:12 np0005634532 NetworkManager[49996]: <info>  [1772359572.5705] device (tap18710daa-8d): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Mar  1 05:06:12 np0005634532 systemd[1]: Started Virtual Machine qemu-1-instance-00000002.
Mar  1 05:06:12 np0005634532 nova_compute[257049]: 2026-03-01 10:06:12.762 257053 DEBUG nova.compute.manager [req-9000673c-8666-454c-81f0-9eccf24526b5 req-ea4857af-fa53-4078-b259-c17826a5d593 4172bfcb2ac44ccc905f3929f41a6ec0 b5baddd982154c5d8ca5431b102a7ecd - - default default] [instance: 40dfeea3-c0b1-49c0-959b-7a08ceb7035c] Received event network-vif-plugged-18710daa-8d5e-46b6-b666-18b4e461fca4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Mar  1 05:06:12 np0005634532 nova_compute[257049]: 2026-03-01 10:06:12.763 257053 DEBUG oslo_concurrency.lockutils [req-9000673c-8666-454c-81f0-9eccf24526b5 req-ea4857af-fa53-4078-b259-c17826a5d593 4172bfcb2ac44ccc905f3929f41a6ec0 b5baddd982154c5d8ca5431b102a7ecd - - default default] Acquiring lock "40dfeea3-c0b1-49c0-959b-7a08ceb7035c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Mar  1 05:06:12 np0005634532 nova_compute[257049]: 2026-03-01 10:06:12.763 257053 DEBUG oslo_concurrency.lockutils [req-9000673c-8666-454c-81f0-9eccf24526b5 req-ea4857af-fa53-4078-b259-c17826a5d593 4172bfcb2ac44ccc905f3929f41a6ec0 b5baddd982154c5d8ca5431b102a7ecd - - default default] Lock "40dfeea3-c0b1-49c0-959b-7a08ceb7035c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Mar  1 05:06:12 np0005634532 nova_compute[257049]: 2026-03-01 10:06:12.763 257053 DEBUG oslo_concurrency.lockutils [req-9000673c-8666-454c-81f0-9eccf24526b5 req-ea4857af-fa53-4078-b259-c17826a5d593 4172bfcb2ac44ccc905f3929f41a6ec0 b5baddd982154c5d8ca5431b102a7ecd - - default default] Lock "40dfeea3-c0b1-49c0-959b-7a08ceb7035c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Mar  1 05:06:12 np0005634532 nova_compute[257049]: 2026-03-01 10:06:12.764 257053 DEBUG nova.compute.manager [req-9000673c-8666-454c-81f0-9eccf24526b5 req-ea4857af-fa53-4078-b259-c17826a5d593 4172bfcb2ac44ccc905f3929f41a6ec0 b5baddd982154c5d8ca5431b102a7ecd - - default default] [instance: 40dfeea3-c0b1-49c0-959b-7a08ceb7035c] Processing event network-vif-plugged-18710daa-8d5e-46b6-b666-18b4e461fca4 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Mar  1 05:06:12 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:06:12 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000024s ======
Mar  1 05:06:12 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:06:12.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Mar  1 05:06:12 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v712: 353 pgs: 353 active+clean; 167 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 34 op/s
Mar  1 05:06:13 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:06:13.211 167541 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap#033[00m
Mar  1 05:06:13 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:06:13.212 167541 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpgbomr2yp/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362#033[00m
Mar  1 05:06:13 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:06:13.102 262878 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Mar  1 05:06:13 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:06:13.106 262878 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Mar  1 05:06:13 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:06:13.109 262878 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/none#033[00m
Mar  1 05:06:13 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:06:13.109 262878 INFO oslo.privsep.daemon [-] privsep daemon running as pid 262878#033[00m
Mar  1 05:06:13 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:06:13.214 262878 DEBUG oslo.privsep.daemon [-] privsep: reply[52c36c82-dbe5-4c4e-bac3-875b485e5897]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Mar  1 05:06:13 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:06:13 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:06:13 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:06:13.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:06:13 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:06:13 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_43] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8580031e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:06:13 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:06:13.929 262878 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Mar  1 05:06:13 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:06:13.929 262878 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Mar  1 05:06:13 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:06:13.929 262878 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Mar  1 05:06:13 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:06:13 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_43] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff878002490 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:06:14 np0005634532 nova_compute[257049]: 2026-03-01 10:06:14.016 257053 DEBUG nova.compute.manager [None req-a8e37f35-5bc6-4417-8516-e015218e6a3d 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] [instance: 40dfeea3-c0b1-49c0-959b-7a08ceb7035c] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Mar  1 05:06:14 np0005634532 nova_compute[257049]: 2026-03-01 10:06:14.017 257053 DEBUG nova.virt.driver [None req-dccfbb97-79ac-418c-ab3a-fd3d119141c0 - - - - - -] Emitting event <LifecycleEvent: 1772359574.017174, 40dfeea3-c0b1-49c0-959b-7a08ceb7035c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Mar  1 05:06:14 np0005634532 nova_compute[257049]: 2026-03-01 10:06:14.017 257053 INFO nova.compute.manager [None req-dccfbb97-79ac-418c-ab3a-fd3d119141c0 - - - - - -] [instance: 40dfeea3-c0b1-49c0-959b-7a08ceb7035c] VM Started (Lifecycle Event)#033[00m
Mar  1 05:06:14 np0005634532 nova_compute[257049]: 2026-03-01 10:06:14.030 257053 DEBUG nova.virt.libvirt.driver [None req-a8e37f35-5bc6-4417-8516-e015218e6a3d 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] [instance: 40dfeea3-c0b1-49c0-959b-7a08ceb7035c] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Mar  1 05:06:14 np0005634532 nova_compute[257049]: 2026-03-01 10:06:14.034 257053 INFO nova.virt.libvirt.driver [-] [instance: 40dfeea3-c0b1-49c0-959b-7a08ceb7035c] Instance spawned successfully.#033[00m
Mar  1 05:06:14 np0005634532 nova_compute[257049]: 2026-03-01 10:06:14.035 257053 DEBUG nova.virt.libvirt.driver [None req-a8e37f35-5bc6-4417-8516-e015218e6a3d 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] [instance: 40dfeea3-c0b1-49c0-959b-7a08ceb7035c] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Mar  1 05:06:14 np0005634532 nova_compute[257049]: 2026-03-01 10:06:14.037 257053 DEBUG nova.compute.manager [None req-dccfbb97-79ac-418c-ab3a-fd3d119141c0 - - - - - -] [instance: 40dfeea3-c0b1-49c0-959b-7a08ceb7035c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Mar  1 05:06:14 np0005634532 nova_compute[257049]: 2026-03-01 10:06:14.039 257053 DEBUG nova.compute.manager [None req-dccfbb97-79ac-418c-ab3a-fd3d119141c0 - - - - - -] [instance: 40dfeea3-c0b1-49c0-959b-7a08ceb7035c] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
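The numeric states compared above come from nova.compute.power_state: the DB still holds 0 (NOSTATE) while libvirt already reports 1 (RUNNING):

```python
# Nova's power_state constants, for decoding lines like the one above.
NOSTATE   = 0x00  # "current DB power_state: 0" -- not yet recorded
RUNNING   = 0x01  # "VM power_state: 1" -- guest is up
PAUSED    = 0x03
SHUTDOWN  = 0x04
CRASHED   = 0x06
SUSPENDED = 0x07
```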
Mar  1 05:06:14 np0005634532 nova_compute[257049]: 2026-03-01 10:06:14.062 257053 DEBUG nova.virt.libvirt.driver [None req-a8e37f35-5bc6-4417-8516-e015218e6a3d 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] [instance: 40dfeea3-c0b1-49c0-959b-7a08ceb7035c] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Mar  1 05:06:14 np0005634532 nova_compute[257049]: 2026-03-01 10:06:14.063 257053 DEBUG nova.virt.libvirt.driver [None req-a8e37f35-5bc6-4417-8516-e015218e6a3d 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] [instance: 40dfeea3-c0b1-49c0-959b-7a08ceb7035c] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Mar  1 05:06:14 np0005634532 nova_compute[257049]: 2026-03-01 10:06:14.063 257053 DEBUG nova.virt.libvirt.driver [None req-a8e37f35-5bc6-4417-8516-e015218e6a3d 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] [instance: 40dfeea3-c0b1-49c0-959b-7a08ceb7035c] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Mar  1 05:06:14 np0005634532 nova_compute[257049]: 2026-03-01 10:06:14.064 257053 DEBUG nova.virt.libvirt.driver [None req-a8e37f35-5bc6-4417-8516-e015218e6a3d 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] [instance: 40dfeea3-c0b1-49c0-959b-7a08ceb7035c] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Mar  1 05:06:14 np0005634532 nova_compute[257049]: 2026-03-01 10:06:14.064 257053 DEBUG nova.virt.libvirt.driver [None req-a8e37f35-5bc6-4417-8516-e015218e6a3d 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] [instance: 40dfeea3-c0b1-49c0-959b-7a08ceb7035c] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Mar  1 05:06:14 np0005634532 nova_compute[257049]: 2026-03-01 10:06:14.064 257053 DEBUG nova.virt.libvirt.driver [None req-a8e37f35-5bc6-4417-8516-e015218e6a3d 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] [instance: 40dfeea3-c0b1-49c0-959b-7a08ceb7035c] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
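The six "Found default for ..." lines record which bus/model the driver picked for properties the image left unset; those choices are persisted into the instance's system_metadata so later rebuilds and migrations keep identical virtual hardware. A sketch of that bookkeeping (register_defaults is a hypothetical stand-in for _register_undefined_instance_details):

```python
# Defaults exactly as logged above; illustrative only, not Nova's code.
CHOSEN_DEFAULTS = {
    "hw_cdrom_bus": "sata",
    "hw_disk_bus": "virtio",
    "hw_input_bus": "usb",
    "hw_pointer_model": "usbtablet",
    "hw_video_model": "virtio",
    "hw_vif_model": "virtio",
}

def register_defaults(system_metadata):
    for prop, value in CHOSEN_DEFAULTS.items():
        # only fill in properties the image did not define itself
        system_metadata.setdefault(f"image_{prop}", value)
```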
Mar  1 05:06:14 np0005634532 nova_compute[257049]: 2026-03-01 10:06:14.068 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:06:14 np0005634532 nova_compute[257049]: 2026-03-01 10:06:14.097 257053 INFO nova.compute.manager [None req-dccfbb97-79ac-418c-ab3a-fd3d119141c0 - - - - - -] [instance: 40dfeea3-c0b1-49c0-959b-7a08ceb7035c] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Mar  1 05:06:14 np0005634532 nova_compute[257049]: 2026-03-01 10:06:14.097 257053 DEBUG nova.virt.driver [None req-dccfbb97-79ac-418c-ab3a-fd3d119141c0 - - - - - -] Emitting event <LifecycleEvent: 1772359574.01941, 40dfeea3-c0b1-49c0-959b-7a08ceb7035c => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Mar  1 05:06:14 np0005634532 nova_compute[257049]: 2026-03-01 10:06:14.097 257053 INFO nova.compute.manager [None req-dccfbb97-79ac-418c-ab3a-fd3d119141c0 - - - - - -] [instance: 40dfeea3-c0b1-49c0-959b-7a08ceb7035c] VM Paused (Lifecycle Event)#033[00m
Mar  1 05:06:14 np0005634532 nova_compute[257049]: 2026-03-01 10:06:14.220 257053 DEBUG nova.compute.manager [None req-dccfbb97-79ac-418c-ab3a-fd3d119141c0 - - - - - -] [instance: 40dfeea3-c0b1-49c0-959b-7a08ceb7035c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Mar  1 05:06:14 np0005634532 nova_compute[257049]: 2026-03-01 10:06:14.223 257053 DEBUG nova.virt.driver [None req-dccfbb97-79ac-418c-ab3a-fd3d119141c0 - - - - - -] Emitting event <LifecycleEvent: 1772359574.0212102, 40dfeea3-c0b1-49c0-959b-7a08ceb7035c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Mar  1 05:06:14 np0005634532 nova_compute[257049]: 2026-03-01 10:06:14.223 257053 INFO nova.compute.manager [None req-dccfbb97-79ac-418c-ab3a-fd3d119141c0 - - - - - -] [instance: 40dfeea3-c0b1-49c0-959b-7a08ceb7035c] VM Resumed (Lifecycle Event)#033[00m
Mar  1 05:06:14 np0005634532 nova_compute[257049]: 2026-03-01 10:06:14.227 257053 INFO nova.compute.manager [None req-a8e37f35-5bc6-4417-8516-e015218e6a3d 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] [instance: 40dfeea3-c0b1-49c0-959b-7a08ceb7035c] Took 11.24 seconds to spawn the instance on the hypervisor.#033[00m
Mar  1 05:06:14 np0005634532 nova_compute[257049]: 2026-03-01 10:06:14.228 257053 DEBUG nova.compute.manager [None req-a8e37f35-5bc6-4417-8516-e015218e6a3d 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] [instance: 40dfeea3-c0b1-49c0-959b-7a08ceb7035c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Mar  1 05:06:14 np0005634532 nova_compute[257049]: 2026-03-01 10:06:14.255 257053 DEBUG nova.compute.manager [None req-dccfbb97-79ac-418c-ab3a-fd3d119141c0 - - - - - -] [instance: 40dfeea3-c0b1-49c0-959b-7a08ceb7035c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Mar  1 05:06:14 np0005634532 nova_compute[257049]: 2026-03-01 10:06:14.258 257053 DEBUG nova.compute.manager [None req-dccfbb97-79ac-418c-ab3a-fd3d119141c0 - - - - - -] [instance: 40dfeea3-c0b1-49c0-959b-7a08ceb7035c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Mar  1 05:06:14 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:06:14 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_42] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff864002800 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:06:14 np0005634532 nova_compute[257049]: 2026-03-01 10:06:14.290 257053 INFO nova.compute.manager [None req-dccfbb97-79ac-418c-ab3a-fd3d119141c0 - - - - - -] [instance: 40dfeea3-c0b1-49c0-959b-7a08ceb7035c] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Mar  1 05:06:14 np0005634532 nova_compute[257049]: 2026-03-01 10:06:14.292 257053 INFO nova.compute.manager [None req-a8e37f35-5bc6-4417-8516-e015218e6a3d 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] [instance: 40dfeea3-c0b1-49c0-959b-7a08ceb7035c] Took 12.14 seconds to build instance.#033[00m
Mar  1 05:06:14 np0005634532 nova_compute[257049]: 2026-03-01 10:06:14.306 257053 DEBUG oslo_concurrency.lockutils [None req-a8e37f35-5bc6-4417-8516-e015218e6a3d 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Lock "40dfeea3-c0b1-49c0-959b-7a08ceb7035c" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.221s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Mar  1 05:06:14 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:06:14 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:06:14 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:06:14.839 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:06:14 np0005634532 nova_compute[257049]: 2026-03-01 10:06:14.847 257053 DEBUG nova.compute.manager [req-65ba0309-bf60-4540-9ae4-59dfde628bd7 req-b1288744-a98c-43d9-8663-1c8990b1f5b8 4172bfcb2ac44ccc905f3929f41a6ec0 b5baddd982154c5d8ca5431b102a7ecd - - default default] [instance: 40dfeea3-c0b1-49c0-959b-7a08ceb7035c] Received event network-vif-plugged-18710daa-8d5e-46b6-b666-18b4e461fca4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Mar  1 05:06:14 np0005634532 nova_compute[257049]: 2026-03-01 10:06:14.848 257053 DEBUG oslo_concurrency.lockutils [req-65ba0309-bf60-4540-9ae4-59dfde628bd7 req-b1288744-a98c-43d9-8663-1c8990b1f5b8 4172bfcb2ac44ccc905f3929f41a6ec0 b5baddd982154c5d8ca5431b102a7ecd - - default default] Acquiring lock "40dfeea3-c0b1-49c0-959b-7a08ceb7035c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Mar  1 05:06:14 np0005634532 nova_compute[257049]: 2026-03-01 10:06:14.848 257053 DEBUG oslo_concurrency.lockutils [req-65ba0309-bf60-4540-9ae4-59dfde628bd7 req-b1288744-a98c-43d9-8663-1c8990b1f5b8 4172bfcb2ac44ccc905f3929f41a6ec0 b5baddd982154c5d8ca5431b102a7ecd - - default default] Lock "40dfeea3-c0b1-49c0-959b-7a08ceb7035c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Mar  1 05:06:14 np0005634532 nova_compute[257049]: 2026-03-01 10:06:14.848 257053 DEBUG oslo_concurrency.lockutils [req-65ba0309-bf60-4540-9ae4-59dfde628bd7 req-b1288744-a98c-43d9-8663-1c8990b1f5b8 4172bfcb2ac44ccc905f3929f41a6ec0 b5baddd982154c5d8ca5431b102a7ecd - - default default] Lock "40dfeea3-c0b1-49c0-959b-7a08ceb7035c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Mar  1 05:06:14 np0005634532 nova_compute[257049]: 2026-03-01 10:06:14.849 257053 DEBUG nova.compute.manager [req-65ba0309-bf60-4540-9ae4-59dfde628bd7 req-b1288744-a98c-43d9-8663-1c8990b1f5b8 4172bfcb2ac44ccc905f3929f41a6ec0 b5baddd982154c5d8ca5431b102a7ecd - - default default] [instance: 40dfeea3-c0b1-49c0-959b-7a08ceb7035c] No waiting events found dispatching network-vif-plugged-18710daa-8d5e-46b6-b666-18b4e461fca4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Mar  1 05:06:14 np0005634532 nova_compute[257049]: 2026-03-01 10:06:14.849 257053 WARNING nova.compute.manager [req-65ba0309-bf60-4540-9ae4-59dfde628bd7 req-b1288744-a98c-43d9-8663-1c8990b1f5b8 4172bfcb2ac44ccc905f3929f41a6ec0 b5baddd982154c5d8ca5431b102a7ecd - - default default] [instance: 40dfeea3-c0b1-49c0-959b-7a08ceb7035c] Received unexpected event network-vif-plugged-18710daa-8d5e-46b6-b666-18b4e461fca4 for instance with vm_state active and task_state None.#033[00m
Mar  1 05:06:14 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:06:14.950 262878 DEBUG oslo.privsep.daemon [-] privsep: reply[ba23d2e2-26c2-4263-995e-6b3f4f3ece00]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Mar  1 05:06:14 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:06:14.951 167541 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapa5627193-a1 in ovnmeta-a5627193-ae81-4a0c-8614-ca8ee1d557da namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Mar  1 05:06:14 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:06:14.952 262878 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapa5627193-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Mar  1 05:06:14 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:06:14.952 262878 DEBUG oslo.privsep.daemon [-] privsep: reply[3c616d6f-2774-44fc-9ab5-4897fb621534]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Mar  1 05:06:14 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:06:14.956 262878 DEBUG oslo.privsep.daemon [-] privsep: reply[dddec742-8517-497e-bd01-4061ffbb6403]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
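Here the OVN metadata agent provisions the datapath: a veth pair whose -a1 end lives in the ovnmeta-<network> namespace while -a0 stays on the host (NetworkManager sees it below). A pyroute2 sketch of that provisioning under those assumptions (the agent itself does this through Neutron's privileged ip_lib helpers):

```python
# Sketch: create the veth pair and move the -a1 end into the metadata
# namespace, mirroring the "Creating VETH tapa5627193-a1 in ovnmeta-..."
# line above.
from pyroute2 import IPRoute, netns

ns = "ovnmeta-a5627193-ae81-4a0c-8614-ca8ee1d557da"
netns.create(ns)  # the agent ensures the namespace exists before this point

ipr = IPRoute()
ipr.link("add", ifname="tapa5627193-a0", kind="veth",
         peer={"ifname": "tapa5627193-a1"})
a1 = ipr.link_lookup(ifname="tapa5627193-a1")[0]
ipr.link("set", index=a1, net_ns_fd=ns)    # namespace side
a0 = ipr.link_lookup(ifname="tapa5627193-a0")[0]
ipr.link("set", index=a0, state="up")      # host side, later plugged to br-int
ipr.close()
```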
Mar  1 05:06:14 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v713: 353 pgs: 353 active+clean; 167 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 39 op/s
Mar  1 05:06:14 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:06:14.976 167914 DEBUG oslo.privsep.daemon [-] privsep: reply[283b63ab-3e89-45cd-a548-caff5bbc3fab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Mar  1 05:06:15 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:06:15.023 262878 DEBUG oslo.privsep.daemon [-] privsep: reply[e4ed328a-8604-46b9-aeab-b053bd0ab476]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Mar  1 05:06:15 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:06:15.025 167541 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.link_cmd', '--privsep_sock_path', '/tmp/tmppbzzual9/privsep.sock']#033[00m
Mar  1 05:06:15 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:06:15 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:06:15 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:06:15.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:06:15 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:06:15 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_39] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8580031e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:06:15 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:06:15.692 167541 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap#033[00m
Mar  1 05:06:15 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:06:15.693 167541 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmppbzzual9/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362#033[00m
Mar  1 05:06:15 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:06:15.562 262940 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Mar  1 05:06:15 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:06:15.565 262940 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Mar  1 05:06:15 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:06:15.567 262940 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_NET_ADMIN|CAP_SYS_ADMIN/none#033[00m
Mar  1 05:06:15 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:06:15.568 262940 INFO oslo.privsep.daemon [-] privsep daemon running as pid 262940#033[00m
Mar  1 05:06:15 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:06:15.696 262940 DEBUG oslo.privsep.daemon [-] privsep: reply[f433659f-de30-44fb-bb1d-8f4bd4f49528]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Mar  1 05:06:15 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:06:15 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_43] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8580031e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:06:16 np0005634532 nova_compute[257049]: 2026-03-01 10:06:16.186 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:06:16 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:06:16.245 262940 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Mar  1 05:06:16 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:06:16.246 262940 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Mar  1 05:06:16 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:06:16.246 262940 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Mar  1 05:06:16 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:06:16 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_41] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff878002490 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:06:16 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:06:16.805 262940 DEBUG oslo.privsep.daemon [-] privsep: reply[7c466e31-7a2d-4307-8a17-988af22fe82f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Mar  1 05:06:16 np0005634532 NetworkManager[49996]: <info>  [1772359576.8222] manager: (tapa5627193-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/25)
Mar  1 05:06:16 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:06:16.821 262878 DEBUG oslo.privsep.daemon [-] privsep: reply[2853765f-bb6a-4ed1-8c0d-6c5b7c9b6c28]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Mar  1 05:06:16 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:06:16 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 05:06:16 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:06:16.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 05:06:16 np0005634532 systemd-udevd[262979]: Network interface NamePolicy= disabled on kernel command line.
Mar  1 05:06:16 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:06:16.846 262940 DEBUG oslo.privsep.daemon [-] privsep: reply[f002ad73-46fd-4bbd-a160-eaf1805bbc83]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Mar  1 05:06:16 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:06:16.849 262940 DEBUG oslo.privsep.daemon [-] privsep: reply[04a3e2dc-ca78-43ea-8245-674ea998423a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Mar  1 05:06:16 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:06:16 np0005634532 NetworkManager[49996]: <info>  [1772359576.8732] device (tapa5627193-a0): carrier: link connected
Mar  1 05:06:16 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:06:16.878 262940 DEBUG oslo.privsep.daemon [-] privsep: reply[b6284c88-0807-448c-a248-aba42b0e9c83]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Mar  1 05:06:16 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:06:16.895 262878 DEBUG oslo.privsep.daemon [-] privsep: reply[3603dd2e-9ca0-4d82-9cad-b3ad823a336a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa5627193-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1c:b2:4d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 13], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 393346, 'reachable_time': 41996, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 262997, 'error': None, 'target': 'ovnmeta-a5627193-ae81-4a0c-8614-ca8ee1d557da', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Mar  1 05:06:16 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:06:16.913 262878 DEBUG oslo.privsep.daemon [-] privsep: reply[bedb8e9a-8674-4dae-a84e-7d562639422d]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe1c:b24d'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 393346, 'tstamp': 393346}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 262998, 'error': None, 'target': 'ovnmeta-a5627193-ae81-4a0c-8614-ca8ee1d557da', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Mar  1 05:06:16 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:06:16.930 262878 DEBUG oslo.privsep.daemon [-] privsep: reply[ce729288-c151-4d46-97e7-53fb4bd99e56]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa5627193-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1c:b2:4d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 13], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 393346, 'reachable_time': 41996, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 262999, 'error': None, 'target': 'ovnmeta-a5627193-ae81-4a0c-8614-ca8ee1d557da', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Mar  1 05:06:16 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v714: 353 pgs: 353 active+clean; 167 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 36 op/s
Mar  1 05:06:16 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:06:16.962 262878 DEBUG oslo.privsep.daemon [-] privsep: reply[fb4c76a7-07b5-4552-b8e5-5f43b88148de]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Mar  1 05:06:17 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:06:17.015 262878 DEBUG oslo.privsep.daemon [-] privsep: reply[db8ce7f9-9052-4430-ae49-d5e9835eb689]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Mar  1 05:06:17 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:06:17.018 167541 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa5627193-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Mar  1 05:06:17 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:06:17.018 167541 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Mar  1 05:06:17 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:06:17.018 167541 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa5627193-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Mar  1 05:06:17 np0005634532 kernel: tapa5627193-a0: entered promiscuous mode
Mar  1 05:06:17 np0005634532 nova_compute[257049]: 2026-03-01 10:06:17.021 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:06:17 np0005634532 NetworkManager[49996]: <info>  [1772359577.0227] manager: (tapa5627193-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/26)
Mar  1 05:06:17 np0005634532 nova_compute[257049]: 2026-03-01 10:06:17.022 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:06:17 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:06:17.026 167541 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapa5627193-a0, col_values=(('external_ids', {'iface-id': '41594761-0e96-45c0-95e4-5872d8184457'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Mar  1 05:06:17 np0005634532 ovn_controller[157082]: 2026-03-01T10:06:17Z|00031|binding|INFO|Releasing lport 41594761-0e96-45c0-95e4-5872d8184457 from this chassis (sb_readonly=0)
Mar  1 05:06:17 np0005634532 nova_compute[257049]: 2026-03-01 10:06:17.028 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:06:17 np0005634532 nova_compute[257049]: 2026-03-01 10:06:17.028 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:06:17 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:06:17.030 167541 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/a5627193-ae81-4a0c-8614-ca8ee1d557da.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/a5627193-ae81-4a0c-8614-ca8ee1d557da.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Mar  1 05:06:17 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:06:17.031 262878 DEBUG oslo.privsep.daemon [-] privsep: reply[b0660701-a866-4568-8979-d1e844dfe8c5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Mar  1 05:06:17 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:06:17.032 167541 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Mar  1 05:06:17 np0005634532 ovn_metadata_agent[167536]: global
Mar  1 05:06:17 np0005634532 ovn_metadata_agent[167536]:    log         /dev/log local0 debug
Mar  1 05:06:17 np0005634532 ovn_metadata_agent[167536]:    log-tag     haproxy-metadata-proxy-a5627193-ae81-4a0c-8614-ca8ee1d557da
Mar  1 05:06:17 np0005634532 ovn_metadata_agent[167536]:    user        root
Mar  1 05:06:17 np0005634532 ovn_metadata_agent[167536]:    group       root
Mar  1 05:06:17 np0005634532 ovn_metadata_agent[167536]:    maxconn     1024
Mar  1 05:06:17 np0005634532 ovn_metadata_agent[167536]:    pidfile     /var/lib/neutron/external/pids/a5627193-ae81-4a0c-8614-ca8ee1d557da.pid.haproxy
Mar  1 05:06:17 np0005634532 ovn_metadata_agent[167536]:    daemon
Mar  1 05:06:17 np0005634532 ovn_metadata_agent[167536]: 
Mar  1 05:06:17 np0005634532 ovn_metadata_agent[167536]: defaults
Mar  1 05:06:17 np0005634532 ovn_metadata_agent[167536]:    log global
Mar  1 05:06:17 np0005634532 ovn_metadata_agent[167536]:    mode http
Mar  1 05:06:17 np0005634532 ovn_metadata_agent[167536]:    option httplog
Mar  1 05:06:17 np0005634532 ovn_metadata_agent[167536]:    option dontlognull
Mar  1 05:06:17 np0005634532 ovn_metadata_agent[167536]:    option http-server-close
Mar  1 05:06:17 np0005634532 ovn_metadata_agent[167536]:    option forwardfor
Mar  1 05:06:17 np0005634532 ovn_metadata_agent[167536]:    retries                 3
Mar  1 05:06:17 np0005634532 ovn_metadata_agent[167536]:    timeout http-request    30s
Mar  1 05:06:17 np0005634532 ovn_metadata_agent[167536]:    timeout connect         30s
Mar  1 05:06:17 np0005634532 ovn_metadata_agent[167536]:    timeout client          32s
Mar  1 05:06:17 np0005634532 ovn_metadata_agent[167536]:    timeout server          32s
Mar  1 05:06:17 np0005634532 ovn_metadata_agent[167536]:    timeout http-keep-alive 30s
Mar  1 05:06:17 np0005634532 ovn_metadata_agent[167536]: 
Mar  1 05:06:17 np0005634532 ovn_metadata_agent[167536]: 
Mar  1 05:06:17 np0005634532 ovn_metadata_agent[167536]: listen listener
Mar  1 05:06:17 np0005634532 ovn_metadata_agent[167536]:    bind 169.254.169.254:80
Mar  1 05:06:17 np0005634532 ovn_metadata_agent[167536]:    server metadata /var/lib/neutron/metadata_proxy
Mar  1 05:06:17 np0005634532 ovn_metadata_agent[167536]:    http-request add-header X-OVN-Network-ID a5627193-ae81-4a0c-8614-ca8ee1d557da
Mar  1 05:06:17 np0005634532 ovn_metadata_agent[167536]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Mar  1 05:06:17 np0005634532 nova_compute[257049]: 2026-03-01 10:06:17.032 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:06:17 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:06:17.034 167541 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-a5627193-ae81-4a0c-8614-ca8ee1d557da', 'env', 'PROCESS_TAG=haproxy-a5627193-ae81-4a0c-8614-ca8ee1d557da', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/a5627193-ae81-4a0c-8614-ca8ee1d557da.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Mar  1 05:06:17 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: ::ffff:192.168.122.100 - - [01/Mar/2026:10:06:17] "GET /metrics HTTP/1.1" 200 48460 "" "Prometheus/2.51.0"
Mar  1 05:06:17 np0005634532 ceph-mgr[76134]: [prometheus INFO cherrypy.access.140604152982112] ::ffff:192.168.122.100 - - [01/Mar/2026:10:06:17] "GET /metrics HTTP/1.1" 200 48460 "" "Prometheus/2.51.0"
Mar  1 05:06:17 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:06:17.202Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Mar  1 05:06:17 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:06:17.203Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Mar  1 05:06:17 np0005634532 podman[263032]: 2026-03-01 10:06:17.369389877 +0000 UTC m=+0.029280244 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a5baae9e21dffd42c66f546b0b62f95c6af9f409d05e01e7483de42d5dd2a382
Mar  1 05:06:17 np0005634532 ceph-mgr[76134]: [balancer INFO root] Optimize plan auto_2026-03-01_10:06:17
Mar  1 05:06:17 np0005634532 ceph-mgr[76134]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Mar  1 05:06:17 np0005634532 ceph-mgr[76134]: [balancer INFO root] do_upmap
Mar  1 05:06:17 np0005634532 ceph-mgr[76134]: [balancer INFO root] pools ['default.rgw.control', '.rgw.root', 'default.rgw.meta', '.nfs', 'vms', '.mgr', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'images', 'backups', 'default.rgw.log', 'volumes']
Mar  1 05:06:17 np0005634532 ceph-mgr[76134]: [balancer INFO root] prepared 0/10 upmap changes
Mar  1 05:06:17 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:06:17 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 05:06:17 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:06:17.531 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 05:06:17 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:06:17 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_42] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff864002800 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:06:17 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Mar  1 05:06:17 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Mar  1 05:06:17 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Mar  1 05:06:17 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Mar  1 05:06:17 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Mar  1 05:06:17 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Mar  1 05:06:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] _maybe_adjust
Mar  1 05:06:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:06:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Mar  1 05:06:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:06:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0011080044930650518 of space, bias 1.0, pg target 0.33240134791951553 quantized to 32 (current 32)
Mar  1 05:06:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:06:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Mar  1 05:06:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:06:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Mar  1 05:06:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:06:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Mar  1 05:06:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:06:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Mar  1 05:06:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:06:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Mar  1 05:06:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:06:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Mar  1 05:06:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:06:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Mar  1 05:06:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:06:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Mar  1 05:06:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:06:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Mar  1 05:06:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:06:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Mar  1 05:06:17 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Mar  1 05:06:18 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v715: 353 pgs: 353 active+clean; 167 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 1.8 MiB/s wr, 49 op/s
Mar  1 05:06:19 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:06:19 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_39] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff8840012e0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:06:19 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:06:19 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_41] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff88400ad90 fd 49 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:06:19 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:06:19 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:06:19 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:06:19.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:06:19 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Mar  1 05:06:19 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Mar  1 05:06:19 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: vms, start_after=
Mar  1 05:06:19 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: volumes, start_after=
Mar  1 05:06:19 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: backups, start_after=
Mar  1 05:06:19 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Mar  1 05:06:19 np0005634532 nova_compute[257049]: 2026-03-01 10:06:19.283 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:06:19 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: vms, start_after=
Mar  1 05:06:19 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: images, start_after=
Mar  1 05:06:19 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: volumes, start_after=
Mar  1 05:06:19 np0005634532 podman[263032]: 2026-03-01 10:06:19.322676916 +0000 UTC m=+1.982567263 container create ae56e22f178162fd05544340ff1f9072acc939c8a6d0be49600b69ba8824613c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a5baae9e21dffd42c66f546b0b62f95c6af9f409d05e01e7483de42d5dd2a382, name=neutron-haproxy-ovnmeta-a5627193-ae81-4a0c-8614-ca8ee1d557da, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS)
Mar  1 05:06:19 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: backups, start_after=
Mar  1 05:06:19 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: images, start_after=
Mar  1 05:06:19 np0005634532 systemd[1]: Started libpod-conmon-ae56e22f178162fd05544340ff1f9072acc939c8a6d0be49600b69ba8824613c.scope.
Mar  1 05:06:19 np0005634532 systemd[1]: Started libcrun container.
Mar  1 05:06:19 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a0bec56f177cc61816f9cd85215692be7dffdee2aa2fe12fe7ac5abed64a248/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Mar  1 05:06:19 np0005634532 podman[263032]: 2026-03-01 10:06:19.420292296 +0000 UTC m=+2.080182663 container init ae56e22f178162fd05544340ff1f9072acc939c8a6d0be49600b69ba8824613c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a5baae9e21dffd42c66f546b0b62f95c6af9f409d05e01e7483de42d5dd2a382, name=neutron-haproxy-ovnmeta-a5627193-ae81-4a0c-8614-ca8ee1d557da, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Mar  1 05:06:19 np0005634532 podman[263032]: 2026-03-01 10:06:19.426974161 +0000 UTC m=+2.086864508 container start ae56e22f178162fd05544340ff1f9072acc939c8a6d0be49600b69ba8824613c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a5baae9e21dffd42c66f546b0b62f95c6af9f409d05e01e7483de42d5dd2a382, name=neutron-haproxy-ovnmeta-a5627193-ae81-4a0c-8614-ca8ee1d557da, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Mar  1 05:06:19 np0005634532 neutron-haproxy-ovnmeta-a5627193-ae81-4a0c-8614-ca8ee1d557da[263051]: [NOTICE]   (263055) : New worker (263057) forked
Mar  1 05:06:19 np0005634532 neutron-haproxy-ovnmeta-a5627193-ae81-4a0c-8614-ca8ee1d557da[263051]: [NOTICE]   (263055) : Loading success.
Mar  1 05:06:19 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:06:19 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:06:19 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:06:19.533 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:06:19 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:06:19 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff88400ad90 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:06:20 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v716: 353 pgs: 353 active+clean; 167 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 74 op/s
Mar  1 05:06:21 np0005634532 nova_compute[257049]: 2026-03-01 10:06:21.190 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:06:21 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:06:21 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_43] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff878002490 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:06:21 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:06:21 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_42] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff864005730 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:06:21 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:06:21 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:06:21 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:06:21.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:06:21 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:06:21 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 05:06:21 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:06:21.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 05:06:21 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:06:21 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_39] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff868005150 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:06:21 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:06:22 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v717: 353 pgs: 353 active+clean; 167 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 74 op/s
Mar  1 05:06:23 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:06:23 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff88400ad90 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:06:23 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:06:23 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_43] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff878004af0 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:06:23 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:06:23 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 05:06:23 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:06:23.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 05:06:23 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:06:23 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:06:23 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:06:23.537 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:06:23 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:06:23 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_42] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff864005730 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:06:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:06:23.879 167541 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Mar  1 05:06:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:06:23.880 167541 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Mar  1 05:06:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:06:23.880 167541 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Mar  1 05:06:24 np0005634532 nova_compute[257049]: 2026-03-01 10:06:24.285 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:06:24 np0005634532 NetworkManager[49996]: <info>  [1772359584.4662] manager: (patch-br-int-to-provnet-010dfd9c-68bc-4d3c-a559-2f91b6433031): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/27)
Mar  1 05:06:24 np0005634532 NetworkManager[49996]: <info>  [1772359584.4668] device (patch-br-int-to-provnet-010dfd9c-68bc-4d3c-a559-2f91b6433031)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Mar  1 05:06:24 np0005634532 NetworkManager[49996]: <warn>  [1772359584.4671] device (patch-br-int-to-provnet-010dfd9c-68bc-4d3c-a559-2f91b6433031)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Mar  1 05:06:24 np0005634532 nova_compute[257049]: 2026-03-01 10:06:24.465 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:06:24 np0005634532 NetworkManager[49996]: <info>  [1772359584.4684] manager: (patch-provnet-010dfd9c-68bc-4d3c-a559-2f91b6433031-to-br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/28)
Mar  1 05:06:24 np0005634532 NetworkManager[49996]: <info>  [1772359584.4688] device (patch-provnet-010dfd9c-68bc-4d3c-a559-2f91b6433031-to-br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Mar  1 05:06:24 np0005634532 NetworkManager[49996]: <warn>  [1772359584.4689] device (patch-provnet-010dfd9c-68bc-4d3c-a559-2f91b6433031-to-br-int)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Mar  1 05:06:24 np0005634532 NetworkManager[49996]: <info>  [1772359584.4699] manager: (patch-provnet-010dfd9c-68bc-4d3c-a559-2f91b6433031-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/29)
Mar  1 05:06:24 np0005634532 NetworkManager[49996]: <info>  [1772359584.4707] manager: (patch-br-int-to-provnet-010dfd9c-68bc-4d3c-a559-2f91b6433031): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/30)
Mar  1 05:06:24 np0005634532 NetworkManager[49996]: <info>  [1772359584.4713] device (patch-br-int-to-provnet-010dfd9c-68bc-4d3c-a559-2f91b6433031)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Mar  1 05:06:24 np0005634532 NetworkManager[49996]: <info>  [1772359584.4717] device (patch-provnet-010dfd9c-68bc-4d3c-a559-2f91b6433031-to-br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Mar  1 05:06:24 np0005634532 nova_compute[257049]: 2026-03-01 10:06:24.482 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:06:24 np0005634532 ovn_controller[157082]: 2026-03-01T10:06:24Z|00032|binding|INFO|Releasing lport 41594761-0e96-45c0-95e4-5872d8184457 from this chassis (sb_readonly=0)
Mar  1 05:06:24 np0005634532 nova_compute[257049]: 2026-03-01 10:06:24.501 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:06:24 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v718: 353 pgs: 353 active+clean; 167 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 74 op/s
Mar  1 05:06:25 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:06:25 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_43] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff88400ad90 fd 49 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:06:25 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:06:25 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_39] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff868005150 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:06:25 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:06:25 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 05:06:25 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:06:25.242 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 05:06:25 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:06:25 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:06:25 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:06:25.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:06:25 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:06:25 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_43] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff868005150 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:06:26 np0005634532 nova_compute[257049]: 2026-03-01 10:06:26.194 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:06:26 np0005634532 ovn_controller[157082]: 2026-03-01T10:06:26Z|00004|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:9c:b7:e6 10.100.0.25
Mar  1 05:06:26 np0005634532 ovn_controller[157082]: 2026-03-01T10:06:26Z|00005|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:9c:b7:e6 10.100.0.25
Mar  1 05:06:26 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:06:26 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v719: 353 pgs: 353 active+clean; 167 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1023 B/s wr, 69 op/s
Mar  1 05:06:27 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: ::ffff:192.168.122.100 - - [01/Mar/2026:10:06:27] "GET /metrics HTTP/1.1" 200 48459 "" "Prometheus/2.51.0"
Mar  1 05:06:27 np0005634532 ceph-mgr[76134]: [prometheus INFO cherrypy.access.140604152982112] ::ffff:192.168.122.100 - - [01/Mar/2026:10:06:27] "GET /metrics HTTP/1.1" 200 48459 "" "Prometheus/2.51.0"
Mar  1 05:06:27 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:06:27.203Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Mar  1 05:06:27 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:06:27 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_35] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff85c000b60 fd 48 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:06:27 np0005634532 kernel: ganesha.nfsd[262439]: segfault at 50 ip 00007ff90e9a232e sp 00007ff86fffe210 error 4 in libntirpc.so.5.8[7ff90e987000+2c000] likely on CPU 1 (core 0, socket 1)
Mar  1 05:06:27 np0005634532 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Mar  1 05:06:27 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:06:27 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000024s ======
Mar  1 05:06:27 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:06:27.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Mar  1 05:06:27 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[228618]: 01/03/2026 10:06:27 : epoch 69a40d97 : compute-0 : ganesha.nfsd-2[svc_45] rpc :TIRPC :EVENT :svc_vc_recv: 0x7ff864005730 fd 48 proxy ignored for local
Mar  1 05:06:27 np0005634532 systemd[1]: Started Process Core Dump (PID 263077/UID 0).
Mar  1 05:06:27 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:06:27 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:06:27 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:06:27.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:06:28 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v720: 353 pgs: 353 active+clean; 200 MiB data, 348 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 133 op/s
Mar  1 05:06:29 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:06:29 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 05:06:29 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:06:29.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 05:06:29 np0005634532 nova_compute[257049]: 2026-03-01 10:06:29.288 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:06:29 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:06:29 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:06:29 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:06:29.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:06:29 np0005634532 systemd-coredump[263078]: Process 228622 (ganesha.nfsd) of user 0 dumped core.#012#012Stack trace of thread 88:#012#0  0x00007ff90e9a232e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)#012ELF object binary architecture: AMD x86-64
Mar  1 05:06:29 np0005634532 rsyslogd[1019]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Mar  1 05:06:29 np0005634532 systemd[1]: systemd-coredump@8-263077-0.service: Deactivated successfully.
Mar  1 05:06:29 np0005634532 podman[263088]: 2026-03-01 10:06:29.866560176 +0000 UTC m=+0.024008664 container died a57ebcf6750112db210220de1f025aaf61a68fa4b2b55a340c886fbd7479c05a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Mar  1 05:06:29 np0005634532 systemd[1]: var-lib-containers-storage-overlay-e45f52de3124181581ed1f03742eb2f3bf7990362f8f65484705dc440f097b55-merged.mount: Deactivated successfully.
Mar  1 05:06:29 np0005634532 podman[263088]: 2026-03-01 10:06:29.900396911 +0000 UTC m=+0.057845399 container remove a57ebcf6750112db210220de1f025aaf61a68fa4b2b55a340c886fbd7479c05a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Mar  1 05:06:29 np0005634532 systemd[1]: ceph-437b1e74-f995-5d64-af1d-257ce01d77ab@nfs.cephfs.2.0.compute-0.ljexyw.service: Main process exited, code=exited, status=139/n/a
Mar  1 05:06:30 np0005634532 systemd[1]: ceph-437b1e74-f995-5d64-af1d-257ce01d77ab@nfs.cephfs.2.0.compute-0.ljexyw.service: Failed with result 'exit-code'.
Mar  1 05:06:30 np0005634532 systemd[1]: ceph-437b1e74-f995-5d64-af1d-257ce01d77ab@nfs.cephfs.2.0.compute-0.ljexyw.service: Consumed 1.800s CPU time.
Mar  1 05:06:30 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v721: 353 pgs: 353 active+clean; 200 MiB data, 348 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.1 MiB/s wr, 120 op/s
Mar  1 05:06:31 np0005634532 nova_compute[257049]: 2026-03-01 10:06:31.199 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:06:31 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:06:31 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:06:31 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:06:31.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:06:31 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:06:31 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:06:31 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:06:31.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:06:31 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:06:32 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Mar  1 05:06:32 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Mar  1 05:06:32 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v722: 353 pgs: 353 active+clean; 200 MiB data, 348 MiB used, 60 GiB / 60 GiB avail; 356 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Mar  1 05:06:33 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:06:33 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:06:33 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:06:33.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:06:33 np0005634532 podman[263136]: 2026-03-01 10:06:33.411504621 +0000 UTC m=+0.100185605 container health_status eba5e7c5bf1cb0694498c3b5d0f9b901dec36f2482ec40aef98fed1e15d9f18d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:bf1f44b6bfa655654f556ae3adca2582892e72550d916a5b0a8dbacb0e210bbe, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '08b9467e1b7e95537191fb2fa6825d176e65d7d6be048e9a0925db064969bbde-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:bf1f44b6bfa655654f556ae3adca2582892e72550d916a5b0a8dbacb0e210bbe', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_controller, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Mar  1 05:06:33 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:06:33 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:06:33 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:06:33.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:06:34 np0005634532 nova_compute[257049]: 2026-03-01 10:06:34.329 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:06:34 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v723: 353 pgs: 353 active+clean; 200 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 356 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Mar  1 05:06:34 np0005634532 nova_compute[257049]: 2026-03-01 10:06:34.976 257053 DEBUG oslo_service.periodic_task [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Mar  1 05:06:35 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:06:35 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:06:35 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:06:35.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:06:35 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:06:35 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:06:35 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:06:35.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:06:35 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-haproxy-nfs-cephfs-compute-0-wdbjdw[99156]: [WARNING] 059/100635 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Mar  1 05:06:35 np0005634532 nova_compute[257049]: 2026-03-01 10:06:35.610 257053 DEBUG oslo_concurrency.lockutils [None req-efe45325-9221-470e-b0ea-28096bfe1ed7 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Acquiring lock "40dfeea3-c0b1-49c0-959b-7a08ceb7035c" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Mar  1 05:06:35 np0005634532 nova_compute[257049]: 2026-03-01 10:06:35.611 257053 DEBUG oslo_concurrency.lockutils [None req-efe45325-9221-470e-b0ea-28096bfe1ed7 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Lock "40dfeea3-c0b1-49c0-959b-7a08ceb7035c" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Mar  1 05:06:35 np0005634532 nova_compute[257049]: 2026-03-01 10:06:35.611 257053 DEBUG oslo_concurrency.lockutils [None req-efe45325-9221-470e-b0ea-28096bfe1ed7 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Acquiring lock "40dfeea3-c0b1-49c0-959b-7a08ceb7035c-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Mar  1 05:06:35 np0005634532 nova_compute[257049]: 2026-03-01 10:06:35.612 257053 DEBUG oslo_concurrency.lockutils [None req-efe45325-9221-470e-b0ea-28096bfe1ed7 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Lock "40dfeea3-c0b1-49c0-959b-7a08ceb7035c-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Mar  1 05:06:35 np0005634532 nova_compute[257049]: 2026-03-01 10:06:35.613 257053 DEBUG oslo_concurrency.lockutils [None req-efe45325-9221-470e-b0ea-28096bfe1ed7 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Lock "40dfeea3-c0b1-49c0-959b-7a08ceb7035c-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Mar  1 05:06:35 np0005634532 nova_compute[257049]: 2026-03-01 10:06:35.615 257053 INFO nova.compute.manager [None req-efe45325-9221-470e-b0ea-28096bfe1ed7 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] [instance: 40dfeea3-c0b1-49c0-959b-7a08ceb7035c] Terminating instance#033[00m
Mar  1 05:06:35 np0005634532 nova_compute[257049]: 2026-03-01 10:06:35.617 257053 DEBUG nova.compute.manager [None req-efe45325-9221-470e-b0ea-28096bfe1ed7 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] [instance: 40dfeea3-c0b1-49c0-959b-7a08ceb7035c] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Mar  1 05:06:35 np0005634532 kernel: tap18710daa-8d (unregistering): left promiscuous mode
Mar  1 05:06:35 np0005634532 NetworkManager[49996]: <info>  [1772359595.6688] device (tap18710daa-8d): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Mar  1 05:06:35 np0005634532 ovn_controller[157082]: 2026-03-01T10:06:35Z|00033|binding|INFO|Releasing lport 18710daa-8d5e-46b6-b666-18b4e461fca4 from this chassis (sb_readonly=0)
Mar  1 05:06:35 np0005634532 ovn_controller[157082]: 2026-03-01T10:06:35Z|00034|binding|INFO|Setting lport 18710daa-8d5e-46b6-b666-18b4e461fca4 down in Southbound
Mar  1 05:06:35 np0005634532 ovn_controller[157082]: 2026-03-01T10:06:35Z|00035|binding|INFO|Removing iface tap18710daa-8d ovn-installed in OVS
Mar  1 05:06:35 np0005634532 nova_compute[257049]: 2026-03-01 10:06:35.696 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:06:35 np0005634532 nova_compute[257049]: 2026-03-01 10:06:35.699 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:06:35 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:06:35.704 167541 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9c:b7:e6 10.100.0.25'], port_security=['fa:16:3e:9c:b7:e6 10.100.0.25'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.25/28', 'neutron:device_id': '40dfeea3-c0b1-49c0-959b-7a08ceb7035c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a5627193-ae81-4a0c-8614-ca8ee1d557da', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'aa1916e2334f470ea8eeda213ef84cc5', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'ebfe9084-f9e3-42d2-aab8-330ac8777edd', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=10c931ba-9a67-46c6-85b5-09252a69e0b7, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f611def4670>], logical_port=18710daa-8d5e-46b6-b666-18b4e461fca4) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f611def4670>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Mar  1 05:06:35 np0005634532 nova_compute[257049]: 2026-03-01 10:06:35.706 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:06:35 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:06:35.708 167541 INFO neutron.agent.ovn.metadata.agent [-] Port 18710daa-8d5e-46b6-b666-18b4e461fca4 in datapath a5627193-ae81-4a0c-8614-ca8ee1d557da unbound from our chassis#033[00m
Mar  1 05:06:35 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:06:35.710 167541 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network a5627193-ae81-4a0c-8614-ca8ee1d557da, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Mar  1 05:06:35 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:06:35.714 262878 DEBUG oslo.privsep.daemon [-] privsep: reply[46ff6522-c5b1-4f66-82b1-5032abb9d82d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Mar  1 05:06:35 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:06:35.714 167541 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-a5627193-ae81-4a0c-8614-ca8ee1d557da namespace which is not needed anymore#033[00m
Mar  1 05:06:35 np0005634532 systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000002.scope: Deactivated successfully.
Mar  1 05:06:35 np0005634532 systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000002.scope: Consumed 14.216s CPU time.
Mar  1 05:06:35 np0005634532 systemd-machined[221390]: Machine qemu-1-instance-00000002 terminated.
Mar  1 05:06:35 np0005634532 nova_compute[257049]: 2026-03-01 10:06:35.836 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:06:35 np0005634532 nova_compute[257049]: 2026-03-01 10:06:35.843 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:06:35 np0005634532 nova_compute[257049]: 2026-03-01 10:06:35.849 257053 INFO nova.virt.libvirt.driver [-] [instance: 40dfeea3-c0b1-49c0-959b-7a08ceb7035c] Instance destroyed successfully.#033[00m
Mar  1 05:06:35 np0005634532 nova_compute[257049]: 2026-03-01 10:06:35.849 257053 DEBUG nova.objects.instance [None req-efe45325-9221-470e-b0ea-28096bfe1ed7 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Lazy-loading 'resources' on Instance uuid 40dfeea3-c0b1-49c0-959b-7a08ceb7035c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Mar  1 05:06:35 np0005634532 neutron-haproxy-ovnmeta-a5627193-ae81-4a0c-8614-ca8ee1d557da[263051]: [NOTICE]   (263055) : haproxy version is 2.8.14-c23fe91
Mar  1 05:06:35 np0005634532 neutron-haproxy-ovnmeta-a5627193-ae81-4a0c-8614-ca8ee1d557da[263051]: [NOTICE]   (263055) : path to executable is /usr/sbin/haproxy
Mar  1 05:06:35 np0005634532 neutron-haproxy-ovnmeta-a5627193-ae81-4a0c-8614-ca8ee1d557da[263051]: [WARNING]  (263055) : Exiting Master process...
Mar  1 05:06:35 np0005634532 neutron-haproxy-ovnmeta-a5627193-ae81-4a0c-8614-ca8ee1d557da[263051]: [WARNING]  (263055) : Exiting Master process...
Mar  1 05:06:35 np0005634532 neutron-haproxy-ovnmeta-a5627193-ae81-4a0c-8614-ca8ee1d557da[263051]: [ALERT]    (263055) : Current worker (263057) exited with code 143 (Terminated)
Mar  1 05:06:35 np0005634532 neutron-haproxy-ovnmeta-a5627193-ae81-4a0c-8614-ca8ee1d557da[263051]: [WARNING]  (263055) : All workers exited. Exiting... (0)
Mar  1 05:06:35 np0005634532 systemd[1]: libpod-ae56e22f178162fd05544340ff1f9072acc939c8a6d0be49600b69ba8824613c.scope: Deactivated successfully.
Mar  1 05:06:35 np0005634532 podman[263189]: 2026-03-01 10:06:35.862272555 +0000 UTC m=+0.053333608 container died ae56e22f178162fd05544340ff1f9072acc939c8a6d0be49600b69ba8824613c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a5baae9e21dffd42c66f546b0b62f95c6af9f409d05e01e7483de42d5dd2a382, name=neutron-haproxy-ovnmeta-a5627193-ae81-4a0c-8614-ca8ee1d557da, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Mar  1 05:06:35 np0005634532 nova_compute[257049]: 2026-03-01 10:06:35.862 257053 DEBUG nova.virt.libvirt.vif [None req-efe45325-9221-470e-b0ea-28096bfe1ed7 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-03-01T10:06:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1526607679',display_name='tempest-TestNetworkBasicOps-server-1526607679',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1526607679',id=2,image_ref='07f64171-cfd1-4482-a545-07063cf7c3f2',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPOo6IiOdgl08MDX0YTwpAsCaTPrYIkzkU1Ftv4CN2J5/2ENMci/xJ9cEgaU2o/8KJxbYBsQwJafBOlW5S2iIz7UCJ7gVSyLn/I+QptJTMWQZLaNk8wlBiSePC39pcVr5w==',key_name='tempest-TestNetworkBasicOps-1335120467',keypairs=<?>,launch_index=0,launched_at=2026-03-01T10:06:14Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='aa1916e2334f470ea8eeda213ef84cc5',ramdisk_id='',reservation_id='r-9w4oienf',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='07f64171-cfd1-4482-a545-07063cf7c3f2',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1700707940',owner_user_name='tempest-TestNetworkBasicOps-1700707940-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-03-01T10:06:14Z,user_data=None,user_id='054b4e3fa290475c906614f7e45d128f',uuid=40dfeea3-c0b1-49c0-959b-7a08ceb7035c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "18710daa-8d5e-46b6-b666-18b4e461fca4", "address": "fa:16:3e:9c:b7:e6", "network": {"id": "a5627193-ae81-4a0c-8614-ca8ee1d557da", "bridge": "br-int", "label": "tempest-network-smoke--1604495469", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.25", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aa1916e2334f470ea8eeda213ef84cc5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap18710daa-8d", "ovs_interfaceid": "18710daa-8d5e-46b6-b666-18b4e461fca4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Mar  1 05:06:35 np0005634532 nova_compute[257049]: 2026-03-01 10:06:35.863 257053 DEBUG nova.network.os_vif_util [None req-efe45325-9221-470e-b0ea-28096bfe1ed7 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Converting VIF {"id": "18710daa-8d5e-46b6-b666-18b4e461fca4", "address": "fa:16:3e:9c:b7:e6", "network": {"id": "a5627193-ae81-4a0c-8614-ca8ee1d557da", "bridge": "br-int", "label": "tempest-network-smoke--1604495469", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.25", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aa1916e2334f470ea8eeda213ef84cc5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap18710daa-8d", "ovs_interfaceid": "18710daa-8d5e-46b6-b666-18b4e461fca4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Mar  1 05:06:35 np0005634532 nova_compute[257049]: 2026-03-01 10:06:35.864 257053 DEBUG nova.network.os_vif_util [None req-efe45325-9221-470e-b0ea-28096bfe1ed7 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9c:b7:e6,bridge_name='br-int',has_traffic_filtering=True,id=18710daa-8d5e-46b6-b666-18b4e461fca4,network=Network(a5627193-ae81-4a0c-8614-ca8ee1d557da),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap18710daa-8d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Mar  1 05:06:35 np0005634532 nova_compute[257049]: 2026-03-01 10:06:35.865 257053 DEBUG os_vif [None req-efe45325-9221-470e-b0ea-28096bfe1ed7 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:9c:b7:e6,bridge_name='br-int',has_traffic_filtering=True,id=18710daa-8d5e-46b6-b666-18b4e461fca4,network=Network(a5627193-ae81-4a0c-8614-ca8ee1d557da),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap18710daa-8d') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Mar  1 05:06:35 np0005634532 nova_compute[257049]: 2026-03-01 10:06:35.866 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:06:35 np0005634532 nova_compute[257049]: 2026-03-01 10:06:35.867 257053 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap18710daa-8d, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Mar  1 05:06:35 np0005634532 nova_compute[257049]: 2026-03-01 10:06:35.870 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:06:35 np0005634532 nova_compute[257049]: 2026-03-01 10:06:35.873 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Mar  1 05:06:35 np0005634532 nova_compute[257049]: 2026-03-01 10:06:35.876 257053 INFO os_vif [None req-efe45325-9221-470e-b0ea-28096bfe1ed7 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:9c:b7:e6,bridge_name='br-int',has_traffic_filtering=True,id=18710daa-8d5e-46b6-b666-18b4e461fca4,network=Network(a5627193-ae81-4a0c-8614-ca8ee1d557da),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap18710daa-8d')#033[00m
Mar  1 05:06:35 np0005634532 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ae56e22f178162fd05544340ff1f9072acc939c8a6d0be49600b69ba8824613c-userdata-shm.mount: Deactivated successfully.
Mar  1 05:06:35 np0005634532 systemd[1]: var-lib-containers-storage-overlay-7a0bec56f177cc61816f9cd85215692be7dffdee2aa2fe12fe7ac5abed64a248-merged.mount: Deactivated successfully.
Mar  1 05:06:35 np0005634532 podman[263189]: 2026-03-01 10:06:35.899599087 +0000 UTC m=+0.090660140 container cleanup ae56e22f178162fd05544340ff1f9072acc939c8a6d0be49600b69ba8824613c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a5baae9e21dffd42c66f546b0b62f95c6af9f409d05e01e7483de42d5dd2a382, name=neutron-haproxy-ovnmeta-a5627193-ae81-4a0c-8614-ca8ee1d557da, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image)
Mar  1 05:06:35 np0005634532 systemd[1]: libpod-conmon-ae56e22f178162fd05544340ff1f9072acc939c8a6d0be49600b69ba8824613c.scope: Deactivated successfully.
Mar  1 05:06:35 np0005634532 podman[263243]: 2026-03-01 10:06:35.96613715 +0000 UTC m=+0.046914859 container remove ae56e22f178162fd05544340ff1f9072acc939c8a6d0be49600b69ba8824613c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a5baae9e21dffd42c66f546b0b62f95c6af9f409d05e01e7483de42d5dd2a382, name=neutron-haproxy-ovnmeta-a5627193-ae81-4a0c-8614-ca8ee1d557da, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.vendor=CentOS)
Mar  1 05:06:35 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:06:35.970 262878 DEBUG oslo.privsep.daemon [-] privsep: reply[e5594b09-f91c-4316-b9af-f74fceee73cf]: (4, ('Sun Mar  1 10:06:35 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-a5627193-ae81-4a0c-8614-ca8ee1d557da (ae56e22f178162fd05544340ff1f9072acc939c8a6d0be49600b69ba8824613c)\nae56e22f178162fd05544340ff1f9072acc939c8a6d0be49600b69ba8824613c\nSun Mar  1 10:06:35 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-a5627193-ae81-4a0c-8614-ca8ee1d557da (ae56e22f178162fd05544340ff1f9072acc939c8a6d0be49600b69ba8824613c)\nae56e22f178162fd05544340ff1f9072acc939c8a6d0be49600b69ba8824613c\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Mar  1 05:06:35 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:06:35.972 262878 DEBUG oslo.privsep.daemon [-] privsep: reply[67c0ab2f-aa5d-4f30-bbb7-a9aeb47fe50f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Mar  1 05:06:35 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:06:35.973 167541 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa5627193-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Mar  1 05:06:35 np0005634532 nova_compute[257049]: 2026-03-01 10:06:35.974 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:06:35 np0005634532 kernel: tapa5627193-a0: left promiscuous mode
Mar  1 05:06:35 np0005634532 nova_compute[257049]: 2026-03-01 10:06:35.976 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:06:35 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:06:35.978 262878 DEBUG oslo.privsep.daemon [-] privsep: reply[0170d329-1ec6-48cf-b2aa-f28423cea7d0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Mar  1 05:06:35 np0005634532 nova_compute[257049]: 2026-03-01 10:06:35.980 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:06:35 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:06:35.994 262878 DEBUG oslo.privsep.daemon [-] privsep: reply[dafc3fe1-9ba8-444d-852c-1e1107ee1a4b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Mar  1 05:06:35 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:06:35.996 262878 DEBUG oslo.privsep.daemon [-] privsep: reply[12864271-9d55-464f-bfac-ae2695e4ec3f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Mar  1 05:06:36 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:06:36.007 262878 DEBUG oslo.privsep.daemon [-] privsep: reply[a51761bd-b162-4339-b634-508b3c529ed4]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 393339, 'reachable_time': 39553, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 263285, 'error': None, 'target': 'ovnmeta-a5627193-ae81-4a0c-8614-ca8ee1d557da', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Mar  1 05:06:36 np0005634532 systemd[1]: run-netns-ovnmeta\x2da5627193\x2dae81\x2d4a0c\x2d8614\x2dca8ee1d557da.mount: Deactivated successfully.
Mar  1 05:06:36 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:06:36.017 167914 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-a5627193-ae81-4a0c-8614-ca8ee1d557da deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Mar  1 05:06:36 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:06:36.018 167914 DEBUG oslo.privsep.daemon [-] privsep: reply[73c1b568-d11e-4e2a-a11e-42d072f97eb8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Mar  1 05:06:36 np0005634532 nova_compute[257049]: 2026-03-01 10:06:36.424 257053 INFO nova.virt.libvirt.driver [None req-efe45325-9221-470e-b0ea-28096bfe1ed7 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] [instance: 40dfeea3-c0b1-49c0-959b-7a08ceb7035c] Deleting instance files /var/lib/nova/instances/40dfeea3-c0b1-49c0-959b-7a08ceb7035c_del#033[00m
Mar  1 05:06:36 np0005634532 nova_compute[257049]: 2026-03-01 10:06:36.425 257053 INFO nova.virt.libvirt.driver [None req-efe45325-9221-470e-b0ea-28096bfe1ed7 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] [instance: 40dfeea3-c0b1-49c0-959b-7a08ceb7035c] Deletion of /var/lib/nova/instances/40dfeea3-c0b1-49c0-959b-7a08ceb7035c_del complete#033[00m
Mar  1 05:06:36 np0005634532 nova_compute[257049]: 2026-03-01 10:06:36.521 257053 DEBUG nova.compute.manager [req-9fd41610-8917-4b39-87a2-bb86c49704de req-f1b132f8-d8e2-42e9-b42c-b15f6a3fefbc 4172bfcb2ac44ccc905f3929f41a6ec0 b5baddd982154c5d8ca5431b102a7ecd - - default default] [instance: 40dfeea3-c0b1-49c0-959b-7a08ceb7035c] Received event network-vif-unplugged-18710daa-8d5e-46b6-b666-18b4e461fca4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Mar  1 05:06:36 np0005634532 nova_compute[257049]: 2026-03-01 10:06:36.522 257053 DEBUG oslo_concurrency.lockutils [req-9fd41610-8917-4b39-87a2-bb86c49704de req-f1b132f8-d8e2-42e9-b42c-b15f6a3fefbc 4172bfcb2ac44ccc905f3929f41a6ec0 b5baddd982154c5d8ca5431b102a7ecd - - default default] Acquiring lock "40dfeea3-c0b1-49c0-959b-7a08ceb7035c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Mar  1 05:06:36 np0005634532 nova_compute[257049]: 2026-03-01 10:06:36.522 257053 DEBUG oslo_concurrency.lockutils [req-9fd41610-8917-4b39-87a2-bb86c49704de req-f1b132f8-d8e2-42e9-b42c-b15f6a3fefbc 4172bfcb2ac44ccc905f3929f41a6ec0 b5baddd982154c5d8ca5431b102a7ecd - - default default] Lock "40dfeea3-c0b1-49c0-959b-7a08ceb7035c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Mar  1 05:06:36 np0005634532 nova_compute[257049]: 2026-03-01 10:06:36.522 257053 DEBUG oslo_concurrency.lockutils [req-9fd41610-8917-4b39-87a2-bb86c49704de req-f1b132f8-d8e2-42e9-b42c-b15f6a3fefbc 4172bfcb2ac44ccc905f3929f41a6ec0 b5baddd982154c5d8ca5431b102a7ecd - - default default] Lock "40dfeea3-c0b1-49c0-959b-7a08ceb7035c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Mar  1 05:06:36 np0005634532 nova_compute[257049]: 2026-03-01 10:06:36.523 257053 DEBUG nova.compute.manager [req-9fd41610-8917-4b39-87a2-bb86c49704de req-f1b132f8-d8e2-42e9-b42c-b15f6a3fefbc 4172bfcb2ac44ccc905f3929f41a6ec0 b5baddd982154c5d8ca5431b102a7ecd - - default default] [instance: 40dfeea3-c0b1-49c0-959b-7a08ceb7035c] No waiting events found dispatching network-vif-unplugged-18710daa-8d5e-46b6-b666-18b4e461fca4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Mar  1 05:06:36 np0005634532 nova_compute[257049]: 2026-03-01 10:06:36.523 257053 DEBUG nova.compute.manager [req-9fd41610-8917-4b39-87a2-bb86c49704de req-f1b132f8-d8e2-42e9-b42c-b15f6a3fefbc 4172bfcb2ac44ccc905f3929f41a6ec0 b5baddd982154c5d8ca5431b102a7ecd - - default default] [instance: 40dfeea3-c0b1-49c0-959b-7a08ceb7035c] Received event network-vif-unplugged-18710daa-8d5e-46b6-b666-18b4e461fca4 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Mar  1 05:06:36 np0005634532 nova_compute[257049]: 2026-03-01 10:06:36.574 257053 DEBUG nova.virt.libvirt.host [None req-efe45325-9221-470e-b0ea-28096bfe1ed7 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Checking UEFI support for host arch (x86_64) supports_uefi /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1754#033[00m
Mar  1 05:06:36 np0005634532 nova_compute[257049]: 2026-03-01 10:06:36.575 257053 INFO nova.virt.libvirt.host [None req-efe45325-9221-470e-b0ea-28096bfe1ed7 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] UEFI support detected#033[00m
Mar  1 05:06:36 np0005634532 nova_compute[257049]: 2026-03-01 10:06:36.577 257053 INFO nova.compute.manager [None req-efe45325-9221-470e-b0ea-28096bfe1ed7 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] [instance: 40dfeea3-c0b1-49c0-959b-7a08ceb7035c] Took 0.96 seconds to destroy the instance on the hypervisor.#033[00m
Mar  1 05:06:36 np0005634532 nova_compute[257049]: 2026-03-01 10:06:36.578 257053 DEBUG oslo.service.loopingcall [None req-efe45325-9221-470e-b0ea-28096bfe1ed7 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Mar  1 05:06:36 np0005634532 nova_compute[257049]: 2026-03-01 10:06:36.579 257053 DEBUG nova.compute.manager [-] [instance: 40dfeea3-c0b1-49c0-959b-7a08ceb7035c] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Mar  1 05:06:36 np0005634532 nova_compute[257049]: 2026-03-01 10:06:36.580 257053 DEBUG nova.network.neutron [-] [instance: 40dfeea3-c0b1-49c0-959b-7a08ceb7035c] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Mar  1 05:06:36 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:06:36 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v724: 353 pgs: 353 active+clean; 200 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 356 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Mar  1 05:06:36 np0005634532 nova_compute[257049]: 2026-03-01 10:06:36.976 257053 DEBUG oslo_service.periodic_task [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Mar  1 05:06:36 np0005634532 nova_compute[257049]: 2026-03-01 10:06:36.977 257053 DEBUG oslo_service.periodic_task [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Mar  1 05:06:37 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: ::ffff:192.168.122.100 - - [01/Mar/2026:10:06:37] "GET /metrics HTTP/1.1" 200 48453 "" "Prometheus/2.51.0"
Mar  1 05:06:37 np0005634532 ceph-mgr[76134]: [prometheus INFO cherrypy.access.140604152982112] ::ffff:192.168.122.100 - - [01/Mar/2026:10:06:37] "GET /metrics HTTP/1.1" 200 48453 "" "Prometheus/2.51.0"
Mar  1 05:06:37 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:06:37.204Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Mar  1 05:06:37 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:06:37 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 05:06:37 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:06:37.257 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 05:06:37 np0005634532 podman[263289]: 2026-03-01 10:06:37.415281289 +0000 UTC m=+0.095573792 container health_status 1df4dec302d8b40bce93714986cfc892519e8a31436399020d0034ae17ac64d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a5baae9e21dffd42c66f546b0b62f95c6af9f409d05e01e7483de42d5dd2a382, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '08b9467e1b7e95537191fb2fa6825d176e65d7d6be048e9a0925db064969bbde-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a5baae9e21dffd42c66f546b0b62f95c6af9f409d05e01e7483de42d5dd2a382', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Mar  1 05:06:37 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:06:37 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000024s ======
Mar  1 05:06:37 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:06:37.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Mar  1 05:06:37 np0005634532 nova_compute[257049]: 2026-03-01 10:06:37.753 257053 DEBUG nova.network.neutron [-] [instance: 40dfeea3-c0b1-49c0-959b-7a08ceb7035c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Mar  1 05:06:37 np0005634532 nova_compute[257049]: 2026-03-01 10:06:37.776 257053 INFO nova.compute.manager [-] [instance: 40dfeea3-c0b1-49c0-959b-7a08ceb7035c] Took 1.20 seconds to deallocate network for instance.#033[00m
Mar  1 05:06:37 np0005634532 nova_compute[257049]: 2026-03-01 10:06:37.839 257053 DEBUG oslo_concurrency.lockutils [None req-efe45325-9221-470e-b0ea-28096bfe1ed7 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Mar  1 05:06:37 np0005634532 nova_compute[257049]: 2026-03-01 10:06:37.840 257053 DEBUG oslo_concurrency.lockutils [None req-efe45325-9221-470e-b0ea-28096bfe1ed7 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Mar  1 05:06:37 np0005634532 nova_compute[257049]: 2026-03-01 10:06:37.976 257053 DEBUG oslo_service.periodic_task [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Mar  1 05:06:37 np0005634532 nova_compute[257049]: 2026-03-01 10:06:37.977 257053 DEBUG oslo_service.periodic_task [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Mar  1 05:06:37 np0005634532 nova_compute[257049]: 2026-03-01 10:06:37.977 257053 DEBUG nova.compute.manager [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Mar  1 05:06:37 np0005634532 nova_compute[257049]: 2026-03-01 10:06:37.977 257053 DEBUG oslo_service.periodic_task [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Mar  1 05:06:38 np0005634532 nova_compute[257049]: 2026-03-01 10:06:38.186 257053 DEBUG oslo_concurrency.processutils [None req-efe45325-9221-470e-b0ea-28096bfe1ed7 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Mar  1 05:06:38 np0005634532 nova_compute[257049]: 2026-03-01 10:06:38.209 257053 DEBUG oslo_concurrency.lockutils [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Mar  1 05:06:38 np0005634532 nova_compute[257049]: 2026-03-01 10:06:38.618 257053 DEBUG nova.compute.manager [req-994a6c5d-2525-47b6-b9ad-16b9094efdfe req-02675ab7-c68e-4a34-9a8e-1ad5b70ec1e5 4172bfcb2ac44ccc905f3929f41a6ec0 b5baddd982154c5d8ca5431b102a7ecd - - default default] [instance: 40dfeea3-c0b1-49c0-959b-7a08ceb7035c] Received event network-vif-plugged-18710daa-8d5e-46b6-b666-18b4e461fca4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Mar  1 05:06:38 np0005634532 nova_compute[257049]: 2026-03-01 10:06:38.619 257053 DEBUG oslo_concurrency.lockutils [req-994a6c5d-2525-47b6-b9ad-16b9094efdfe req-02675ab7-c68e-4a34-9a8e-1ad5b70ec1e5 4172bfcb2ac44ccc905f3929f41a6ec0 b5baddd982154c5d8ca5431b102a7ecd - - default default] Acquiring lock "40dfeea3-c0b1-49c0-959b-7a08ceb7035c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Mar  1 05:06:38 np0005634532 nova_compute[257049]: 2026-03-01 10:06:38.619 257053 DEBUG oslo_concurrency.lockutils [req-994a6c5d-2525-47b6-b9ad-16b9094efdfe req-02675ab7-c68e-4a34-9a8e-1ad5b70ec1e5 4172bfcb2ac44ccc905f3929f41a6ec0 b5baddd982154c5d8ca5431b102a7ecd - - default default] Lock "40dfeea3-c0b1-49c0-959b-7a08ceb7035c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Mar  1 05:06:38 np0005634532 nova_compute[257049]: 2026-03-01 10:06:38.619 257053 DEBUG oslo_concurrency.lockutils [req-994a6c5d-2525-47b6-b9ad-16b9094efdfe req-02675ab7-c68e-4a34-9a8e-1ad5b70ec1e5 4172bfcb2ac44ccc905f3929f41a6ec0 b5baddd982154c5d8ca5431b102a7ecd - - default default] Lock "40dfeea3-c0b1-49c0-959b-7a08ceb7035c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Mar  1 05:06:38 np0005634532 nova_compute[257049]: 2026-03-01 10:06:38.620 257053 DEBUG nova.compute.manager [req-994a6c5d-2525-47b6-b9ad-16b9094efdfe req-02675ab7-c68e-4a34-9a8e-1ad5b70ec1e5 4172bfcb2ac44ccc905f3929f41a6ec0 b5baddd982154c5d8ca5431b102a7ecd - - default default] [instance: 40dfeea3-c0b1-49c0-959b-7a08ceb7035c] No waiting events found dispatching network-vif-plugged-18710daa-8d5e-46b6-b666-18b4e461fca4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Mar  1 05:06:38 np0005634532 nova_compute[257049]: 2026-03-01 10:06:38.620 257053 WARNING nova.compute.manager [req-994a6c5d-2525-47b6-b9ad-16b9094efdfe req-02675ab7-c68e-4a34-9a8e-1ad5b70ec1e5 4172bfcb2ac44ccc905f3929f41a6ec0 b5baddd982154c5d8ca5431b102a7ecd - - default default] [instance: 40dfeea3-c0b1-49c0-959b-7a08ceb7035c] Received unexpected event network-vif-plugged-18710daa-8d5e-46b6-b666-18b4e461fca4 for instance with vm_state deleted and task_state None.#033[00m
Mar  1 05:06:38 np0005634532 nova_compute[257049]: 2026-03-01 10:06:38.620 257053 DEBUG nova.compute.manager [req-994a6c5d-2525-47b6-b9ad-16b9094efdfe req-02675ab7-c68e-4a34-9a8e-1ad5b70ec1e5 4172bfcb2ac44ccc905f3929f41a6ec0 b5baddd982154c5d8ca5431b102a7ecd - - default default] [instance: 40dfeea3-c0b1-49c0-959b-7a08ceb7035c] Received event network-vif-deleted-18710daa-8d5e-46b6-b666-18b4e461fca4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Mar  1 05:06:38 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Mar  1 05:06:38 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/305445848' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Mar  1 05:06:38 np0005634532 nova_compute[257049]: 2026-03-01 10:06:38.668 257053 DEBUG oslo_concurrency.processutils [None req-efe45325-9221-470e-b0ea-28096bfe1ed7 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.483s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Mar  1 05:06:38 np0005634532 nova_compute[257049]: 2026-03-01 10:06:38.675 257053 DEBUG nova.compute.provider_tree [None req-efe45325-9221-470e-b0ea-28096bfe1ed7 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Updating inventory in ProviderTree for provider 018d246d-1e01-4168-9128-598c5501111b with inventory: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Mar  1 05:06:38 np0005634532 nova_compute[257049]: 2026-03-01 10:06:38.705 257053 ERROR nova.scheduler.client.report [None req-efe45325-9221-470e-b0ea-28096bfe1ed7 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] [req-2ade885d-9054-4604-a39d-090f1252f794] Failed to update inventory to [{'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}}] for resource provider with UUID 018d246d-1e01-4168-9128-598c5501111b.  Got 409: {"errors": [{"status": 409, "title": "Conflict", "detail": "There was a conflict when trying to complete your request.\n\n resource provider generation conflict  ", "code": "placement.concurrent_update", "request_id": "req-2ade885d-9054-4604-a39d-090f1252f794"}]}#033[00m
Mar  1 05:06:38 np0005634532 nova_compute[257049]: 2026-03-01 10:06:38.723 257053 DEBUG nova.scheduler.client.report [None req-efe45325-9221-470e-b0ea-28096bfe1ed7 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Refreshing inventories for resource provider 018d246d-1e01-4168-9128-598c5501111b _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Mar  1 05:06:38 np0005634532 nova_compute[257049]: 2026-03-01 10:06:38.745 257053 DEBUG nova.scheduler.client.report [None req-efe45325-9221-470e-b0ea-28096bfe1ed7 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Updating ProviderTree inventory for provider 018d246d-1e01-4168-9128-598c5501111b from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Mar  1 05:06:38 np0005634532 nova_compute[257049]: 2026-03-01 10:06:38.746 257053 DEBUG nova.compute.provider_tree [None req-efe45325-9221-470e-b0ea-28096bfe1ed7 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Updating inventory in ProviderTree for provider 018d246d-1e01-4168-9128-598c5501111b with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Mar  1 05:06:38 np0005634532 nova_compute[257049]: 2026-03-01 10:06:38.765 257053 DEBUG nova.scheduler.client.report [None req-efe45325-9221-470e-b0ea-28096bfe1ed7 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Refreshing aggregate associations for resource provider 018d246d-1e01-4168-9128-598c5501111b, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Mar  1 05:06:38 np0005634532 nova_compute[257049]: 2026-03-01 10:06:38.784 257053 DEBUG nova.scheduler.client.report [None req-efe45325-9221-470e-b0ea-28096bfe1ed7 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Refreshing trait associations for resource provider 018d246d-1e01-4168-9128-598c5501111b, traits: COMPUTE_SECURITY_TPM_1_2,COMPUTE_ACCELERATORS,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_F16C,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_SSE42,HW_CPU_X86_SVM,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_BMI2,HW_CPU_X86_BMI,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_SSSE3,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE2,HW_CPU_X86_CLMUL,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NODE,HW_CPU_X86_SSE,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_FMA3,HW_CPU_X86_AMD_SVM,COMPUTE_RESCUE_BFV,HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_ABM,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_TRUSTED_CERTS,COMPUTE_STORAGE_BUS_SATA,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_AESNI,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_MMX,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_AVX2,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_USB,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SHA,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_SSE41,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_SSE4A,COMPUTE_SECURITY_TPM_2_0,COMPUTE_STORAGE_BUS_IDE,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_PCNET _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Mar  1 05:06:38 np0005634532 nova_compute[257049]: 2026-03-01 10:06:38.814 257053 DEBUG oslo_concurrency.processutils [None req-efe45325-9221-470e-b0ea-28096bfe1ed7 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Mar  1 05:06:38 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v725: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 374 KiB/s rd, 2.1 MiB/s wr, 93 op/s
Mar  1 05:06:39 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Mar  1 05:06:39 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3385209951' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Mar  1 05:06:39 np0005634532 nova_compute[257049]: 2026-03-01 10:06:39.235 257053 DEBUG oslo_concurrency.processutils [None req-efe45325-9221-470e-b0ea-28096bfe1ed7 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.420s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Mar  1 05:06:39 np0005634532 nova_compute[257049]: 2026-03-01 10:06:39.240 257053 DEBUG nova.compute.provider_tree [None req-efe45325-9221-470e-b0ea-28096bfe1ed7 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Updating inventory in ProviderTree for provider 018d246d-1e01-4168-9128-598c5501111b with inventory: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Mar  1 05:06:39 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:06:39 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:06:39 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:06:39.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:06:39 np0005634532 nova_compute[257049]: 2026-03-01 10:06:39.290 257053 DEBUG nova.scheduler.client.report [None req-efe45325-9221-470e-b0ea-28096bfe1ed7 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Updated inventory for provider 018d246d-1e01-4168-9128-598c5501111b with generation 3 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957
Mar  1 05:06:39 np0005634532 nova_compute[257049]: 2026-03-01 10:06:39.291 257053 DEBUG nova.compute.provider_tree [None req-efe45325-9221-470e-b0ea-28096bfe1ed7 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Updating resource provider 018d246d-1e01-4168-9128-598c5501111b generation from 3 to 4 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Mar  1 05:06:39 np0005634532 nova_compute[257049]: 2026-03-01 10:06:39.291 257053 DEBUG nova.compute.provider_tree [None req-efe45325-9221-470e-b0ea-28096bfe1ed7 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Updating inventory in ProviderTree for provider 018d246d-1e01-4168-9128-598c5501111b with inventory: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Mar  1 05:06:39 np0005634532 nova_compute[257049]: 2026-03-01 10:06:39.377 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:06:39 np0005634532 nova_compute[257049]: 2026-03-01 10:06:39.395 257053 DEBUG oslo_concurrency.lockutils [None req-efe45325-9221-470e-b0ea-28096bfe1ed7 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.555s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Mar  1 05:06:39 np0005634532 nova_compute[257049]: 2026-03-01 10:06:39.397 257053 DEBUG oslo_concurrency.lockutils [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 1.187s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Mar  1 05:06:39 np0005634532 nova_compute[257049]: 2026-03-01 10:06:39.397 257053 DEBUG oslo_concurrency.lockutils [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Mar  1 05:06:39 np0005634532 nova_compute[257049]: 2026-03-01 10:06:39.397 257053 DEBUG nova.compute.resource_tracker [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Mar  1 05:06:39 np0005634532 nova_compute[257049]: 2026-03-01 10:06:39.398 257053 DEBUG oslo_concurrency.processutils [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Mar  1 05:06:39 np0005634532 nova_compute[257049]: 2026-03-01 10:06:39.447 257053 INFO nova.scheduler.client.report [None req-efe45325-9221-470e-b0ea-28096bfe1ed7 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Deleted allocations for instance 40dfeea3-c0b1-49c0-959b-7a08ceb7035c
Mar  1 05:06:39 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:06:39 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:06:39 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:06:39.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:06:39 np0005634532 nova_compute[257049]: 2026-03-01 10:06:39.578 257053 DEBUG oslo_concurrency.lockutils [None req-efe45325-9221-470e-b0ea-28096bfe1ed7 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Lock "40dfeea3-c0b1-49c0-959b-7a08ceb7035c" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.967s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Mar  1 05:06:39 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Mar  1 05:06:39 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/725249077' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Mar  1 05:06:39 np0005634532 nova_compute[257049]: 2026-03-01 10:06:39.812 257053 DEBUG oslo_concurrency.processutils [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.415s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Mar  1 05:06:39 np0005634532 nova_compute[257049]: 2026-03-01 10:06:39.947 257053 WARNING nova.virt.libvirt.driver [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Mar  1 05:06:39 np0005634532 nova_compute[257049]: 2026-03-01 10:06:39.948 257053 DEBUG nova.compute.resource_tracker [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4614MB free_disk=59.94267654418945GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Mar  1 05:06:39 np0005634532 nova_compute[257049]: 2026-03-01 10:06:39.948 257053 DEBUG oslo_concurrency.lockutils [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Mar  1 05:06:39 np0005634532 nova_compute[257049]: 2026-03-01 10:06:39.948 257053 DEBUG oslo_concurrency.lockutils [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Mar  1 05:06:40 np0005634532 nova_compute[257049]: 2026-03-01 10:06:40.008 257053 DEBUG nova.compute.resource_tracker [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Mar  1 05:06:40 np0005634532 nova_compute[257049]: 2026-03-01 10:06:40.008 257053 DEBUG nova.compute.resource_tracker [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Mar  1 05:06:40 np0005634532 nova_compute[257049]: 2026-03-01 10:06:40.022 257053 DEBUG oslo_concurrency.processutils [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Mar  1 05:06:40 np0005634532 systemd[1]: ceph-437b1e74-f995-5d64-af1d-257ce01d77ab@nfs.cephfs.2.0.compute-0.ljexyw.service: Scheduled restart job, restart counter is at 9.
Mar  1 05:06:40 np0005634532 systemd[1]: Stopped Ceph nfs.cephfs.2.0.compute-0.ljexyw for 437b1e74-f995-5d64-af1d-257ce01d77ab.
Mar  1 05:06:40 np0005634532 systemd[1]: ceph-437b1e74-f995-5d64-af1d-257ce01d77ab@nfs.cephfs.2.0.compute-0.ljexyw.service: Consumed 1.800s CPU time.
Mar  1 05:06:40 np0005634532 systemd[1]: Starting Ceph nfs.cephfs.2.0.compute-0.ljexyw for 437b1e74-f995-5d64-af1d-257ce01d77ab...
Mar  1 05:06:40 np0005634532 podman[263448]: 2026-03-01 10:06:40.240150911 +0000 UTC m=+0.043866725 container create 47b32316364fc7400e0e0cbcdbbab97fa4425c5273a7088f455c511c89e3f372 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325)
Mar  1 05:06:40 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/346658b102115a55acc4701e270d5e5f8a106c57b5226787cdf2e15787adccd9/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Mar  1 05:06:40 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/346658b102115a55acc4701e270d5e5f8a106c57b5226787cdf2e15787adccd9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Mar  1 05:06:40 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/346658b102115a55acc4701e270d5e5f8a106c57b5226787cdf2e15787adccd9/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Mar  1 05:06:40 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/346658b102115a55acc4701e270d5e5f8a106c57b5226787cdf2e15787adccd9/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.2.0.compute-0.ljexyw-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Mar  1 05:06:40 np0005634532 podman[263448]: 2026-03-01 10:06:40.303432773 +0000 UTC m=+0.107148587 container init 47b32316364fc7400e0e0cbcdbbab97fa4425c5273a7088f455c511c89e3f372 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Mar  1 05:06:40 np0005634532 podman[263448]: 2026-03-01 10:06:40.306838988 +0000 UTC m=+0.110554792 container start 47b32316364fc7400e0e0cbcdbbab97fa4425c5273a7088f455c511c89e3f372 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Mar  1 05:06:40 np0005634532 podman[263448]: 2026-03-01 10:06:40.217481321 +0000 UTC m=+0.021197155 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Mar  1 05:06:40 np0005634532 bash[263448]: 47b32316364fc7400e0e0cbcdbbab97fa4425c5273a7088f455c511c89e3f372
Mar  1 05:06:40 np0005634532 systemd[1]: Started Ceph nfs.cephfs.2.0.compute-0.ljexyw for 437b1e74-f995-5d64-af1d-257ce01d77ab.
Mar  1 05:06:40 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[263463]: 01/03/2026 10:06:40 : epoch 69a40fb0 : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Mar  1 05:06:40 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[263463]: 01/03/2026 10:06:40 : epoch 69a40fb0 : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Mar  1 05:06:40 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[263463]: 01/03/2026 10:06:40 : epoch 69a40fb0 : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Mar  1 05:06:40 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[263463]: 01/03/2026 10:06:40 : epoch 69a40fb0 : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Mar  1 05:06:40 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[263463]: 01/03/2026 10:06:40 : epoch 69a40fb0 : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Mar  1 05:06:40 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[263463]: 01/03/2026 10:06:40 : epoch 69a40fb0 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Mar  1 05:06:40 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[263463]: 01/03/2026 10:06:40 : epoch 69a40fb0 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Mar  1 05:06:40 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[263463]: 01/03/2026 10:06:40 : epoch 69a40fb0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Mar  1 05:06:40 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Mar  1 05:06:40 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2877681283' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Mar  1 05:06:40 np0005634532 nova_compute[257049]: 2026-03-01 10:06:40.495 257053 DEBUG oslo_concurrency.processutils [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Mar  1 05:06:40 np0005634532 nova_compute[257049]: 2026-03-01 10:06:40.500 257053 DEBUG nova.compute.provider_tree [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Inventory has not changed in ProviderTree for provider: 018d246d-1e01-4168-9128-598c5501111b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Mar  1 05:06:40 np0005634532 nova_compute[257049]: 2026-03-01 10:06:40.520 257053 DEBUG nova.scheduler.client.report [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Inventory has not changed for provider 018d246d-1e01-4168-9128-598c5501111b based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Mar  1 05:06:40 np0005634532 nova_compute[257049]: 2026-03-01 10:06:40.554 257053 DEBUG nova.compute.resource_tracker [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Mar  1 05:06:40 np0005634532 nova_compute[257049]: 2026-03-01 10:06:40.555 257053 DEBUG oslo_concurrency.lockutils [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.607s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Mar  1 05:06:40 np0005634532 nova_compute[257049]: 2026-03-01 10:06:40.870 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:06:40 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v726: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 16 KiB/s wr, 28 op/s
Mar  1 05:06:41 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:06:41 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:06:41 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:06:41.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:06:41 np0005634532 nova_compute[257049]: 2026-03-01 10:06:41.552 257053 DEBUG oslo_service.periodic_task [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Mar  1 05:06:41 np0005634532 nova_compute[257049]: 2026-03-01 10:06:41.553 257053 DEBUG oslo_service.periodic_task [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Mar  1 05:06:41 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:06:41 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:06:41 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:06:41.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:06:41 np0005634532 nova_compute[257049]: 2026-03-01 10:06:41.578 257053 DEBUG oslo_service.periodic_task [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Mar  1 05:06:41 np0005634532 nova_compute[257049]: 2026-03-01 10:06:41.579 257053 DEBUG nova.compute.manager [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Mar  1 05:06:41 np0005634532 nova_compute[257049]: 2026-03-01 10:06:41.579 257053 DEBUG nova.compute.manager [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Mar  1 05:06:41 np0005634532 nova_compute[257049]: 2026-03-01 10:06:41.598 257053 DEBUG nova.compute.manager [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Mar  1 05:06:41 np0005634532 nova_compute[257049]: 2026-03-01 10:06:41.598 257053 DEBUG oslo_service.periodic_task [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Mar  1 05:06:41 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:06:41 np0005634532 nova_compute[257049]: 2026-03-01 10:06:41.919 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:06:41 np0005634532 nova_compute[257049]: 2026-03-01 10:06:41.967 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:06:42 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v727: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 16 KiB/s wr, 28 op/s
Mar  1 05:06:43 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:06:43 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:06:43 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:06:43.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:06:43 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:06:43 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 05:06:43 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:06:43.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 05:06:44 np0005634532 nova_compute[257049]: 2026-03-01 10:06:44.380 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:06:44 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v728: 353 pgs: 353 active+clean; 41 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 32 KiB/s rd, 17 KiB/s wr, 48 op/s
Mar  1 05:06:45 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:06:45 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 05:06:45 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:06:45.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 05:06:45 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:06:45 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:06:45 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:06:45.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:06:45 np0005634532 nova_compute[257049]: 2026-03-01 10:06:45.875 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:06:46 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:06:46 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v729: 353 pgs: 353 active+clean; 41 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 32 KiB/s rd, 3.3 KiB/s wr, 47 op/s
Mar  1 05:06:47 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: ::ffff:192.168.122.100 - - [01/Mar/2026:10:06:47] "GET /metrics HTTP/1.1" 200 48453 "" "Prometheus/2.51.0"
Mar  1 05:06:47 np0005634532 ceph-mgr[76134]: [prometheus INFO cherrypy.access.140604152982112] ::ffff:192.168.122.100 - - [01/Mar/2026:10:06:47] "GET /metrics HTTP/1.1" 200 48453 "" "Prometheus/2.51.0"
Mar  1 05:06:47 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:06:47.205Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Mar  1 05:06:47 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:06:47 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:06:47 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:06:47.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:06:47 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[263463]: 01/03/2026 10:06:47 : epoch 69a40fb0 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Mar  1 05:06:47 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[263463]: 01/03/2026 10:06:47 : epoch 69a40fb0 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Mar  1 05:06:47 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Mar  1 05:06:47 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Mar  1 05:06:47 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:06:47 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000024s ======
Mar  1 05:06:47 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:06:47.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Mar  1 05:06:47 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Mar  1 05:06:47 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Mar  1 05:06:47 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Mar  1 05:06:47 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Mar  1 05:06:47 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Mar  1 05:06:47 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Mar  1 05:06:48 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v730: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 40 KiB/s rd, 4.2 KiB/s wr, 59 op/s
Mar  1 05:06:49 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:06:49 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:06:49 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:06:49.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:06:49 np0005634532 nova_compute[257049]: 2026-03-01 10:06:49.381 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:06:49 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:06:49 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:06:49 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:06:49.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:06:50 np0005634532 nova_compute[257049]: 2026-03-01 10:06:50.848 257053 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772359595.8464634, 40dfeea3-c0b1-49c0-959b-7a08ceb7035c => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Mar  1 05:06:50 np0005634532 nova_compute[257049]: 2026-03-01 10:06:50.850 257053 INFO nova.compute.manager [-] [instance: 40dfeea3-c0b1-49c0-959b-7a08ceb7035c] VM Stopped (Lifecycle Event)
Mar  1 05:06:50 np0005634532 nova_compute[257049]: 2026-03-01 10:06:50.876 257053 DEBUG nova.compute.manager [None req-018610d6-1270-46e9-b16b-f6f9ab5007b7 - - - - - -] [instance: 40dfeea3-c0b1-49c0-959b-7a08ceb7035c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Mar  1 05:06:50 np0005634532 nova_compute[257049]: 2026-03-01 10:06:50.877 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:06:50 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v731: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 2.1 KiB/s wr, 31 op/s
Mar  1 05:06:51 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:06:51 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000024s ======
Mar  1 05:06:51 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:06:51.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Mar  1 05:06:51 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Mar  1 05:06:51 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Mar  1 05:06:51 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:06:51 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:06:51 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:06:51.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:06:51 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:06:52 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Mar  1 05:06:52 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v732: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 2.1 KiB/s wr, 31 op/s
Mar  1 05:06:53 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Mar  1 05:06:53 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 05:06:53 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Mar  1 05:06:53 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 05:06:53 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Mar  1 05:06:53 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 05:06:53 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Mar  1 05:06:53 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:06:53 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 05:06:53 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:06:53.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 05:06:53 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 05:06:53 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[263463]: 01/03/2026 10:06:53 : epoch 69a40fb0 : compute-0 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Mar  1 05:06:53 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[263463]: 01/03/2026 10:06:53 : epoch 69a40fb0 : compute-0 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Mar  1 05:06:53 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[263463]: 01/03/2026 10:06:53 : epoch 69a40fb0 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Mar  1 05:06:53 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[263463]: 01/03/2026 10:06:53 : epoch 69a40fb0 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Mar  1 05:06:53 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[263463]: 01/03/2026 10:06:53 : epoch 69a40fb0 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Mar  1 05:06:53 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[263463]: 01/03/2026 10:06:53 : epoch 69a40fb0 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Mar  1 05:06:53 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[263463]: 01/03/2026 10:06:53 : epoch 69a40fb0 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Mar  1 05:06:53 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[263463]: 01/03/2026 10:06:53 : epoch 69a40fb0 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Mar  1 05:06:53 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[263463]: 01/03/2026 10:06:53 : epoch 69a40fb0 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Mar  1 05:06:53 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[263463]: 01/03/2026 10:06:53 : epoch 69a40fb0 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Mar  1 05:06:53 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[263463]: 01/03/2026 10:06:53 : epoch 69a40fb0 : compute-0 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Mar  1 05:06:53 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[263463]: 01/03/2026 10:06:53 : epoch 69a40fb0 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Mar  1 05:06:53 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[263463]: 01/03/2026 10:06:53 : epoch 69a40fb0 : compute-0 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Mar  1 05:06:53 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[263463]: 01/03/2026 10:06:53 : epoch 69a40fb0 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Mar  1 05:06:53 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[263463]: 01/03/2026 10:06:53 : epoch 69a40fb0 : compute-0 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Mar  1 05:06:53 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[263463]: 01/03/2026 10:06:53 : epoch 69a40fb0 : compute-0 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Mar  1 05:06:53 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[263463]: 01/03/2026 10:06:53 : epoch 69a40fb0 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Mar  1 05:06:53 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[263463]: 01/03/2026 10:06:53 : epoch 69a40fb0 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Mar  1 05:06:53 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[263463]: 01/03/2026 10:06:53 : epoch 69a40fb0 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Mar  1 05:06:53 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[263463]: 01/03/2026 10:06:53 : epoch 69a40fb0 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Mar  1 05:06:53 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[263463]: 01/03/2026 10:06:53 : epoch 69a40fb0 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Mar  1 05:06:53 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[263463]: 01/03/2026 10:06:53 : epoch 69a40fb0 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Mar  1 05:06:53 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[263463]: 01/03/2026 10:06:53 : epoch 69a40fb0 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Mar  1 05:06:53 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[263463]: 01/03/2026 10:06:53 : epoch 69a40fb0 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Mar  1 05:06:53 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[263463]: 01/03/2026 10:06:53 : epoch 69a40fb0 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Mar  1 05:06:53 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[263463]: 01/03/2026 10:06:53 : epoch 69a40fb0 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Mar  1 05:06:53 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[263463]: 01/03/2026 10:06:53 : epoch 69a40fb0 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Mar  1 05:06:53 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:06:53 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:06:53 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:06:53.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:06:53 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[263463]: 01/03/2026 10:06:53 : epoch 69a40fb0 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0fd0000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:06:53 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0)
Mar  1 05:06:53 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Mar  1 05:06:53 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0)
Mar  1 05:06:53 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Mar  1 05:06:53 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Mar  1 05:06:53 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Mar  1 05:06:53 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Mar  1 05:06:53 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Mar  1 05:06:53 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Mar  1 05:06:53 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 05:06:53 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Mar  1 05:06:53 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 05:06:53 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Mar  1 05:06:53 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Mar  1 05:06:53 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Mar  1 05:06:53 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Mar  1 05:06:53 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Mar  1 05:06:53 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Mar  1 05:06:54 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 05:06:54 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 05:06:54 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 05:06:54 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 05:06:54 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Mar  1 05:06:54 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Mar  1 05:06:54 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Mar  1 05:06:54 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 05:06:54 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 05:06:54 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Mar  1 05:06:54 np0005634532 nova_compute[257049]: 2026-03-01 10:06:54.383 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:06:54 np0005634532 podman[263711]: 2026-03-01 10:06:54.482352274 +0000 UTC m=+0.042823568 container create d18a1bd51b722253a3835356b265e91cfdf3375a9d11c5b4edc8035396dcb04e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_keldysh, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Mar  1 05:06:54 np0005634532 systemd[1]: Started libpod-conmon-d18a1bd51b722253a3835356b265e91cfdf3375a9d11c5b4edc8035396dcb04e.scope.
Mar  1 05:06:54 np0005634532 systemd[1]: Started libcrun container.
Mar  1 05:06:54 np0005634532 podman[263711]: 2026-03-01 10:06:54.53844684 +0000 UTC m=+0.098918154 container init d18a1bd51b722253a3835356b265e91cfdf3375a9d11c5b4edc8035396dcb04e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_keldysh, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Mar  1 05:06:54 np0005634532 podman[263711]: 2026-03-01 10:06:54.542717425 +0000 UTC m=+0.103188719 container start d18a1bd51b722253a3835356b265e91cfdf3375a9d11c5b4edc8035396dcb04e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_keldysh, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True)
Mar  1 05:06:54 np0005634532 podman[263711]: 2026-03-01 10:06:54.545222597 +0000 UTC m=+0.105693911 container attach d18a1bd51b722253a3835356b265e91cfdf3375a9d11c5b4edc8035396dcb04e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_keldysh, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Mar  1 05:06:54 np0005634532 systemd[1]: libpod-d18a1bd51b722253a3835356b265e91cfdf3375a9d11c5b4edc8035396dcb04e.scope: Deactivated successfully.
Mar  1 05:06:54 np0005634532 trusting_keldysh[263727]: 167 167
Mar  1 05:06:54 np0005634532 conmon[263727]: conmon d18a1bd51b722253a383 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d18a1bd51b722253a3835356b265e91cfdf3375a9d11c5b4edc8035396dcb04e.scope/container/memory.events
Mar  1 05:06:54 np0005634532 podman[263711]: 2026-03-01 10:06:54.54779501 +0000 UTC m=+0.108266304 container died d18a1bd51b722253a3835356b265e91cfdf3375a9d11c5b4edc8035396dcb04e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_keldysh, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, ceph=True)
Mar  1 05:06:54 np0005634532 podman[263711]: 2026-03-01 10:06:54.465218421 +0000 UTC m=+0.025689745 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Mar  1 05:06:54 np0005634532 systemd[1]: var-lib-containers-storage-overlay-b026fb7a72e87bf0e9c3169ffdf2b5bac8bea8fc61c5cdb97775cacf8c782e52-merged.mount: Deactivated successfully.
Mar  1 05:06:54 np0005634532 podman[263711]: 2026-03-01 10:06:54.578332535 +0000 UTC m=+0.138803829 container remove d18a1bd51b722253a3835356b265e91cfdf3375a9d11c5b4edc8035396dcb04e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_keldysh, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Mar  1 05:06:54 np0005634532 systemd[1]: libpod-conmon-d18a1bd51b722253a3835356b265e91cfdf3375a9d11c5b4edc8035396dcb04e.scope: Deactivated successfully.
Mar  1 05:06:54 np0005634532 podman[263750]: 2026-03-01 10:06:54.69235422 +0000 UTC m=+0.038293796 container create 849aa1bb878294a87e93d8789a190cc40972d8416e3076022d3f0157e90f9cbd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_almeida, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Mar  1 05:06:54 np0005634532 systemd[1]: Started libpod-conmon-849aa1bb878294a87e93d8789a190cc40972d8416e3076022d3f0157e90f9cbd.scope.
Mar  1 05:06:54 np0005634532 systemd[1]: Started libcrun container.
Mar  1 05:06:54 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c2c3d86541effc99cefda8d983d11be8c694003c2c9e0ee5dccb52f17c6ee12/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Mar  1 05:06:54 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c2c3d86541effc99cefda8d983d11be8c694003c2c9e0ee5dccb52f17c6ee12/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Mar  1 05:06:54 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c2c3d86541effc99cefda8d983d11be8c694003c2c9e0ee5dccb52f17c6ee12/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Mar  1 05:06:54 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c2c3d86541effc99cefda8d983d11be8c694003c2c9e0ee5dccb52f17c6ee12/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Mar  1 05:06:54 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c2c3d86541effc99cefda8d983d11be8c694003c2c9e0ee5dccb52f17c6ee12/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Mar  1 05:06:54 np0005634532 podman[263750]: 2026-03-01 10:06:54.674562791 +0000 UTC m=+0.020502387 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Mar  1 05:06:54 np0005634532 podman[263750]: 2026-03-01 10:06:54.783336957 +0000 UTC m=+0.129276553 container init 849aa1bb878294a87e93d8789a190cc40972d8416e3076022d3f0157e90f9cbd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_almeida, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Mar  1 05:06:54 np0005634532 podman[263750]: 2026-03-01 10:06:54.797357444 +0000 UTC m=+0.143297030 container start 849aa1bb878294a87e93d8789a190cc40972d8416e3076022d3f0157e90f9cbd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_almeida, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Mar  1 05:06:54 np0005634532 podman[263750]: 2026-03-01 10:06:54.801179828 +0000 UTC m=+0.147119414 container attach 849aa1bb878294a87e93d8789a190cc40972d8416e3076022d3f0157e90f9cbd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_almeida, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Mar  1 05:06:54 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v733: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 2.2 KiB/s wr, 31 op/s
Mar  1 05:06:55 np0005634532 friendly_almeida[263767]: --> passed data devices: 0 physical, 1 LVM
Mar  1 05:06:55 np0005634532 friendly_almeida[263767]: --> All data devices are unavailable
Mar  1 05:06:55 np0005634532 systemd[1]: libpod-849aa1bb878294a87e93d8789a190cc40972d8416e3076022d3f0157e90f9cbd.scope: Deactivated successfully.
Mar  1 05:06:55 np0005634532 podman[263750]: 2026-03-01 10:06:55.137615447 +0000 UTC m=+0.483555033 container died 849aa1bb878294a87e93d8789a190cc40972d8416e3076022d3f0157e90f9cbd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_almeida, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2)
Mar  1 05:06:55 np0005634532 systemd[1]: var-lib-containers-storage-overlay-8c2c3d86541effc99cefda8d983d11be8c694003c2c9e0ee5dccb52f17c6ee12-merged.mount: Deactivated successfully.
Mar  1 05:06:55 np0005634532 podman[263750]: 2026-03-01 10:06:55.177250805 +0000 UTC m=+0.523190391 container remove 849aa1bb878294a87e93d8789a190cc40972d8416e3076022d3f0157e90f9cbd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_almeida, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Mar  1 05:06:55 np0005634532 systemd[1]: libpod-conmon-849aa1bb878294a87e93d8789a190cc40972d8416e3076022d3f0157e90f9cbd.scope: Deactivated successfully.
Mar  1 05:06:55 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[263463]: 01/03/2026 10:06:55 : epoch 69a40fb0 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0fc8001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:06:55 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[263463]: 01/03/2026 10:06:55 : epoch 69a40fb0 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0fac000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:06:55 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:06:55 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 05:06:55 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:06:55.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 05:06:55 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:06:55 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 05:06:55 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:06:55.567 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 05:06:55 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-haproxy-nfs-cephfs-compute-0-wdbjdw[99156]: [WARNING] 059/100655 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Mar  1 05:06:55 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[263463]: 01/03/2026 10:06:55 : epoch 69a40fb0 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0fa4000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:06:55 np0005634532 podman[263886]: 2026-03-01 10:06:55.712226757 +0000 UTC m=+0.050663512 container create 898ebb3958b021d58954a237a3756346fe2047069f3955e433d142d53b0e6a0b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_khayyam, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Mar  1 05:06:55 np0005634532 systemd[1]: Started libpod-conmon-898ebb3958b021d58954a237a3756346fe2047069f3955e433d142d53b0e6a0b.scope.
Mar  1 05:06:55 np0005634532 podman[263886]: 2026-03-01 10:06:55.697619917 +0000 UTC m=+0.036056682 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Mar  1 05:06:55 np0005634532 systemd[1]: Started libcrun container.
Mar  1 05:06:55 np0005634532 podman[263886]: 2026-03-01 10:06:55.80991049 +0000 UTC m=+0.148347275 container init 898ebb3958b021d58954a237a3756346fe2047069f3955e433d142d53b0e6a0b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_khayyam, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0)
Mar  1 05:06:55 np0005634532 podman[263886]: 2026-03-01 10:06:55.81681038 +0000 UTC m=+0.155247125 container start 898ebb3958b021d58954a237a3756346fe2047069f3955e433d142d53b0e6a0b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_khayyam, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS)
Mar  1 05:06:55 np0005634532 podman[263886]: 2026-03-01 10:06:55.820215214 +0000 UTC m=+0.158651999 container attach 898ebb3958b021d58954a237a3756346fe2047069f3955e433d142d53b0e6a0b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_khayyam, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Mar  1 05:06:55 np0005634532 exciting_khayyam[263903]: 167 167
Mar  1 05:06:55 np0005634532 systemd[1]: libpod-898ebb3958b021d58954a237a3756346fe2047069f3955e433d142d53b0e6a0b.scope: Deactivated successfully.
Mar  1 05:06:55 np0005634532 conmon[263903]: conmon 898ebb3958b021d58954 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-898ebb3958b021d58954a237a3756346fe2047069f3955e433d142d53b0e6a0b.scope/container/memory.events
Mar  1 05:06:55 np0005634532 podman[263886]: 2026-03-01 10:06:55.825952686 +0000 UTC m=+0.164389421 container died 898ebb3958b021d58954a237a3756346fe2047069f3955e433d142d53b0e6a0b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_khayyam, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Mar  1 05:06:55 np0005634532 systemd[1]: var-lib-containers-storage-overlay-16e326a34fdd53d9389abe95ad75e8b12a45403a229e60ce64dcb28c0ca72d67-merged.mount: Deactivated successfully.
Mar  1 05:06:55 np0005634532 podman[263886]: 2026-03-01 10:06:55.863623266 +0000 UTC m=+0.202060011 container remove 898ebb3958b021d58954a237a3756346fe2047069f3955e433d142d53b0e6a0b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_khayyam, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Mar  1 05:06:55 np0005634532 nova_compute[257049]: 2026-03-01 10:06:55.879 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:06:55 np0005634532 systemd[1]: libpod-conmon-898ebb3958b021d58954a237a3756346fe2047069f3955e433d142d53b0e6a0b.scope: Deactivated successfully.
Mar  1 05:06:56 np0005634532 podman[263930]: 2026-03-01 10:06:56.001643704 +0000 UTC m=+0.044501429 container create acdf2b090e1117ba342fb46adb9387f103b7b35eab99fbdf65794c2e9e36672f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_blackwell, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Mar  1 05:06:56 np0005634532 systemd[1]: Started libpod-conmon-acdf2b090e1117ba342fb46adb9387f103b7b35eab99fbdf65794c2e9e36672f.scope.
Mar  1 05:06:56 np0005634532 systemd[1]: Started libcrun container.
Mar  1 05:06:56 np0005634532 podman[263930]: 2026-03-01 10:06:55.979954359 +0000 UTC m=+0.022812064 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Mar  1 05:06:56 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09c88e2d6de5b6c75ee606f97e2eddb64b98ab3101365b5ca59b43272a1456ed/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Mar  1 05:06:56 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09c88e2d6de5b6c75ee606f97e2eddb64b98ab3101365b5ca59b43272a1456ed/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Mar  1 05:06:56 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09c88e2d6de5b6c75ee606f97e2eddb64b98ab3101365b5ca59b43272a1456ed/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Mar  1 05:06:56 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09c88e2d6de5b6c75ee606f97e2eddb64b98ab3101365b5ca59b43272a1456ed/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Mar  1 05:06:56 np0005634532 podman[263930]: 2026-03-01 10:06:56.094923557 +0000 UTC m=+0.137781342 container init acdf2b090e1117ba342fb46adb9387f103b7b35eab99fbdf65794c2e9e36672f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_blackwell, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Mar  1 05:06:56 np0005634532 podman[263930]: 2026-03-01 10:06:56.105022797 +0000 UTC m=+0.147880492 container start acdf2b090e1117ba342fb46adb9387f103b7b35eab99fbdf65794c2e9e36672f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_blackwell, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Mar  1 05:06:56 np0005634532 podman[263930]: 2026-03-01 10:06:56.108579615 +0000 UTC m=+0.151437350 container attach acdf2b090e1117ba342fb46adb9387f103b7b35eab99fbdf65794c2e9e36672f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_blackwell, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Mar  1 05:06:56 np0005634532 jovial_blackwell[263969]: {
Mar  1 05:06:56 np0005634532 jovial_blackwell[263969]:    "0": [
Mar  1 05:06:56 np0005634532 jovial_blackwell[263969]:        {
Mar  1 05:06:56 np0005634532 jovial_blackwell[263969]:            "devices": [
Mar  1 05:06:56 np0005634532 jovial_blackwell[263969]:                "/dev/loop3"
Mar  1 05:06:56 np0005634532 jovial_blackwell[263969]:            ],
Mar  1 05:06:56 np0005634532 jovial_blackwell[263969]:            "lv_name": "ceph_lv0",
Mar  1 05:06:56 np0005634532 jovial_blackwell[263969]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Mar  1 05:06:56 np0005634532 jovial_blackwell[263969]:            "lv_size": "21470642176",
Mar  1 05:06:56 np0005634532 jovial_blackwell[263969]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=0RN1y3-WDwf-vmRn-3Uec-9cdU-WGcX-8Z6LEG,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=437b1e74-f995-5d64-af1d-257ce01d77ab,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=e5da778e-73b7-4ea1-8a91-750fe3f6aa68,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Mar  1 05:06:56 np0005634532 jovial_blackwell[263969]:            "lv_uuid": "0RN1y3-WDwf-vmRn-3Uec-9cdU-WGcX-8Z6LEG",
Mar  1 05:06:56 np0005634532 jovial_blackwell[263969]:            "name": "ceph_lv0",
Mar  1 05:06:56 np0005634532 jovial_blackwell[263969]:            "path": "/dev/ceph_vg0/ceph_lv0",
Mar  1 05:06:56 np0005634532 jovial_blackwell[263969]:            "tags": {
Mar  1 05:06:56 np0005634532 jovial_blackwell[263969]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Mar  1 05:06:56 np0005634532 jovial_blackwell[263969]:                "ceph.block_uuid": "0RN1y3-WDwf-vmRn-3Uec-9cdU-WGcX-8Z6LEG",
Mar  1 05:06:56 np0005634532 jovial_blackwell[263969]:                "ceph.cephx_lockbox_secret": "",
Mar  1 05:06:56 np0005634532 jovial_blackwell[263969]:                "ceph.cluster_fsid": "437b1e74-f995-5d64-af1d-257ce01d77ab",
Mar  1 05:06:56 np0005634532 jovial_blackwell[263969]:                "ceph.cluster_name": "ceph",
Mar  1 05:06:56 np0005634532 jovial_blackwell[263969]:                "ceph.crush_device_class": "",
Mar  1 05:06:56 np0005634532 jovial_blackwell[263969]:                "ceph.encrypted": "0",
Mar  1 05:06:56 np0005634532 jovial_blackwell[263969]:                "ceph.osd_fsid": "e5da778e-73b7-4ea1-8a91-750fe3f6aa68",
Mar  1 05:06:56 np0005634532 jovial_blackwell[263969]:                "ceph.osd_id": "0",
Mar  1 05:06:56 np0005634532 jovial_blackwell[263969]:                "ceph.osdspec_affinity": "default_drive_group",
Mar  1 05:06:56 np0005634532 jovial_blackwell[263969]:                "ceph.type": "block",
Mar  1 05:06:56 np0005634532 jovial_blackwell[263969]:                "ceph.vdo": "0",
Mar  1 05:06:56 np0005634532 jovial_blackwell[263969]:                "ceph.with_tpm": "0"
Mar  1 05:06:56 np0005634532 jovial_blackwell[263969]:            },
Mar  1 05:06:56 np0005634532 jovial_blackwell[263969]:            "type": "block",
Mar  1 05:06:56 np0005634532 jovial_blackwell[263969]:            "vg_name": "ceph_vg0"
Mar  1 05:06:56 np0005634532 jovial_blackwell[263969]:        }
Mar  1 05:06:56 np0005634532 jovial_blackwell[263969]:    ]
Mar  1 05:06:56 np0005634532 jovial_blackwell[263969]: }
Mar  1 05:06:56 np0005634532 systemd[1]: libpod-acdf2b090e1117ba342fb46adb9387f103b7b35eab99fbdf65794c2e9e36672f.scope: Deactivated successfully.
Mar  1 05:06:56 np0005634532 podman[263930]: 2026-03-01 10:06:56.395818008 +0000 UTC m=+0.438675763 container died acdf2b090e1117ba342fb46adb9387f103b7b35eab99fbdf65794c2e9e36672f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_blackwell, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Mar  1 05:06:56 np0005634532 systemd[1]: var-lib-containers-storage-overlay-09c88e2d6de5b6c75ee606f97e2eddb64b98ab3101365b5ca59b43272a1456ed-merged.mount: Deactivated successfully.
Mar  1 05:06:56 np0005634532 podman[263930]: 2026-03-01 10:06:56.442389258 +0000 UTC m=+0.485246943 container remove acdf2b090e1117ba342fb46adb9387f103b7b35eab99fbdf65794c2e9e36672f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_blackwell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325)
Mar  1 05:06:56 np0005634532 systemd[1]: libpod-conmon-acdf2b090e1117ba342fb46adb9387f103b7b35eab99fbdf65794c2e9e36672f.scope: Deactivated successfully.
Mar  1 05:06:56 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:06:56 np0005634532 podman[264088]: 2026-03-01 10:06:56.966323867 +0000 UTC m=+0.034242386 container create 54847d655d13685b9e4d002305467f1beea5e8dbc2e4ffe94c01b3f3fb0eec0f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_sammet, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Mar  1 05:06:56 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v734: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 8.2 KiB/s rd, 1023 B/s wr, 11 op/s
Mar  1 05:06:56 np0005634532 systemd[1]: Started libpod-conmon-54847d655d13685b9e4d002305467f1beea5e8dbc2e4ffe94c01b3f3fb0eec0f.scope.
Mar  1 05:06:57 np0005634532 systemd[1]: Started libcrun container.
Mar  1 05:06:57 np0005634532 podman[264088]: 2026-03-01 10:06:57.038804647 +0000 UTC m=+0.106723196 container init 54847d655d13685b9e4d002305467f1beea5e8dbc2e4ffe94c01b3f3fb0eec0f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_sammet, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Mar  1 05:06:57 np0005634532 podman[264088]: 2026-03-01 10:06:56.950092057 +0000 UTC m=+0.018010576 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Mar  1 05:06:57 np0005634532 podman[264088]: 2026-03-01 10:06:57.047344548 +0000 UTC m=+0.115263067 container start 54847d655d13685b9e4d002305467f1beea5e8dbc2e4ffe94c01b3f3fb0eec0f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_sammet, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Mar  1 05:06:57 np0005634532 podman[264088]: 2026-03-01 10:06:57.050404334 +0000 UTC m=+0.118322873 container attach 54847d655d13685b9e4d002305467f1beea5e8dbc2e4ffe94c01b3f3fb0eec0f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_sammet, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, io.buildah.version=1.40.1)
Mar  1 05:06:57 np0005634532 amazing_sammet[264103]: 167 167
Mar  1 05:06:57 np0005634532 systemd[1]: libpod-54847d655d13685b9e4d002305467f1beea5e8dbc2e4ffe94c01b3f3fb0eec0f.scope: Deactivated successfully.
Mar  1 05:06:57 np0005634532 podman[264088]: 2026-03-01 10:06:57.055857679 +0000 UTC m=+0.123776198 container died 54847d655d13685b9e4d002305467f1beea5e8dbc2e4ffe94c01b3f3fb0eec0f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_sammet, CEPH_REF=squid, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Mar  1 05:06:57 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: ::ffff:192.168.122.100 - - [01/Mar/2026:10:06:57] "GET /metrics HTTP/1.1" 200 48457 "" "Prometheus/2.51.0"
Mar  1 05:06:57 np0005634532 ceph-mgr[76134]: [prometheus INFO cherrypy.access.140604152982112] ::ffff:192.168.122.100 - - [01/Mar/2026:10:06:57] "GET /metrics HTTP/1.1" 200 48457 "" "Prometheus/2.51.0"
Mar  1 05:06:57 np0005634532 systemd[1]: var-lib-containers-storage-overlay-88c8f74455a3ee5734ed727374a89138c2143cd8540bd305e17a23882b2465c7-merged.mount: Deactivated successfully.
Mar  1 05:06:57 np0005634532 podman[264088]: 2026-03-01 10:06:57.086186738 +0000 UTC m=+0.154105277 container remove 54847d655d13685b9e4d002305467f1beea5e8dbc2e4ffe94c01b3f3fb0eec0f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_sammet, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Mar  1 05:06:57 np0005634532 systemd[1]: libpod-conmon-54847d655d13685b9e4d002305467f1beea5e8dbc2e4ffe94c01b3f3fb0eec0f.scope: Deactivated successfully.
Mar  1 05:06:57 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:06:57.206Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Mar  1 05:06:57 np0005634532 podman[264127]: 2026-03-01 10:06:57.229702132 +0000 UTC m=+0.047194397 container create df393ae764e65dbe0b217091bf8f6dc6461075be59f19c72f9e8ca413a16e71d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_herschel, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Mar  1 05:06:57 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[263463]: 01/03/2026 10:06:57 : epoch 69a40fb0 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0fc4001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:06:57 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[263463]: 01/03/2026 10:06:57 : epoch 69a40fb0 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0fc8001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:06:57 np0005634532 systemd[1]: Started libpod-conmon-df393ae764e65dbe0b217091bf8f6dc6461075be59f19c72f9e8ca413a16e71d.scope.
Mar  1 05:06:57 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:06:57 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 05:06:57 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:06:57.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 05:06:57 np0005634532 systemd[1]: Started libcrun container.
Mar  1 05:06:57 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b30b809a0c3fcb5156e8714f880edc7ea1eee6745248c29956374dd3a7f19c8b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Mar  1 05:06:57 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b30b809a0c3fcb5156e8714f880edc7ea1eee6745248c29956374dd3a7f19c8b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Mar  1 05:06:57 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b30b809a0c3fcb5156e8714f880edc7ea1eee6745248c29956374dd3a7f19c8b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Mar  1 05:06:57 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b30b809a0c3fcb5156e8714f880edc7ea1eee6745248c29956374dd3a7f19c8b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Mar  1 05:06:57 np0005634532 podman[264127]: 2026-03-01 10:06:57.211272017 +0000 UTC m=+0.028764302 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Mar  1 05:06:57 np0005634532 podman[264127]: 2026-03-01 10:06:57.311643916 +0000 UTC m=+0.129136211 container init df393ae764e65dbe0b217091bf8f6dc6461075be59f19c72f9e8ca413a16e71d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_herschel, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1)
Mar  1 05:06:57 np0005634532 podman[264127]: 2026-03-01 10:06:57.318923385 +0000 UTC m=+0.136415640 container start df393ae764e65dbe0b217091bf8f6dc6461075be59f19c72f9e8ca413a16e71d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_herschel, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Mar  1 05:06:57 np0005634532 podman[264127]: 2026-03-01 10:06:57.32276063 +0000 UTC m=+0.140252895 container attach df393ae764e65dbe0b217091bf8f6dc6461075be59f19c72f9e8ca413a16e71d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_herschel, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Mar  1 05:06:57 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:06:57 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:06:57 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:06:57.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:06:57 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[263463]: 01/03/2026 10:06:57 : epoch 69a40fb0 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0fac0016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:06:57 np0005634532 lvm[264219]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Mar  1 05:06:57 np0005634532 lvm[264219]: VG ceph_vg0 finished
Mar  1 05:06:57 np0005634532 bold_herschel[264144]: {}
Mar  1 05:06:58 np0005634532 systemd[1]: libpod-df393ae764e65dbe0b217091bf8f6dc6461075be59f19c72f9e8ca413a16e71d.scope: Deactivated successfully.
Mar  1 05:06:58 np0005634532 podman[264127]: 2026-03-01 10:06:58.000231031 +0000 UTC m=+0.817723286 container died df393ae764e65dbe0b217091bf8f6dc6461075be59f19c72f9e8ca413a16e71d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_herschel, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Mar  1 05:06:58 np0005634532 systemd[1]: var-lib-containers-storage-overlay-b30b809a0c3fcb5156e8714f880edc7ea1eee6745248c29956374dd3a7f19c8b-merged.mount: Deactivated successfully.
Mar  1 05:06:58 np0005634532 podman[264127]: 2026-03-01 10:06:58.381891366 +0000 UTC m=+1.199383641 container remove df393ae764e65dbe0b217091bf8f6dc6461075be59f19c72f9e8ca413a16e71d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=bold_herschel, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Mar  1 05:06:58 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Mar  1 05:06:58 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 05:06:58 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Mar  1 05:06:58 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 05:06:58 np0005634532 systemd[1]: libpod-conmon-df393ae764e65dbe0b217091bf8f6dc6461075be59f19c72f9e8ca413a16e71d.scope: Deactivated successfully.
Mar  1 05:06:58 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v735: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 8.2 KiB/s rd, 1023 B/s wr, 11 op/s
Mar  1 05:06:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[263463]: 01/03/2026 10:06:59 : epoch 69a40fb0 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0fa40016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:06:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[263463]: 01/03/2026 10:06:59 : epoch 69a40fb0 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0fc40025c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:06:59 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:06:59 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:06:59 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:06:59.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:06:59 np0005634532 nova_compute[257049]: 2026-03-01 10:06:59.385 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:06:59 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 05:06:59 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 05:06:59 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:06:59 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:06:59 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:06:59.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:06:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[263463]: 01/03/2026 10:06:59 : epoch 69a40fb0 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0fc8001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:07:00 np0005634532 nova_compute[257049]: 2026-03-01 10:07:00.884 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:07:00 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v736: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Mar  1 05:07:01 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-haproxy-nfs-cephfs-compute-0-wdbjdw[99156]: [WARNING] 059/100701 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 1ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Mar  1 05:07:01 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[263463]: 01/03/2026 10:07:01 : epoch 69a40fb0 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0fac0016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:07:01 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[263463]: 01/03/2026 10:07:01 : epoch 69a40fb0 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0fa40016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:07:01 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:07:01 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:07:01 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:07:01.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:07:01 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:07:01 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:07:01 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:07:01.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:07:01 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[263463]: 01/03/2026 10:07:01 : epoch 69a40fb0 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0fc40025c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:07:01 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:07:02 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Mar  1 05:07:02 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Mar  1 05:07:02 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v737: 353 pgs: 353 active+clean; 41 MiB data, 255 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 85 B/s wr, 0 op/s
Mar  1 05:07:03 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[263463]: 01/03/2026 10:07:03 : epoch 69a40fb0 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0fc8001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:07:03 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[263463]: 01/03/2026 10:07:03 : epoch 69a40fb0 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0fac0016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:07:03 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:07:03 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:07:03 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:07:03.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:07:03 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:07:03 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:07:03 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:07:03.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:07:03 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[263463]: 01/03/2026 10:07:03 : epoch 69a40fb0 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0fa40016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:07:04 np0005634532 nova_compute[257049]: 2026-03-01 10:07:04.387 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:07:04 np0005634532 podman[264267]: 2026-03-01 10:07:04.424043572 +0000 UTC m=+0.107473155 container health_status eba5e7c5bf1cb0694498c3b5d0f9b901dec36f2482ec40aef98fed1e15d9f18d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:bf1f44b6bfa655654f556ae3adca2582892e72550d916a5b0a8dbacb0e210bbe, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '08b9467e1b7e95537191fb2fa6825d176e65d7d6be048e9a0925db064969bbde-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:bf1f44b6bfa655654f556ae3adca2582892e72550d916a5b0a8dbacb0e210bbe', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true)
Mar  1 05:07:04 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v738: 353 pgs: 353 active+clean; 88 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 34 op/s
Mar  1 05:07:05 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[263463]: 01/03/2026 10:07:05 : epoch 69a40fb0 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0fc40032d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:07:05 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[263463]: 01/03/2026 10:07:05 : epoch 69a40fb0 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0fc8001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:07:05 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:07:05 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 05:07:05 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:07:05.297 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 05:07:05 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:07:05 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:07:05 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:07:05.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:07:05 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[263463]: 01/03/2026 10:07:05 : epoch 69a40fb0 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0fac002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:07:05 np0005634532 nova_compute[257049]: 2026-03-01 10:07:05.887 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:07:06 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:07:06 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v739: 353 pgs: 353 active+clean; 88 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 33 op/s
Mar  1 05:07:07 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: ::ffff:192.168.122.100 - - [01/Mar/2026:10:07:07] "GET /metrics HTTP/1.1" 200 48441 "" "Prometheus/2.51.0"
Mar  1 05:07:07 np0005634532 ceph-mgr[76134]: [prometheus INFO cherrypy.access.140604152982112] ::ffff:192.168.122.100 - - [01/Mar/2026:10:07:07] "GET /metrics HTTP/1.1" 200 48441 "" "Prometheus/2.51.0"
Mar  1 05:07:07 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:07:07.206Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Mar  1 05:07:07 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:07:07.207Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Mar  1 05:07:07 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-haproxy-nfs-cephfs-compute-0-wdbjdw[99156]: [WARNING] 059/100707 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Mar  1 05:07:07 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[263463]: 01/03/2026 10:07:07 : epoch 69a40fb0 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0fa4002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:07:07 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[263463]: 01/03/2026 10:07:07 : epoch 69a40fb0 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0fc40032d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:07:07 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:07:07 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000024s ======
Mar  1 05:07:07 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:07:07.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Mar  1 05:07:07 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:07:07 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:07:07 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:07:07.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:07:07 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[263463]: 01/03/2026 10:07:07 : epoch 69a40fb0 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0fc40032d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:07:08 np0005634532 podman[264298]: 2026-03-01 10:07:08.391737239 +0000 UTC m=+0.070201205 container health_status 1df4dec302d8b40bce93714986cfc892519e8a31436399020d0034ae17ac64d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a5baae9e21dffd42c66f546b0b62f95c6af9f409d05e01e7483de42d5dd2a382, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, tcib_managed=true, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '08b9467e1b7e95537191fb2fa6825d176e65d7d6be048e9a0925db064969bbde-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a5baae9e21dffd42c66f546b0b62f95c6af9f409d05e01e7483de42d5dd2a382', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Mar  1 05:07:08 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v740: 353 pgs: 353 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 39 op/s
Mar  1 05:07:09 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[263463]: 01/03/2026 10:07:09 : epoch 69a40fb0 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0fac002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:07:09 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[263463]: 01/03/2026 10:07:09 : epoch 69a40fb0 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0fa4002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:07:09 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:07:09 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:07:09 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:07:09.303 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:07:09 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[263463]: 01/03/2026 10:07:09 : epoch 69a40fb0 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Mar  1 05:07:09 np0005634532 nova_compute[257049]: 2026-03-01 10:07:09.439 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:07:09 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:07:09 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:07:09 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:07:09.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:07:09 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[263463]: 01/03/2026 10:07:09 : epoch 69a40fb0 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0fc40032d0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:07:10 np0005634532 nova_compute[257049]: 2026-03-01 10:07:10.890 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:07:10 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v741: 353 pgs: 353 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 39 op/s
Mar  1 05:07:11 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[263463]: 01/03/2026 10:07:11 : epoch 69a40fb0 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0fc80034e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:07:11 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[263463]: 01/03/2026 10:07:11 : epoch 69a40fb0 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0fac002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:07:11 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:07:11 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 05:07:11 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:07:11.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 05:07:11 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:07:11 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:07:11 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:07:11.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:07:11 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[263463]: 01/03/2026 10:07:11 : epoch 69a40fb0 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0fa4002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:07:11 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:07:11 np0005634532 ceph-mon[75825]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #48. Immutable memtables: 0.
Mar  1 05:07:11 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-10:07:11.889264) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Mar  1 05:07:11 np0005634532 ceph-mon[75825]: rocksdb: [db/flush_job.cc:856] [default] [JOB 23] Flushing memtable with next log file: 48
Mar  1 05:07:11 np0005634532 ceph-mon[75825]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772359631889497, "job": 23, "event": "flush_started", "num_memtables": 1, "num_entries": 1157, "num_deletes": 250, "total_data_size": 1994585, "memory_usage": 2023784, "flush_reason": "Manual Compaction"}
Mar  1 05:07:11 np0005634532 ceph-mon[75825]: rocksdb: [db/flush_job.cc:885] [default] [JOB 23] Level-0 flush table #49: started
Mar  1 05:07:11 np0005634532 ceph-mon[75825]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772359631899184, "cf_name": "default", "job": 23, "event": "table_file_creation", "file_number": 49, "file_size": 1273053, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 22101, "largest_seqno": 23257, "table_properties": {"data_size": 1268515, "index_size": 1998, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1477, "raw_key_size": 11776, "raw_average_key_size": 20, "raw_value_size": 1258808, "raw_average_value_size": 2204, "num_data_blocks": 86, "num_entries": 571, "num_filter_entries": 571, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772359536, "oldest_key_time": 1772359536, "file_creation_time": 1772359631, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d85a3bc5-3dc5-432f-9fab-fa926ce32d3d", "db_session_id": "FJWJGIYC2V5ZEQGX709M", "orig_file_number": 49, "seqno_to_time_mapping": "N/A"}}
Mar  1 05:07:11 np0005634532 ceph-mon[75825]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 23] Flush lasted 9974 microseconds, and 4450 cpu microseconds.
Mar  1 05:07:11 np0005634532 ceph-mon[75825]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Mar  1 05:07:11 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-10:07:11.899238) [db/flush_job.cc:967] [default] [JOB 23] Level-0 flush table #49: 1273053 bytes OK
Mar  1 05:07:11 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-10:07:11.899261) [db/memtable_list.cc:519] [default] Level-0 commit table #49 started
Mar  1 05:07:11 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-10:07:11.900671) [db/memtable_list.cc:722] [default] Level-0 commit table #49: memtable #1 done
Mar  1 05:07:11 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-10:07:11.900693) EVENT_LOG_v1 {"time_micros": 1772359631900686, "job": 23, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Mar  1 05:07:11 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-10:07:11.900713) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Mar  1 05:07:11 np0005634532 ceph-mon[75825]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 23] Try to delete WAL files size 1989356, prev total WAL file size 1989356, number of live WAL files 2.
Mar  1 05:07:11 np0005634532 ceph-mon[75825]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000045.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Mar  1 05:07:11 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-10:07:11.901417) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D67727374617400353030' seq:72057594037927935, type:22 .. '6D67727374617400373531' seq:0, type:0; will stop at (end)
Mar  1 05:07:11 np0005634532 ceph-mon[75825]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 24] Compacting 1@0 + 1@6 files to L6, score -1.00
Mar  1 05:07:11 np0005634532 ceph-mon[75825]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 23 Base level 0, inputs: [49(1243KB)], [47(14MB)]
Mar  1 05:07:11 np0005634532 ceph-mon[75825]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772359631901462, "job": 24, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [49], "files_L6": [47], "score": -1, "input_data_size": 16380401, "oldest_snapshot_seqno": -1}
Mar  1 05:07:11 np0005634532 ceph-mon[75825]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 24] Generated table #50: 5510 keys, 12953904 bytes, temperature: kUnknown
Mar  1 05:07:11 np0005634532 ceph-mon[75825]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772359631962385, "cf_name": "default", "job": 24, "event": "table_file_creation", "file_number": 50, "file_size": 12953904, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12918018, "index_size": 20992, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13829, "raw_key_size": 139031, "raw_average_key_size": 25, "raw_value_size": 12819360, "raw_average_value_size": 2326, "num_data_blocks": 858, "num_entries": 5510, "num_filter_entries": 5510, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772358052, "oldest_key_time": 0, "file_creation_time": 1772359631, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d85a3bc5-3dc5-432f-9fab-fa926ce32d3d", "db_session_id": "FJWJGIYC2V5ZEQGX709M", "orig_file_number": 50, "seqno_to_time_mapping": "N/A"}}
Mar  1 05:07:11 np0005634532 ceph-mon[75825]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Mar  1 05:07:11 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-10:07:11.962607) [db/compaction/compaction_job.cc:1663] [default] [JOB 24] Compacted 1@0 + 1@6 files to L6 => 12953904 bytes
Mar  1 05:07:11 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-10:07:11.963616) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 268.6 rd, 212.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.2, 14.4 +0.0 blob) out(12.4 +0.0 blob), read-write-amplify(23.0) write-amplify(10.2) OK, records in: 5986, records dropped: 476 output_compression: NoCompression
Mar  1 05:07:11 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-10:07:11.963633) EVENT_LOG_v1 {"time_micros": 1772359631963625, "job": 24, "event": "compaction_finished", "compaction_time_micros": 60988, "compaction_time_cpu_micros": 19344, "output_level": 6, "num_output_files": 1, "total_output_size": 12953904, "num_input_records": 5986, "num_output_records": 5510, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Mar  1 05:07:11 np0005634532 ceph-mon[75825]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000049.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Mar  1 05:07:11 np0005634532 ceph-mon[75825]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772359631963811, "job": 24, "event": "table_file_deletion", "file_number": 49}
Mar  1 05:07:11 np0005634532 ceph-mon[75825]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000047.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Mar  1 05:07:11 np0005634532 ceph-mon[75825]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772359631964926, "job": 24, "event": "table_file_deletion", "file_number": 47}
Mar  1 05:07:11 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-10:07:11.901298) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Mar  1 05:07:11 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-10:07:11.964985) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Mar  1 05:07:11 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-10:07:11.964995) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Mar  1 05:07:11 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-10:07:11.965028) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Mar  1 05:07:11 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-10:07:11.965032) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Mar  1 05:07:11 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-10:07:11.965037) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Mar  1 05:07:12 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[263463]: 01/03/2026 10:07:12 : epoch 69a40fb0 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Mar  1 05:07:12 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[263463]: 01/03/2026 10:07:12 : epoch 69a40fb0 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Mar  1 05:07:12 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[263463]: 01/03/2026 10:07:12 : epoch 69a40fb0 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Mar  1 05:07:12 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[263463]: 01/03/2026 10:07:12 : epoch 69a40fb0 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Mar  1 05:07:12 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v742: 353 pgs: 353 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 39 op/s
Mar  1 05:07:13 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[263463]: 01/03/2026 10:07:13 : epoch 69a40fb0 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0fc40047c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:07:13 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[263463]: 01/03/2026 10:07:13 : epoch 69a40fb0 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0fc80034e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:07:13 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:07:13 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 05:07:13 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:07:13.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 05:07:13 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:07:13 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:07:13 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:07:13.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:07:13 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[263463]: 01/03/2026 10:07:13 : epoch 69a40fb0 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0fc80034e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:07:13 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:07:13.790 167541 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=5, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ba:77:84', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'd2:e0:96:ea:56:89'}, ipsec=False) old=SB_Global(nb_cfg=4) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Mar  1 05:07:13 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:07:13.791 167541 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Mar  1 05:07:13 np0005634532 nova_compute[257049]: 2026-03-01 10:07:13.791 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:07:14 np0005634532 nova_compute[257049]: 2026-03-01 10:07:14.440 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:07:14 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v743: 353 pgs: 353 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 3.6 MiB/s rd, 1.8 MiB/s wr, 110 op/s
Mar  1 05:07:15 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[263463]: 01/03/2026 10:07:15 : epoch 69a40fb0 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0fac003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:07:15 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[263463]: 01/03/2026 10:07:15 : epoch 69a40fb0 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0fa4003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:07:15 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:07:15 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:07:15 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:07:15.312 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:07:15 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:07:15 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:07:15 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:07:15.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:07:15 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[263463]: 01/03/2026 10:07:15 : epoch 69a40fb0 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0fc40047c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:07:15 np0005634532 nova_compute[257049]: 2026-03-01 10:07:15.893 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:07:15 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[263463]: 01/03/2026 10:07:15 : epoch 69a40fb0 : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Mar  1 05:07:16 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:07:16 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v744: 353 pgs: 353 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 76 op/s
Mar  1 05:07:17 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: ::ffff:192.168.122.100 - - [01/Mar/2026:10:07:17] "GET /metrics HTTP/1.1" 200 48441 "" "Prometheus/2.51.0"
Mar  1 05:07:17 np0005634532 ceph-mgr[76134]: [prometheus INFO cherrypy.access.140604152982112] ::ffff:192.168.122.100 - - [01/Mar/2026:10:07:17] "GET /metrics HTTP/1.1" 200 48441 "" "Prometheus/2.51.0"
Mar  1 05:07:17 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:07:17.208Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Mar  1 05:07:17 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[263463]: 01/03/2026 10:07:17 : epoch 69a40fb0 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0fc80034e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:07:17 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[263463]: 01/03/2026 10:07:17 : epoch 69a40fb0 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0fac003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:07:17 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:07:17 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:07:17 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:07:17.314 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:07:17 np0005634532 ceph-mgr[76134]: [balancer INFO root] Optimize plan auto_2026-03-01_10:07:17
Mar  1 05:07:17 np0005634532 ceph-mgr[76134]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Mar  1 05:07:17 np0005634532 ceph-mgr[76134]: [balancer INFO root] do_upmap
Mar  1 05:07:17 np0005634532 ceph-mgr[76134]: [balancer INFO root] pools ['default.rgw.log', '.nfs', '.rgw.root', 'cephfs.cephfs.meta', 'vms', '.mgr', 'images', 'backups', 'volumes', 'default.rgw.meta', 'default.rgw.control', 'cephfs.cephfs.data']
Mar  1 05:07:17 np0005634532 ceph-mgr[76134]: [balancer INFO root] prepared 0/10 upmap changes
Mar  1 05:07:17 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Mar  1 05:07:17 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Mar  1 05:07:17 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:07:17 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 05:07:17 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:07:17.587 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 05:07:17 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[263463]: 01/03/2026 10:07:17 : epoch 69a40fb0 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0fa4003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:07:17 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Mar  1 05:07:17 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Mar  1 05:07:17 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Mar  1 05:07:17 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Mar  1 05:07:17 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Mar  1 05:07:17 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Mar  1 05:07:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] _maybe_adjust
Mar  1 05:07:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:07:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Mar  1 05:07:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:07:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.00034841348814872695 of space, bias 1.0, pg target 0.10452404644461809 quantized to 32 (current 32)
Mar  1 05:07:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:07:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Mar  1 05:07:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:07:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Mar  1 05:07:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:07:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Mar  1 05:07:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:07:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Mar  1 05:07:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:07:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Mar  1 05:07:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:07:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Mar  1 05:07:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:07:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Mar  1 05:07:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:07:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Mar  1 05:07:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:07:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Mar  1 05:07:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:07:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Mar  1 05:07:18 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v745: 353 pgs: 353 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 77 op/s
Mar  1 05:07:19 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[263463]: 01/03/2026 10:07:19 : epoch 69a40fb0 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Mar  1 05:07:19 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[263463]: 01/03/2026 10:07:19 : epoch 69a40fb0 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0fc40047c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:07:19 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[263463]: 01/03/2026 10:07:19 : epoch 69a40fb0 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0fc80034e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:07:19 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Mar  1 05:07:19 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: vms, start_after=
Mar  1 05:07:19 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: volumes, start_after=
Mar  1 05:07:19 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: backups, start_after=
Mar  1 05:07:19 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: images, start_after=
Mar  1 05:07:19 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:07:19 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:07:19 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:07:19.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:07:19 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Mar  1 05:07:19 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: vms, start_after=
Mar  1 05:07:19 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: volumes, start_after=
Mar  1 05:07:19 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: backups, start_after=
Mar  1 05:07:19 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: images, start_after=
Mar  1 05:07:19 np0005634532 nova_compute[257049]: 2026-03-01 10:07:19.441 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:07:19 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:07:19 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:07:19 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:07:19.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:07:19 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[263463]: 01/03/2026 10:07:19 : epoch 69a40fb0 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0fac003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:07:20 np0005634532 nova_compute[257049]: 2026-03-01 10:07:20.896 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:07:20 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v746: 353 pgs: 353 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 938 B/s wr, 71 op/s
Mar  1 05:07:21 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[263463]: 01/03/2026 10:07:21 : epoch 69a40fb0 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0fa4003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:07:21 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[263463]: 01/03/2026 10:07:21 : epoch 69a40fb0 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0fc40047c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:07:21 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:07:21 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 05:07:21 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:07:21.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 05:07:21 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:07:21 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:07:21 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:07:21.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:07:21 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[263463]: 01/03/2026 10:07:21 : epoch 69a40fb0 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0fc40047c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:07:21 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:07:22 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[263463]: 01/03/2026 10:07:22 : epoch 69a40fb0 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Mar  1 05:07:22 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[263463]: 01/03/2026 10:07:22 : epoch 69a40fb0 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Mar  1 05:07:22 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:07:22.794 167541 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=90b7dc66-b984-4d8b-9541-ddde79c5f544, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '5'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Mar  1 05:07:22 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v747: 353 pgs: 353 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 938 B/s wr, 71 op/s
Mar  1 05:07:23 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[263463]: 01/03/2026 10:07:23 : epoch 69a40fb0 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0fac003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:07:23 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[263463]: 01/03/2026 10:07:23 : epoch 69a40fb0 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0fa4003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:07:23 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:07:23 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:07:23 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:07:23.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:07:23 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:07:23 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:07:23 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:07:23.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:07:23 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[263463]: 01/03/2026 10:07:23 : epoch 69a40fb0 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0fc40047c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:07:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:07:23.880 167541 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Mar  1 05:07:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:07:23.880 167541 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Mar  1 05:07:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:07:23.880 167541 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Mar  1 05:07:23 np0005634532 ovn_controller[157082]: 2026-03-01T10:07:23Z|00036|memory_trim|INFO|Detected inactivity (last active 30002 ms ago): trimming memory
Mar  1 05:07:24 np0005634532 nova_compute[257049]: 2026-03-01 10:07:24.443 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:07:24 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v748: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 137 op/s
Mar  1 05:07:25 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[263463]: 01/03/2026 10:07:25 : epoch 69a40fb0 : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Mar  1 05:07:25 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[263463]: 01/03/2026 10:07:25 : epoch 69a40fb0 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0fc40047c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:07:25 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[263463]: 01/03/2026 10:07:25 : epoch 69a40fb0 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0fac003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:07:25 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:07:25 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:07:25 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:07:25.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:07:25 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:07:25 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:07:25 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:07:25.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:07:25 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[263463]: 01/03/2026 10:07:25 : epoch 69a40fb0 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0fac003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:07:25 np0005634532 nova_compute[257049]: 2026-03-01 10:07:25.899 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:07:26 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:07:26 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v749: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Mar  1 05:07:27 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: ::ffff:192.168.122.100 - - [01/Mar/2026:10:07:27] "GET /metrics HTTP/1.1" 200 48461 "" "Prometheus/2.51.0"
Mar  1 05:07:27 np0005634532 ceph-mgr[76134]: [prometheus INFO cherrypy.access.140604152982112] ::ffff:192.168.122.100 - - [01/Mar/2026:10:07:27] "GET /metrics HTTP/1.1" 200 48461 "" "Prometheus/2.51.0"
Mar  1 05:07:27 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:07:27.209Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Mar  1 05:07:27 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-haproxy-nfs-cephfs-compute-0-wdbjdw[99156]: [WARNING] 059/100727 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Mar  1 05:07:27 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[263463]: 01/03/2026 10:07:27 : epoch 69a40fb0 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0fc80034e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:07:27 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[263463]: 01/03/2026 10:07:27 : epoch 69a40fb0 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0fc40047c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:07:27 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:07:27 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:07:27 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:07:27.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:07:27 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:07:27 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:07:27 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:07:27.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:07:27 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[263463]: 01/03/2026 10:07:27 : epoch 69a40fb0 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0fd0000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:07:28 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v750: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 67 op/s
Mar  1 05:07:29 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[263463]: 01/03/2026 10:07:29 : epoch 69a40fb0 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0fac003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:07:29 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[263463]: 01/03/2026 10:07:29 : epoch 69a40fb0 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0fc80034e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:07:29 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:07:29 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:07:29 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:07:29.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:07:29 np0005634532 nova_compute[257049]: 2026-03-01 10:07:29.444 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:07:29 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:07:29 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:07:29 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:07:29.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:07:29 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[263463]: 01/03/2026 10:07:29 : epoch 69a40fb0 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0fc40047c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:07:30 np0005634532 nova_compute[257049]: 2026-03-01 10:07:30.902 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:07:30 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v751: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Mar  1 05:07:31 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[263463]: 01/03/2026 10:07:31 : epoch 69a40fb0 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0fd0002010 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:07:31 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[263463]: 01/03/2026 10:07:31 : epoch 69a40fb0 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0fac003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:07:31 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:07:31 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:07:31 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:07:31.334 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:07:31 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:07:31 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:07:31 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:07:31.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:07:31 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[263463]: 01/03/2026 10:07:31 : epoch 69a40fb0 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0fc80034e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:07:31 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
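The _set_new_cache_sizes line above splits the mon's cache budget into three pools. A minimal sketch checking that arithmetic; the 4 MiB chunking is an inference from the logged values themselves, not a claim about the allocator:

    # Check the cache split reported by _set_new_cache_sizes above.
    MIB4 = 4 * 1024 * 1024
    pools = {"inc_alloc": 343932928, "full_alloc": 348127232, "kv_alloc": 318767104}
    for name, size in pools.items():
        print(name, size // MIB4, "x 4 MiB")      # 82, 83 and 76 chunks
    print(sum(pools.values()), "/", 1020054731)   # ~99% of the cache_size target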
Mar  1 05:07:32 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Mar  1 05:07:32 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Mar  1 05:07:32 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v752: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 66 op/s
Mar  1 05:07:33 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[263463]: 01/03/2026 10:07:33 : epoch 69a40fb0 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0fc40047c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:07:33 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[263463]: 01/03/2026 10:07:33 : epoch 69a40fb0 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0fd0002010 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:07:33 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:07:33 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000024s ======
Mar  1 05:07:33 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:07:33.337 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Mar  1 05:07:33 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:07:33 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:07:33 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:07:33.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:07:33 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[263463]: 01/03/2026 10:07:33 : epoch 69a40fb0 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0fac003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:07:34 np0005634532 nova_compute[257049]: 2026-03-01 10:07:34.446 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:07:34 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[263463]: 01/03/2026 10:07:34 : epoch 69a40fb0 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Mar  1 05:07:34 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v753: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 328 KiB/s rd, 2.1 MiB/s wr, 67 op/s
Mar  1 05:07:35 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[263463]: 01/03/2026 10:07:35 : epoch 69a40fb0 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0fc80034e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:07:35 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[263463]: 01/03/2026 10:07:35 : epoch 69a40fb0 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0fc40047c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:07:35 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:07:35 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:07:35 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:07:35.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:07:35 np0005634532 podman[264376]: 2026-03-01 10:07:35.406109001 +0000 UTC m=+0.091853328 container health_status eba5e7c5bf1cb0694498c3b5d0f9b901dec36f2482ec40aef98fed1e15d9f18d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:bf1f44b6bfa655654f556ae3adca2582892e72550d916a5b0a8dbacb0e210bbe, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '08b9467e1b7e95537191fb2fa6825d176e65d7d6be048e9a0925db064969bbde-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:bf1f44b6bfa655654f556ae3adca2582892e72550d916a5b0a8dbacb0e210bbe', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.build-date=20260223)
Mar  1 05:07:35 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:07:35 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 05:07:35 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:07:35.605 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 05:07:35 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[263463]: 01/03/2026 10:07:35 : epoch 69a40fb0 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0fd0008dc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:07:35 np0005634532 nova_compute[257049]: 2026-03-01 10:07:35.904 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:07:36 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:07:36 np0005634532 nova_compute[257049]: 2026-03-01 10:07:36.976 257053 DEBUG oslo_service.periodic_task [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Mar  1 05:07:36 np0005634532 nova_compute[257049]: 2026-03-01 10:07:36.977 257053 DEBUG oslo_service.periodic_task [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Mar  1 05:07:36 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v754: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 12 KiB/s wr, 1 op/s
Mar  1 05:07:37 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: ::ffff:192.168.122.100 - - [01/Mar/2026:10:07:37] "GET /metrics HTTP/1.1" 200 48459 "" "Prometheus/2.51.0"
Mar  1 05:07:37 np0005634532 ceph-mgr[76134]: [prometheus INFO cherrypy.access.140604152982112] ::ffff:192.168.122.100 - - [01/Mar/2026:10:07:37] "GET /metrics HTTP/1.1" 200 48459 "" "Prometheus/2.51.0"
Mar  1 05:07:37 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:07:37.210Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
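The dispatcher error above means Alertmanager gave up POSTing alerts to the dashboard receivers on compute-1 and compute-2 after two attempts. A minimal probe sketch, assuming plain HTTP and a 5-second timeout (both assumptions; only the receiver URLs are taken from the log line):

    # Probe the receivers Alertmanager reports as unreachable above.
    import urllib.request
    import urllib.error

    RECEIVERS = [
        "http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver",
        "http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver",
    ]
    for url in RECEIVERS:
        try:
            req = urllib.request.Request(url, data=b"{}", method="POST")
            with urllib.request.urlopen(req, timeout=5) as resp:
                print(url, "->", resp.status)    # receiver reachable
        except urllib.error.HTTPError as e:
            print(url, "-> HTTP", e.code)        # reachable, payload rejected
        except Exception as e:
            print(url, "->", e)                  # timeout/refused, like the log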
Mar  1 05:07:37 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[263463]: 01/03/2026 10:07:37 : epoch 69a40fb0 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0fac003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:07:37 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[263463]: 01/03/2026 10:07:37 : epoch 69a40fb0 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0fc80034e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:07:37 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:07:37 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:07:37 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:07:37.343 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:07:37 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:07:37 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:07:37 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:07:37.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:07:37 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[263463]: 01/03/2026 10:07:37 : epoch 69a40fb0 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0fc40047c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:07:37 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[263463]: 01/03/2026 10:07:37 : epoch 69a40fb0 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Mar  1 05:07:37 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[263463]: 01/03/2026 10:07:37 : epoch 69a40fb0 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Mar  1 05:07:37 np0005634532 nova_compute[257049]: 2026-03-01 10:07:37.976 257053 DEBUG oslo_service.periodic_task [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Mar  1 05:07:37 np0005634532 nova_compute[257049]: 2026-03-01 10:07:37.976 257053 DEBUG nova.compute.manager [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Mar  1 05:07:38 np0005634532 nova_compute[257049]: 2026-03-01 10:07:38.973 257053 DEBUG oslo_service.periodic_task [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Mar  1 05:07:38 np0005634532 nova_compute[257049]: 2026-03-01 10:07:38.975 257053 DEBUG oslo_service.periodic_task [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Mar  1 05:07:39 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v755: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 15 KiB/s wr, 3 op/s
Mar  1 05:07:39 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[263463]: 01/03/2026 10:07:39 : epoch 69a40fb0 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0fd0008dc0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:07:39 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[263463]: 01/03/2026 10:07:39 : epoch 69a40fb0 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0fac003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:07:39 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:07:39 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000024s ======
Mar  1 05:07:39 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:07:39.345 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Mar  1 05:07:39 np0005634532 podman[264431]: 2026-03-01 10:07:39.406861645 +0000 UTC m=+0.092155137 container health_status 1df4dec302d8b40bce93714986cfc892519e8a31436399020d0034ae17ac64d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a5baae9e21dffd42c66f546b0b62f95c6af9f409d05e01e7483de42d5dd2a382, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '08b9467e1b7e95537191fb2fa6825d176e65d7d6be048e9a0925db064969bbde-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a5baae9e21dffd42c66f546b0b62f95c6af9f409d05e01e7483de42d5dd2a382', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Mar  1 05:07:39 np0005634532 nova_compute[257049]: 2026-03-01 10:07:39.447 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:07:39 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:07:39 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:07:39 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:07:39.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:07:39 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[263463]: 01/03/2026 10:07:39 : epoch 69a40fb0 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0fc80034e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:07:39 np0005634532 nova_compute[257049]: 2026-03-01 10:07:39.975 257053 DEBUG oslo_service.periodic_task [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Mar  1 05:07:39 np0005634532 nova_compute[257049]: 2026-03-01 10:07:39.976 257053 DEBUG oslo_service.periodic_task [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Mar  1 05:07:39 np0005634532 nova_compute[257049]: 2026-03-01 10:07:39.976 257053 DEBUG oslo_service.periodic_task [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Mar  1 05:07:40 np0005634532 nova_compute[257049]: 2026-03-01 10:07:40.005 257053 DEBUG oslo_concurrency.lockutils [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Mar  1 05:07:40 np0005634532 nova_compute[257049]: 2026-03-01 10:07:40.005 257053 DEBUG oslo_concurrency.lockutils [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Mar  1 05:07:40 np0005634532 nova_compute[257049]: 2026-03-01 10:07:40.005 257053 DEBUG oslo_concurrency.lockutils [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Mar  1 05:07:40 np0005634532 nova_compute[257049]: 2026-03-01 10:07:40.005 257053 DEBUG nova.compute.resource_tracker [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Mar  1 05:07:40 np0005634532 nova_compute[257049]: 2026-03-01 10:07:40.006 257053 DEBUG oslo_concurrency.processutils [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Mar  1 05:07:40 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Mar  1 05:07:40 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3871776278' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Mar  1 05:07:40 np0005634532 nova_compute[257049]: 2026-03-01 10:07:40.454 257053 DEBUG oslo_concurrency.processutils [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
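The resource tracker above shells out to ceph df to size the RBD-backed disk pool. A minimal sketch reproducing the exact command from the log with subprocess; the JSON field names follow the usual "ceph df --format=json" output schema and should be treated as assumptions if your release differs:

    # Re-run the command nova_compute executed above and summarise it.
    import json
    import subprocess

    cmd = ["ceph", "df", "--format=json",
           "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"]
    out = subprocess.run(cmd, capture_output=True, check=True, text=True).stdout
    df = json.loads(out)
    total = df["stats"]["total_bytes"]
    avail = df["stats"]["total_avail_bytes"]
    print(f"{avail / 2**30:.1f} GiB free of {total / 2**30:.1f} GiB")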
Mar  1 05:07:40 np0005634532 nova_compute[257049]: 2026-03-01 10:07:40.605 257053 WARNING nova.virt.libvirt.driver [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Mar  1 05:07:40 np0005634532 nova_compute[257049]: 2026-03-01 10:07:40.607 257053 DEBUG nova.compute.resource_tracker [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4598MB free_disk=59.94272994995117GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Mar  1 05:07:40 np0005634532 nova_compute[257049]: 2026-03-01 10:07:40.607 257053 DEBUG oslo_concurrency.lockutils [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Mar  1 05:07:40 np0005634532 nova_compute[257049]: 2026-03-01 10:07:40.608 257053 DEBUG oslo_concurrency.lockutils [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Mar  1 05:07:40 np0005634532 nova_compute[257049]: 2026-03-01 10:07:40.668 257053 DEBUG nova.compute.resource_tracker [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Mar  1 05:07:40 np0005634532 nova_compute[257049]: 2026-03-01 10:07:40.668 257053 DEBUG nova.compute.resource_tracker [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Mar  1 05:07:40 np0005634532 nova_compute[257049]: 2026-03-01 10:07:40.694 257053 DEBUG oslo_concurrency.processutils [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Mar  1 05:07:40 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[263463]: 01/03/2026 10:07:40 : epoch 69a40fb0 : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
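Taken together, the three reaper events (IN GRACE at 10:07:34 with duration 90, client info reloaded at 10:07:37, NOT IN GRACE at 10:07:40) show ganesha lifting the grace period early because no clients had state to reclaim (clid count 0). A minimal check of that timing, using only timestamps from the log:

    # Grace window actually held vs. the 90 s armed duration.
    from datetime import datetime

    entered = datetime(2026, 3, 1, 10, 7, 34)   # NFS Server Now IN GRACE
    lifted = datetime(2026, 3, 1, 10, 7, 40)    # NFS Server Now NOT IN GRACE
    print((lifted - entered).total_seconds())   # 6.0 s of a possible 90 s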
Mar  1 05:07:40 np0005634532 nova_compute[257049]: 2026-03-01 10:07:40.907 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:07:41 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v756: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 2.9 KiB/s wr, 3 op/s
Mar  1 05:07:41 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Mar  1 05:07:41 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1768292586' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Mar  1 05:07:41 np0005634532 nova_compute[257049]: 2026-03-01 10:07:41.134 257053 DEBUG oslo_concurrency.processutils [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Mar  1 05:07:41 np0005634532 nova_compute[257049]: 2026-03-01 10:07:41.140 257053 DEBUG nova.compute.provider_tree [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Inventory has not changed in ProviderTree for provider: 018d246d-1e01-4168-9128-598c5501111b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Mar  1 05:07:41 np0005634532 nova_compute[257049]: 2026-03-01 10:07:41.173 257053 DEBUG nova.scheduler.client.report [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Inventory has not changed for provider 018d246d-1e01-4168-9128-598c5501111b based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
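The inventory dict above is what the scheduler sizes placements against. A minimal sketch of the capacity it implies, using the common placement rule capacity = (total - reserved) * allocation_ratio; treat the formula as an illustration of the logged numbers, not as nova's literal code path:

    # Schedulable capacity implied by the inventory logged above.
    inventory = {
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "VCPU": {"total": 8, "reserved": 0, "allocation_ratio": 4.0},
        "DISK_GB": {"total": 59, "reserved": 1, "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        cap = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, cap)   # MEMORY_MB 7167.0, VCPU 32.0, DISK_GB 52.2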
Mar  1 05:07:41 np0005634532 nova_compute[257049]: 2026-03-01 10:07:41.175 257053 DEBUG nova.compute.resource_tracker [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Mar  1 05:07:41 np0005634532 nova_compute[257049]: 2026-03-01 10:07:41.175 257053 DEBUG oslo_concurrency.lockutils [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.567s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Mar  1 05:07:41 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[263463]: 01/03/2026 10:07:41 : epoch 69a40fb0 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0fc40047c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:07:41 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[263463]: 01/03/2026 10:07:41 : epoch 69a40fb0 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0fd0009ad0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:07:41 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:07:41 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000024s ======
Mar  1 05:07:41 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:07:41.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Mar  1 05:07:41 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:07:41 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 05:07:41 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:07:41.611 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 05:07:41 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[263463]: 01/03/2026 10:07:41 : epoch 69a40fb0 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0fac003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:07:41 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:07:42 np0005634532 nova_compute[257049]: 2026-03-01 10:07:42.176 257053 DEBUG oslo_service.periodic_task [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Mar  1 05:07:42 np0005634532 nova_compute[257049]: 2026-03-01 10:07:42.177 257053 DEBUG nova.compute.manager [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Mar  1 05:07:42 np0005634532 nova_compute[257049]: 2026-03-01 10:07:42.177 257053 DEBUG nova.compute.manager [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Mar  1 05:07:42 np0005634532 nova_compute[257049]: 2026-03-01 10:07:42.367 257053 DEBUG nova.compute.manager [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Mar  1 05:07:43 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v757: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 2.9 KiB/s wr, 3 op/s
Mar  1 05:07:43 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[263463]: 01/03/2026 10:07:43 : epoch 69a40fb0 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0fc80034e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:07:43 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[263463]: 01/03/2026 10:07:43 : epoch 69a40fb0 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0fc40047c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:07:43 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:07:43 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 05:07:43 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:07:43.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 05:07:43 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:07:43 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:07:43 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:07:43.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:07:43 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[263463]: 01/03/2026 10:07:43 : epoch 69a40fb0 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0fd0009ad0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:07:44 np0005634532 nova_compute[257049]: 2026-03-01 10:07:44.449 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:07:45 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v758: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 4.0 KiB/s wr, 4 op/s
Mar  1 05:07:45 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[263463]: 01/03/2026 10:07:45 : epoch 69a40fb0 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0fac003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:07:45 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[263463]: 01/03/2026 10:07:45 : epoch 69a40fb0 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0fc80034e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:07:45 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:07:45 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:07:45 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:07:45.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:07:45 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:07:45 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:07:45 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:07:45.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:07:45 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[263463]: 01/03/2026 10:07:45 : epoch 69a40fb0 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0fc40047c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:07:45 np0005634532 nova_compute[257049]: 2026-03-01 10:07:45.911 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:07:46 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:07:47 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v759: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 9.1 KiB/s rd, 3.9 KiB/s wr, 3 op/s
Mar  1 05:07:47 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: ::ffff:192.168.122.100 - - [01/Mar/2026:10:07:47] "GET /metrics HTTP/1.1" 200 48459 "" "Prometheus/2.51.0"
Mar  1 05:07:47 np0005634532 ceph-mgr[76134]: [prometheus INFO cherrypy.access.140604152982112] ::ffff:192.168.122.100 - - [01/Mar/2026:10:07:47] "GET /metrics HTTP/1.1" 200 48459 "" "Prometheus/2.51.0"
Mar  1 05:07:47 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:07:47.211Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Mar  1 05:07:47 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-haproxy-nfs-cephfs-compute-0-wdbjdw[99156]: [WARNING] 059/100747 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Mar  1 05:07:47 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[263463]: 01/03/2026 10:07:47 : epoch 69a40fb0 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0fd0009ad0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:07:47 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[263463]: 01/03/2026 10:07:47 : epoch 69a40fb0 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0fac003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:07:47 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:07:47 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:07:47 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:07:47.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:07:47 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Mar  1 05:07:47 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Mar  1 05:07:47 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Mar  1 05:07:47 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Mar  1 05:07:47 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:07:47 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:07:47 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:07:47.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:07:47 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[263463]: 01/03/2026 10:07:47 : epoch 69a40fb0 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0fc80034e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:07:47 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Mar  1 05:07:47 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Mar  1 05:07:47 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Mar  1 05:07:47 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Mar  1 05:07:49 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v760: 353 pgs: 353 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 5.1 KiB/s wr, 31 op/s
Mar  1 05:07:49 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[263463]: 01/03/2026 10:07:49 : epoch 69a40fb0 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0fc40047c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:07:49 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[263463]: 01/03/2026 10:07:49 : epoch 69a40fb0 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0fd0009ad0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:07:49 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:07:49 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000024s ======
Mar  1 05:07:49 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:07:49.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Mar  1 05:07:49 np0005634532 nova_compute[257049]: 2026-03-01 10:07:49.512 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:07:49 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:07:49 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 05:07:49 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:07:49.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 05:07:49 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[263463]: 01/03/2026 10:07:49 : epoch 69a40fb0 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0fac003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:07:50 np0005634532 nova_compute[257049]: 2026-03-01 10:07:50.913 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:07:51 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v761: 353 pgs: 353 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 2.2 KiB/s wr, 29 op/s
Mar  1 05:07:51 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[263463]: 01/03/2026 10:07:51 : epoch 69a40fb0 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0fc80034e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:07:51 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[263463]: 01/03/2026 10:07:51 : epoch 69a40fb0 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0fc40047c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:07:51 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:07:51 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 05:07:51 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:07:51.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 05:07:51 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:07:51 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000024s ======
Mar  1 05:07:51 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:07:51.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Mar  1 05:07:51 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[263463]: 01/03/2026 10:07:51 : epoch 69a40fb0 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0fc40047c0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:07:51 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:07:53 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v762: 353 pgs: 353 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 2.2 KiB/s wr, 29 op/s
Mar  1 05:07:53 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[263463]: 01/03/2026 10:07:53 : epoch 69a40fb0 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0fac003c10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:07:53 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[263463]: 01/03/2026 10:07:53 : epoch 69a40fb0 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0fc80034e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:07:53 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:07:53 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:07:53 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:07:53.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:07:53 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:07:53 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:07:53 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:07:53.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:07:53 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[263463]: 01/03/2026 10:07:53 : epoch 69a40fb0 : compute-0 : ganesha.nfsd-2[svc_3] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0fc80034e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:07:54 np0005634532 nova_compute[257049]: 2026-03-01 10:07:54.547 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:07:55 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v763: 353 pgs: 353 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s rd, 2.2 KiB/s wr, 29 op/s
Mar  1 05:07:55 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[263463]: 01/03/2026 10:07:55 : epoch 69a40fb0 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0fc80034e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:07:55 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[263463]: 01/03/2026 10:07:55 : epoch 69a40fb0 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0fa4001090 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:07:55 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:07:55 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 05:07:55 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:07:55.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 05:07:55 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:07:55 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:07:55 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:07:55.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:07:55 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[263463]: 01/03/2026 10:07:55 : epoch 69a40fb0 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0fd0009ad0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:07:55 np0005634532 nova_compute[257049]: 2026-03-01 10:07:55.916 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:07:56 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:07:56 np0005634532 ceph-mon[75825]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #51. Immutable memtables: 0.
Mar  1 05:07:56 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-10:07:56.902374) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Mar  1 05:07:56 np0005634532 ceph-mon[75825]: rocksdb: [db/flush_job.cc:856] [default] [JOB 25] Flushing memtable with next log file: 51
Mar  1 05:07:56 np0005634532 ceph-mon[75825]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772359676902429, "job": 25, "event": "flush_started", "num_memtables": 1, "num_entries": 627, "num_deletes": 257, "total_data_size": 778609, "memory_usage": 790936, "flush_reason": "Manual Compaction"}
Mar  1 05:07:56 np0005634532 ceph-mon[75825]: rocksdb: [db/flush_job.cc:885] [default] [JOB 25] Level-0 flush table #52: started
Mar  1 05:07:56 np0005634532 ceph-mon[75825]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772359676908107, "cf_name": "default", "job": 25, "event": "table_file_creation", "file_number": 52, "file_size": 770372, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 23258, "largest_seqno": 23884, "table_properties": {"data_size": 767152, "index_size": 1128, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1029, "raw_key_size": 7108, "raw_average_key_size": 17, "raw_value_size": 760690, "raw_average_value_size": 1901, "num_data_blocks": 51, "num_entries": 400, "num_filter_entries": 400, "num_deletions": 257, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772359632, "oldest_key_time": 1772359632, "file_creation_time": 1772359676, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d85a3bc5-3dc5-432f-9fab-fa926ce32d3d", "db_session_id": "FJWJGIYC2V5ZEQGX709M", "orig_file_number": 52, "seqno_to_time_mapping": "N/A"}}
Mar  1 05:07:56 np0005634532 ceph-mon[75825]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 25] Flush lasted 5756 microseconds, and 1970 cpu microseconds.
Mar  1 05:07:56 np0005634532 ceph-mon[75825]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Mar  1 05:07:56 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-10:07:56.908136) [db/flush_job.cc:967] [default] [JOB 25] Level-0 flush table #52: 770372 bytes OK
Mar  1 05:07:56 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-10:07:56.908149) [db/memtable_list.cc:519] [default] Level-0 commit table #52 started
Mar  1 05:07:56 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-10:07:56.909794) [db/memtable_list.cc:722] [default] Level-0 commit table #52: memtable #1 done
Mar  1 05:07:56 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-10:07:56.909806) EVENT_LOG_v1 {"time_micros": 1772359676909802, "job": 25, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Mar  1 05:07:56 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-10:07:56.909821) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Mar  1 05:07:56 np0005634532 ceph-mon[75825]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 25] Try to delete WAL files size 775295, prev total WAL file size 775295, number of live WAL files 2.
Mar  1 05:07:56 np0005634532 ceph-mon[75825]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000048.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Mar  1 05:07:56 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-10:07:56.910189) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D00323532' seq:72057594037927935, type:22 .. '6C6F676D00353035' seq:0, type:0; will stop at (end)
Mar  1 05:07:56 np0005634532 ceph-mon[75825]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 26] Compacting 1@0 + 1@6 files to L6, score -1.00
Mar  1 05:07:56 np0005634532 ceph-mon[75825]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 25 Base level 0, inputs: [52(752KB)], [50(12MB)]
Mar  1 05:07:56 np0005634532 ceph-mon[75825]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772359676910230, "job": 26, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [52], "files_L6": [50], "score": -1, "input_data_size": 13724276, "oldest_snapshot_seqno": -1}
Mar  1 05:07:56 np0005634532 ceph-mon[75825]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 26] Generated table #53: 5388 keys, 13613729 bytes, temperature: kUnknown
Mar  1 05:07:56 np0005634532 ceph-mon[75825]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772359676962698, "cf_name": "default", "job": 26, "event": "table_file_creation", "file_number": 53, "file_size": 13613729, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 13577500, "index_size": 21655, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13509, "raw_key_size": 137648, "raw_average_key_size": 25, "raw_value_size": 13479804, "raw_average_value_size": 2501, "num_data_blocks": 883, "num_entries": 5388, "num_filter_entries": 5388, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772358052, "oldest_key_time": 0, "file_creation_time": 1772359676, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d85a3bc5-3dc5-432f-9fab-fa926ce32d3d", "db_session_id": "FJWJGIYC2V5ZEQGX709M", "orig_file_number": 53, "seqno_to_time_mapping": "N/A"}}
Mar  1 05:07:56 np0005634532 ceph-mon[75825]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Mar  1 05:07:56 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-10:07:56.963077) [db/compaction/compaction_job.cc:1663] [default] [JOB 26] Compacted 1@0 + 1@6 files to L6 => 13613729 bytes
Mar  1 05:07:56 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-10:07:56.964696) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 261.0 rd, 258.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.7, 12.4 +0.0 blob) out(13.0 +0.0 blob), read-write-amplify(35.5) write-amplify(17.7) OK, records in: 5910, records dropped: 522 output_compression: NoCompression
Mar  1 05:07:56 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-10:07:56.964728) EVENT_LOG_v1 {"time_micros": 1772359676964713, "job": 26, "event": "compaction_finished", "compaction_time_micros": 52579, "compaction_time_cpu_micros": 24371, "output_level": 6, "num_output_files": 1, "total_output_size": 13613729, "num_input_records": 5910, "num_output_records": 5388, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Mar  1 05:07:56 np0005634532 ceph-mon[75825]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000052.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Mar  1 05:07:56 np0005634532 ceph-mon[75825]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772359676965021, "job": 26, "event": "table_file_deletion", "file_number": 52}
Mar  1 05:07:56 np0005634532 ceph-mon[75825]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000050.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Mar  1 05:07:56 np0005634532 ceph-mon[75825]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772359676967235, "job": 26, "event": "table_file_deletion", "file_number": 50}
Mar  1 05:07:56 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-10:07:56.910113) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Mar  1 05:07:56 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-10:07:56.967380) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Mar  1 05:07:56 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-10:07:56.967389) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Mar  1 05:07:56 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-10:07:56.967392) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Mar  1 05:07:56 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-10:07:56.967395) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Mar  1 05:07:56 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-10:07:56.967398) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
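[Editor's note] The lines above record one complete RocksDB maintenance cycle on the monitor store: job 25 flushes the memtable to L0 table #52 (770372 bytes), job 26 then runs the manual compaction 1@0 + 1@6 into a single L6 table #53, and the obsolete WAL (000048.log) and SST files (000052.sst, 000050.sst) are deleted. The reported write-amplify(17.7) is output bytes over L0 input bytes (13613729 / 770372 ≈ 17.7). A minimal sketch, assuming the "rocksdb: EVENT_LOG_v1 {...}" layout shown above, for pulling these structured events out of a syslog capture like this one:

    import json, re, sys

    # Matches the structured RocksDB events embedded in ceph-mon syslog lines,
    # with or without the "(Original Log Time ...)" prefix.
    EVENT = re.compile(r'EVENT_LOG_v1 (\{.*\})\s*$')

    def rocksdb_events(path):
        with open(path, errors="replace") as fh:
            for line in fh:
                m = EVENT.search(line)
                if m:
                    yield json.loads(m.group(1))

    for ev in rocksdb_events(sys.argv[1]):
        if ev.get("event") == "compaction_finished":
            # job 26 above: 13613729 bytes to level 6 in 52579 microseconds
            print(ev["job"], ev["output_level"],
                  ev["total_output_size"], ev["compaction_time_micros"])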
Mar  1 05:07:57 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v764: 353 pgs: 353 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Mar  1 05:07:57 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: ::ffff:192.168.122.100 - - [01/Mar/2026:10:07:57] "GET /metrics HTTP/1.1" 200 48460 "" "Prometheus/2.51.0"
Mar  1 05:07:57 np0005634532 ceph-mgr[76134]: [prometheus INFO cherrypy.access.140604152982112] ::ffff:192.168.122.100 - - [01/Mar/2026:10:07:57] "GET /metrics HTTP/1.1" 200 48460 "" "Prometheus/2.51.0"
Mar  1 05:07:57 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:07:57.212Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
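[Editor's note] The dispatcher error above means Alertmanager timed out ("context deadline exceeded") POSTing to the ceph-dashboard webhook receivers on compute-1 and compute-2; it is a connectivity/timeout failure, not an HTTP error from the receivers. A quick reachability probe, taking the two URLs verbatim from the log and assuming they are reachable from this host at all:

    import urllib.request

    # Receiver URLs copied from the alertmanager error above.
    URLS = [
        "http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver",
        "http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver",
    ]

    for url in URLS:
        req = urllib.request.Request(url, data=b"[]",
                                     headers={"Content-Type": "application/json"})
        try:
            with urllib.request.urlopen(req, timeout=5) as resp:
                print(url, "->", resp.status)
        except Exception as exc:   # timeout, connection refused, DNS, HTTP error
            print(url, "->", exc)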
Mar  1 05:07:57 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[263463]: 01/03/2026 10:07:57 : epoch 69a40fb0 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0fac003c30 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:07:57 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[263463]: 01/03/2026 10:07:57 : epoch 69a40fb0 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0fc80034e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:07:57 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:07:57 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 05:07:57 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:07:57.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 05:07:57 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:07:57 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:07:57 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:07:57.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:07:57 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[263463]: 01/03/2026 10:07:57 : epoch 69a40fb0 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0fa4001090 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:07:58 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Mar  1 05:07:58 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2032895178' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Mar  1 05:07:58 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Mar  1 05:07:58 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2032895178' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
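[Editor's note] The two audited commands above are a periodic capacity poll from the OpenStack side: client.openstack asks for cluster usage ("df") and for the quota on the "volumes" pool. A sketch of the same queries via the ceph CLI, assuming a client.openstack keyring is available on the calling host:

    import json, subprocess

    def mon_cmd(args):
        # CLI equivalent of the audited mon_commands above.
        out = subprocess.run(
            ["ceph", "-n", "client.openstack", *args, "-f", "json"],
            check=True, capture_output=True, text=True).stdout
        return json.loads(out)

    df = mon_cmd(["df"])
    quota = mon_cmd(["osd", "pool", "get-quota", "volumes"])
    print(df["stats"]["total_avail_bytes"], quota)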
Mar  1 05:07:59 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v765: 353 pgs: 353 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Mar  1 05:07:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[263463]: 01/03/2026 10:07:59 : epoch 69a40fb0 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0fd0009ad0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:07:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[263463]: 01/03/2026 10:07:59 : epoch 69a40fb0 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0fac003c50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:07:59 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:07:59 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:07:59 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:07:59.375 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:07:59 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Mar  1 05:07:59 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Mar  1 05:07:59 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Mar  1 05:07:59 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Mar  1 05:07:59 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Mar  1 05:07:59 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 05:07:59 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Mar  1 05:07:59 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 05:07:59 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Mar  1 05:07:59 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Mar  1 05:07:59 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Mar  1 05:07:59 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Mar  1 05:07:59 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Mar  1 05:07:59 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Mar  1 05:07:59 np0005634532 nova_compute[257049]: 2026-03-01 10:07:59.549 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:07:59 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:07:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[263463]: 01/03/2026 10:07:59 : epoch 69a40fb0 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0fc80034e0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:07:59 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:07:59 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:07:59.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:07:59 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Mar  1 05:07:59 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 05:07:59 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 05:07:59 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Mar  1 05:07:59 np0005634532 podman[264717]: 2026-03-01 10:07:59.947414933 +0000 UTC m=+0.037140858 container create 9a0f60fd647e2c38be90110d335ec06e71b9d1de89ba16fa717f188fad74b921 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_chaum, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Mar  1 05:07:59 np0005634532 systemd[1]: Started libpod-conmon-9a0f60fd647e2c38be90110d335ec06e71b9d1de89ba16fa717f188fad74b921.scope.
Mar  1 05:08:00 np0005634532 systemd[1]: Started libcrun container.
Mar  1 05:08:00 np0005634532 podman[264717]: 2026-03-01 10:08:00.020499638 +0000 UTC m=+0.110225553 container init 9a0f60fd647e2c38be90110d335ec06e71b9d1de89ba16fa717f188fad74b921 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_chaum, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Mar  1 05:08:00 np0005634532 podman[264717]: 2026-03-01 10:07:59.928744272 +0000 UTC m=+0.018470197 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Mar  1 05:08:00 np0005634532 podman[264717]: 2026-03-01 10:08:00.029184662 +0000 UTC m=+0.118910587 container start 9a0f60fd647e2c38be90110d335ec06e71b9d1de89ba16fa717f188fad74b921 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_chaum, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Mar  1 05:08:00 np0005634532 podman[264717]: 2026-03-01 10:08:00.032273869 +0000 UTC m=+0.121999814 container attach 9a0f60fd647e2c38be90110d335ec06e71b9d1de89ba16fa717f188fad74b921 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_chaum, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Mar  1 05:08:00 np0005634532 thirsty_chaum[264733]: 167 167
Mar  1 05:08:00 np0005634532 systemd[1]: libpod-9a0f60fd647e2c38be90110d335ec06e71b9d1de89ba16fa717f188fad74b921.scope: Deactivated successfully.
Mar  1 05:08:00 np0005634532 podman[264717]: 2026-03-01 10:08:00.034686778 +0000 UTC m=+0.124412693 container died 9a0f60fd647e2c38be90110d335ec06e71b9d1de89ba16fa717f188fad74b921 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_chaum, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Mar  1 05:08:00 np0005634532 systemd[1]: var-lib-containers-storage-overlay-de2ac75949d9db2e55977a6270afb3f99cb07fa6b8a78332e18cc6c05a6ef544-merged.mount: Deactivated successfully.
Mar  1 05:08:00 np0005634532 podman[264717]: 2026-03-01 10:08:00.068216216 +0000 UTC m=+0.157942131 container remove 9a0f60fd647e2c38be90110d335ec06e71b9d1de89ba16fa717f188fad74b921 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=thirsty_chaum, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Mar  1 05:08:00 np0005634532 systemd[1]: libpod-conmon-9a0f60fd647e2c38be90110d335ec06e71b9d1de89ba16fa717f188fad74b921.scope: Deactivated successfully.
Mar  1 05:08:00 np0005634532 podman[264758]: 2026-03-01 10:08:00.24328121 +0000 UTC m=+0.043923576 container create 2477dd43cfe9f58267302a0a64da76bd0d0cf784a07b77a02b21c1738436a1b2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_vaughan, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Mar  1 05:08:00 np0005634532 systemd[1]: Started libpod-conmon-2477dd43cfe9f58267302a0a64da76bd0d0cf784a07b77a02b21c1738436a1b2.scope.
Mar  1 05:08:00 np0005634532 systemd[1]: Started libcrun container.
Mar  1 05:08:00 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14baaaaa02ac7f795739b6c0c458b2d830d57208479caeea1f23e11e191652d4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Mar  1 05:08:00 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14baaaaa02ac7f795739b6c0c458b2d830d57208479caeea1f23e11e191652d4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Mar  1 05:08:00 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14baaaaa02ac7f795739b6c0c458b2d830d57208479caeea1f23e11e191652d4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Mar  1 05:08:00 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14baaaaa02ac7f795739b6c0c458b2d830d57208479caeea1f23e11e191652d4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Mar  1 05:08:00 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/14baaaaa02ac7f795739b6c0c458b2d830d57208479caeea1f23e11e191652d4/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Mar  1 05:08:00 np0005634532 podman[264758]: 2026-03-01 10:08:00.223274906 +0000 UTC m=+0.023917302 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Mar  1 05:08:00 np0005634532 podman[264758]: 2026-03-01 10:08:00.339608779 +0000 UTC m=+0.140251205 container init 2477dd43cfe9f58267302a0a64da76bd0d0cf784a07b77a02b21c1738436a1b2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_vaughan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Mar  1 05:08:00 np0005634532 podman[264758]: 2026-03-01 10:08:00.345472423 +0000 UTC m=+0.146114809 container start 2477dd43cfe9f58267302a0a64da76bd0d0cf784a07b77a02b21c1738436a1b2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_vaughan, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Mar  1 05:08:00 np0005634532 podman[264758]: 2026-03-01 10:08:00.34937426 +0000 UTC m=+0.150016646 container attach 2477dd43cfe9f58267302a0a64da76bd0d0cf784a07b77a02b21c1738436a1b2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_vaughan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Mar  1 05:08:00 np0005634532 vigorous_vaughan[264774]: --> passed data devices: 0 physical, 1 LVM
Mar  1 05:08:00 np0005634532 vigorous_vaughan[264774]: --> All data devices are unavailable
Mar  1 05:08:00 np0005634532 systemd[1]: libpod-2477dd43cfe9f58267302a0a64da76bd0d0cf784a07b77a02b21c1738436a1b2.scope: Deactivated successfully.
Mar  1 05:08:00 np0005634532 podman[264758]: 2026-03-01 10:08:00.689653182 +0000 UTC m=+0.490295548 container died 2477dd43cfe9f58267302a0a64da76bd0d0cf784a07b77a02b21c1738436a1b2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_vaughan, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Mar  1 05:08:00 np0005634532 systemd[1]: var-lib-containers-storage-overlay-14baaaaa02ac7f795739b6c0c458b2d830d57208479caeea1f23e11e191652d4-merged.mount: Deactivated successfully.
Mar  1 05:08:00 np0005634532 podman[264758]: 2026-03-01 10:08:00.724115383 +0000 UTC m=+0.524757789 container remove 2477dd43cfe9f58267302a0a64da76bd0d0cf784a07b77a02b21c1738436a1b2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_vaughan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Mar  1 05:08:00 np0005634532 systemd[1]: libpod-conmon-2477dd43cfe9f58267302a0a64da76bd0d0cf784a07b77a02b21c1738436a1b2.scope: Deactivated successfully.
Mar  1 05:08:00 np0005634532 nova_compute[257049]: 2026-03-01 10:08:00.918 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:08:01 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v766: 353 pgs: 353 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Mar  1 05:08:01 np0005634532 podman[264896]: 2026-03-01 10:08:01.26169848 +0000 UTC m=+0.043861565 container create b892a2eda911d0db6d631ce866b6fd984c63fb142ddca2dfbbbe9d8084d25d0e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_merkle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Mar  1 05:08:01 np0005634532 systemd[1]: Started libpod-conmon-b892a2eda911d0db6d631ce866b6fd984c63fb142ddca2dfbbbe9d8084d25d0e.scope.
Mar  1 05:08:01 np0005634532 systemd[1]: Started libcrun container.
Mar  1 05:08:01 np0005634532 podman[264896]: 2026-03-01 10:08:01.326622373 +0000 UTC m=+0.108785518 container init b892a2eda911d0db6d631ce866b6fd984c63fb142ddca2dfbbbe9d8084d25d0e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_merkle, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Mar  1 05:08:01 np0005634532 podman[264896]: 2026-03-01 10:08:01.334073227 +0000 UTC m=+0.116236302 container start b892a2eda911d0db6d631ce866b6fd984c63fb142ddca2dfbbbe9d8084d25d0e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_merkle, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Mar  1 05:08:01 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[263463]: 01/03/2026 10:08:01 : epoch 69a40fb0 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0fa4001090 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:08:01 np0005634532 podman[264896]: 2026-03-01 10:08:01.243372157 +0000 UTC m=+0.025535272 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Mar  1 05:08:01 np0005634532 podman[264896]: 2026-03-01 10:08:01.337677866 +0000 UTC m=+0.119840941 container attach b892a2eda911d0db6d631ce866b6fd984c63fb142ddca2dfbbbe9d8084d25d0e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_merkle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Mar  1 05:08:01 np0005634532 loving_merkle[264913]: 167 167
Mar  1 05:08:01 np0005634532 systemd[1]: libpod-b892a2eda911d0db6d631ce866b6fd984c63fb142ddca2dfbbbe9d8084d25d0e.scope: Deactivated successfully.
Mar  1 05:08:01 np0005634532 podman[264896]: 2026-03-01 10:08:01.339184233 +0000 UTC m=+0.121347308 container died b892a2eda911d0db6d631ce866b6fd984c63fb142ddca2dfbbbe9d8084d25d0e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_merkle, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Mar  1 05:08:01 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[263463]: 01/03/2026 10:08:01 : epoch 69a40fb0 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0fd0009ad0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:08:01 np0005634532 systemd[1]: var-lib-containers-storage-overlay-8c9b785affcde938cd81062f7fc1dd3c1ac4f649e6016976caea468475c9e083-merged.mount: Deactivated successfully.
Mar  1 05:08:01 np0005634532 podman[264896]: 2026-03-01 10:08:01.375457459 +0000 UTC m=+0.157620544 container remove b892a2eda911d0db6d631ce866b6fd984c63fb142ddca2dfbbbe9d8084d25d0e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_merkle, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Mar  1 05:08:01 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:08:01 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 05:08:01 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:08:01.377 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 05:08:01 np0005634532 systemd[1]: libpod-conmon-b892a2eda911d0db6d631ce866b6fd984c63fb142ddca2dfbbbe9d8084d25d0e.scope: Deactivated successfully.
Mar  1 05:08:01 np0005634532 podman[264937]: 2026-03-01 10:08:01.54272161 +0000 UTC m=+0.056109777 container create 51192fcbf0c0916761f0386f4d31aa39680cf25f31612e1825ac255a820f90d9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_wozniak, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1)
Mar  1 05:08:01 np0005634532 systemd[1]: Started libpod-conmon-51192fcbf0c0916761f0386f4d31aa39680cf25f31612e1825ac255a820f90d9.scope.
Mar  1 05:08:01 np0005634532 podman[264937]: 2026-03-01 10:08:01.52007193 +0000 UTC m=+0.033460117 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Mar  1 05:08:01 np0005634532 systemd[1]: Started libcrun container.
Mar  1 05:08:01 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47fe8db64aed2dc351515ed2a1ebef6765589fe97a9e039e7f0d9f781227973f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Mar  1 05:08:01 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47fe8db64aed2dc351515ed2a1ebef6765589fe97a9e039e7f0d9f781227973f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Mar  1 05:08:01 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47fe8db64aed2dc351515ed2a1ebef6765589fe97a9e039e7f0d9f781227973f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Mar  1 05:08:01 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47fe8db64aed2dc351515ed2a1ebef6765589fe97a9e039e7f0d9f781227973f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Mar  1 05:08:01 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:08:01 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 05:08:01 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:08:01.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 05:08:01 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[263463]: 01/03/2026 10:08:01 : epoch 69a40fb0 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f0fac003c70 fd 38 proxy ignored for local
Mar  1 05:08:01 np0005634532 kernel: ganesha.nfsd[263618]: segfault at 50 ip 00007f1053aa232e sp 00007f0fb37fd210 error 4 in libntirpc.so.5.8[7f1053a87000+2c000] likely on CPU 5 (core 0, socket 5)
Mar  1 05:08:01 np0005634532 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
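[Editor's note] This is the hard failure in this section: an nfs-ganesha worker thread dereferences a near-NULL pointer (fault address 0x50, error 4 = user-mode read of an unmapped page) inside libntirpc.so.5.8, right after the run of svc_vc_recv "proxy header" events above. The kernel line carries the instruction pointer and the library's mapped segment, so the faulting offset can be recovered for addr2line/gdb against a debuginfo build. A small sketch of that arithmetic, using the values logged above:

    import re

    line = ("ganesha.nfsd[263618]: segfault at 50 ip 00007f1053aa232e "
            "sp 00007f0fb37fd210 error 4 in libntirpc.so.5.8[7f1053a87000+2c000]")

    m = re.search(r'ip ([0-9a-f]+) .* in (\S+)\[([0-9a-f]+)\+[0-9a-f]+\]', line)
    ip, lib, base = int(m.group(1), 16), m.group(2), int(m.group(3), 16)

    # Offset within the mapped segment (0x1b32e here). systemd-coredump below
    # reports +0x2232e because it resolves against the ELF load base rather
    # than this one segment, so the two offsets need not agree.
    print(f"addr2line -f -e /usr/lib64/{lib} {ip - base:#x}")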
Mar  1 05:08:01 np0005634532 podman[264937]: 2026-03-01 10:08:01.65207002 +0000 UTC m=+0.165458197 container init 51192fcbf0c0916761f0386f4d31aa39680cf25f31612e1825ac255a820f90d9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_wozniak, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Mar  1 05:08:01 np0005634532 systemd[1]: Started Process Core Dump (PID 264956/UID 0).
Mar  1 05:08:01 np0005634532 podman[264937]: 2026-03-01 10:08:01.66176217 +0000 UTC m=+0.175150337 container start 51192fcbf0c0916761f0386f4d31aa39680cf25f31612e1825ac255a820f90d9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_wozniak, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, OSD_FLAVOR=default)
Mar  1 05:08:01 np0005634532 podman[264937]: 2026-03-01 10:08:01.666397894 +0000 UTC m=+0.179786071 container attach 51192fcbf0c0916761f0386f4d31aa39680cf25f31612e1825ac255a820f90d9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_wozniak, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Mar  1 05:08:01 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:08:01 np0005634532 kind_wozniak[264953]: {
Mar  1 05:08:01 np0005634532 kind_wozniak[264953]:    "0": [
Mar  1 05:08:01 np0005634532 kind_wozniak[264953]:        {
Mar  1 05:08:01 np0005634532 kind_wozniak[264953]:            "devices": [
Mar  1 05:08:01 np0005634532 kind_wozniak[264953]:                "/dev/loop3"
Mar  1 05:08:01 np0005634532 kind_wozniak[264953]:            ],
Mar  1 05:08:01 np0005634532 kind_wozniak[264953]:            "lv_name": "ceph_lv0",
Mar  1 05:08:01 np0005634532 kind_wozniak[264953]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Mar  1 05:08:01 np0005634532 kind_wozniak[264953]:            "lv_size": "21470642176",
Mar  1 05:08:01 np0005634532 kind_wozniak[264953]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=0RN1y3-WDwf-vmRn-3Uec-9cdU-WGcX-8Z6LEG,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=437b1e74-f995-5d64-af1d-257ce01d77ab,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=e5da778e-73b7-4ea1-8a91-750fe3f6aa68,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Mar  1 05:08:01 np0005634532 kind_wozniak[264953]:            "lv_uuid": "0RN1y3-WDwf-vmRn-3Uec-9cdU-WGcX-8Z6LEG",
Mar  1 05:08:01 np0005634532 kind_wozniak[264953]:            "name": "ceph_lv0",
Mar  1 05:08:01 np0005634532 kind_wozniak[264953]:            "path": "/dev/ceph_vg0/ceph_lv0",
Mar  1 05:08:01 np0005634532 kind_wozniak[264953]:            "tags": {
Mar  1 05:08:01 np0005634532 kind_wozniak[264953]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Mar  1 05:08:01 np0005634532 kind_wozniak[264953]:                "ceph.block_uuid": "0RN1y3-WDwf-vmRn-3Uec-9cdU-WGcX-8Z6LEG",
Mar  1 05:08:01 np0005634532 kind_wozniak[264953]:                "ceph.cephx_lockbox_secret": "",
Mar  1 05:08:01 np0005634532 kind_wozniak[264953]:                "ceph.cluster_fsid": "437b1e74-f995-5d64-af1d-257ce01d77ab",
Mar  1 05:08:01 np0005634532 kind_wozniak[264953]:                "ceph.cluster_name": "ceph",
Mar  1 05:08:01 np0005634532 kind_wozniak[264953]:                "ceph.crush_device_class": "",
Mar  1 05:08:01 np0005634532 kind_wozniak[264953]:                "ceph.encrypted": "0",
Mar  1 05:08:01 np0005634532 kind_wozniak[264953]:                "ceph.osd_fsid": "e5da778e-73b7-4ea1-8a91-750fe3f6aa68",
Mar  1 05:08:01 np0005634532 kind_wozniak[264953]:                "ceph.osd_id": "0",
Mar  1 05:08:01 np0005634532 kind_wozniak[264953]:                "ceph.osdspec_affinity": "default_drive_group",
Mar  1 05:08:01 np0005634532 kind_wozniak[264953]:                "ceph.type": "block",
Mar  1 05:08:01 np0005634532 kind_wozniak[264953]:                "ceph.vdo": "0",
Mar  1 05:08:01 np0005634532 kind_wozniak[264953]:                "ceph.with_tpm": "0"
Mar  1 05:08:01 np0005634532 kind_wozniak[264953]:            },
Mar  1 05:08:01 np0005634532 kind_wozniak[264953]:            "type": "block",
Mar  1 05:08:01 np0005634532 kind_wozniak[264953]:            "vg_name": "ceph_vg0"
Mar  1 05:08:01 np0005634532 kind_wozniak[264953]:        }
Mar  1 05:08:01 np0005634532 kind_wozniak[264953]:    ]
Mar  1 05:08:01 np0005634532 kind_wozniak[264953]: }
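[Editor's note] The JSON block printed by the kind_wozniak container is a ceph-volume-style LVM inventory for OSD 0: one logical volume ceph_vg0/ceph_lv0 (lv_size 21470642176 bytes ≈ 20 GiB) backed by /dev/loop3 and tagged with the cluster fsid seen throughout this log. A minimal sketch for summarizing a report of exactly this shape (keyed by OSD id; field names as printed above):

    import json

    def summarize(report: str):
        for osd_id, lvs in json.loads(report).items():
            for lv in lvs:
                yield {
                    "osd_id": osd_id,
                    "lv_path": lv["lv_path"],
                    "devices": lv["devices"],                # ["/dev/loop3"] above
                    "size_gib": int(lv["lv_size"]) / 2**30,  # ~20.0
                    "osd_fsid": lv["tags"]["ceph.osd_fsid"],
                }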
Mar  1 05:08:01 np0005634532 systemd[1]: libpod-51192fcbf0c0916761f0386f4d31aa39680cf25f31612e1825ac255a820f90d9.scope: Deactivated successfully.
Mar  1 05:08:01 np0005634532 podman[264937]: 2026-03-01 10:08:01.978734548 +0000 UTC m=+0.492122715 container died 51192fcbf0c0916761f0386f4d31aa39680cf25f31612e1825ac255a820f90d9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_wozniak, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Mar  1 05:08:02 np0005634532 systemd[1]: var-lib-containers-storage-overlay-47fe8db64aed2dc351515ed2a1ebef6765589fe97a9e039e7f0d9f781227973f-merged.mount: Deactivated successfully.
Mar  1 05:08:02 np0005634532 podman[264937]: 2026-03-01 10:08:02.026530748 +0000 UTC m=+0.539918945 container remove 51192fcbf0c0916761f0386f4d31aa39680cf25f31612e1825ac255a820f90d9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=kind_wozniak, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0)
Mar  1 05:08:02 np0005634532 systemd[1]: libpod-conmon-51192fcbf0c0916761f0386f4d31aa39680cf25f31612e1825ac255a820f90d9.scope: Deactivated successfully.
Mar  1 05:08:02 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Mar  1 05:08:02 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Mar  1 05:08:02 np0005634532 systemd-coredump[264957]: Process 263467 (ganesha.nfsd) of user 0 dumped core.#012#012Stack trace of thread 55:#012#0  0x00007f1053aa232e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)#012#1  0x0000000000000000 n/a (n/a + 0x0)#012#2  0x00007f1053aac900 n/a (/usr/lib64/libntirpc.so.5.8 + 0x2c900)#012ELF object binary architecture: AMD x86-64
Mar  1 05:08:02 np0005634532 systemd[1]: systemd-coredump@9-264956-0.service: Deactivated successfully.
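[Editor's note] The core was captured by systemd-coredump, but the recorded stack has only two resolvable frames in libntirpc.so.5.8, so offline analysis needs the dump plus debuginfo. A sketch for retrieving it, assuming default coredumpctl retention and the PID from the record above:

    import subprocess

    PID = 263467  # ganesha.nfsd, from the systemd-coredump record above

    # Show the journal metadata for this crash, then extract the core file
    # for gdb against nfs-ganesha/libntirpc debuginfo packages.
    subprocess.run(["coredumpctl", "info", str(PID)], check=True)
    subprocess.run(["coredumpctl", "dump", str(PID), "-o", "ganesha.core"],
                   check=True)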
Mar  1 05:08:02 np0005634532 podman[265069]: 2026-03-01 10:08:02.600345059 +0000 UTC m=+0.047267458 container create 3d0fb4a7a5bd031bd76e71ba8911a3252933041374afc15320bd1e873d05ead2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_lamarr, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Mar  1 05:08:02 np0005634532 systemd[1]: Started libpod-conmon-3d0fb4a7a5bd031bd76e71ba8911a3252933041374afc15320bd1e873d05ead2.scope.
Mar  1 05:08:02 np0005634532 podman[265086]: 2026-03-01 10:08:02.655572773 +0000 UTC m=+0.032288009 container died 47b32316364fc7400e0e0cbcdbbab97fa4425c5273a7088f455c511c89e3f372 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Mar  1 05:08:02 np0005634532 systemd[1]: Started libcrun container.
Mar  1 05:08:02 np0005634532 podman[265069]: 2026-03-01 10:08:02.573853695 +0000 UTC m=+0.020775984 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Mar  1 05:08:02 np0005634532 podman[265069]: 2026-03-01 10:08:02.688893966 +0000 UTC m=+0.135816255 container init 3d0fb4a7a5bd031bd76e71ba8911a3252933041374afc15320bd1e873d05ead2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_lamarr, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Mar  1 05:08:02 np0005634532 podman[265069]: 2026-03-01 10:08:02.693880559 +0000 UTC m=+0.140802828 container start 3d0fb4a7a5bd031bd76e71ba8911a3252933041374afc15320bd1e873d05ead2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_lamarr, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Mar  1 05:08:02 np0005634532 systemd[1]: var-lib-containers-storage-overlay-346658b102115a55acc4701e270d5e5f8a106c57b5226787cdf2e15787adccd9-merged.mount: Deactivated successfully.
Mar  1 05:08:02 np0005634532 focused_lamarr[265099]: 167 167
Mar  1 05:08:02 np0005634532 systemd[1]: libpod-3d0fb4a7a5bd031bd76e71ba8911a3252933041374afc15320bd1e873d05ead2.scope: Deactivated successfully.
Mar  1 05:08:02 np0005634532 podman[265069]: 2026-03-01 10:08:02.71783045 +0000 UTC m=+0.164752719 container attach 3d0fb4a7a5bd031bd76e71ba8911a3252933041374afc15320bd1e873d05ead2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_lamarr, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Mar  1 05:08:02 np0005634532 podman[265069]: 2026-03-01 10:08:02.71821907 +0000 UTC m=+0.165141339 container died 3d0fb4a7a5bd031bd76e71ba8911a3252933041374afc15320bd1e873d05ead2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_lamarr, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Mar  1 05:08:02 np0005634532 systemd[1]: var-lib-containers-storage-overlay-b2ebce44ae831f2f5de42fcda1fee74a066e788f8c9ebfb0f773fc6e53a78112-merged.mount: Deactivated successfully.
Mar  1 05:08:02 np0005634532 podman[265069]: 2026-03-01 10:08:02.809712409 +0000 UTC m=+0.256634678 container remove 3d0fb4a7a5bd031bd76e71ba8911a3252933041374afc15320bd1e873d05ead2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_lamarr, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Mar  1 05:08:02 np0005634532 systemd[1]: libpod-conmon-3d0fb4a7a5bd031bd76e71ba8911a3252933041374afc15320bd1e873d05ead2.scope: Deactivated successfully.
Mar  1 05:08:02 np0005634532 podman[265086]: 2026-03-01 10:08:02.875311559 +0000 UTC m=+0.252026825 container remove 47b32316364fc7400e0e0cbcdbbab97fa4425c5273a7088f455c511c89e3f372 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Mar  1 05:08:02 np0005634532 systemd[1]: ceph-437b1e74-f995-5d64-af1d-257ce01d77ab@nfs.cephfs.2.0.compute-0.ljexyw.service: Main process exited, code=exited, status=139/n/a
Mar  1 05:08:02 np0005634532 podman[265129]: 2026-03-01 10:08:02.955797097 +0000 UTC m=+0.059191403 container create dfb34fe89128f087cc6745930546ddc29738f1e69b5deebfaafa9870d995f0f9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_goldwasser, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Mar  1 05:08:02 np0005634532 systemd[1]: Started libpod-conmon-dfb34fe89128f087cc6745930546ddc29738f1e69b5deebfaafa9870d995f0f9.scope.
Mar  1 05:08:03 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v767: 353 pgs: 353 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Mar  1 05:08:03 np0005634532 podman[265129]: 2026-03-01 10:08:02.928212086 +0000 UTC m=+0.031606482 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Mar  1 05:08:03 np0005634532 systemd[1]: Started libcrun container.
Mar  1 05:08:03 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/206df6fce03553a8f62f195cbb2f6e99953fd1a4a71a381e3527a3cdaf335726/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Mar  1 05:08:03 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/206df6fce03553a8f62f195cbb2f6e99953fd1a4a71a381e3527a3cdaf335726/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Mar  1 05:08:03 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/206df6fce03553a8f62f195cbb2f6e99953fd1a4a71a381e3527a3cdaf335726/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Mar  1 05:08:03 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/206df6fce03553a8f62f195cbb2f6e99953fd1a4a71a381e3527a3cdaf335726/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Mar  1 05:08:03 np0005634532 podman[265129]: 2026-03-01 10:08:03.043275368 +0000 UTC m=+0.146669694 container init dfb34fe89128f087cc6745930546ddc29738f1e69b5deebfaafa9870d995f0f9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_goldwasser, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Mar  1 05:08:03 np0005634532 podman[265129]: 2026-03-01 10:08:03.04864944 +0000 UTC m=+0.152043756 container start dfb34fe89128f087cc6745930546ddc29738f1e69b5deebfaafa9870d995f0f9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_goldwasser, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Mar  1 05:08:03 np0005634532 podman[265129]: 2026-03-01 10:08:03.054151886 +0000 UTC m=+0.157546212 container attach dfb34fe89128f087cc6745930546ddc29738f1e69b5deebfaafa9870d995f0f9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_goldwasser, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Mar  1 05:08:03 np0005634532 systemd[1]: ceph-437b1e74-f995-5d64-af1d-257ce01d77ab@nfs.cephfs.2.0.compute-0.ljexyw.service: Failed with result 'exit-code'.
Mar  1 05:08:03 np0005634532 systemd[1]: ceph-437b1e74-f995-5d64-af1d-257ce01d77ab@nfs.cephfs.2.0.compute-0.ljexyw.service: Consumed 1.004s CPU time.
Mar  1 05:08:03 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:08:03 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:08:03 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:08:03.380 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:08:03 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:08:03 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:08:03 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:08:03.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:08:03 np0005634532 lvm[265248]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Mar  1 05:08:03 np0005634532 lvm[265248]: VG ceph_vg0 finished
Mar  1 05:08:03 np0005634532 strange_goldwasser[265170]: {}
Mar  1 05:08:03 np0005634532 systemd[1]: libpod-dfb34fe89128f087cc6745930546ddc29738f1e69b5deebfaafa9870d995f0f9.scope: Deactivated successfully.
Mar  1 05:08:03 np0005634532 systemd[1]: libpod-dfb34fe89128f087cc6745930546ddc29738f1e69b5deebfaafa9870d995f0f9.scope: Consumed 1.051s CPU time.
Mar  1 05:08:03 np0005634532 podman[265129]: 2026-03-01 10:08:03.787098687 +0000 UTC m=+0.890493023 container died dfb34fe89128f087cc6745930546ddc29738f1e69b5deebfaafa9870d995f0f9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_goldwasser, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Mar  1 05:08:03 np0005634532 systemd[1]: var-lib-containers-storage-overlay-206df6fce03553a8f62f195cbb2f6e99953fd1a4a71a381e3527a3cdaf335726-merged.mount: Deactivated successfully.
Mar  1 05:08:03 np0005634532 podman[265129]: 2026-03-01 10:08:03.829912574 +0000 UTC m=+0.933306900 container remove dfb34fe89128f087cc6745930546ddc29738f1e69b5deebfaafa9870d995f0f9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_goldwasser, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Mar  1 05:08:03 np0005634532 systemd[1]: libpod-conmon-dfb34fe89128f087cc6745930546ddc29738f1e69b5deebfaafa9870d995f0f9.scope: Deactivated successfully.
Mar  1 05:08:03 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Mar  1 05:08:03 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 05:08:03 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Mar  1 05:08:03 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 05:08:04 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 05:08:04 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 05:08:04 np0005634532 nova_compute[257049]: 2026-03-01 10:08:04.552 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:08:05 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v768: 353 pgs: 353 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 426 B/s rd, 0 op/s
Mar  1 05:08:05 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:08:05 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:08:05 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:08:05.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:08:05 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:08:05 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000024s ======
Mar  1 05:08:05 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:08:05.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Mar  1 05:08:05 np0005634532 nova_compute[257049]: 2026-03-01 10:08:05.922 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:08:06 np0005634532 podman[265292]: 2026-03-01 10:08:06.398829735 +0000 UTC m=+0.087382089 container health_status eba5e7c5bf1cb0694498c3b5d0f9b901dec36f2482ec40aef98fed1e15d9f18d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:bf1f44b6bfa655654f556ae3adca2582892e72550d916a5b0a8dbacb0e210bbe, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '08b9467e1b7e95537191fb2fa6825d176e65d7d6be048e9a0925db064969bbde-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:bf1f44b6bfa655654f556ae3adca2582892e72550d916a5b0a8dbacb0e210bbe', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible)
Mar  1 05:08:06 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:08:07 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v769: 353 pgs: 353 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Mar  1 05:08:07 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: ::ffff:192.168.122.100 - - [01/Mar/2026:10:08:07] "GET /metrics HTTP/1.1" 200 48442 "" "Prometheus/2.51.0"
Mar  1 05:08:07 np0005634532 ceph-mgr[76134]: [prometheus INFO cherrypy.access.140604152982112] ::ffff:192.168.122.100 - - [01/Mar/2026:10:08:07] "GET /metrics HTTP/1.1" 200 48442 "" "Prometheus/2.51.0"
Mar  1 05:08:07 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:08:07.212Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Mar  1 05:08:07 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:08:07.213Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Mar  1 05:08:07 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:08:07 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 05:08:07 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:08:07.387 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 05:08:07 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-haproxy-nfs-cephfs-compute-0-wdbjdw[99156]: [WARNING] 059/100807 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Mar  1 05:08:07 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:08:07 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:08:07 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:08:07.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:08:09 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v770: 353 pgs: 353 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 341 B/s rd, 0 op/s
Mar  1 05:08:09 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:08:09 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:08:09 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:08:09.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:08:09 np0005634532 nova_compute[257049]: 2026-03-01 10:08:09.590 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:08:09 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:08:09 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 05:08:09 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:08:09.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 05:08:10 np0005634532 podman[265323]: 2026-03-01 10:08:10.35076946 +0000 UTC m=+0.045963886 container health_status 1df4dec302d8b40bce93714986cfc892519e8a31436399020d0034ae17ac64d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a5baae9e21dffd42c66f546b0b62f95c6af9f409d05e01e7483de42d5dd2a382, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '08b9467e1b7e95537191fb2fa6825d176e65d7d6be048e9a0925db064969bbde-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a5baae9e21dffd42c66f546b0b62f95c6af9f409d05e01e7483de42d5dd2a382', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Mar  1 05:08:10 np0005634532 nova_compute[257049]: 2026-03-01 10:08:10.925 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:08:11 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v771: 353 pgs: 353 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Mar  1 05:08:11 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:08:11 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:08:11 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:08:11.393 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:08:11 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:08:11 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:08:11 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:08:11.643 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:08:11 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:08:12 np0005634532 systemd[1]: virtsecretd.service: Deactivated successfully.
Mar  1 05:08:13 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v772: 353 pgs: 353 active+clean; 41 MiB data, 264 MiB used, 60 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Mar  1 05:08:13 np0005634532 systemd[1]: ceph-437b1e74-f995-5d64-af1d-257ce01d77ab@nfs.cephfs.2.0.compute-0.ljexyw.service: Scheduled restart job, restart counter is at 10.
Mar  1 05:08:13 np0005634532 systemd[1]: Stopped Ceph nfs.cephfs.2.0.compute-0.ljexyw for 437b1e74-f995-5d64-af1d-257ce01d77ab.
Mar  1 05:08:13 np0005634532 systemd[1]: ceph-437b1e74-f995-5d64-af1d-257ce01d77ab@nfs.cephfs.2.0.compute-0.ljexyw.service: Consumed 1.004s CPU time.
Mar  1 05:08:13 np0005634532 systemd[1]: Starting Ceph nfs.cephfs.2.0.compute-0.ljexyw for 437b1e74-f995-5d64-af1d-257ce01d77ab...
Mar  1 05:08:13 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:08:13 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:08:13 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:08:13.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:08:13 np0005634532 podman[265395]: 2026-03-01 10:08:13.501279577 +0000 UTC m=+0.043600058 container create 5eb33409510ff5e74ba5a928f75a0751a4610456e015a34f39b9193b684f7b0c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True)
Mar  1 05:08:13 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc6bbefb7e30bc15aed74844614d79ec28ea529f33b3a62c2ec484b672803c21/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Mar  1 05:08:13 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc6bbefb7e30bc15aed74844614d79ec28ea529f33b3a62c2ec484b672803c21/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Mar  1 05:08:13 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc6bbefb7e30bc15aed74844614d79ec28ea529f33b3a62c2ec484b672803c21/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Mar  1 05:08:13 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc6bbefb7e30bc15aed74844614d79ec28ea529f33b3a62c2ec484b672803c21/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.2.0.compute-0.ljexyw-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Mar  1 05:08:13 np0005634532 podman[265395]: 2026-03-01 10:08:13.551311872 +0000 UTC m=+0.093632133 container init 5eb33409510ff5e74ba5a928f75a0751a4610456e015a34f39b9193b684f7b0c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1)
Mar  1 05:08:13 np0005634532 podman[265395]: 2026-03-01 10:08:13.556316946 +0000 UTC m=+0.098637187 container start 5eb33409510ff5e74ba5a928f75a0751a4610456e015a34f39b9193b684f7b0c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Mar  1 05:08:13 np0005634532 bash[265395]: 5eb33409510ff5e74ba5a928f75a0751a4610456e015a34f39b9193b684f7b0c
Mar  1 05:08:13 np0005634532 podman[265395]: 2026-03-01 10:08:13.480908034 +0000 UTC m=+0.023228295 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Mar  1 05:08:13 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[265411]: 01/03/2026 10:08:13 : epoch 69a4100d : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Mar  1 05:08:13 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[265411]: 01/03/2026 10:08:13 : epoch 69a4100d : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Mar  1 05:08:13 np0005634532 systemd[1]: Started Ceph nfs.cephfs.2.0.compute-0.ljexyw for 437b1e74-f995-5d64-af1d-257ce01d77ab.
Mar  1 05:08:13 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[265411]: 01/03/2026 10:08:13 : epoch 69a4100d : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Mar  1 05:08:13 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[265411]: 01/03/2026 10:08:13 : epoch 69a4100d : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Mar  1 05:08:13 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[265411]: 01/03/2026 10:08:13 : epoch 69a4100d : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Mar  1 05:08:13 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[265411]: 01/03/2026 10:08:13 : epoch 69a4100d : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Mar  1 05:08:13 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[265411]: 01/03/2026 10:08:13 : epoch 69a4100d : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Mar  1 05:08:13 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[265411]: 01/03/2026 10:08:13 : epoch 69a4100d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Mar  1 05:08:13 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:08:13 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:08:13 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:08:13.645 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:08:14 np0005634532 nova_compute[257049]: 2026-03-01 10:08:14.592 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:08:15 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v773: 353 pgs: 353 active+clean; 88 MiB data, 276 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Mar  1 05:08:15 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:08:15 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000024s ======
Mar  1 05:08:15 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:08:15.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Mar  1 05:08:15 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:08:15 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:08:15 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:08:15.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:08:15 np0005634532 nova_compute[257049]: 2026-03-01 10:08:15.928 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:08:16 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:08:17 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v774: 353 pgs: 353 active+clean; 88 MiB data, 276 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Mar  1 05:08:17 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: ::ffff:192.168.122.100 - - [01/Mar/2026:10:08:17] "GET /metrics HTTP/1.1" 200 48442 "" "Prometheus/2.51.0"
Mar  1 05:08:17 np0005634532 ceph-mgr[76134]: [prometheus INFO cherrypy.access.140604152982112] ::ffff:192.168.122.100 - - [01/Mar/2026:10:08:17] "GET /metrics HTTP/1.1" 200 48442 "" "Prometheus/2.51.0"
Mar  1 05:08:17 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:08:17.214Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Mar  1 05:08:17 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:08:17.214Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Mar  1 05:08:17 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:08:17.215Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Mar  1 05:08:17 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:08:17 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 05:08:17 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:08:17.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 05:08:17 np0005634532 ceph-mgr[76134]: [balancer INFO root] Optimize plan auto_2026-03-01_10:08:17
Mar  1 05:08:17 np0005634532 ceph-mgr[76134]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Mar  1 05:08:17 np0005634532 ceph-mgr[76134]: [balancer INFO root] do_upmap
Mar  1 05:08:17 np0005634532 ceph-mgr[76134]: [balancer INFO root] pools ['cephfs.cephfs.data', 'default.rgw.meta', 'default.rgw.log', 'default.rgw.control', 'backups', '.mgr', '.rgw.root', 'cephfs.cephfs.meta', 'images', 'vms', '.nfs', 'volumes']
Mar  1 05:08:17 np0005634532 ceph-mgr[76134]: [balancer INFO root] prepared 0/10 upmap changes
Mar  1 05:08:17 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Mar  1 05:08:17 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Mar  1 05:08:17 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Mar  1 05:08:17 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Mar  1 05:08:17 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:08:17 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:08:17 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:08:17.649 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:08:17 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Mar  1 05:08:17 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Mar  1 05:08:17 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Mar  1 05:08:17 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Mar  1 05:08:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] _maybe_adjust
Mar  1 05:08:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:08:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Mar  1 05:08:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:08:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0003459970412515465 of space, bias 1.0, pg target 0.10379911237546395 quantized to 32 (current 32)
Mar  1 05:08:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:08:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Mar  1 05:08:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:08:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Mar  1 05:08:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:08:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Mar  1 05:08:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:08:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Mar  1 05:08:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:08:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Mar  1 05:08:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:08:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Mar  1 05:08:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:08:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Mar  1 05:08:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:08:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Mar  1 05:08:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:08:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Mar  1 05:08:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:08:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Mar  1 05:08:19 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v775: 353 pgs: 353 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 1.4 MiB/s rd, 1.8 MiB/s wr, 87 op/s
Mar  1 05:08:19 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Mar  1 05:08:19 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: vms, start_after=
Mar  1 05:08:19 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: volumes, start_after=
Mar  1 05:08:19 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: backups, start_after=
Mar  1 05:08:19 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: images, start_after=
Mar  1 05:08:19 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Mar  1 05:08:19 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: vms, start_after=
Mar  1 05:08:19 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: volumes, start_after=
Mar  1 05:08:19 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: backups, start_after=
Mar  1 05:08:19 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: images, start_after=
Mar  1 05:08:19 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:08:19 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:08:19 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:08:19.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:08:19 np0005634532 nova_compute[257049]: 2026-03-01 10:08:19.647 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:08:19 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:08:19 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:08:19 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:08:19.651 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:08:19 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[265411]: 01/03/2026 10:08:19 : epoch 69a4100d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Mar  1 05:08:19 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[265411]: 01/03/2026 10:08:19 : epoch 69a4100d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Mar  1 05:08:20 np0005634532 nova_compute[257049]: 2026-03-01 10:08:20.969 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:08:21 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v776: 353 pgs: 353 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 1.4 MiB/s rd, 1.8 MiB/s wr, 86 op/s
Mar  1 05:08:21 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:08:21 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:08:21 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:08:21.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:08:21 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:08:21 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:08:21 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:08:21.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:08:21 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:08:23 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v777: 353 pgs: 353 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 1.4 MiB/s rd, 1.8 MiB/s wr, 86 op/s
Mar  1 05:08:23 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:08:23 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:08:23 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:08:23.408 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:08:23 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:08:23 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 05:08:23 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:08:23.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 05:08:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:08:23.881 167541 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Mar  1 05:08:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:08:23.881 167541 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Mar  1 05:08:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:08:23.881 167541 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Mar  1 05:08:24 np0005634532 nova_compute[257049]: 2026-03-01 10:08:24.648 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:08:25 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v778: 353 pgs: 353 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 104 op/s
Mar  1 05:08:25 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:08:25 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 05:08:25 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:08:25.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 05:08:25 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:08:25 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:08:25 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:08:25.657 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:08:25 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[265411]: 01/03/2026 10:08:25 : epoch 69a4100d : compute-0 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Mar  1 05:08:25 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[265411]: 01/03/2026 10:08:25 : epoch 69a4100d : compute-0 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Mar  1 05:08:25 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[265411]: 01/03/2026 10:08:25 : epoch 69a4100d : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Mar  1 05:08:25 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[265411]: 01/03/2026 10:08:25 : epoch 69a4100d : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Mar  1 05:08:25 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[265411]: 01/03/2026 10:08:25 : epoch 69a4100d : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Mar  1 05:08:25 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[265411]: 01/03/2026 10:08:25 : epoch 69a4100d : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Mar  1 05:08:25 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[265411]: 01/03/2026 10:08:25 : epoch 69a4100d : compute-0 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Mar  1 05:08:25 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[265411]: 01/03/2026 10:08:25 : epoch 69a4100d : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Mar  1 05:08:25 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[265411]: 01/03/2026 10:08:25 : epoch 69a4100d : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Mar  1 05:08:25 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[265411]: 01/03/2026 10:08:25 : epoch 69a4100d : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Mar  1 05:08:25 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[265411]: 01/03/2026 10:08:25 : epoch 69a4100d : compute-0 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Mar  1 05:08:25 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[265411]: 01/03/2026 10:08:25 : epoch 69a4100d : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Mar  1 05:08:25 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[265411]: 01/03/2026 10:08:25 : epoch 69a4100d : compute-0 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Mar  1 05:08:25 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[265411]: 01/03/2026 10:08:25 : epoch 69a4100d : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Mar  1 05:08:25 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[265411]: 01/03/2026 10:08:25 : epoch 69a4100d : compute-0 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Mar  1 05:08:25 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[265411]: 01/03/2026 10:08:25 : epoch 69a4100d : compute-0 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Mar  1 05:08:25 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[265411]: 01/03/2026 10:08:25 : epoch 69a4100d : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Mar  1 05:08:25 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[265411]: 01/03/2026 10:08:25 : epoch 69a4100d : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Mar  1 05:08:25 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[265411]: 01/03/2026 10:08:25 : epoch 69a4100d : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Mar  1 05:08:25 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[265411]: 01/03/2026 10:08:25 : epoch 69a4100d : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Mar  1 05:08:25 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[265411]: 01/03/2026 10:08:25 : epoch 69a4100d : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Mar  1 05:08:25 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[265411]: 01/03/2026 10:08:25 : epoch 69a4100d : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Mar  1 05:08:25 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[265411]: 01/03/2026 10:08:25 : epoch 69a4100d : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Mar  1 05:08:25 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[265411]: 01/03/2026 10:08:25 : epoch 69a4100d : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Mar  1 05:08:25 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[265411]: 01/03/2026 10:08:25 : epoch 69a4100d : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Mar  1 05:08:25 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[265411]: 01/03/2026 10:08:25 : epoch 69a4100d : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Mar  1 05:08:25 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[265411]: 01/03/2026 10:08:25 : epoch 69a4100d : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Mar  1 05:08:25 np0005634532 nova_compute[257049]: 2026-03-01 10:08:25.972 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:08:26 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:08:27 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v779: 353 pgs: 353 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 76 op/s
Mar  1 05:08:27 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: ::ffff:192.168.122.100 - - [01/Mar/2026:10:08:27] "GET /metrics HTTP/1.1" 200 48465 "" "Prometheus/2.51.0"
Mar  1 05:08:27 np0005634532 ceph-mgr[76134]: [prometheus INFO cherrypy.access.140604152982112] ::ffff:192.168.122.100 - - [01/Mar/2026:10:08:27] "GET /metrics HTTP/1.1" 200 48465 "" "Prometheus/2.51.0"
Mar  1 05:08:27 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:08:27.215Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Mar  1 05:08:27 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[265411]: 01/03/2026 10:08:27 : epoch 69a4100d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f94ac000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:08:27 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[265411]: 01/03/2026 10:08:27 : epoch 69a4100d : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f94a4001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:08:27 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:08:27 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:08:27 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:08:27.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:08:27 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[265411]: 01/03/2026 10:08:27 : epoch 69a4100d : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f94ac000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:08:27 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:08:27 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:08:27 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:08:27.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:08:29 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v780: 353 pgs: 353 active+clean; 113 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 124 op/s
Mar  1 05:08:29 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[265411]: 01/03/2026 10:08:29 : epoch 69a4100d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9494000b60 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:08:29 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[265411]: 01/03/2026 10:08:29 : epoch 69a4100d : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9490000fa0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:08:29 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:08:29 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:08:29 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:08:29.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:08:29 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-haproxy-nfs-cephfs-compute-0-wdbjdw[99156]: [WARNING] 059/100829 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Mar  1 05:08:29 np0005634532 nova_compute[257049]: 2026-03-01 10:08:29.649 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:08:29 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[265411]: 01/03/2026 10:08:29 : epoch 69a4100d : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f94a4001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:08:29 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:08:29 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000024s ======
Mar  1 05:08:29 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:08:29.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Mar  1 05:08:30 np0005634532 nova_compute[257049]: 2026-03-01 10:08:30.975 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:08:31 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v781: 353 pgs: 353 active+clean; 113 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 747 KiB/s rd, 2.0 MiB/s wr, 65 op/s
Mar  1 05:08:31 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[265411]: 01/03/2026 10:08:31 : epoch 69a4100d : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f94ac000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:08:31 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[265411]: 01/03/2026 10:08:31 : epoch 69a4100d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f94940016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:08:31 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:08:31 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:08:31 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:08:31.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:08:31 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[265411]: 01/03/2026 10:08:31 : epoch 69a4100d : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9490001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:08:31 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:08:31 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:08:31 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:08:31.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:08:31 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:08:32 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Mar  1 05:08:32 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Mar  1 05:08:33 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v782: 353 pgs: 353 active+clean; 113 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 747 KiB/s rd, 2.0 MiB/s wr, 65 op/s
Mar  1 05:08:33 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[265411]: 01/03/2026 10:08:33 : epoch 69a4100d : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f94a4001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:08:33 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[265411]: 01/03/2026 10:08:33 : epoch 69a4100d : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f94ac000df0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:08:33 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:08:33 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 05:08:33 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:08:33.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 05:08:33 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[265411]: 01/03/2026 10:08:33 : epoch 69a4100d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f94940016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:08:33 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:08:33 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:08:33 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:08:33.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:08:34 np0005634532 nova_compute[257049]: 2026-03-01 10:08:34.651 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:08:35 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v783: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 812 KiB/s rd, 2.1 MiB/s wr, 80 op/s
Mar  1 05:08:35 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[265411]: 01/03/2026 10:08:35 : epoch 69a4100d : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9490001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:08:35 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[265411]: 01/03/2026 10:08:35 : epoch 69a4100d : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f94a4001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:08:35 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:08:35 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:08:35 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:08:35.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:08:35 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[265411]: 01/03/2026 10:08:35 : epoch 69a4100d : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f94ac0091b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:08:35 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:08:35 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:08:35 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:08:35.666 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:08:35 np0005634532 nova_compute[257049]: 2026-03-01 10:08:35.978 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:08:36 np0005634532 podman[265546]: 2026-03-01 10:08:36.607941396 +0000 UTC m=+0.080135738 container health_status eba5e7c5bf1cb0694498c3b5d0f9b901dec36f2482ec40aef98fed1e15d9f18d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:bf1f44b6bfa655654f556ae3adca2582892e72550d916a5b0a8dbacb0e210bbe, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '08b9467e1b7e95537191fb2fa6825d176e65d7d6be048e9a0925db064969bbde-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:bf1f44b6bfa655654f556ae3adca2582892e72550d916a5b0a8dbacb0e210bbe', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Mar  1 05:08:36 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:08:36 np0005634532 nova_compute[257049]: 2026-03-01 10:08:36.977 257053 DEBUG oslo_service.periodic_task [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Mar  1 05:08:37 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v784: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 315 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Mar  1 05:08:37 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: ::ffff:192.168.122.100 - - [01/Mar/2026:10:08:37] "GET /metrics HTTP/1.1" 200 48460 "" "Prometheus/2.51.0"
Mar  1 05:08:37 np0005634532 ceph-mgr[76134]: [prometheus INFO cherrypy.access.140604152982112] ::ffff:192.168.122.100 - - [01/Mar/2026:10:08:37] "GET /metrics HTTP/1.1" 200 48460 "" "Prometheus/2.51.0"
Mar  1 05:08:37 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:08:37.216Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Mar  1 05:08:37 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:08:37.217Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Mar  1 05:08:37 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[265411]: 01/03/2026 10:08:37 : epoch 69a4100d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f94940016a0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:08:37 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[265411]: 01/03/2026 10:08:37 : epoch 69a4100d : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9490001ac0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:08:37 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:08:37 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:08:37 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:08:37.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:08:37 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[265411]: 01/03/2026 10:08:37 : epoch 69a4100d : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f94a4001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:08:37 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:08:37 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:08:37 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:08:37.668 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:08:37 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:08:37.831 167541 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=6, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ba:77:84', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'd2:e0:96:ea:56:89'}, ipsec=False) old=SB_Global(nb_cfg=5) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Mar  1 05:08:37 np0005634532 nova_compute[257049]: 2026-03-01 10:08:37.831 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:08:37 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:08:37.832 167541 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Mar  1 05:08:37 np0005634532 nova_compute[257049]: 2026-03-01 10:08:37.976 257053 DEBUG oslo_service.periodic_task [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Mar  1 05:08:37 np0005634532 nova_compute[257049]: 2026-03-01 10:08:37.977 257053 DEBUG nova.compute.manager [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Mar  1 05:08:38 np0005634532 nova_compute[257049]: 2026-03-01 10:08:38.977 257053 DEBUG oslo_service.periodic_task [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Mar  1 05:08:38 np0005634532 nova_compute[257049]: 2026-03-01 10:08:38.977 257053 DEBUG oslo_service.periodic_task [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Mar  1 05:08:39 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v785: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 320 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Mar  1 05:08:39 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[265411]: 01/03/2026 10:08:39 : epoch 69a4100d : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f94ac0091b0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:08:39 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[265411]: 01/03/2026 10:08:39 : epoch 69a4100d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9494002b10 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:08:39 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:08:39 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000024s ======
Mar  1 05:08:39 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:08:39.430 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Mar  1 05:08:39 np0005634532 nova_compute[257049]: 2026-03-01 10:08:39.653 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:08:39 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[265411]: 01/03/2026 10:08:39 : epoch 69a4100d : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9490002f50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:08:39 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:08:39 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000024s ======
Mar  1 05:08:39 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:08:39.670 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Mar  1 05:08:39 np0005634532 nova_compute[257049]: 2026-03-01 10:08:39.976 257053 DEBUG oslo_service.periodic_task [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Mar  1 05:08:39 np0005634532 nova_compute[257049]: 2026-03-01 10:08:39.977 257053 DEBUG oslo_service.periodic_task [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Mar  1 05:08:40 np0005634532 nova_compute[257049]: 2026-03-01 10:08:40.973 257053 DEBUG oslo_service.periodic_task [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Mar  1 05:08:40 np0005634532 nova_compute[257049]: 2026-03-01 10:08:40.981 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:08:41 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v786: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 71 KiB/s rd, 103 KiB/s wr, 15 op/s
Mar  1 05:08:41 np0005634532 podman[265577]: 2026-03-01 10:08:41.368185443 +0000 UTC m=+0.059382755 container health_status 1df4dec302d8b40bce93714986cfc892519e8a31436399020d0034ae17ac64d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a5baae9e21dffd42c66f546b0b62f95c6af9f409d05e01e7483de42d5dd2a382, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '08b9467e1b7e95537191fb2fa6825d176e65d7d6be048e9a0925db064969bbde-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a5baae9e21dffd42c66f546b0b62f95c6af9f409d05e01e7483de42d5dd2a382', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Mar  1 05:08:41 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[265411]: 01/03/2026 10:08:41 : epoch 69a4100d : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f94a4001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:08:41 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[265411]: 01/03/2026 10:08:41 : epoch 69a4100d : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f94ac009ec0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:08:41 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:08:41 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 05:08:41 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:08:41.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 05:08:41 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[265411]: 01/03/2026 10:08:41 : epoch 69a4100d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9490002f50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:08:41 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:08:41 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:08:41 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:08:41.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:08:41 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:08:41 np0005634532 nova_compute[257049]: 2026-03-01 10:08:41.976 257053 DEBUG oslo_service.periodic_task [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Mar  1 05:08:41 np0005634532 nova_compute[257049]: 2026-03-01 10:08:41.996 257053 DEBUG oslo_concurrency.lockutils [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Mar  1 05:08:41 np0005634532 nova_compute[257049]: 2026-03-01 10:08:41.997 257053 DEBUG oslo_concurrency.lockutils [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Mar  1 05:08:41 np0005634532 nova_compute[257049]: 2026-03-01 10:08:41.997 257053 DEBUG oslo_concurrency.lockutils [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Mar  1 05:08:41 np0005634532 nova_compute[257049]: 2026-03-01 10:08:41.997 257053 DEBUG nova.compute.resource_tracker [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Mar  1 05:08:41 np0005634532 nova_compute[257049]: 2026-03-01 10:08:41.997 257053 DEBUG oslo_concurrency.processutils [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Mar  1 05:08:42 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Mar  1 05:08:42 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3365684639' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Mar  1 05:08:42 np0005634532 nova_compute[257049]: 2026-03-01 10:08:42.413 257053 DEBUG oslo_concurrency.processutils [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.416s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Mar  1 05:08:42 np0005634532 nova_compute[257049]: 2026-03-01 10:08:42.549 257053 WARNING nova.virt.libvirt.driver [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Mar  1 05:08:42 np0005634532 nova_compute[257049]: 2026-03-01 10:08:42.550 257053 DEBUG nova.compute.resource_tracker [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4641MB free_disk=59.942752838134766GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Mar  1 05:08:42 np0005634532 nova_compute[257049]: 2026-03-01 10:08:42.551 257053 DEBUG oslo_concurrency.lockutils [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Mar  1 05:08:42 np0005634532 nova_compute[257049]: 2026-03-01 10:08:42.551 257053 DEBUG oslo_concurrency.lockutils [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Mar  1 05:08:42 np0005634532 nova_compute[257049]: 2026-03-01 10:08:42.620 257053 DEBUG nova.compute.resource_tracker [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Mar  1 05:08:42 np0005634532 nova_compute[257049]: 2026-03-01 10:08:42.620 257053 DEBUG nova.compute.resource_tracker [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Mar  1 05:08:42 np0005634532 nova_compute[257049]: 2026-03-01 10:08:42.646 257053 DEBUG oslo_concurrency.processutils [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Mar  1 05:08:42 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:08:42.833 167541 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=90b7dc66-b984-4d8b-9541-ddde79c5f544, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '6'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Mar  1 05:08:43 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Mar  1 05:08:43 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/133084442' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Mar  1 05:08:43 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v787: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 71 KiB/s rd, 103 KiB/s wr, 15 op/s
Mar  1 05:08:43 np0005634532 nova_compute[257049]: 2026-03-01 10:08:43.041 257053 DEBUG oslo_concurrency.processutils [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.396s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Mar  1 05:08:43 np0005634532 nova_compute[257049]: 2026-03-01 10:08:43.046 257053 DEBUG nova.compute.provider_tree [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Inventory has not changed in ProviderTree for provider: 018d246d-1e01-4168-9128-598c5501111b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Mar  1 05:08:43 np0005634532 nova_compute[257049]: 2026-03-01 10:08:43.060 257053 DEBUG nova.scheduler.client.report [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Inventory has not changed for provider 018d246d-1e01-4168-9128-598c5501111b based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Mar  1 05:08:43 np0005634532 nova_compute[257049]: 2026-03-01 10:08:43.062 257053 DEBUG nova.compute.resource_tracker [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Mar  1 05:08:43 np0005634532 nova_compute[257049]: 2026-03-01 10:08:43.062 257053 DEBUG oslo_concurrency.lockutils [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.511s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Mar  1 05:08:43 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[265411]: 01/03/2026 10:08:43 : epoch 69a4100d : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9490002f50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:08:43 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[265411]: 01/03/2026 10:08:43 : epoch 69a4100d : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f94a4001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:08:43 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:08:43 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 05:08:43 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:08:43.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 05:08:43 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[265411]: 01/03/2026 10:08:43 : epoch 69a4100d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f94a4001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:08:43 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:08:43 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:08:43 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:08:43.673 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:08:44 np0005634532 nova_compute[257049]: 2026-03-01 10:08:44.062 257053 DEBUG oslo_service.periodic_task [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Mar  1 05:08:44 np0005634532 nova_compute[257049]: 2026-03-01 10:08:44.063 257053 DEBUG nova.compute.manager [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Mar  1 05:08:44 np0005634532 nova_compute[257049]: 2026-03-01 10:08:44.063 257053 DEBUG nova.compute.manager [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Mar  1 05:08:44 np0005634532 nova_compute[257049]: 2026-03-01 10:08:44.075 257053 DEBUG nova.compute.manager [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Mar  1 05:08:44 np0005634532 nova_compute[257049]: 2026-03-01 10:08:44.655 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:08:44 np0005634532 nova_compute[257049]: 2026-03-01 10:08:44.985 257053 DEBUG oslo_service.periodic_task [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Mar  1 05:08:45 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v788: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 71 KiB/s rd, 106 KiB/s wr, 15 op/s
Mar  1 05:08:45 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[265411]: 01/03/2026 10:08:45 : epoch 69a4100d : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9490002f50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:08:45 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[265411]: 01/03/2026 10:08:45 : epoch 69a4100d : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9494003430 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:08:45 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:08:45 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:08:45 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:08:45.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:08:45 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[265411]: 01/03/2026 10:08:45 : epoch 69a4100d : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f94ac009ec0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:08:45 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:08:45 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:08:45 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:08:45.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:08:45 np0005634532 nova_compute[257049]: 2026-03-01 10:08:45.983 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:08:46 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:08:47 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v789: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 5.6 KiB/s rd, 16 KiB/s wr, 1 op/s
Mar  1 05:08:47 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: ::ffff:192.168.122.100 - - [01/Mar/2026:10:08:47] "GET /metrics HTTP/1.1" 200 48460 "" "Prometheus/2.51.0"
Mar  1 05:08:47 np0005634532 ceph-mgr[76134]: [prometheus INFO cherrypy.access.140604152982112] ::ffff:192.168.122.100 - - [01/Mar/2026:10:08:47] "GET /metrics HTTP/1.1" 200 48460 "" "Prometheus/2.51.0"
Mar  1 05:08:47 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:08:47.218Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Mar  1 05:08:47 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[265411]: 01/03/2026 10:08:47 : epoch 69a4100d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f94a4001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:08:47 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[265411]: 01/03/2026 10:08:47 : epoch 69a4100d : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f94a4001c00 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:08:47 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:08:47 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:08:47 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:08:47.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:08:47 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Mar  1 05:08:47 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Mar  1 05:08:47 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Mar  1 05:08:47 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Mar  1 05:08:47 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Mar  1 05:08:47 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Mar  1 05:08:47 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[265411]: 01/03/2026 10:08:47 : epoch 69a4100d : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9494003430 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:08:47 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:08:47 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 05:08:47 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:08:47.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 05:08:47 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Mar  1 05:08:47 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Mar  1 05:08:49 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v790: 353 pgs: 353 active+clean; 167 MiB data, 323 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Mar  1 05:08:49 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[265411]: 01/03/2026 10:08:49 : epoch 69a4100d : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f94ac009ec0 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:08:49 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[265411]: 01/03/2026 10:08:49 : epoch 69a4100d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f9490002f50 fd 38 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:08:49 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:08:49 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:08:49 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:08:49.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:08:49 np0005634532 nova_compute[257049]: 2026-03-01 10:08:49.657 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:08:49 np0005634532 kernel: ganesha.nfsd[265497]: segfault at 50 ip 00007f9535bca32e sp 00007f94abffe210 error 4 in libntirpc.so.5.8[7f9535baf000+2c000] likely on CPU 2 (core 0, socket 2)
Mar  1 05:08:49 np0005634532 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Mar  1 05:08:49 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[265411]: 01/03/2026 10:08:49 : epoch 69a4100d : compute-0 : ganesha.nfsd-2[svc_8] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f94a4001c00 fd 38 proxy ignored for local
Mar  1 05:08:49 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:08:49 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 05:08:49 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:08:49.680 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 05:08:49 np0005634532 systemd[1]: Started Process Core Dump (PID 265648/UID 0).
Mar  1 05:08:50 np0005634532 systemd-coredump[265649]: Process 265415 (ganesha.nfsd) of user 0 dumped core.#012#012Stack trace of thread 43:#012#0  0x00007f9535bca32e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)#012ELF object binary architecture: AMD x86-64
Mar  1 05:08:50 np0005634532 systemd[1]: systemd-coredump@10-265648-0.service: Deactivated successfully.
Mar  1 05:08:50 np0005634532 podman[265656]: 2026-03-01 10:08:50.738602776 +0000 UTC m=+0.027437978 container died 5eb33409510ff5e74ba5a928f75a0751a4610456e015a34f39b9193b684f7b0c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Mar  1 05:08:50 np0005634532 systemd[1]: var-lib-containers-storage-overlay-dc6bbefb7e30bc15aed74844614d79ec28ea529f33b3a62c2ec484b672803c21-merged.mount: Deactivated successfully.
Mar  1 05:08:50 np0005634532 podman[265656]: 2026-03-01 10:08:50.775901746 +0000 UTC m=+0.064736948 container remove 5eb33409510ff5e74ba5a928f75a0751a4610456e015a34f39b9193b684f7b0c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Mar  1 05:08:50 np0005634532 systemd[1]: ceph-437b1e74-f995-5d64-af1d-257ce01d77ab@nfs.cephfs.2.0.compute-0.ljexyw.service: Main process exited, code=exited, status=139/n/a
Mar  1 05:08:50 np0005634532 systemd[1]: ceph-437b1e74-f995-5d64-af1d-257ce01d77ab@nfs.cephfs.2.0.compute-0.ljexyw.service: Failed with result 'exit-code'.
Mar  1 05:08:50 np0005634532 systemd[1]: ceph-437b1e74-f995-5d64-af1d-257ce01d77ab@nfs.cephfs.2.0.compute-0.ljexyw.service: Consumed 1.040s CPU time.
Mar  1 05:08:50 np0005634532 nova_compute[257049]: 2026-03-01 10:08:50.986 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:08:51 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v791: 353 pgs: 353 active+clean; 167 MiB data, 323 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Mar  1 05:08:51 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:08:51 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:08:51 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:08:51.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:08:51 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:08:51 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:08:51 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:08:51.682 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:08:51 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:08:53 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v792: 353 pgs: 353 active+clean; 167 MiB data, 323 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Mar  1 05:08:53 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:08:53 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 05:08:53 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:08:53.451 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 05:08:53 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:08:53 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:08:53 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:08:53.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:08:54 np0005634532 nova_compute[257049]: 2026-03-01 10:08:54.659 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:08:55 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v793: 353 pgs: 353 active+clean; 167 MiB data, 323 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 1.8 MiB/s wr, 32 op/s
Mar  1 05:08:55 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:08:55 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 05:08:55 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:08:55.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 05:08:55 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-haproxy-nfs-cephfs-compute-0-wdbjdw[99156]: [WARNING] 059/100855 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Mar  1 05:08:55 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:08:55 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000024s ======
Mar  1 05:08:55 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:08:55.686 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Mar  1 05:08:55 np0005634532 nova_compute[257049]: 2026-03-01 10:08:55.988 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:08:56 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:08:57 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v794: 353 pgs: 353 active+clean; 167 MiB data, 323 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 32 op/s
Mar  1 05:08:57 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: ::ffff:192.168.122.100 - - [01/Mar/2026:10:08:57] "GET /metrics HTTP/1.1" 200 48461 "" "Prometheus/2.51.0"
Mar  1 05:08:57 np0005634532 ceph-mgr[76134]: [prometheus INFO cherrypy.access.140604152982112] ::ffff:192.168.122.100 - - [01/Mar/2026:10:08:57] "GET /metrics HTTP/1.1" 200 48461 "" "Prometheus/2.51.0"
Mar  1 05:08:57 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:08:57.219Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Mar  1 05:08:57 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:08:57 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:08:57 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:08:57.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:08:57 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:08:57 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:08:57 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:08:57.688 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:08:58 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Mar  1 05:08:58 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/645425930' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Mar  1 05:08:58 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Mar  1 05:08:58 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/645425930' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Mar  1 05:08:59 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v795: 353 pgs: 353 active+clean; 167 MiB data, 323 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Mar  1 05:08:59 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:08:59 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:08:59 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:08:59.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:08:59 np0005634532 nova_compute[257049]: 2026-03-01 10:08:59.661 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:08:59 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:08:59 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:08:59 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:08:59.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:09:00 np0005634532 nova_compute[257049]: 2026-03-01 10:09:00.991 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:09:01 np0005634532 systemd[1]: ceph-437b1e74-f995-5d64-af1d-257ce01d77ab@nfs.cephfs.2.0.compute-0.ljexyw.service: Scheduled restart job, restart counter is at 11.
Mar  1 05:09:01 np0005634532 systemd[1]: Stopped Ceph nfs.cephfs.2.0.compute-0.ljexyw for 437b1e74-f995-5d64-af1d-257ce01d77ab.
Mar  1 05:09:01 np0005634532 systemd[1]: ceph-437b1e74-f995-5d64-af1d-257ce01d77ab@nfs.cephfs.2.0.compute-0.ljexyw.service: Consumed 1.040s CPU time.
Mar  1 05:09:01 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v796: 353 pgs: 353 active+clean; 167 MiB data, 323 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Mar  1 05:09:01 np0005634532 systemd[1]: Starting Ceph nfs.cephfs.2.0.compute-0.ljexyw for 437b1e74-f995-5d64-af1d-257ce01d77ab...
Mar  1 05:09:01 np0005634532 podman[265787]: 2026-03-01 10:09:01.225946167 +0000 UTC m=+0.033471637 container create 05df45c7104ee91b334e9c433f2d06e3801a76197adf7c9418781f48079605fd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Mar  1 05:09:01 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3681b728a3c95b4a20b00ebe7973f602cf922460fc11a689337dba4eb215a8c1/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Mar  1 05:09:01 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3681b728a3c95b4a20b00ebe7973f602cf922460fc11a689337dba4eb215a8c1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Mar  1 05:09:01 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3681b728a3c95b4a20b00ebe7973f602cf922460fc11a689337dba4eb215a8c1/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Mar  1 05:09:01 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3681b728a3c95b4a20b00ebe7973f602cf922460fc11a689337dba4eb215a8c1/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.2.0.compute-0.ljexyw-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Mar  1 05:09:01 np0005634532 podman[265787]: 2026-03-01 10:09:01.279824355 +0000 UTC m=+0.087349825 container init 05df45c7104ee91b334e9c433f2d06e3801a76197adf7c9418781f48079605fd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Mar  1 05:09:01 np0005634532 podman[265787]: 2026-03-01 10:09:01.284872269 +0000 UTC m=+0.092397739 container start 05df45c7104ee91b334e9c433f2d06e3801a76197adf7c9418781f48079605fd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Mar  1 05:09:01 np0005634532 bash[265787]: 05df45c7104ee91b334e9c433f2d06e3801a76197adf7c9418781f48079605fd
Mar  1 05:09:01 np0005634532 podman[265787]: 2026-03-01 10:09:01.211957322 +0000 UTC m=+0.019482792 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Mar  1 05:09:01 np0005634532 systemd[1]: Started Ceph nfs.cephfs.2.0.compute-0.ljexyw for 437b1e74-f995-5d64-af1d-257ce01d77ab.
Mar  1 05:09:01 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[265802]: 01/03/2026 10:09:01 : epoch 69a4103d : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Mar  1 05:09:01 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[265802]: 01/03/2026 10:09:01 : epoch 69a4103d : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Mar  1 05:09:01 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[265802]: 01/03/2026 10:09:01 : epoch 69a4103d : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Mar  1 05:09:01 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[265802]: 01/03/2026 10:09:01 : epoch 69a4103d : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Mar  1 05:09:01 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[265802]: 01/03/2026 10:09:01 : epoch 69a4103d : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Mar  1 05:09:01 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[265802]: 01/03/2026 10:09:01 : epoch 69a4103d : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Mar  1 05:09:01 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[265802]: 01/03/2026 10:09:01 : epoch 69a4103d : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Mar  1 05:09:01 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[265802]: 01/03/2026 10:09:01 : epoch 69a4103d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Mar  1 05:09:01 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:09:01 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:09:01 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:09:01.463 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:09:01 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:09:01 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:09:01 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:09:01.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:09:01 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:09:02 np0005634532 ceph-mon[75825]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #54. Immutable memtables: 0.
Mar  1 05:09:02 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-10:09:02.221787) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Mar  1 05:09:02 np0005634532 ceph-mon[75825]: rocksdb: [db/flush_job.cc:856] [default] [JOB 27] Flushing memtable with next log file: 54
Mar  1 05:09:02 np0005634532 ceph-mon[75825]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772359742221818, "job": 27, "event": "flush_started", "num_memtables": 1, "num_entries": 844, "num_deletes": 251, "total_data_size": 1256360, "memory_usage": 1279712, "flush_reason": "Manual Compaction"}
Mar  1 05:09:02 np0005634532 ceph-mon[75825]: rocksdb: [db/flush_job.cc:885] [default] [JOB 27] Level-0 flush table #55: started
Mar  1 05:09:02 np0005634532 ceph-mon[75825]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772359742229862, "cf_name": "default", "job": 27, "event": "table_file_creation", "file_number": 55, "file_size": 1242764, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 23885, "largest_seqno": 24728, "table_properties": {"data_size": 1238590, "index_size": 1826, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1285, "raw_key_size": 9593, "raw_average_key_size": 19, "raw_value_size": 1230172, "raw_average_value_size": 2526, "num_data_blocks": 81, "num_entries": 487, "num_filter_entries": 487, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772359677, "oldest_key_time": 1772359677, "file_creation_time": 1772359742, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d85a3bc5-3dc5-432f-9fab-fa926ce32d3d", "db_session_id": "FJWJGIYC2V5ZEQGX709M", "orig_file_number": 55, "seqno_to_time_mapping": "N/A"}}
Mar  1 05:09:02 np0005634532 ceph-mon[75825]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 27] Flush lasted 8128 microseconds, and 3386 cpu microseconds.
Mar  1 05:09:02 np0005634532 ceph-mon[75825]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Mar  1 05:09:02 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-10:09:02.229909) [db/flush_job.cc:967] [default] [JOB 27] Level-0 flush table #55: 1242764 bytes OK
Mar  1 05:09:02 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-10:09:02.229930) [db/memtable_list.cc:519] [default] Level-0 commit table #55 started
Mar  1 05:09:02 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-10:09:02.231287) [db/memtable_list.cc:722] [default] Level-0 commit table #55: memtable #1 done
Mar  1 05:09:02 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-10:09:02.231304) EVENT_LOG_v1 {"time_micros": 1772359742231299, "job": 27, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Mar  1 05:09:02 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-10:09:02.231320) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Mar  1 05:09:02 np0005634532 ceph-mon[75825]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 27] Try to delete WAL files size 1252277, prev total WAL file size 1252277, number of live WAL files 2.
Mar  1 05:09:02 np0005634532 ceph-mon[75825]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000051.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Mar  1 05:09:02 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-10:09:02.231743) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031373537' seq:72057594037927935, type:22 .. '7061786F730032303039' seq:0, type:0; will stop at (end)
Mar  1 05:09:02 np0005634532 ceph-mon[75825]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 28] Compacting 1@0 + 1@6 files to L6, score -1.00
Mar  1 05:09:02 np0005634532 ceph-mon[75825]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 27 Base level 0, inputs: [55(1213KB)], [53(12MB)]
Mar  1 05:09:02 np0005634532 ceph-mon[75825]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772359742231781, "job": 28, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [55], "files_L6": [53], "score": -1, "input_data_size": 14856493, "oldest_snapshot_seqno": -1}
Mar  1 05:09:02 np0005634532 ceph-mon[75825]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 28] Generated table #56: 5359 keys, 12733729 bytes, temperature: kUnknown
Mar  1 05:09:02 np0005634532 ceph-mon[75825]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772359742270590, "cf_name": "default", "job": 28, "event": "table_file_creation", "file_number": 56, "file_size": 12733729, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12698524, "index_size": 20695, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13445, "raw_key_size": 137729, "raw_average_key_size": 25, "raw_value_size": 12602101, "raw_average_value_size": 2351, "num_data_blocks": 840, "num_entries": 5359, "num_filter_entries": 5359, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772358052, "oldest_key_time": 0, "file_creation_time": 1772359742, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d85a3bc5-3dc5-432f-9fab-fa926ce32d3d", "db_session_id": "FJWJGIYC2V5ZEQGX709M", "orig_file_number": 56, "seqno_to_time_mapping": "N/A"}}
Mar  1 05:09:02 np0005634532 ceph-mon[75825]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Mar  1 05:09:02 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-10:09:02.270800) [db/compaction/compaction_job.cc:1663] [default] [JOB 28] Compacted 1@0 + 1@6 files to L6 => 12733729 bytes
Mar  1 05:09:02 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-10:09:02.272166) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 382.2 rd, 327.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.2, 13.0 +0.0 blob) out(12.1 +0.0 blob), read-write-amplify(22.2) write-amplify(10.2) OK, records in: 5875, records dropped: 516 output_compression: NoCompression
Mar  1 05:09:02 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-10:09:02.272188) EVENT_LOG_v1 {"time_micros": 1772359742272177, "job": 28, "event": "compaction_finished", "compaction_time_micros": 38872, "compaction_time_cpu_micros": 19199, "output_level": 6, "num_output_files": 1, "total_output_size": 12733729, "num_input_records": 5875, "num_output_records": 5359, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Mar  1 05:09:02 np0005634532 ceph-mon[75825]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000055.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Mar  1 05:09:02 np0005634532 ceph-mon[75825]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772359742272420, "job": 28, "event": "table_file_deletion", "file_number": 55}
Mar  1 05:09:02 np0005634532 ceph-mon[75825]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000053.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Mar  1 05:09:02 np0005634532 ceph-mon[75825]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772359742273756, "job": 28, "event": "table_file_deletion", "file_number": 53}
Mar  1 05:09:02 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-10:09:02.231666) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Mar  1 05:09:02 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-10:09:02.273782) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Mar  1 05:09:02 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-10:09:02.273787) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Mar  1 05:09:02 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-10:09:02.273789) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Mar  1 05:09:02 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-10:09:02.273791) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Mar  1 05:09:02 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-10:09:02.273793) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Mar  1 05:09:02 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Mar  1 05:09:02 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Mar  1 05:09:03 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v797: 353 pgs: 353 active+clean; 167 MiB data, 323 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Mar  1 05:09:03 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:09:03 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000024s ======
Mar  1 05:09:03 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:09:03.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Mar  1 05:09:03 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:09:03 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:09:03 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:09:03.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:09:04 np0005634532 nova_compute[257049]: 2026-03-01 10:09:04.663 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:09:04 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Mar  1 05:09:04 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Mar  1 05:09:04 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Mar  1 05:09:04 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Mar  1 05:09:04 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Mar  1 05:09:04 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 05:09:04 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Mar  1 05:09:04 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 05:09:04 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Mar  1 05:09:04 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Mar  1 05:09:04 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Mar  1 05:09:04 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Mar  1 05:09:04 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Mar  1 05:09:04 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Mar  1 05:09:05 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v798: 353 pgs: 353 active+clean; 167 MiB data, 323 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 75 op/s
Mar  1 05:09:05 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Mar  1 05:09:05 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 05:09:05 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 05:09:05 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Mar  1 05:09:05 np0005634532 podman[266022]: 2026-03-01 10:09:05.354116922 +0000 UTC m=+0.039308341 container create cf683e7377e4810a8afe07e6123aedbdee4fd061cf7915186618d4645bb7073d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_galileo, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Mar  1 05:09:05 np0005634532 systemd[1]: Started libpod-conmon-cf683e7377e4810a8afe07e6123aedbdee4fd061cf7915186618d4645bb7073d.scope.
Mar  1 05:09:05 np0005634532 systemd[1]: Started libcrun container.
Mar  1 05:09:05 np0005634532 podman[266022]: 2026-03-01 10:09:05.33904858 +0000 UTC m=+0.024240029 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Mar  1 05:09:05 np0005634532 podman[266022]: 2026-03-01 10:09:05.435240143 +0000 UTC m=+0.120431652 container init cf683e7377e4810a8afe07e6123aedbdee4fd061cf7915186618d4645bb7073d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_galileo, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default)
Mar  1 05:09:05 np0005634532 podman[266022]: 2026-03-01 10:09:05.442081362 +0000 UTC m=+0.127272781 container start cf683e7377e4810a8afe07e6123aedbdee4fd061cf7915186618d4645bb7073d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_galileo, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Mar  1 05:09:05 np0005634532 podman[266022]: 2026-03-01 10:09:05.445164488 +0000 UTC m=+0.130355997 container attach cf683e7377e4810a8afe07e6123aedbdee4fd061cf7915186618d4645bb7073d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_galileo, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Mar  1 05:09:05 np0005634532 intelligent_galileo[266038]: 167 167
Mar  1 05:09:05 np0005634532 systemd[1]: libpod-cf683e7377e4810a8afe07e6123aedbdee4fd061cf7915186618d4645bb7073d.scope: Deactivated successfully.
Mar  1 05:09:05 np0005634532 podman[266022]: 2026-03-01 10:09:05.447187087 +0000 UTC m=+0.132378546 container died cf683e7377e4810a8afe07e6123aedbdee4fd061cf7915186618d4645bb7073d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_galileo, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Mar  1 05:09:05 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:09:05 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 05:09:05 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:09:05.469 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 05:09:05 np0005634532 systemd[1]: var-lib-containers-storage-overlay-d27aae2262a97dd7b80a4586ed748809ecf06598efd7764e6300e7901f1269e8-merged.mount: Deactivated successfully.
Mar  1 05:09:05 np0005634532 podman[266022]: 2026-03-01 10:09:05.488979558 +0000 UTC m=+0.174171017 container remove cf683e7377e4810a8afe07e6123aedbdee4fd061cf7915186618d4645bb7073d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_galileo, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Mar  1 05:09:05 np0005634532 systemd[1]: libpod-conmon-cf683e7377e4810a8afe07e6123aedbdee4fd061cf7915186618d4645bb7073d.scope: Deactivated successfully.
Mar  1 05:09:05 np0005634532 podman[266064]: 2026-03-01 10:09:05.652992054 +0000 UTC m=+0.057141241 container create 5c515bccb3492aae8dec051deaa142d03f55f214b981302c490adcbcb450e122 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_bell, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Mar  1 05:09:05 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:09:05 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:09:05 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:09:05.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:09:05 np0005634532 systemd[1]: Started libpod-conmon-5c515bccb3492aae8dec051deaa142d03f55f214b981302c490adcbcb450e122.scope.
Mar  1 05:09:05 np0005634532 podman[266064]: 2026-03-01 10:09:05.631512834 +0000 UTC m=+0.035662121 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Mar  1 05:09:05 np0005634532 systemd[1]: Started libcrun container.
Mar  1 05:09:05 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9043aad7aceff119cfc661154ad86c74e3c85a370db28879616371b48ff2da1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Mar  1 05:09:05 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9043aad7aceff119cfc661154ad86c74e3c85a370db28879616371b48ff2da1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Mar  1 05:09:05 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9043aad7aceff119cfc661154ad86c74e3c85a370db28879616371b48ff2da1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Mar  1 05:09:05 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9043aad7aceff119cfc661154ad86c74e3c85a370db28879616371b48ff2da1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Mar  1 05:09:05 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c9043aad7aceff119cfc661154ad86c74e3c85a370db28879616371b48ff2da1/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Mar  1 05:09:05 np0005634532 podman[266064]: 2026-03-01 10:09:05.747091295 +0000 UTC m=+0.151240502 container init 5c515bccb3492aae8dec051deaa142d03f55f214b981302c490adcbcb450e122 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_bell, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Mar  1 05:09:05 np0005634532 podman[266064]: 2026-03-01 10:09:05.752938659 +0000 UTC m=+0.157087856 container start 5c515bccb3492aae8dec051deaa142d03f55f214b981302c490adcbcb450e122 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_bell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Mar  1 05:09:05 np0005634532 podman[266064]: 2026-03-01 10:09:05.756503527 +0000 UTC m=+0.160652754 container attach 5c515bccb3492aae8dec051deaa142d03f55f214b981302c490adcbcb450e122 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_bell, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Mar  1 05:09:05 np0005634532 nova_compute[257049]: 2026-03-01 10:09:05.993 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:09:06 np0005634532 priceless_bell[266081]: --> passed data devices: 0 physical, 1 LVM
Mar  1 05:09:06 np0005634532 priceless_bell[266081]: --> All data devices are unavailable
Mar  1 05:09:06 np0005634532 systemd[1]: libpod-5c515bccb3492aae8dec051deaa142d03f55f214b981302c490adcbcb450e122.scope: Deactivated successfully.
Mar  1 05:09:06 np0005634532 conmon[266081]: conmon 5c515bccb3492aae8dec <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-5c515bccb3492aae8dec051deaa142d03f55f214b981302c490adcbcb450e122.scope/container/memory.events
Mar  1 05:09:06 np0005634532 podman[266064]: 2026-03-01 10:09:06.040654926 +0000 UTC m=+0.444804113 container died 5c515bccb3492aae8dec051deaa142d03f55f214b981302c490adcbcb450e122 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_bell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2)
Mar  1 05:09:06 np0005634532 systemd[1]: var-lib-containers-storage-overlay-c9043aad7aceff119cfc661154ad86c74e3c85a370db28879616371b48ff2da1-merged.mount: Deactivated successfully.
Mar  1 05:09:06 np0005634532 podman[266064]: 2026-03-01 10:09:06.077787642 +0000 UTC m=+0.481936829 container remove 5c515bccb3492aae8dec051deaa142d03f55f214b981302c490adcbcb450e122 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_bell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Mar  1 05:09:06 np0005634532 systemd[1]: libpod-conmon-5c515bccb3492aae8dec051deaa142d03f55f214b981302c490adcbcb450e122.scope: Deactivated successfully.
Mar  1 05:09:06 np0005634532 podman[266201]: 2026-03-01 10:09:06.587222948 +0000 UTC m=+0.050530657 container create 3269443e50578a36649a47bbb466f4a16419beae7932cbb1960d85244c45a037 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_davinci, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Mar  1 05:09:06 np0005634532 systemd[1]: Started libpod-conmon-3269443e50578a36649a47bbb466f4a16419beae7932cbb1960d85244c45a037.scope.
Mar  1 05:09:06 np0005634532 systemd[1]: Started libcrun container.
Mar  1 05:09:06 np0005634532 podman[266201]: 2026-03-01 10:09:06.561947225 +0000 UTC m=+0.025255024 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Mar  1 05:09:06 np0005634532 podman[266201]: 2026-03-01 10:09:06.65784697 +0000 UTC m=+0.121154709 container init 3269443e50578a36649a47bbb466f4a16419beae7932cbb1960d85244c45a037 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_davinci, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Mar  1 05:09:06 np0005634532 podman[266201]: 2026-03-01 10:09:06.664711499 +0000 UTC m=+0.128019238 container start 3269443e50578a36649a47bbb466f4a16419beae7932cbb1960d85244c45a037 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_davinci, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Mar  1 05:09:06 np0005634532 jovial_davinci[266218]: 167 167
Mar  1 05:09:06 np0005634532 systemd[1]: libpod-3269443e50578a36649a47bbb466f4a16419beae7932cbb1960d85244c45a037.scope: Deactivated successfully.
Mar  1 05:09:06 np0005634532 podman[266201]: 2026-03-01 10:09:06.668920923 +0000 UTC m=+0.132228722 container attach 3269443e50578a36649a47bbb466f4a16419beae7932cbb1960d85244c45a037 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_davinci, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True)
Mar  1 05:09:06 np0005634532 podman[266201]: 2026-03-01 10:09:06.670788789 +0000 UTC m=+0.134096528 container died 3269443e50578a36649a47bbb466f4a16419beae7932cbb1960d85244c45a037 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_davinci, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Mar  1 05:09:06 np0005634532 systemd[1]: var-lib-containers-storage-overlay-f995cdddcb5195fbb12049258a0b9d38f4018e1f20dbbd1e333542a51d89eae0-merged.mount: Deactivated successfully.
Mar  1 05:09:06 np0005634532 podman[266201]: 2026-03-01 10:09:06.710613522 +0000 UTC m=+0.173921231 container remove 3269443e50578a36649a47bbb466f4a16419beae7932cbb1960d85244c45a037 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jovial_davinci, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0)
Mar  1 05:09:06 np0005634532 systemd[1]: libpod-conmon-3269443e50578a36649a47bbb466f4a16419beae7932cbb1960d85244c45a037.scope: Deactivated successfully.
Mar  1 05:09:06 np0005634532 podman[266220]: 2026-03-01 10:09:06.74825198 +0000 UTC m=+0.102294964 container health_status eba5e7c5bf1cb0694498c3b5d0f9b901dec36f2482ec40aef98fed1e15d9f18d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:bf1f44b6bfa655654f556ae3adca2582892e72550d916a5b0a8dbacb0e210bbe, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '08b9467e1b7e95537191fb2fa6825d176e65d7d6be048e9a0925db064969bbde-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:bf1f44b6bfa655654f556ae3adca2582892e72550d916a5b0a8dbacb0e210bbe', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_controller, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, tcib_managed=true)
Mar  1 05:09:06 np0005634532 podman[266267]: 2026-03-01 10:09:06.885583088 +0000 UTC m=+0.054100456 container create 99c94056d2171c24056440f698ad054b7e3cadc436e1c6c1b6fc9d3d6e0db24e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_cannon, io.buildah.version=1.40.1, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Mar  1 05:09:06 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:09:06 np0005634532 systemd[1]: Started libpod-conmon-99c94056d2171c24056440f698ad054b7e3cadc436e1c6c1b6fc9d3d6e0db24e.scope.
Mar  1 05:09:06 np0005634532 podman[266267]: 2026-03-01 10:09:06.859765901 +0000 UTC m=+0.028283359 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Mar  1 05:09:06 np0005634532 systemd[1]: Started libcrun container.
Mar  1 05:09:06 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/552c85653736dbf86e3ead9edcfcfe35f208a504b8e61493fc544d7602bf4524/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Mar  1 05:09:06 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/552c85653736dbf86e3ead9edcfcfe35f208a504b8e61493fc544d7602bf4524/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Mar  1 05:09:06 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/552c85653736dbf86e3ead9edcfcfe35f208a504b8e61493fc544d7602bf4524/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Mar  1 05:09:06 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/552c85653736dbf86e3ead9edcfcfe35f208a504b8e61493fc544d7602bf4524/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Mar  1 05:09:06 np0005634532 podman[266267]: 2026-03-01 10:09:06.985731318 +0000 UTC m=+0.154248716 container init 99c94056d2171c24056440f698ad054b7e3cadc436e1c6c1b6fc9d3d6e0db24e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_cannon, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Mar  1 05:09:06 np0005634532 podman[266267]: 2026-03-01 10:09:06.995236822 +0000 UTC m=+0.163754180 container start 99c94056d2171c24056440f698ad054b7e3cadc436e1c6c1b6fc9d3d6e0db24e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_cannon, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Mar  1 05:09:07 np0005634532 podman[266267]: 2026-03-01 10:09:07.00001184 +0000 UTC m=+0.168529218 container attach 99c94056d2171c24056440f698ad054b7e3cadc436e1c6c1b6fc9d3d6e0db24e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_cannon, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Mar  1 05:09:07 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v799: 353 pgs: 353 active+clean; 167 MiB data, 323 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.4 KiB/s wr, 70 op/s
Mar  1 05:09:07 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: ::ffff:192.168.122.100 - - [01/Mar/2026:10:09:07] "GET /metrics HTTP/1.1" 200 48465 "" "Prometheus/2.51.0"
Mar  1 05:09:07 np0005634532 ceph-mgr[76134]: [prometheus INFO cherrypy.access.140604152982112] ::ffff:192.168.122.100 - - [01/Mar/2026:10:09:07] "GET /metrics HTTP/1.1" 200 48465 "" "Prometheus/2.51.0"
Mar  1 05:09:07 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:09:07.220Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Mar  1 05:09:07 np0005634532 sweet_cannon[266283]: {
Mar  1 05:09:07 np0005634532 sweet_cannon[266283]:    "0": [
Mar  1 05:09:07 np0005634532 sweet_cannon[266283]:        {
Mar  1 05:09:07 np0005634532 sweet_cannon[266283]:            "devices": [
Mar  1 05:09:07 np0005634532 sweet_cannon[266283]:                "/dev/loop3"
Mar  1 05:09:07 np0005634532 sweet_cannon[266283]:            ],
Mar  1 05:09:07 np0005634532 sweet_cannon[266283]:            "lv_name": "ceph_lv0",
Mar  1 05:09:07 np0005634532 sweet_cannon[266283]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Mar  1 05:09:07 np0005634532 sweet_cannon[266283]:            "lv_size": "21470642176",
Mar  1 05:09:07 np0005634532 sweet_cannon[266283]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=0RN1y3-WDwf-vmRn-3Uec-9cdU-WGcX-8Z6LEG,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=437b1e74-f995-5d64-af1d-257ce01d77ab,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=e5da778e-73b7-4ea1-8a91-750fe3f6aa68,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Mar  1 05:09:07 np0005634532 sweet_cannon[266283]:            "lv_uuid": "0RN1y3-WDwf-vmRn-3Uec-9cdU-WGcX-8Z6LEG",
Mar  1 05:09:07 np0005634532 sweet_cannon[266283]:            "name": "ceph_lv0",
Mar  1 05:09:07 np0005634532 sweet_cannon[266283]:            "path": "/dev/ceph_vg0/ceph_lv0",
Mar  1 05:09:07 np0005634532 sweet_cannon[266283]:            "tags": {
Mar  1 05:09:07 np0005634532 sweet_cannon[266283]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Mar  1 05:09:07 np0005634532 sweet_cannon[266283]:                "ceph.block_uuid": "0RN1y3-WDwf-vmRn-3Uec-9cdU-WGcX-8Z6LEG",
Mar  1 05:09:07 np0005634532 sweet_cannon[266283]:                "ceph.cephx_lockbox_secret": "",
Mar  1 05:09:07 np0005634532 sweet_cannon[266283]:                "ceph.cluster_fsid": "437b1e74-f995-5d64-af1d-257ce01d77ab",
Mar  1 05:09:07 np0005634532 sweet_cannon[266283]:                "ceph.cluster_name": "ceph",
Mar  1 05:09:07 np0005634532 sweet_cannon[266283]:                "ceph.crush_device_class": "",
Mar  1 05:09:07 np0005634532 sweet_cannon[266283]:                "ceph.encrypted": "0",
Mar  1 05:09:07 np0005634532 sweet_cannon[266283]:                "ceph.osd_fsid": "e5da778e-73b7-4ea1-8a91-750fe3f6aa68",
Mar  1 05:09:07 np0005634532 sweet_cannon[266283]:                "ceph.osd_id": "0",
Mar  1 05:09:07 np0005634532 sweet_cannon[266283]:                "ceph.osdspec_affinity": "default_drive_group",
Mar  1 05:09:07 np0005634532 sweet_cannon[266283]:                "ceph.type": "block",
Mar  1 05:09:07 np0005634532 sweet_cannon[266283]:                "ceph.vdo": "0",
Mar  1 05:09:07 np0005634532 sweet_cannon[266283]:                "ceph.with_tpm": "0"
Mar  1 05:09:07 np0005634532 sweet_cannon[266283]:            },
Mar  1 05:09:07 np0005634532 sweet_cannon[266283]:            "type": "block",
Mar  1 05:09:07 np0005634532 sweet_cannon[266283]:            "vg_name": "ceph_vg0"
Mar  1 05:09:07 np0005634532 sweet_cannon[266283]:        }
Mar  1 05:09:07 np0005634532 sweet_cannon[266283]:    ]
Mar  1 05:09:07 np0005634532 sweet_cannon[266283]: }
Mar  1 05:09:07 np0005634532 systemd[1]: libpod-99c94056d2171c24056440f698ad054b7e3cadc436e1c6c1b6fc9d3d6e0db24e.scope: Deactivated successfully.
Mar  1 05:09:07 np0005634532 podman[266267]: 2026-03-01 10:09:07.270125993 +0000 UTC m=+0.438643411 container died 99c94056d2171c24056440f698ad054b7e3cadc436e1c6c1b6fc9d3d6e0db24e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_cannon, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Mar  1 05:09:07 np0005634532 systemd[1]: var-lib-containers-storage-overlay-552c85653736dbf86e3ead9edcfcfe35f208a504b8e61493fc544d7602bf4524-merged.mount: Deactivated successfully.
Mar  1 05:09:07 np0005634532 podman[266267]: 2026-03-01 10:09:07.321132011 +0000 UTC m=+0.489649379 container remove 99c94056d2171c24056440f698ad054b7e3cadc436e1c6c1b6fc9d3d6e0db24e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_cannon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Mar  1 05:09:07 np0005634532 systemd[1]: libpod-conmon-99c94056d2171c24056440f698ad054b7e3cadc436e1c6c1b6fc9d3d6e0db24e.scope: Deactivated successfully.
Mar  1 05:09:07 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[265802]: 01/03/2026 10:09:07 : epoch 69a4103d : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Mar  1 05:09:07 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[265802]: 01/03/2026 10:09:07 : epoch 69a4103d : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Mar  1 05:09:07 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:09:07 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 05:09:07 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:09:07.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 05:09:07 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:09:07 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:09:07 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:09:07.698 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:09:07 np0005634532 podman[266397]: 2026-03-01 10:09:07.917151763 +0000 UTC m=+0.058467324 container create e447ccc1448cd5a00e7062c3bec89f28b8ed661748c4338cc4ba0d51c86c35d7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_dhawan, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Mar  1 05:09:07 np0005634532 systemd[1]: Started libpod-conmon-e447ccc1448cd5a00e7062c3bec89f28b8ed661748c4338cc4ba0d51c86c35d7.scope.
Mar  1 05:09:07 np0005634532 systemd[1]: Started libcrun container.
Mar  1 05:09:07 np0005634532 podman[266397]: 2026-03-01 10:09:07.893666323 +0000 UTC m=+0.034981944 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Mar  1 05:09:07 np0005634532 podman[266397]: 2026-03-01 10:09:07.991918087 +0000 UTC m=+0.133233668 container init e447ccc1448cd5a00e7062c3bec89f28b8ed661748c4338cc4ba0d51c86c35d7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_dhawan, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Mar  1 05:09:07 np0005634532 podman[266397]: 2026-03-01 10:09:07.996483799 +0000 UTC m=+0.137799340 container start e447ccc1448cd5a00e7062c3bec89f28b8ed661748c4338cc4ba0d51c86c35d7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_dhawan, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Mar  1 05:09:08 np0005634532 podman[266397]: 2026-03-01 10:09:08.000060758 +0000 UTC m=+0.141376359 container attach e447ccc1448cd5a00e7062c3bec89f28b8ed661748c4338cc4ba0d51c86c35d7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_dhawan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Mar  1 05:09:08 np0005634532 wizardly_dhawan[266413]: 167 167
Mar  1 05:09:08 np0005634532 systemd[1]: libpod-e447ccc1448cd5a00e7062c3bec89f28b8ed661748c4338cc4ba0d51c86c35d7.scope: Deactivated successfully.
Mar  1 05:09:08 np0005634532 conmon[266413]: conmon e447ccc1448cd5a00e70 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e447ccc1448cd5a00e7062c3bec89f28b8ed661748c4338cc4ba0d51c86c35d7.scope/container/memory.events
Mar  1 05:09:08 np0005634532 podman[266397]: 2026-03-01 10:09:08.002023866 +0000 UTC m=+0.143339417 container died e447ccc1448cd5a00e7062c3bec89f28b8ed661748c4338cc4ba0d51c86c35d7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_dhawan, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Mar  1 05:09:08 np0005634532 systemd[1]: var-lib-containers-storage-overlay-2ed9132986f7814c8c50481cdc5c0af6e9344cb1edb28974655af70f269aa1e4-merged.mount: Deactivated successfully.
Mar  1 05:09:08 np0005634532 podman[266397]: 2026-03-01 10:09:08.033627036 +0000 UTC m=+0.174942587 container remove e447ccc1448cd5a00e7062c3bec89f28b8ed661748c4338cc4ba0d51c86c35d7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wizardly_dhawan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Mar  1 05:09:08 np0005634532 systemd[1]: libpod-conmon-e447ccc1448cd5a00e7062c3bec89f28b8ed661748c4338cc4ba0d51c86c35d7.scope: Deactivated successfully.
Mar  1 05:09:08 np0005634532 podman[266435]: 2026-03-01 10:09:08.157273326 +0000 UTC m=+0.043415252 container create bf329fe1bb117f0930e2f243ddeab1e9ba8fea8e897be3c36b2a51e84d6f88b8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_mclaren, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Mar  1 05:09:08 np0005634532 systemd[1]: Started libpod-conmon-bf329fe1bb117f0930e2f243ddeab1e9ba8fea8e897be3c36b2a51e84d6f88b8.scope.
Mar  1 05:09:08 np0005634532 systemd[1]: Started libcrun container.
Mar  1 05:09:08 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8d63a9a8cbe9e40097734ecaabca3e085cb244fed81b33671314de2a25c90be/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Mar  1 05:09:08 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8d63a9a8cbe9e40097734ecaabca3e085cb244fed81b33671314de2a25c90be/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Mar  1 05:09:08 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8d63a9a8cbe9e40097734ecaabca3e085cb244fed81b33671314de2a25c90be/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Mar  1 05:09:08 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8d63a9a8cbe9e40097734ecaabca3e085cb244fed81b33671314de2a25c90be/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Mar  1 05:09:08 np0005634532 podman[266435]: 2026-03-01 10:09:08.226981685 +0000 UTC m=+0.113123631 container init bf329fe1bb117f0930e2f243ddeab1e9ba8fea8e897be3c36b2a51e84d6f88b8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_mclaren, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Mar  1 05:09:08 np0005634532 podman[266435]: 2026-03-01 10:09:08.234275015 +0000 UTC m=+0.120416941 container start bf329fe1bb117f0930e2f243ddeab1e9ba8fea8e897be3c36b2a51e84d6f88b8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_mclaren, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Mar  1 05:09:08 np0005634532 podman[266435]: 2026-03-01 10:09:08.139073107 +0000 UTC m=+0.025215083 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Mar  1 05:09:08 np0005634532 podman[266435]: 2026-03-01 10:09:08.23812743 +0000 UTC m=+0.124269406 container attach bf329fe1bb117f0930e2f243ddeab1e9ba8fea8e897be3c36b2a51e84d6f88b8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_mclaren, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Mar  1 05:09:08 np0005634532 lvm[266526]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Mar  1 05:09:08 np0005634532 lvm[266526]: VG ceph_vg0 finished
Mar  1 05:09:08 np0005634532 optimistic_mclaren[266452]: {}
Mar  1 05:09:08 np0005634532 systemd[1]: libpod-bf329fe1bb117f0930e2f243ddeab1e9ba8fea8e897be3c36b2a51e84d6f88b8.scope: Deactivated successfully.
Mar  1 05:09:08 np0005634532 podman[266435]: 2026-03-01 10:09:08.917188059 +0000 UTC m=+0.803329985 container died bf329fe1bb117f0930e2f243ddeab1e9ba8fea8e897be3c36b2a51e84d6f88b8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_mclaren, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Mar  1 05:09:08 np0005634532 systemd[1]: var-lib-containers-storage-overlay-e8d63a9a8cbe9e40097734ecaabca3e085cb244fed81b33671314de2a25c90be-merged.mount: Deactivated successfully.
Mar  1 05:09:08 np0005634532 podman[266435]: 2026-03-01 10:09:08.954496009 +0000 UTC m=+0.840637945 container remove bf329fe1bb117f0930e2f243ddeab1e9ba8fea8e897be3c36b2a51e84d6f88b8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_mclaren, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1)
Mar  1 05:09:08 np0005634532 systemd[1]: libpod-conmon-bf329fe1bb117f0930e2f243ddeab1e9ba8fea8e897be3c36b2a51e84d6f88b8.scope: Deactivated successfully.
Mar  1 05:09:09 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Mar  1 05:09:09 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 05:09:09 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Mar  1 05:09:09 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 05:09:09 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v800: 353 pgs: 353 active+clean; 200 MiB data, 348 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 133 op/s
Mar  1 05:09:09 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:09:09 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 05:09:09 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:09:09.475 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 05:09:09 np0005634532 nova_compute[257049]: 2026-03-01 10:09:09.697 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:09:09 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:09:09 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:09:09 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:09:09.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:09:10 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 05:09:10 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 05:09:10 np0005634532 nova_compute[257049]: 2026-03-01 10:09:10.997 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:09:11 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v801: 353 pgs: 353 active+clean; 200 MiB data, 348 MiB used, 60 GiB / 60 GiB avail; 296 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Mar  1 05:09:11 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:09:11 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:09:11 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:09:11.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:09:11 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:09:11 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000024s ======
Mar  1 05:09:11 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:09:11.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Mar  1 05:09:11 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:09:12 np0005634532 podman[266571]: 2026-03-01 10:09:12.391203179 +0000 UTC m=+0.072132770 container health_status 1df4dec302d8b40bce93714986cfc892519e8a31436399020d0034ae17ac64d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a5baae9e21dffd42c66f546b0b62f95c6af9f409d05e01e7483de42d5dd2a382, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '08b9467e1b7e95537191fb2fa6825d176e65d7d6be048e9a0925db064969bbde-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a5baae9e21dffd42c66f546b0b62f95c6af9f409d05e01e7483de42d5dd2a382', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Mar  1 05:09:13 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v802: 353 pgs: 353 active+clean; 200 MiB data, 348 MiB used, 60 GiB / 60 GiB avail; 296 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Mar  1 05:09:13 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[265802]: 01/03/2026 10:09:13 : epoch 69a4103d : compute-0 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Mar  1 05:09:13 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[265802]: 01/03/2026 10:09:13 : epoch 69a4103d : compute-0 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Mar  1 05:09:13 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[265802]: 01/03/2026 10:09:13 : epoch 69a4103d : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Mar  1 05:09:13 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[265802]: 01/03/2026 10:09:13 : epoch 69a4103d : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Mar  1 05:09:13 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[265802]: 01/03/2026 10:09:13 : epoch 69a4103d : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Mar  1 05:09:13 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[265802]: 01/03/2026 10:09:13 : epoch 69a4103d : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Mar  1 05:09:13 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[265802]: 01/03/2026 10:09:13 : epoch 69a4103d : compute-0 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Mar  1 05:09:13 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[265802]: 01/03/2026 10:09:13 : epoch 69a4103d : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Mar  1 05:09:13 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[265802]: 01/03/2026 10:09:13 : epoch 69a4103d : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Mar  1 05:09:13 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[265802]: 01/03/2026 10:09:13 : epoch 69a4103d : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Mar  1 05:09:13 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[265802]: 01/03/2026 10:09:13 : epoch 69a4103d : compute-0 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Mar  1 05:09:13 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[265802]: 01/03/2026 10:09:13 : epoch 69a4103d : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Mar  1 05:09:13 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[265802]: 01/03/2026 10:09:13 : epoch 69a4103d : compute-0 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Mar  1 05:09:13 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[265802]: 01/03/2026 10:09:13 : epoch 69a4103d : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Mar  1 05:09:13 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[265802]: 01/03/2026 10:09:13 : epoch 69a4103d : compute-0 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Mar  1 05:09:13 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[265802]: 01/03/2026 10:09:13 : epoch 69a4103d : compute-0 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Mar  1 05:09:13 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[265802]: 01/03/2026 10:09:13 : epoch 69a4103d : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Mar  1 05:09:13 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[265802]: 01/03/2026 10:09:13 : epoch 69a4103d : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Mar  1 05:09:13 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[265802]: 01/03/2026 10:09:13 : epoch 69a4103d : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Mar  1 05:09:13 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[265802]: 01/03/2026 10:09:13 : epoch 69a4103d : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Mar  1 05:09:13 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[265802]: 01/03/2026 10:09:13 : epoch 69a4103d : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Mar  1 05:09:13 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[265802]: 01/03/2026 10:09:13 : epoch 69a4103d : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Mar  1 05:09:13 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[265802]: 01/03/2026 10:09:13 : epoch 69a4103d : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Mar  1 05:09:13 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[265802]: 01/03/2026 10:09:13 : epoch 69a4103d : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Mar  1 05:09:13 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[265802]: 01/03/2026 10:09:13 : epoch 69a4103d : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Mar  1 05:09:13 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[265802]: 01/03/2026 10:09:13 : epoch 69a4103d : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Mar  1 05:09:13 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[265802]: 01/03/2026 10:09:13 : epoch 69a4103d : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Mar  1 05:09:13 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:09:13 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000024s ======
Mar  1 05:09:13 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:09:13.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Mar  1 05:09:13 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[265802]: 01/03/2026 10:09:13 : epoch 69a4103d : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb838000df0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:09:13 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:09:13 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:09:13 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:09:13.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:09:14 np0005634532 nova_compute[257049]: 2026-03-01 10:09:14.697 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:09:15 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v803: 353 pgs: 353 active+clean; 200 MiB data, 348 MiB used, 60 GiB / 60 GiB avail; 297 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Mar  1 05:09:15 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[265802]: 01/03/2026 10:09:15 : epoch 69a4103d : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb8300014d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:09:15 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[265802]: 01/03/2026 10:09:15 : epoch 69a4103d : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb8300014d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:09:15 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:09:15 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 05:09:15 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:09:15.485 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 05:09:15 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-haproxy-nfs-cephfs-compute-0-wdbjdw[99156]: [WARNING] 059/100915 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Mar  1 05:09:15 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[265802]: 01/03/2026 10:09:15 : epoch 69a4103d : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb80c000b60 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:09:15 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:09:15 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:09:15 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:09:15.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:09:16 np0005634532 nova_compute[257049]: 2026-03-01 10:09:15.999 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:09:16 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:09:17 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v804: 353 pgs: 353 active+clean; 200 MiB data, 348 MiB used, 60 GiB / 60 GiB avail; 293 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Mar  1 05:09:17 np0005634532 ceph-mgr[76134]: [prometheus INFO cherrypy.access.140604152982112] ::ffff:192.168.122.100 - - [01/Mar/2026:10:09:17] "GET /metrics HTTP/1.1" 200 48465 "" "Prometheus/2.51.0"
Mar  1 05:09:17 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: ::ffff:192.168.122.100 - - [01/Mar/2026:10:09:17] "GET /metrics HTTP/1.1" 200 48465 "" "Prometheus/2.51.0"
Mar  1 05:09:17 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:09:17.221Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Mar  1 05:09:17 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[265802]: 01/03/2026 10:09:17 : epoch 69a4103d : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb818000fa0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:09:17 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[265802]: 01/03/2026 10:09:17 : epoch 69a4103d : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb814000d00 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:09:17 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:09:17 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:09:17 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:09:17.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:09:17 np0005634532 ceph-mgr[76134]: [balancer INFO root] Optimize plan auto_2026-03-01_10:09:17
Mar  1 05:09:17 np0005634532 ceph-mgr[76134]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Mar  1 05:09:17 np0005634532 ceph-mgr[76134]: [balancer INFO root] do_upmap
Mar  1 05:09:17 np0005634532 ceph-mgr[76134]: [balancer INFO root] pools ['images', 'backups', 'volumes', 'default.rgw.meta', '.nfs', '.mgr', 'default.rgw.control', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', '.rgw.root', 'vms', 'default.rgw.log']
Mar  1 05:09:17 np0005634532 ceph-mgr[76134]: [balancer INFO root] prepared 0/10 upmap changes
Mar  1 05:09:17 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Mar  1 05:09:17 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Mar  1 05:09:17 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Mar  1 05:09:17 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Mar  1 05:09:17 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Mar  1 05:09:17 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Mar  1 05:09:17 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[265802]: 01/03/2026 10:09:17 : epoch 69a4103d : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb8300025c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:09:17 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Mar  1 05:09:17 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Mar  1 05:09:17 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:09:17 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 05:09:17 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:09:17.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 05:09:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] _maybe_adjust
Mar  1 05:09:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:09:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Mar  1 05:09:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:09:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0015185461027544442 of space, bias 1.0, pg target 0.4555638308263333 quantized to 32 (current 32)
Mar  1 05:09:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:09:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Mar  1 05:09:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:09:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Mar  1 05:09:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:09:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Mar  1 05:09:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:09:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Mar  1 05:09:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:09:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Mar  1 05:09:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:09:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Mar  1 05:09:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:09:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Mar  1 05:09:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:09:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Mar  1 05:09:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:09:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Mar  1 05:09:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:09:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Mar  1 05:09:19 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v805: 353 pgs: 353 active+clean; 41 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 331 KiB/s rd, 2.1 MiB/s wr, 119 op/s
Mar  1 05:09:19 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Mar  1 05:09:19 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: vms, start_after=
Mar  1 05:09:19 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: volumes, start_after=
Mar  1 05:09:19 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: backups, start_after=
Mar  1 05:09:19 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: images, start_after=
Mar  1 05:09:19 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Mar  1 05:09:19 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: vms, start_after=
Mar  1 05:09:19 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: volumes, start_after=
Mar  1 05:09:19 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: backups, start_after=
Mar  1 05:09:19 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: images, start_after=
Mar  1 05:09:19 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-haproxy-nfs-cephfs-compute-0-wdbjdw[99156]: [WARNING] 059/100919 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Mar  1 05:09:19 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[265802]: 01/03/2026 10:09:19 : epoch 69a4103d : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb80c0016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:09:19 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[265802]: 01/03/2026 10:09:19 : epoch 69a4103d : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb818001ac0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:09:19 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:09:19 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000024s ======
Mar  1 05:09:19 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:09:19.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Mar  1 05:09:19 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[265802]: 01/03/2026 10:09:19 : epoch 69a4103d : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb814001820 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:09:19 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:09:19 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:09:19 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:09:19.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
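[annotation] The beast access lines carry a fixed field layout: client IP, user ("anonymous"), timestamp, request line, status, body bytes, three unpopulated fields, and latency; the anonymous HEAD / probes every two seconds from .100 and .102 look like load-balancer health checks. A small parser, with the layout inferred from these samples rather than from any documented schema:

    # Field layout inferred from the beast access lines in this log.
    import re

    BEAST = re.compile(
        r'beast: \S+: (?P<ip>\S+) - (?P<user>\S+) \[(?P<ts>[^\]]+)\] '
        r'"(?P<req>[^"]+)" (?P<status>\d+) (?P<bytes>\d+) .* '
        r'latency=(?P<lat>[\d.]+)s')

    sample = ('beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous '
              '[01/Mar/2026:10:09:19.712 +0000] "HEAD / HTTP/1.0" 200 0 '
              '- - - latency=0.000000000s')
    m = BEAST.match(sample)
    assert m and m['status'] == '200' and m['req'] == 'HEAD / HTTP/1.0'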
Mar  1 05:09:19 np0005634532 nova_compute[257049]: 2026-03-01 10:09:19.756 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:09:21 np0005634532 nova_compute[257049]: 2026-03-01 10:09:21.002 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:09:21 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v806: 353 pgs: 353 active+clean; 41 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 38 KiB/s rd, 15 KiB/s wr, 56 op/s
Mar  1 05:09:21 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[265802]: 01/03/2026 10:09:21 : epoch 69a4103d : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb8300025c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:09:21 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[265802]: 01/03/2026 10:09:21 : epoch 69a4103d : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb80c0016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:09:21 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:09:21 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:09:21 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:09:21.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:09:21 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[265802]: 01/03/2026 10:09:21 : epoch 69a4103d : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb818001ac0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:09:21 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:09:21 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:09:21 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:09:21.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:09:21 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
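[annotation] The recurring _set_new_cache_sizes line is the monitor dividing its memory target between the incremental-osdmap, full-osdmap and RocksDB (kv) caches; the three allocations printed here sum to roughly 99% of cache_size, with the remainder left as slack. Checking the arithmetic with values copied verbatim from the line above:

    # Values copied verbatim from the _set_new_cache_sizes line.
    cache_size = 1020054731
    inc_alloc, full_alloc, kv_alloc = 343932928, 348127232, 318767104
    total = inc_alloc + full_alloc + kv_alloc
    print(total, round(total / cache_size, 3))   # 1010827264 0.991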
Mar  1 05:09:23 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v807: 353 pgs: 353 active+clean; 41 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 38 KiB/s rd, 15 KiB/s wr, 56 op/s
Mar  1 05:09:23 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[265802]: 01/03/2026 10:09:23 : epoch 69a4103d : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb814001820 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:09:23 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[265802]: 01/03/2026 10:09:23 : epoch 69a4103d : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb8300025c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:09:23 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:09:23 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 05:09:23 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:09:23.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 05:09:23 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[265802]: 01/03/2026 10:09:23 : epoch 69a4103d : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb80c0016a0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:09:23 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:09:23 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:09:23 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:09:23.717 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:09:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:09:23.882 167541 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Mar  1 05:09:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:09:23.882 167541 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Mar  1 05:09:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:09:23.882 167541 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
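[annotation] The acquiring/acquired/released triple is the standard oslo.concurrency trace emitted around any function guarded by its named-lock decorator; the 0.000s wait and hold times show the process monitor found no contention. A minimal sketch of the pattern (the function body is illustrative, not neutron's code):

    # oslo.concurrency emits the three lockutils log lines above around
    # any callable wrapped like this; the lock name matches the log.
    from oslo_concurrency import lockutils

    @lockutils.synchronized('_check_child_processes')
    def _check_child_processes():
        pass   # runs with the named lock held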
Mar  1 05:09:24 np0005634532 nova_compute[257049]: 2026-03-01 10:09:24.758 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:09:25 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v808: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 38 KiB/s rd, 15 KiB/s wr, 56 op/s
Mar  1 05:09:25 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[265802]: 01/03/2026 10:09:25 : epoch 69a4103d : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb818001ac0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:09:25 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[265802]: 01/03/2026 10:09:25 : epoch 69a4103d : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb814001820 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:09:25 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:09:25 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:09:25 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:09:25.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:09:25 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[265802]: 01/03/2026 10:09:25 : epoch 69a4103d : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb8300025c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:09:25 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:09:25 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:09:25 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:09:25.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:09:26 np0005634532 nova_compute[257049]: 2026-03-01 10:09:26.005 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:09:26 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:09:27 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v809: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 38 KiB/s rd, 2.3 KiB/s wr, 55 op/s
Mar  1 05:09:27 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: ::ffff:192.168.122.100 - - [01/Mar/2026:10:09:27] "GET /metrics HTTP/1.1" 200 48458 "" "Prometheus/2.51.0"
Mar  1 05:09:27 np0005634532 ceph-mgr[76134]: [prometheus INFO cherrypy.access.140604152982112] ::ffff:192.168.122.100 - - [01/Mar/2026:10:09:27] "GET /metrics HTTP/1.1" 200 48458 "" "Prometheus/2.51.0"
Mar  1 05:09:27 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:09:27.223Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Mar  1 05:09:27 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[265802]: 01/03/2026 10:09:27 : epoch 69a4103d : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb80c002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:09:27 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[265802]: 01/03/2026 10:09:27 : epoch 69a4103d : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb818002f50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:09:27 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:09:27 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000024s ======
Mar  1 05:09:27 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:09:27.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Mar  1 05:09:27 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[265802]: 01/03/2026 10:09:27 : epoch 69a4103d : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb814002cb0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:09:27 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[265802]: 01/03/2026 10:09:27 : epoch 69a4103d : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Mar  1 05:09:27 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:09:27 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 05:09:27 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:09:27.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 05:09:29 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v810: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 2.9 KiB/s wr, 57 op/s
Mar  1 05:09:29 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[265802]: 01/03/2026 10:09:29 : epoch 69a4103d : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb8300025c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:09:29 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[265802]: 01/03/2026 10:09:29 : epoch 69a4103d : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb80c002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:09:29 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:09:29 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:09:29 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:09:29.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:09:29 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[265802]: 01/03/2026 10:09:29 : epoch 69a4103d : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb80c002b10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:09:29 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:09:29 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:09:29 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:09:29.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:09:29 np0005634532 nova_compute[257049]: 2026-03-01 10:09:29.760 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:09:30 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[265802]: 01/03/2026 10:09:30 : epoch 69a4103d : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Mar  1 05:09:30 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[265802]: 01/03/2026 10:09:30 : epoch 69a4103d : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Mar  1 05:09:31 np0005634532 nova_compute[257049]: 2026-03-01 10:09:31.008 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:09:31 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v811: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 597 B/s wr, 2 op/s
Mar  1 05:09:31 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[265802]: 01/03/2026 10:09:31 : epoch 69a4103d : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb818002f50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:09:31 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[265802]: 01/03/2026 10:09:31 : epoch 69a4103d : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb8300025c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:09:31 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:09:31 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:09:31 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:09:31.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:09:31 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[265802]: 01/03/2026 10:09:31 : epoch 69a4103d : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb814002cb0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:09:31 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:09:31 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:09:31 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:09:31.727 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:09:31 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:09:32 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Mar  1 05:09:32 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Mar  1 05:09:33 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v812: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 1.4 KiB/s rd, 597 B/s wr, 2 op/s
Mar  1 05:09:33 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[265802]: 01/03/2026 10:09:33 : epoch 69a4103d : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb80c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:09:33 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[265802]: 01/03/2026 10:09:33 : epoch 69a4103d : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb818002f50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:09:33 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:09:33 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000024s ======
Mar  1 05:09:33 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:09:33.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Mar  1 05:09:33 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[265802]: 01/03/2026 10:09:33 : epoch 69a4103d : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb8300025c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:09:33 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:09:33 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:09:33 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:09:33.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:09:33 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[265802]: 01/03/2026 10:09:33 : epoch 69a4103d : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
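[annotation] Taken together, the reaper lines trace a full NFS grace cycle: grace starts with a 90 second budget (10:09:27), client reclaim state is reloaded from the backend (10:09:30), and grace is lifted early (10:09:33) because the check found nothing left to reclaim, "reclaim complete(0) clid count(0)". A rough sketch of that early-lift test, simplified from what the log implies rather than taken from ganesha's source:

    # Rough reading of nfs_try_lift_grace: with no clients holding
    # reclaimable state, grace need not run the full 90 s window.
    def can_lift_grace(reclaim_complete: int, clid_count: int) -> bool:
        return clid_count == 0 or reclaim_complete >= clid_count

    assert can_lift_grace(reclaim_complete=0, clid_count=0)   # as logged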
Mar  1 05:09:34 np0005634532 nova_compute[257049]: 2026-03-01 10:09:34.813 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:09:35 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v813: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 1023 B/s wr, 3 op/s
Mar  1 05:09:35 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[265802]: 01/03/2026 10:09:35 : epoch 69a4103d : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb8140039c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:09:35 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[265802]: 01/03/2026 10:09:35 : epoch 69a4103d : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb80c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:09:35 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:09:35 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:09:35 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:09:35.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:09:35 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[265802]: 01/03/2026 10:09:35 : epoch 69a4103d : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb818004050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:09:35 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:09:35 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000024s ======
Mar  1 05:09:35 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:09:35.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Mar  1 05:09:36 np0005634532 nova_compute[257049]: 2026-03-01 10:09:36.010 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:09:36 np0005634532 podman[266678]: 2026-03-01 10:09:36.885811067 +0000 UTC m=+0.068716886 container health_status eba5e7c5bf1cb0694498c3b5d0f9b901dec36f2482ec40aef98fed1e15d9f18d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:bf1f44b6bfa655654f556ae3adca2582892e72550d916a5b0a8dbacb0e210bbe, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, container_name=ovn_controller, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '08b9467e1b7e95537191fb2fa6825d176e65d7d6be048e9a0925db064969bbde-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:bf1f44b6bfa655654f556ae3adca2582892e72550d916a5b0a8dbacb0e210bbe', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller)
Mar  1 05:09:36 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:09:37 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v814: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 1023 B/s wr, 3 op/s
Mar  1 05:09:37 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: ::ffff:192.168.122.100 - - [01/Mar/2026:10:09:37] "GET /metrics HTTP/1.1" 200 48444 "" "Prometheus/2.51.0"
Mar  1 05:09:37 np0005634532 ceph-mgr[76134]: [prometheus INFO cherrypy.access.140604152982112] ::ffff:192.168.122.100 - - [01/Mar/2026:10:09:37] "GET /metrics HTTP/1.1" 200 48444 "" "Prometheus/2.51.0"
Mar  1 05:09:37 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:09:37.223Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Mar  1 05:09:37 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[265802]: 01/03/2026 10:09:37 : epoch 69a4103d : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb8300025c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:09:37 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:09:37 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:09:37 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:09:37.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:09:37 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[265802]: 01/03/2026 10:09:37 : epoch 69a4103d : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb8140039c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:09:37 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[265802]: 01/03/2026 10:09:37 : epoch 69a4103d : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb80c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:09:37 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:09:37 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:09:37 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:09:37.733 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:09:37 np0005634532 nova_compute[257049]: 2026-03-01 10:09:37.976 257053 DEBUG oslo_service.periodic_task [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Mar  1 05:09:37 np0005634532 nova_compute[257049]: 2026-03-01 10:09:37.977 257053 DEBUG oslo_service.periodic_task [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Mar  1 05:09:37 np0005634532 nova_compute[257049]: 2026-03-01 10:09:37.977 257053 DEBUG nova.compute.manager [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Mar  1 05:09:38 np0005634532 nova_compute[257049]: 2026-03-01 10:09:38.978 257053 DEBUG oslo_service.periodic_task [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Mar  1 05:09:39 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v815: 353 pgs: 353 active+clean; 88 MiB data, 276 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 30 op/s
Mar  1 05:09:39 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[265802]: 01/03/2026 10:09:39 : epoch 69a4103d : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb818004050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:09:39 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:09:39.442 167541 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=7, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ba:77:84', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'd2:e0:96:ea:56:89'}, ipsec=False) old=SB_Global(nb_cfg=6) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Mar  1 05:09:39 np0005634532 nova_compute[257049]: 2026-03-01 10:09:39.443 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:09:39 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:09:39.444 167541 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Mar  1 05:09:39 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:09:39 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000024s ======
Mar  1 05:09:39 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:09:39.524 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Mar  1 05:09:39 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-haproxy-nfs-cephfs-compute-0-wdbjdw[99156]: [WARNING] 059/100939 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 0ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Mar  1 05:09:39 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[265802]: 01/03/2026 10:09:39 : epoch 69a4103d : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb8300025c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:09:39 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[265802]: 01/03/2026 10:09:39 : epoch 69a4103d : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb8140039c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:09:39 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:09:39 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 05:09:39 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:09:39.735 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 05:09:39 np0005634532 nova_compute[257049]: 2026-03-01 10:09:39.816 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:09:39 np0005634532 nova_compute[257049]: 2026-03-01 10:09:39.976 257053 DEBUG oslo_service.periodic_task [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Mar  1 05:09:39 np0005634532 nova_compute[257049]: 2026-03-01 10:09:39.977 257053 DEBUG oslo_service.periodic_task [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Mar  1 05:09:40 np0005634532 nova_compute[257049]: 2026-03-01 10:09:40.977 257053 DEBUG oslo_service.periodic_task [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Mar  1 05:09:41 np0005634532 nova_compute[257049]: 2026-03-01 10:09:41.013 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:09:41 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v816: 353 pgs: 353 active+clean; 88 MiB data, 276 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Mar  1 05:09:41 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[265802]: 01/03/2026 10:09:41 : epoch 69a4103d : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb8140039c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:09:41 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:09:41 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 05:09:41 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:09:41.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 05:09:41 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[265802]: 01/03/2026 10:09:41 : epoch 69a4103d : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb80c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:09:41 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[265802]: 01/03/2026 10:09:41 : epoch 69a4103d : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb8140039c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:09:41 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:09:41 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:09:41 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:09:41.738 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:09:41 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:09:42 np0005634532 nova_compute[257049]: 2026-03-01 10:09:42.972 257053 DEBUG oslo_service.periodic_task [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Mar  1 05:09:42 np0005634532 nova_compute[257049]: 2026-03-01 10:09:42.976 257053 DEBUG oslo_service.periodic_task [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Mar  1 05:09:43 np0005634532 nova_compute[257049]: 2026-03-01 10:09:43.000 257053 DEBUG oslo_concurrency.lockutils [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Mar  1 05:09:43 np0005634532 nova_compute[257049]: 2026-03-01 10:09:43.000 257053 DEBUG oslo_concurrency.lockutils [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Mar  1 05:09:43 np0005634532 nova_compute[257049]: 2026-03-01 10:09:43.000 257053 DEBUG oslo_concurrency.lockutils [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Mar  1 05:09:43 np0005634532 nova_compute[257049]: 2026-03-01 10:09:43.001 257053 DEBUG nova.compute.resource_tracker [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Mar  1 05:09:43 np0005634532 nova_compute[257049]: 2026-03-01 10:09:43.001 257053 DEBUG oslo_concurrency.processutils [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Mar  1 05:09:43 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v817: 353 pgs: 353 active+clean; 88 MiB data, 276 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Mar  1 05:09:43 np0005634532 podman[266732]: 2026-03-01 10:09:43.362757168 +0000 UTC m=+0.049086931 container health_status 1df4dec302d8b40bce93714986cfc892519e8a31436399020d0034ae17ac64d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a5baae9e21dffd42c66f546b0b62f95c6af9f409d05e01e7483de42d5dd2a382, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '08b9467e1b7e95537191fb2fa6825d176e65d7d6be048e9a0925db064969bbde-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a5baae9e21dffd42c66f546b0b62f95c6af9f409d05e01e7483de42d5dd2a382', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.build-date=20260223)
Mar  1 05:09:43 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[265802]: 01/03/2026 10:09:43 : epoch 69a4103d : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb80c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:09:43 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:09:43.447 167541 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=90b7dc66-b984-4d8b-9541-ddde79c5f544, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '7'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
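[annotation] This DbSetCommand is the agent acknowledging nb_cfg 7 from the SB_Global update it matched at 10:09:39, after the 4 second delay it announced; the write lands in Chassis_Private.external_ids. The equivalent ovsdbapp call, sketched as a function since it needs a live southbound connection (connection setup omitted; table, record UUID and value copied from the log):

    # Sketch of the transaction above via ovsdbapp's db_set command;
    # 'sb_api' is an initialized ovsdbapp southbound backend (assumed).
    def ack_sb_cfg(sb_api, chassis_private_uuid, nb_cfg):
        sb_api.db_set(
            'Chassis_Private', chassis_private_uuid,
            ('external_ids', {'neutron:ovn-metadata-sb-cfg': str(nb_cfg)}),
            if_exists=True,
        ).execute(check_error=True)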
Mar  1 05:09:43 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Mar  1 05:09:43 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/605491612' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Mar  1 05:09:43 np0005634532 nova_compute[257049]: 2026-03-01 10:09:43.487 257053 DEBUG oslo_concurrency.processutils [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.486s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
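[annotation] The resource audit shells out to ceph df here (and again at 10:09:43.684), each call taking about half a second; the "Running cmd (subprocess)" / "returned: 0" pair is oslo.concurrency's subprocess helper. A minimal sketch with the command taken verbatim from the log:

    # The subprocess trace above corresponds to processutils.execute.
    from oslo_concurrency import processutils

    def ceph_df_json():
        out, err = processutils.execute(
            'ceph', 'df', '--format=json',
            '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
        return out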
Mar  1 05:09:43 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:09:43 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:09:43 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:09:43.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:09:43 np0005634532 nova_compute[257049]: 2026-03-01 10:09:43.612 257053 WARNING nova.virt.libvirt.driver [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Mar  1 05:09:43 np0005634532 nova_compute[257049]: 2026-03-01 10:09:43.614 257053 DEBUG nova.compute.resource_tracker [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4584MB free_disk=59.967525482177734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Mar  1 05:09:43 np0005634532 nova_compute[257049]: 2026-03-01 10:09:43.614 257053 DEBUG oslo_concurrency.lockutils [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Mar  1 05:09:43 np0005634532 nova_compute[257049]: 2026-03-01 10:09:43.614 257053 DEBUG oslo_concurrency.lockutils [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Mar  1 05:09:43 np0005634532 nova_compute[257049]: 2026-03-01 10:09:43.665 257053 DEBUG nova.compute.resource_tracker [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Mar  1 05:09:43 np0005634532 nova_compute[257049]: 2026-03-01 10:09:43.665 257053 DEBUG nova.compute.resource_tracker [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Mar  1 05:09:43 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[265802]: 01/03/2026 10:09:43 : epoch 69a4103d : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb8140039c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:09:43 np0005634532 nova_compute[257049]: 2026-03-01 10:09:43.684 257053 DEBUG oslo_concurrency.processutils [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Mar  1 05:09:43 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[265802]: 01/03/2026 10:09:43 : epoch 69a4103d : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb80c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:09:43 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:09:43 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:09:43 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:09:43.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:09:44 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Mar  1 05:09:44 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/79250' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Mar  1 05:09:44 np0005634532 nova_compute[257049]: 2026-03-01 10:09:44.176 257053 DEBUG oslo_concurrency.processutils [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.492s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Mar  1 05:09:44 np0005634532 nova_compute[257049]: 2026-03-01 10:09:44.180 257053 DEBUG nova.compute.provider_tree [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Inventory has not changed in ProviderTree for provider: 018d246d-1e01-4168-9128-598c5501111b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Mar  1 05:09:44 np0005634532 nova_compute[257049]: 2026-03-01 10:09:44.196 257053 DEBUG nova.scheduler.client.report [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Inventory has not changed for provider 018d246d-1e01-4168-9128-598c5501111b based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Mar  1 05:09:44 np0005634532 nova_compute[257049]: 2026-03-01 10:09:44.198 257053 DEBUG nova.compute.resource_tracker [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Mar  1 05:09:44 np0005634532 nova_compute[257049]: 2026-03-01 10:09:44.198 257053 DEBUG oslo_concurrency.lockutils [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.584s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
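[annotation] The unchanged-inventory check at 10:09:44.196 compares against the capacity model Placement applies: usable capacity per resource class is (total - reserved) * allocation_ratio. With the inventory logged above, that yields 7167 MB of RAM, 32 VCPUs and 52.2 GB of disk:

    # Placement's capacity formula applied to the logged inventory.
    inventory = {
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        print(rc, (inv['total'] - inv['reserved']) * inv['allocation_ratio'])
    # MEMORY_MB 7167.0, VCPU 32.0, DISK_GB 52.2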
Mar  1 05:09:44 np0005634532 nova_compute[257049]: 2026-03-01 10:09:44.844 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:09:45 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v818: 353 pgs: 353 active+clean; 88 MiB data, 276 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 1.8 MiB/s wr, 34 op/s
Mar  1 05:09:45 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[265802]: 01/03/2026 10:09:45 : epoch 69a4103d : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb8140039c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:09:45 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:09:45 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:09:45 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:09:45.533 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:09:45 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[265802]: 01/03/2026 10:09:45 : epoch 69a4103d : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb80c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:09:45 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[265802]: 01/03/2026 10:09:45 : epoch 69a4103d : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb838002010 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:09:45 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:09:45 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 05:09:45 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:09:45.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 05:09:46 np0005634532 nova_compute[257049]: 2026-03-01 10:09:46.016 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:09:46 np0005634532 nova_compute[257049]: 2026-03-01 10:09:46.199 257053 DEBUG oslo_service.periodic_task [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Mar  1 05:09:46 np0005634532 nova_compute[257049]: 2026-03-01 10:09:46.200 257053 DEBUG nova.compute.manager [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Mar  1 05:09:46 np0005634532 nova_compute[257049]: 2026-03-01 10:09:46.200 257053 DEBUG nova.compute.manager [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Mar  1 05:09:46 np0005634532 nova_compute[257049]: 2026-03-01 10:09:46.215 257053 DEBUG nova.compute.manager [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Mar  1 05:09:46 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:09:47 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: ::ffff:192.168.122.100 - - [01/Mar/2026:10:09:47] "GET /metrics HTTP/1.1" 200 48444 "" "Prometheus/2.51.0"
Mar  1 05:09:47 np0005634532 ceph-mgr[76134]: [prometheus INFO cherrypy.access.140604152982112] ::ffff:192.168.122.100 - - [01/Mar/2026:10:09:47] "GET /metrics HTTP/1.1" 200 48444 "" "Prometheus/2.51.0"
Mar  1 05:09:47 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v819: 353 pgs: 353 active+clean; 88 MiB data, 276 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 32 op/s
Mar  1 05:09:47 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:09:47.224Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Mar  1 05:09:47 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:09:47.224Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Mar  1 05:09:47 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:09:47.225Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Mar  1 05:09:47 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[265802]: 01/03/2026 10:09:47 : epoch 69a4103d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb8140039c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:09:47 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Mar  1 05:09:47 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Mar  1 05:09:47 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:09:47 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 05:09:47 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:09:47.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 05:09:47 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Mar  1 05:09:47 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Mar  1 05:09:47 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Mar  1 05:09:47 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Mar  1 05:09:47 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[265802]: 01/03/2026 10:09:47 : epoch 69a4103d : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb810000d90 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:09:47 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Mar  1 05:09:47 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Mar  1 05:09:47 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[265802]: 01/03/2026 10:09:47 : epoch 69a4103d : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb810000d90 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:09:47 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:09:47 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 05:09:47 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:09:47.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 05:09:49 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v820: 353 pgs: 353 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Mar  1 05:09:49 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[265802]: 01/03/2026 10:09:49 : epoch 69a4103d : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb838002010 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:09:49 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:09:49 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 05:09:49 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:09:49.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 05:09:49 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[265802]: 01/03/2026 10:09:49 : epoch 69a4103d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb8140039c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:09:49 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[265802]: 01/03/2026 10:09:49 : epoch 69a4103d : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb810000d90 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:09:49 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:09:49 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000024s ======
Mar  1 05:09:49 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:09:49.747 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Mar  1 05:09:49 np0005634532 nova_compute[257049]: 2026-03-01 10:09:49.847 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:09:51 np0005634532 nova_compute[257049]: 2026-03-01 10:09:51.019 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:09:51 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v821: 353 pgs: 353 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Mar  1 05:09:51 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[265802]: 01/03/2026 10:09:51 : epoch 69a4103d : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb80c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:09:51 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:09:51 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000024s ======
Mar  1 05:09:51 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:09:51.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Mar  1 05:09:51 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-haproxy-nfs-cephfs-compute-0-wdbjdw[99156]: [WARNING] 059/100951 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Mar  1 05:09:51 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[265802]: 01/03/2026 10:09:51 : epoch 69a4103d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb80c003c10 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:09:51 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[265802]: 01/03/2026 10:09:51 : epoch 69a4103d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb8140039c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:09:51 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:09:51 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:09:51 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:09:51.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:09:51 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:09:53 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v822: 353 pgs: 353 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Mar  1 05:09:53 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[265802]: 01/03/2026 10:09:53 : epoch 69a4103d : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb8100021d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:09:53 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:09:53 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:09:53 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:09:53.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:09:53 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[265802]: 01/03/2026 10:09:53 : epoch 69a4103d : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb8100021d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:09:53 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[265802]: 01/03/2026 10:09:53 : epoch 69a4103d : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb838002010 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:09:53 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:09:53 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 05:09:53 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:09:53.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 05:09:54 np0005634532 nova_compute[257049]: 2026-03-01 10:09:54.848 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:09:55 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v823: 353 pgs: 353 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Mar  1 05:09:55 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[265802]: 01/03/2026 10:09:55 : epoch 69a4103d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb8140039c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:09:55 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:09:55 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:09:55 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:09:55.548 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:09:55 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[265802]: 01/03/2026 10:09:55 : epoch 69a4103d : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb8100021d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:09:55 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[265802]: 01/03/2026 10:09:55 : epoch 69a4103d : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb80c003d90 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:09:55 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:09:55 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:09:55 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:09:55.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:09:56 np0005634532 nova_compute[257049]: 2026-03-01 10:09:56.021 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:09:56 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:09:57 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: ::ffff:192.168.122.100 - - [01/Mar/2026:10:09:57] "GET /metrics HTTP/1.1" 200 48465 "" "Prometheus/2.51.0"
Mar  1 05:09:57 np0005634532 ceph-mgr[76134]: [prometheus INFO cherrypy.access.140604152982112] ::ffff:192.168.122.100 - - [01/Mar/2026:10:09:57] "GET /metrics HTTP/1.1" 200 48465 "" "Prometheus/2.51.0"
Mar  1 05:09:57 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v824: 353 pgs: 353 active+clean; 88 MiB data, 277 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 68 op/s
Mar  1 05:09:57 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:09:57.226Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Mar  1 05:09:57 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[265802]: 01/03/2026 10:09:57 : epoch 69a4103d : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb838008dc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:09:57 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:09:57 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.002000050s ======
Mar  1 05:09:57 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:09:57.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000050s
Mar  1 05:09:57 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[265802]: 01/03/2026 10:09:57 : epoch 69a4103d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb8140039c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:09:57 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[265802]: 01/03/2026 10:09:57 : epoch 69a4103d : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb8100021d0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:09:57 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:09:57 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 05:09:57 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:09:57.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 05:09:58 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Mar  1 05:09:58 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2373484930' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Mar  1 05:09:58 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Mar  1 05:09:58 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2373484930' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Mar  1 05:09:59 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v825: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 134 op/s
Mar  1 05:09:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[265802]: 01/03/2026 10:09:59 : epoch 69a4103d : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb80c003d90 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:09:59 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:09:59 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:09:59 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:09:59.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:09:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[265802]: 01/03/2026 10:09:59 : epoch 69a4103d : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb838008dc0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:09:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[265802]: 01/03/2026 10:09:59 : epoch 69a4103d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb8140039c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:09:59 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:09:59 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:09:59 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:09:59.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:09:59 np0005634532 nova_compute[257049]: 2026-03-01 10:09:59.850 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:10:00 np0005634532 ceph-mon[75825]: log_channel(cluster) log [INF] : overall HEALTH_OK
Mar  1 05:10:00 np0005634532 ceph-mon[75825]: overall HEALTH_OK
Mar  1 05:10:01 np0005634532 nova_compute[257049]: 2026-03-01 10:10:01.024 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:10:01 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v826: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 331 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Mar  1 05:10:01 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[265802]: 01/03/2026 10:10:01 : epoch 69a4103d : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb810003910 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:10:01 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[265802]: 01/03/2026 10:10:01 : epoch 69a4103d : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Mar  1 05:10:01 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:10:01 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:10:01 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:10:01.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:10:01 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[265802]: 01/03/2026 10:10:01 : epoch 69a4103d : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb80c003d90 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:10:01 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[265802]: 01/03/2026 10:10:01 : epoch 69a4103d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb80c003d90 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:10:01 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:10:01 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:10:01 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:10:01.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:10:01 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:10:02 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Mar  1 05:10:02 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Mar  1 05:10:03 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v827: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 331 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Mar  1 05:10:03 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[265802]: 01/03/2026 10:10:03 : epoch 69a4103d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb8140039c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:10:03 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:10:03 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:10:03 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:10:03.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:10:03 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[265802]: 01/03/2026 10:10:03 : epoch 69a4103d : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb810003910 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:10:03 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[265802]: 01/03/2026 10:10:03 : epoch 69a4103d : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb810003910 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:10:03 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:10:03 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000024s ======
Mar  1 05:10:03 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:10:03.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Mar  1 05:10:04 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[265802]: 01/03/2026 10:10:04 : epoch 69a4103d : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Mar  1 05:10:04 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[265802]: 01/03/2026 10:10:04 : epoch 69a4103d : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Mar  1 05:10:04 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[265802]: 01/03/2026 10:10:04 : epoch 69a4103d : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Mar  1 05:10:04 np0005634532 nova_compute[257049]: 2026-03-01 10:10:04.853 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:10:05 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v828: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 333 KiB/s rd, 2.1 MiB/s wr, 67 op/s
Mar  1 05:10:05 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[265802]: 01/03/2026 10:10:05 : epoch 69a4103d : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb838009ad0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:10:05 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:10:05 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:10:05 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:10:05.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:10:05 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[265802]: 01/03/2026 10:10:05 : epoch 69a4103d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb8140039c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:10:05 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[265802]: 01/03/2026 10:10:05 : epoch 69a4103d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb810003910 fd 49 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:10:05 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:10:05 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:10:05 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:10:05.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:10:06 np0005634532 nova_compute[257049]: 2026-03-01 10:10:06.026 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:10:06 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:10:07 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: ::ffff:192.168.122.100 - - [01/Mar/2026:10:10:07] "GET /metrics HTTP/1.1" 200 48474 "" "Prometheus/2.51.0"
Mar  1 05:10:07 np0005634532 ceph-mgr[76134]: [prometheus INFO cherrypy.access.140604152982112] ::ffff:192.168.122.100 - - [01/Mar/2026:10:10:07] "GET /metrics HTTP/1.1" 200 48474 "" "Prometheus/2.51.0"
Mar  1 05:10:07 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v829: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 333 KiB/s rd, 2.1 MiB/s wr, 67 op/s
Mar  1 05:10:07 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:10:07.227Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Mar  1 05:10:07 np0005634532 podman[266833]: 2026-03-01 10:10:07.407720902 +0000 UTC m=+0.095251410 container health_status eba5e7c5bf1cb0694498c3b5d0f9b901dec36f2482ec40aef98fed1e15d9f18d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:bf1f44b6bfa655654f556ae3adca2582892e72550d916a5b0a8dbacb0e210bbe, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '08b9467e1b7e95537191fb2fa6825d176e65d7d6be048e9a0925db064969bbde-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:bf1f44b6bfa655654f556ae3adca2582892e72550d916a5b0a8dbacb0e210bbe', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Mar  1 05:10:07 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[265802]: 01/03/2026 10:10:07 : epoch 69a4103d : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb80c003f30 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:10:07 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[265802]: 01/03/2026 10:10:07 : epoch 69a4103d : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Mar  1 05:10:07 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:10:07 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:10:07 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:10:07.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:10:07 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[265802]: 01/03/2026 10:10:07 : epoch 69a4103d : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb818004050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:10:07 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[265802]: 01/03/2026 10:10:07 : epoch 69a4103d : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb810003910 fd 49 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:10:07 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:10:07 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:10:07 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:10:07.765 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:10:09 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v830: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 334 KiB/s rd, 2.1 MiB/s wr, 69 op/s
Mar  1 05:10:09 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[265802]: 01/03/2026 10:10:09 : epoch 69a4103d : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb80c003f30 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:10:09 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:10:09 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:10:09 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:10:09.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:10:09 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[265802]: 01/03/2026 10:10:09 : epoch 69a4103d : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb83800a3f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:10:09 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[265802]: 01/03/2026 10:10:09 : epoch 69a4103d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb818004050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:10:09 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:10:09 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 05:10:09 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:10:09.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 05:10:09 np0005634532 nova_compute[257049]: 2026-03-01 10:10:09.894 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:10:09 np0005634532 podman[266984]: 2026-03-01 10:10:09.946512204 +0000 UTC m=+0.096060680 container exec 6664049ace048b4adddae1365cde7c16773d892ae197c1350f58a5e5b1183392 (image=quay.io/ceph/ceph:v19, name=ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mon-compute-0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Mar  1 05:10:10 np0005634532 podman[266984]: 2026-03-01 10:10:10.047541186 +0000 UTC m=+0.197089702 container exec_died 6664049ace048b4adddae1365cde7c16773d892ae197c1350f58a5e5b1183392 (image=quay.io/ceph/ceph:v19, name=ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mon-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Mar  1 05:10:10 np0005634532 podman[267122]: 2026-03-01 10:10:10.533075263 +0000 UTC m=+0.055561842 container exec 1fcbd8a251307aedf47cfda71fbf2d9d43c45db96f7135811488f98478336155 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Mar  1 05:10:10 np0005634532 podman[267122]: 2026-03-01 10:10:10.543431708 +0000 UTC m=+0.065918287 container exec_died 1fcbd8a251307aedf47cfda71fbf2d9d43c45db96f7135811488f98478336155 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Mar  1 05:10:10 np0005634532 podman[267195]: 2026-03-01 10:10:10.790581575 +0000 UTC m=+0.068728887 container exec 05df45c7104ee91b334e9c433f2d06e3801a76197adf7c9418781f48079605fd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Mar  1 05:10:10 np0005634532 podman[267195]: 2026-03-01 10:10:10.806313613 +0000 UTC m=+0.084460895 container exec_died 05df45c7104ee91b334e9c433f2d06e3801a76197adf7c9418781f48079605fd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Mar  1 05:10:11 np0005634532 nova_compute[257049]: 2026-03-01 10:10:11.028 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:10:11 np0005634532 podman[267261]: 2026-03-01 10:10:11.062728748 +0000 UTC m=+0.061903738 container exec ae199754250e4490488a372482082794890684d286b81ae5dbba7ee71da86544 (image=quay.io/ceph/haproxy:2.3, name=ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-haproxy-nfs-cephfs-compute-0-wdbjdw)
Mar  1 05:10:11 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v831: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 17 KiB/s wr, 3 op/s
Mar  1 05:10:11 np0005634532 podman[267261]: 2026-03-01 10:10:11.091352794 +0000 UTC m=+0.090527754 container exec_died ae199754250e4490488a372482082794890684d286b81ae5dbba7ee71da86544 (image=quay.io/ceph/haproxy:2.3, name=ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-haproxy-nfs-cephfs-compute-0-wdbjdw)
Mar  1 05:10:11 np0005634532 podman[267328]: 2026-03-01 10:10:11.321778217 +0000 UTC m=+0.063034426 container exec 05be6442ca5291b879ba04bc347755b274cffc38684767a6255c033b9adb2149 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-keepalived-nfs-cephfs-compute-0-qbujzh, build-date=2023-02-22T09:23:20, io.k8s.display-name=Keepalived on RHEL 9, io.openshift.expose-services=, version=2.2.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=Ceph keepalived, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=keepalived for Ceph, distribution-scope=public, io.buildah.version=1.28.2, architecture=x86_64, vendor=Red Hat, Inc., summary=Provides keepalived on RHEL 9 for Ceph., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-type=git, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=keepalived, com.redhat.component=keepalived-container, release=1793)
Mar  1 05:10:11 np0005634532 podman[267328]: 2026-03-01 10:10:11.368683934 +0000 UTC m=+0.109940153 container exec_died 05be6442ca5291b879ba04bc347755b274cffc38684767a6255c033b9adb2149 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-keepalived-nfs-cephfs-compute-0-qbujzh, version=2.2.4, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, name=keepalived, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, vendor=Red Hat, Inc., io.openshift.expose-services=, description=keepalived for Ceph, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, vcs-type=git, com.redhat.component=keepalived-container, io.buildah.version=1.28.2, architecture=x86_64, io.openshift.tags=Ceph keepalived, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1793, build-date=2023-02-22T09:23:20, io.k8s.display-name=Keepalived on RHEL 9, distribution-scope=public, summary=Provides keepalived on RHEL 9 for Ceph.)
Mar  1 05:10:11 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[265802]: 01/03/2026 10:10:11 : epoch 69a4103d : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb8140039c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:10:11 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:10:11 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:10:11 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:10:11.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:10:11 np0005634532 podman[267394]: 2026-03-01 10:10:11.602589244 +0000 UTC m=+0.056098495 container exec 42a0f9f88cfb3d053b26a487d34447f1453d64f75d2848aac14e4c543e6921a8 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Mar  1 05:10:11 np0005634532 podman[267394]: 2026-03-01 10:10:11.62839112 +0000 UTC m=+0.081900341 container exec_died 42a0f9f88cfb3d053b26a487d34447f1453d64f75d2848aac14e4c543e6921a8 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Mar  1 05:10:11 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[265802]: 01/03/2026 10:10:11 : epoch 69a4103d : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb80c003f50 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:10:11 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[265802]: 01/03/2026 10:10:11 : epoch 69a4103d : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb83800a3f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:10:11 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:10:11 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:10:11 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:10:11.769 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:10:11 np0005634532 podman[267466]: 2026-03-01 10:10:11.813872776 +0000 UTC m=+0.051898912 container exec 40811b6273ff1144683076a6db09789183012c587046fdcbb700953f5b5ef2ca (image=quay.io/ceph/grafana:10.4.0, name=ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Mar  1 05:10:11 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:10:12 np0005634532 podman[267466]: 2026-03-01 10:10:12.047787785 +0000 UTC m=+0.285813941 container exec_died 40811b6273ff1144683076a6db09789183012c587046fdcbb700953f5b5ef2ca (image=quay.io/ceph/grafana:10.4.0, name=ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Mar  1 05:10:12 np0005634532 podman[267582]: 2026-03-01 10:10:12.455966354 +0000 UTC m=+0.078611280 container exec 225e705b62c2d5344d04ef96534eb44bff62165a23fc4cc7bc75fa8452e91ac1 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Mar  1 05:10:12 np0005634532 podman[267582]: 2026-03-01 10:10:12.497958519 +0000 UTC m=+0.120603395 container exec_died 225e705b62c2d5344d04ef96534eb44bff62165a23fc4cc7bc75fa8452e91ac1 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Mar  1 05:10:12 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Mar  1 05:10:12 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 05:10:12 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Mar  1 05:10:12 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 05:10:13 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v832: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 17 KiB/s wr, 3 op/s
Mar  1 05:10:13 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Mar  1 05:10:13 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Mar  1 05:10:13 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Mar  1 05:10:13 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Mar  1 05:10:13 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Mar  1 05:10:13 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 05:10:13 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Mar  1 05:10:13 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 05:10:13 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Mar  1 05:10:13 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Mar  1 05:10:13 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Mar  1 05:10:13 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Mar  1 05:10:13 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Mar  1 05:10:13 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Mar  1 05:10:13 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[265802]: 01/03/2026 10:10:13 : epoch 69a4103d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb818004050 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:10:13 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:10:13 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:10:13 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:10:13.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:10:13 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 05:10:13 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 05:10:13 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Mar  1 05:10:13 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 05:10:13 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 05:10:13 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Mar  1 05:10:13 np0005634532 podman[267800]: 2026-03-01 10:10:13.682930737 +0000 UTC m=+0.034482051 container create 637e297a27a23011e38f8ce2ef408e0fdb1dfea631bd401ceec697877f93a2bc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_williams, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Mar  1 05:10:13 np0005634532 systemd[1]: Started libpod-conmon-637e297a27a23011e38f8ce2ef408e0fdb1dfea631bd401ceec697877f93a2bc.scope.
Mar  1 05:10:13 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[265802]: 01/03/2026 10:10:13 : epoch 69a4103d : compute-0 : ganesha.nfsd-2[svc_5] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb8140039c0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:10:13 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-haproxy-nfs-cephfs-compute-0-wdbjdw[99156]: [WARNING] 059/101013 (4) : Server backend/nfs.cephfs.0 is UP, reason: Layer4 check passed, check duration: 1ms. 3 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Mar  1 05:10:13 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[265802]: 01/03/2026 10:10:13 : epoch 69a4103d : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb80c003f70 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:10:13 np0005634532 systemd[1]: Started libcrun container.
Mar  1 05:10:13 np0005634532 podman[267800]: 2026-03-01 10:10:13.724539484 +0000 UTC m=+0.076090798 container init 637e297a27a23011e38f8ce2ef408e0fdb1dfea631bd401ceec697877f93a2bc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_williams, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Mar  1 05:10:13 np0005634532 podman[267800]: 2026-03-01 10:10:13.731927216 +0000 UTC m=+0.083478530 container start 637e297a27a23011e38f8ce2ef408e0fdb1dfea631bd401ceec697877f93a2bc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_williams, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Mar  1 05:10:13 np0005634532 podman[267800]: 2026-03-01 10:10:13.73533583 +0000 UTC m=+0.086887164 container attach 637e297a27a23011e38f8ce2ef408e0fdb1dfea631bd401ceec697877f93a2bc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_williams, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Mar  1 05:10:13 np0005634532 zealous_williams[267817]: 167 167
Mar  1 05:10:13 np0005634532 systemd[1]: libpod-637e297a27a23011e38f8ce2ef408e0fdb1dfea631bd401ceec697877f93a2bc.scope: Deactivated successfully.
Mar  1 05:10:13 np0005634532 podman[267800]: 2026-03-01 10:10:13.736290994 +0000 UTC m=+0.087842308 container died 637e297a27a23011e38f8ce2ef408e0fdb1dfea631bd401ceec697877f93a2bc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_williams, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Mar  1 05:10:13 np0005634532 systemd[1]: var-lib-containers-storage-overlay-4d33c89d0b28f563f2d5f652e987de65d258aa865f4d8bd3a515dd35aef53ea6-merged.mount: Deactivated successfully.
Mar  1 05:10:13 np0005634532 podman[267800]: 2026-03-01 10:10:13.668200834 +0000 UTC m=+0.019752168 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Mar  1 05:10:13 np0005634532 podman[267800]: 2026-03-01 10:10:13.770775274 +0000 UTC m=+0.122326588 container remove 637e297a27a23011e38f8ce2ef408e0fdb1dfea631bd401ceec697877f93a2bc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_williams, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Mar  1 05:10:13 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:10:13 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:10:13 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:10:13.771 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:10:13 np0005634532 systemd[1]: libpod-conmon-637e297a27a23011e38f8ce2ef408e0fdb1dfea631bd401ceec697877f93a2bc.scope: Deactivated successfully.
Mar  1 05:10:13 np0005634532 podman[267814]: 2026-03-01 10:10:13.779729185 +0000 UTC m=+0.071185437 container health_status 1df4dec302d8b40bce93714986cfc892519e8a31436399020d0034ae17ac64d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a5baae9e21dffd42c66f546b0b62f95c6af9f409d05e01e7483de42d5dd2a382, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '08b9467e1b7e95537191fb2fa6825d176e65d7d6be048e9a0925db064969bbde-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a5baae9e21dffd42c66f546b0b62f95c6af9f409d05e01e7483de42d5dd2a382', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Mar  1 05:10:13 np0005634532 podman[267859]: 2026-03-01 10:10:13.880575353 +0000 UTC m=+0.032243097 container create 1ed092b065b5e7da8ff45f6ab6956b303cc02273ff18dc8851cb996113a292a0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_mayer, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Mar  1 05:10:13 np0005634532 systemd[1]: Started libpod-conmon-1ed092b065b5e7da8ff45f6ab6956b303cc02273ff18dc8851cb996113a292a0.scope.
Mar  1 05:10:13 np0005634532 systemd[1]: Started libcrun container.
Mar  1 05:10:13 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5829fa46dfcf72d30ce43d032a8eed91098d4f629ec8e0ab0d14621d5405766b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Mar  1 05:10:13 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5829fa46dfcf72d30ce43d032a8eed91098d4f629ec8e0ab0d14621d5405766b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Mar  1 05:10:13 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5829fa46dfcf72d30ce43d032a8eed91098d4f629ec8e0ab0d14621d5405766b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Mar  1 05:10:13 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5829fa46dfcf72d30ce43d032a8eed91098d4f629ec8e0ab0d14621d5405766b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Mar  1 05:10:13 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5829fa46dfcf72d30ce43d032a8eed91098d4f629ec8e0ab0d14621d5405766b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Mar  1 05:10:13 np0005634532 podman[267859]: 2026-03-01 10:10:13.947295428 +0000 UTC m=+0.098963182 container init 1ed092b065b5e7da8ff45f6ab6956b303cc02273ff18dc8851cb996113a292a0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_mayer, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Mar  1 05:10:13 np0005634532 podman[267859]: 2026-03-01 10:10:13.95344126 +0000 UTC m=+0.105109004 container start 1ed092b065b5e7da8ff45f6ab6956b303cc02273ff18dc8851cb996113a292a0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_mayer, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Mar  1 05:10:13 np0005634532 podman[267859]: 2026-03-01 10:10:13.956403063 +0000 UTC m=+0.108070827 container attach 1ed092b065b5e7da8ff45f6ab6956b303cc02273ff18dc8851cb996113a292a0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_mayer, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Mar  1 05:10:13 np0005634532 podman[267859]: 2026-03-01 10:10:13.866169787 +0000 UTC m=+0.017837551 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Mar  1 05:10:14 np0005634532 nifty_mayer[267875]: --> passed data devices: 0 physical, 1 LVM
Mar  1 05:10:14 np0005634532 nifty_mayer[267875]: --> All data devices are unavailable
Mar  1 05:10:14 np0005634532 systemd[1]: libpod-1ed092b065b5e7da8ff45f6ab6956b303cc02273ff18dc8851cb996113a292a0.scope: Deactivated successfully.
Mar  1 05:10:14 np0005634532 podman[267859]: 2026-03-01 10:10:14.238150433 +0000 UTC m=+0.389818177 container died 1ed092b065b5e7da8ff45f6ab6956b303cc02273ff18dc8851cb996113a292a0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_mayer, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Mar  1 05:10:14 np0005634532 systemd[1]: var-lib-containers-storage-overlay-5829fa46dfcf72d30ce43d032a8eed91098d4f629ec8e0ab0d14621d5405766b-merged.mount: Deactivated successfully.
Mar  1 05:10:14 np0005634532 podman[267859]: 2026-03-01 10:10:14.27291312 +0000 UTC m=+0.424580864 container remove 1ed092b065b5e7da8ff45f6ab6956b303cc02273ff18dc8851cb996113a292a0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_mayer, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Mar  1 05:10:14 np0005634532 systemd[1]: libpod-conmon-1ed092b065b5e7da8ff45f6ab6956b303cc02273ff18dc8851cb996113a292a0.scope: Deactivated successfully.
Mar  1 05:10:14 np0005634532 podman[267995]: 2026-03-01 10:10:14.678767201 +0000 UTC m=+0.028222267 container create 545bf70f314baffc695cc51cb806c03e1aec07e4c733ff35fcf4d6a8b6a8de33 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_khayyam, ceph=True, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Mar  1 05:10:14 np0005634532 systemd[1]: Started libpod-conmon-545bf70f314baffc695cc51cb806c03e1aec07e4c733ff35fcf4d6a8b6a8de33.scope.
Mar  1 05:10:14 np0005634532 systemd[1]: Started libcrun container.
Mar  1 05:10:14 np0005634532 podman[267995]: 2026-03-01 10:10:14.734549347 +0000 UTC m=+0.084004413 container init 545bf70f314baffc695cc51cb806c03e1aec07e4c733ff35fcf4d6a8b6a8de33 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_khayyam, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Mar  1 05:10:14 np0005634532 podman[267995]: 2026-03-01 10:10:14.739862438 +0000 UTC m=+0.089317504 container start 545bf70f314baffc695cc51cb806c03e1aec07e4c733ff35fcf4d6a8b6a8de33 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_khayyam, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Mar  1 05:10:14 np0005634532 focused_khayyam[268011]: 167 167
Mar  1 05:10:14 np0005634532 systemd[1]: libpod-545bf70f314baffc695cc51cb806c03e1aec07e4c733ff35fcf4d6a8b6a8de33.scope: Deactivated successfully.
Mar  1 05:10:14 np0005634532 conmon[268011]: conmon 545bf70f314baffc695c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-545bf70f314baffc695cc51cb806c03e1aec07e4c733ff35fcf4d6a8b6a8de33.scope/container/memory.events
Mar  1 05:10:14 np0005634532 podman[267995]: 2026-03-01 10:10:14.743519488 +0000 UTC m=+0.092974574 container attach 545bf70f314baffc695cc51cb806c03e1aec07e4c733ff35fcf4d6a8b6a8de33 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_khayyam, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Mar  1 05:10:14 np0005634532 podman[267995]: 2026-03-01 10:10:14.665950165 +0000 UTC m=+0.015405251 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Mar  1 05:10:14 np0005634532 podman[268016]: 2026-03-01 10:10:14.77481958 +0000 UTC m=+0.020461275 container died 545bf70f314baffc695cc51cb806c03e1aec07e4c733ff35fcf4d6a8b6a8de33 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_khayyam, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS)
Mar  1 05:10:14 np0005634532 systemd[1]: var-lib-containers-storage-overlay-9fcebd7743811aa95eca4cee0d227486ca9cdd7ba385d5c46cab0e20adba41aa-merged.mount: Deactivated successfully.
Mar  1 05:10:14 np0005634532 podman[268016]: 2026-03-01 10:10:14.807542298 +0000 UTC m=+0.053183963 container remove 545bf70f314baffc695cc51cb806c03e1aec07e4c733ff35fcf4d6a8b6a8de33 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_khayyam, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Mar  1 05:10:14 np0005634532 systemd[1]: libpod-conmon-545bf70f314baffc695cc51cb806c03e1aec07e4c733ff35fcf4d6a8b6a8de33.scope: Deactivated successfully.
Mar  1 05:10:14 np0005634532 nova_compute[257049]: 2026-03-01 10:10:14.894 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:10:14 np0005634532 podman[268038]: 2026-03-01 10:10:14.931416653 +0000 UTC m=+0.044788206 container create 49bcecc268a3245af0e11e5fa8ff8be0269e405a0fb273220e1276aab69e6e12 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_hawking, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Mar  1 05:10:14 np0005634532 systemd[1]: Started libpod-conmon-49bcecc268a3245af0e11e5fa8ff8be0269e405a0fb273220e1276aab69e6e12.scope.
Mar  1 05:10:14 np0005634532 systemd[1]: Started libcrun container.
Mar  1 05:10:15 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c78d8d1132e2b25e83cf1728522f15cde3d1472f79df28c92686536de62401b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Mar  1 05:10:15 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c78d8d1132e2b25e83cf1728522f15cde3d1472f79df28c92686536de62401b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Mar  1 05:10:15 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c78d8d1132e2b25e83cf1728522f15cde3d1472f79df28c92686536de62401b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Mar  1 05:10:15 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c78d8d1132e2b25e83cf1728522f15cde3d1472f79df28c92686536de62401b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Mar  1 05:10:15 np0005634532 podman[268038]: 2026-03-01 10:10:14.91022112 +0000 UTC m=+0.023592773 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Mar  1 05:10:15 np0005634532 podman[268038]: 2026-03-01 10:10:15.02009715 +0000 UTC m=+0.133468793 container init 49bcecc268a3245af0e11e5fa8ff8be0269e405a0fb273220e1276aab69e6e12 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_hawking, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Mar  1 05:10:15 np0005634532 podman[268038]: 2026-03-01 10:10:15.026756275 +0000 UTC m=+0.140127838 container start 49bcecc268a3245af0e11e5fa8ff8be0269e405a0fb273220e1276aab69e6e12 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_hawking, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Mar  1 05:10:15 np0005634532 podman[268038]: 2026-03-01 10:10:15.030099437 +0000 UTC m=+0.143471080 container attach 49bcecc268a3245af0e11e5fa8ff8be0269e405a0fb273220e1276aab69e6e12 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_hawking, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Mar  1 05:10:15 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v833: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 2.8 KiB/s rd, 17 KiB/s wr, 4 op/s
Mar  1 05:10:15 np0005634532 sharp_hawking[268055]: {
Mar  1 05:10:15 np0005634532 sharp_hawking[268055]:    "0": [
Mar  1 05:10:15 np0005634532 sharp_hawking[268055]:        {
Mar  1 05:10:15 np0005634532 sharp_hawking[268055]:            "devices": [
Mar  1 05:10:15 np0005634532 sharp_hawking[268055]:                "/dev/loop3"
Mar  1 05:10:15 np0005634532 sharp_hawking[268055]:            ],
Mar  1 05:10:15 np0005634532 sharp_hawking[268055]:            "lv_name": "ceph_lv0",
Mar  1 05:10:15 np0005634532 sharp_hawking[268055]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Mar  1 05:10:15 np0005634532 sharp_hawking[268055]:            "lv_size": "21470642176",
Mar  1 05:10:15 np0005634532 sharp_hawking[268055]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=0RN1y3-WDwf-vmRn-3Uec-9cdU-WGcX-8Z6LEG,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=437b1e74-f995-5d64-af1d-257ce01d77ab,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=e5da778e-73b7-4ea1-8a91-750fe3f6aa68,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Mar  1 05:10:15 np0005634532 sharp_hawking[268055]:            "lv_uuid": "0RN1y3-WDwf-vmRn-3Uec-9cdU-WGcX-8Z6LEG",
Mar  1 05:10:15 np0005634532 sharp_hawking[268055]:            "name": "ceph_lv0",
Mar  1 05:10:15 np0005634532 sharp_hawking[268055]:            "path": "/dev/ceph_vg0/ceph_lv0",
Mar  1 05:10:15 np0005634532 sharp_hawking[268055]:            "tags": {
Mar  1 05:10:15 np0005634532 sharp_hawking[268055]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Mar  1 05:10:15 np0005634532 sharp_hawking[268055]:                "ceph.block_uuid": "0RN1y3-WDwf-vmRn-3Uec-9cdU-WGcX-8Z6LEG",
Mar  1 05:10:15 np0005634532 sharp_hawking[268055]:                "ceph.cephx_lockbox_secret": "",
Mar  1 05:10:15 np0005634532 sharp_hawking[268055]:                "ceph.cluster_fsid": "437b1e74-f995-5d64-af1d-257ce01d77ab",
Mar  1 05:10:15 np0005634532 sharp_hawking[268055]:                "ceph.cluster_name": "ceph",
Mar  1 05:10:15 np0005634532 sharp_hawking[268055]:                "ceph.crush_device_class": "",
Mar  1 05:10:15 np0005634532 sharp_hawking[268055]:                "ceph.encrypted": "0",
Mar  1 05:10:15 np0005634532 sharp_hawking[268055]:                "ceph.osd_fsid": "e5da778e-73b7-4ea1-8a91-750fe3f6aa68",
Mar  1 05:10:15 np0005634532 sharp_hawking[268055]:                "ceph.osd_id": "0",
Mar  1 05:10:15 np0005634532 sharp_hawking[268055]:                "ceph.osdspec_affinity": "default_drive_group",
Mar  1 05:10:15 np0005634532 sharp_hawking[268055]:                "ceph.type": "block",
Mar  1 05:10:15 np0005634532 sharp_hawking[268055]:                "ceph.vdo": "0",
Mar  1 05:10:15 np0005634532 sharp_hawking[268055]:                "ceph.with_tpm": "0"
Mar  1 05:10:15 np0005634532 sharp_hawking[268055]:            },
Mar  1 05:10:15 np0005634532 sharp_hawking[268055]:            "type": "block",
Mar  1 05:10:15 np0005634532 sharp_hawking[268055]:            "vg_name": "ceph_vg0"
Mar  1 05:10:15 np0005634532 sharp_hawking[268055]:        }
Mar  1 05:10:15 np0005634532 sharp_hawking[268055]:    ]
Mar  1 05:10:15 np0005634532 sharp_hawking[268055]: }
Mar  1 05:10:15 np0005634532 systemd[1]: libpod-49bcecc268a3245af0e11e5fa8ff8be0269e405a0fb273220e1276aab69e6e12.scope: Deactivated successfully.
Mar  1 05:10:15 np0005634532 podman[268038]: 2026-03-01 10:10:15.305259854 +0000 UTC m=+0.418631447 container died 49bcecc268a3245af0e11e5fa8ff8be0269e405a0fb273220e1276aab69e6e12 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_hawking, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, ceph=True)
Mar  1 05:10:15 np0005634532 systemd[1]: var-lib-containers-storage-overlay-4c78d8d1132e2b25e83cf1728522f15cde3d1472f79df28c92686536de62401b-merged.mount: Deactivated successfully.
Mar  1 05:10:15 np0005634532 podman[268038]: 2026-03-01 10:10:15.348768968 +0000 UTC m=+0.462140561 container remove 49bcecc268a3245af0e11e5fa8ff8be0269e405a0fb273220e1276aab69e6e12 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_hawking, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Mar  1 05:10:15 np0005634532 systemd[1]: libpod-conmon-49bcecc268a3245af0e11e5fa8ff8be0269e405a0fb273220e1276aab69e6e12.scope: Deactivated successfully.
Mar  1 05:10:15 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[265802]: 01/03/2026 10:10:15 : epoch 69a4103d : compute-0 : ganesha.nfsd-2[svc_9] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb83800a3f0 fd 39 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:10:15 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:10:15 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 05:10:15 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:10:15.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 05:10:15 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[265802]: 01/03/2026 10:10:15 : epoch 69a4103d : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7fb818004050 fd 39 proxy ignored for local
Mar  1 05:10:15 np0005634532 kernel: ganesha.nfsd[266594]: segfault at 50 ip 00007fb8bae9032e sp 00007fb8417f9210 error 4 in libntirpc.so.5.8[7fb8bae75000+2c000] likely on CPU 0 (core 0, socket 0)
Mar  1 05:10:15 np0005634532 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Mar  1 05:10:15 np0005634532 systemd[1]: Started Process Core Dump (PID 268140/UID 0).
Mar  1 05:10:15 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:10:15 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:10:15 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:10:15.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:10:15 np0005634532 podman[268168]: 2026-03-01 10:10:15.878039993 +0000 UTC m=+0.041628228 container create e13ac169fe2c2985e9bba46606c5fefb2112ec581d7a9ac93efeae4d21df45b3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_allen, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Mar  1 05:10:15 np0005634532 systemd[1]: Started libpod-conmon-e13ac169fe2c2985e9bba46606c5fefb2112ec581d7a9ac93efeae4d21df45b3.scope.
Mar  1 05:10:15 np0005634532 podman[268168]: 2026-03-01 10:10:15.861936246 +0000 UTC m=+0.025524501 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Mar  1 05:10:15 np0005634532 systemd[1]: Started libcrun container.
Mar  1 05:10:15 np0005634532 podman[268168]: 2026-03-01 10:10:15.97481973 +0000 UTC m=+0.138407985 container init e13ac169fe2c2985e9bba46606c5fefb2112ec581d7a9ac93efeae4d21df45b3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_allen, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default)
Mar  1 05:10:15 np0005634532 podman[268168]: 2026-03-01 10:10:15.984217812 +0000 UTC m=+0.147806047 container start e13ac169fe2c2985e9bba46606c5fefb2112ec581d7a9ac93efeae4d21df45b3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_allen, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid)
Mar  1 05:10:15 np0005634532 podman[268168]: 2026-03-01 10:10:15.987362479 +0000 UTC m=+0.150950714 container attach e13ac169fe2c2985e9bba46606c5fefb2112ec581d7a9ac93efeae4d21df45b3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_allen, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Mar  1 05:10:15 np0005634532 festive_allen[268185]: 167 167
Mar  1 05:10:15 np0005634532 systemd[1]: libpod-e13ac169fe2c2985e9bba46606c5fefb2112ec581d7a9ac93efeae4d21df45b3.scope: Deactivated successfully.
Mar  1 05:10:15 np0005634532 podman[268168]: 2026-03-01 10:10:15.992185028 +0000 UTC m=+0.155773263 container died e13ac169fe2c2985e9bba46606c5fefb2112ec581d7a9ac93efeae4d21df45b3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_allen, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Mar  1 05:10:16 np0005634532 systemd[1]: var-lib-containers-storage-overlay-cd53aa33474909a4a597e1fe709f7e66ac92a9f9cba6f646a507a3ebe34a06ed-merged.mount: Deactivated successfully.
Mar  1 05:10:16 np0005634532 podman[268168]: 2026-03-01 10:10:16.026519645 +0000 UTC m=+0.190107910 container remove e13ac169fe2c2985e9bba46606c5fefb2112ec581d7a9ac93efeae4d21df45b3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_allen, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Mar  1 05:10:16 np0005634532 nova_compute[257049]: 2026-03-01 10:10:16.029 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:10:16 np0005634532 systemd[1]: libpod-conmon-e13ac169fe2c2985e9bba46606c5fefb2112ec581d7a9ac93efeae4d21df45b3.scope: Deactivated successfully.
Mar  1 05:10:16 np0005634532 podman[268209]: 2026-03-01 10:10:16.192832488 +0000 UTC m=+0.042544811 container create d17b7a78f73d8448e7582799420f6e31333fd4404f6f5accb6a55b333b4ce06e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_gauss, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Mar  1 05:10:16 np0005634532 systemd[1]: Started libpod-conmon-d17b7a78f73d8448e7582799420f6e31333fd4404f6f5accb6a55b333b4ce06e.scope.
Mar  1 05:10:16 np0005634532 systemd[1]: Started libcrun container.
Mar  1 05:10:16 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83ca6ba07bfbfacf2a8cb6b4189dc9b7a1be74b9302eff400c1aa3b31eda3ccd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Mar  1 05:10:16 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83ca6ba07bfbfacf2a8cb6b4189dc9b7a1be74b9302eff400c1aa3b31eda3ccd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Mar  1 05:10:16 np0005634532 podman[268209]: 2026-03-01 10:10:16.173164962 +0000 UTC m=+0.022877335 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Mar  1 05:10:16 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83ca6ba07bfbfacf2a8cb6b4189dc9b7a1be74b9302eff400c1aa3b31eda3ccd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Mar  1 05:10:16 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83ca6ba07bfbfacf2a8cb6b4189dc9b7a1be74b9302eff400c1aa3b31eda3ccd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Mar  1 05:10:16 np0005634532 podman[268209]: 2026-03-01 10:10:16.304857461 +0000 UTC m=+0.154569824 container init d17b7a78f73d8448e7582799420f6e31333fd4404f6f5accb6a55b333b4ce06e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_gauss, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Mar  1 05:10:16 np0005634532 podman[268209]: 2026-03-01 10:10:16.316533759 +0000 UTC m=+0.166246082 container start d17b7a78f73d8448e7582799420f6e31333fd4404f6f5accb6a55b333b4ce06e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_gauss, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Mar  1 05:10:16 np0005634532 podman[268209]: 2026-03-01 10:10:16.320138148 +0000 UTC m=+0.169850511 container attach d17b7a78f73d8448e7582799420f6e31333fd4404f6f5accb6a55b333b4ce06e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_gauss, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Mar  1 05:10:16 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:10:16 np0005634532 lvm[268325]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Mar  1 05:10:16 np0005634532 lvm[268325]: VG ceph_vg0 finished
Mar  1 05:10:17 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: ::ffff:192.168.122.100 - - [01/Mar/2026:10:10:17] "GET /metrics HTTP/1.1" 200 48474 "" "Prometheus/2.51.0"
Mar  1 05:10:17 np0005634532 ceph-mgr[76134]: [prometheus INFO cherrypy.access.140604152982112] ::ffff:192.168.122.100 - - [01/Mar/2026:10:10:17] "GET /metrics HTTP/1.1" 200 48474 "" "Prometheus/2.51.0"
Mar  1 05:10:17 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v834: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 1.3 KiB/s rd, 5.1 KiB/s wr, 2 op/s
Mar  1 05:10:17 np0005634532 vigilant_gauss[268227]: {}
Mar  1 05:10:17 np0005634532 systemd[1]: libpod-d17b7a78f73d8448e7582799420f6e31333fd4404f6f5accb6a55b333b4ce06e.scope: Deactivated successfully.
Mar  1 05:10:17 np0005634532 systemd[1]: libpod-d17b7a78f73d8448e7582799420f6e31333fd4404f6f5accb6a55b333b4ce06e.scope: Consumed 1.098s CPU time.
Mar  1 05:10:17 np0005634532 podman[268209]: 2026-03-01 10:10:17.131468539 +0000 UTC m=+0.981180902 container died d17b7a78f73d8448e7582799420f6e31333fd4404f6f5accb6a55b333b4ce06e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_gauss, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Mar  1 05:10:17 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:10:17.228Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Mar  1 05:10:17 np0005634532 systemd[1]: var-lib-containers-storage-overlay-83ca6ba07bfbfacf2a8cb6b4189dc9b7a1be74b9302eff400c1aa3b31eda3ccd-merged.mount: Deactivated successfully.
Mar  1 05:10:17 np0005634532 podman[268209]: 2026-03-01 10:10:17.278726842 +0000 UTC m=+1.128439165 container remove d17b7a78f73d8448e7582799420f6e31333fd4404f6f5accb6a55b333b4ce06e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigilant_gauss, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Mar  1 05:10:17 np0005634532 systemd[1]: libpod-conmon-d17b7a78f73d8448e7582799420f6e31333fd4404f6f5accb6a55b333b4ce06e.scope: Deactivated successfully.
Mar  1 05:10:17 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Mar  1 05:10:17 np0005634532 ceph-mgr[76134]: [balancer INFO root] Optimize plan auto_2026-03-01_10:10:17
Mar  1 05:10:17 np0005634532 ceph-mgr[76134]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Mar  1 05:10:17 np0005634532 ceph-mgr[76134]: [balancer INFO root] do_upmap
Mar  1 05:10:17 np0005634532 ceph-mgr[76134]: [balancer INFO root] pools ['images', '.nfs', 'volumes', 'default.rgw.meta', 'default.rgw.control', 'backups', 'cephfs.cephfs.data', '.rgw.root', 'vms', 'cephfs.cephfs.meta', '.mgr', 'default.rgw.log']
Mar  1 05:10:17 np0005634532 ceph-mgr[76134]: [balancer INFO root] prepared 0/10 upmap changes
Mar  1 05:10:17 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Mar  1 05:10:17 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Mar  1 05:10:17 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:10:17 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:10:17 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:10:17.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:10:17 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Mar  1 05:10:17 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Mar  1 05:10:17 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 05:10:17 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Mar  1 05:10:17 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Mar  1 05:10:17 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Mar  1 05:10:17 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Mar  1 05:10:17 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Mar  1 05:10:17 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:10:17 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:10:17 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:10:17.777 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:10:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] _maybe_adjust
Mar  1 05:10:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:10:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Mar  1 05:10:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:10:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007596545956241453 of space, bias 1.0, pg target 0.22789637868724358 quantized to 32 (current 32)
Mar  1 05:10:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:10:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Mar  1 05:10:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:10:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Mar  1 05:10:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:10:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Mar  1 05:10:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:10:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Mar  1 05:10:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:10:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Mar  1 05:10:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:10:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Mar  1 05:10:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:10:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Mar  1 05:10:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:10:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Mar  1 05:10:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:10:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Mar  1 05:10:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:10:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Mar  1 05:10:18 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 05:10:18 np0005634532 systemd-coredump[268149]: Process 265806 (ganesha.nfsd) of user 0 dumped core.
Mar  1 05:10:18 np0005634532 systemd-coredump[268149]: Stack trace of thread 44:
Mar  1 05:10:18 np0005634532 systemd-coredump[268149]: #0  0x00007fb8bae9032e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
Mar  1 05:10:18 np0005634532 systemd-coredump[268149]: ELF object binary architecture: AMD x86-64
Mar  1 05:10:18 np0005634532 systemd[1]: systemd-coredump@11-268140-0.service: Deactivated successfully.
Mar  1 05:10:18 np0005634532 podman[268373]: 2026-03-01 10:10:18.207376528 +0000 UTC m=+0.033704382 container died 05df45c7104ee91b334e9c433f2d06e3801a76197adf7c9418781f48079605fd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Mar  1 05:10:18 np0005634532 systemd[1]: var-lib-containers-storage-overlay-3681b728a3c95b4a20b00ebe7973f602cf922460fc11a689337dba4eb215a8c1-merged.mount: Deactivated successfully.
Mar  1 05:10:18 np0005634532 podman[268373]: 2026-03-01 10:10:18.239523991 +0000 UTC m=+0.065851785 container remove 05df45c7104ee91b334e9c433f2d06e3801a76197adf7c9418781f48079605fd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Mar  1 05:10:18 np0005634532 systemd[1]: ceph-437b1e74-f995-5d64-af1d-257ce01d77ab@nfs.cephfs.2.0.compute-0.ljexyw.service: Main process exited, code=exited, status=139/n/a
Mar  1 05:10:18 np0005634532 systemd[1]: ceph-437b1e74-f995-5d64-af1d-257ce01d77ab@nfs.cephfs.2.0.compute-0.ljexyw.service: Failed with result 'exit-code'.
Mar  1 05:10:18 np0005634532 systemd[1]: ceph-437b1e74-f995-5d64-af1d-257ce01d77ab@nfs.cephfs.2.0.compute-0.ljexyw.service: Consumed 1.175s CPU time.
Mar  1 05:10:18 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 05:10:18 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 05:10:19 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v835: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 3.7 KiB/s rd, 6.1 KiB/s wr, 2 op/s
Mar  1 05:10:19 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Mar  1 05:10:19 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: vms, start_after=
Mar  1 05:10:19 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: volumes, start_after=
Mar  1 05:10:19 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: backups, start_after=
Mar  1 05:10:19 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: images, start_after=
Mar  1 05:10:19 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Mar  1 05:10:19 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: vms, start_after=
Mar  1 05:10:19 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: volumes, start_after=
Mar  1 05:10:19 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: backups, start_after=
Mar  1 05:10:19 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: images, start_after=
Mar  1 05:10:19 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:10:19 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000024s ======
Mar  1 05:10:19 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:10:19.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Mar  1 05:10:19 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:10:19 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 05:10:19 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:10:19.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 05:10:19 np0005634532 nova_compute[257049]: 2026-03-01 10:10:19.933 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:10:20 np0005634532 nova_compute[257049]: 2026-03-01 10:10:20.120 257053 DEBUG oslo_concurrency.lockutils [None req-336fe8d9-86f8-4543-8cad-0cd5ffcc2e1e 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Acquiring lock "c0990d1e-9a20-4bc5-8ee5-7f06cc9e139c" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Mar  1 05:10:20 np0005634532 nova_compute[257049]: 2026-03-01 10:10:20.120 257053 DEBUG oslo_concurrency.lockutils [None req-336fe8d9-86f8-4543-8cad-0cd5ffcc2e1e 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Lock "c0990d1e-9a20-4bc5-8ee5-7f06cc9e139c" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Mar  1 05:10:20 np0005634532 nova_compute[257049]: 2026-03-01 10:10:20.138 257053 DEBUG nova.compute.manager [None req-336fe8d9-86f8-4543-8cad-0cd5ffcc2e1e 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] [instance: c0990d1e-9a20-4bc5-8ee5-7f06cc9e139c] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Mar  1 05:10:20 np0005634532 nova_compute[257049]: 2026-03-01 10:10:20.197 257053 DEBUG oslo_concurrency.lockutils [None req-336fe8d9-86f8-4543-8cad-0cd5ffcc2e1e 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Mar  1 05:10:20 np0005634532 nova_compute[257049]: 2026-03-01 10:10:20.198 257053 DEBUG oslo_concurrency.lockutils [None req-336fe8d9-86f8-4543-8cad-0cd5ffcc2e1e 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Mar  1 05:10:20 np0005634532 nova_compute[257049]: 2026-03-01 10:10:20.204 257053 DEBUG nova.virt.hardware [None req-336fe8d9-86f8-4543-8cad-0cd5ffcc2e1e 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Mar  1 05:10:20 np0005634532 nova_compute[257049]: 2026-03-01 10:10:20.205 257053 INFO nova.compute.claims [None req-336fe8d9-86f8-4543-8cad-0cd5ffcc2e1e 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] [instance: c0990d1e-9a20-4bc5-8ee5-7f06cc9e139c] Claim successful on node compute-0.ctlplane.example.com#033[00m
Mar  1 05:10:20 np0005634532 nova_compute[257049]: 2026-03-01 10:10:20.299 257053 DEBUG oslo_concurrency.processutils [None req-336fe8d9-86f8-4543-8cad-0cd5ffcc2e1e 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Mar  1 05:10:20 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Mar  1 05:10:20 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4245515869' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Mar  1 05:10:20 np0005634532 nova_compute[257049]: 2026-03-01 10:10:20.742 257053 DEBUG oslo_concurrency.processutils [None req-336fe8d9-86f8-4543-8cad-0cd5ffcc2e1e 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Mar  1 05:10:20 np0005634532 nova_compute[257049]: 2026-03-01 10:10:20.747 257053 DEBUG nova.compute.provider_tree [None req-336fe8d9-86f8-4543-8cad-0cd5ffcc2e1e 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Inventory has not changed in ProviderTree for provider: 018d246d-1e01-4168-9128-598c5501111b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Mar  1 05:10:20 np0005634532 nova_compute[257049]: 2026-03-01 10:10:20.771 257053 DEBUG nova.scheduler.client.report [None req-336fe8d9-86f8-4543-8cad-0cd5ffcc2e1e 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Inventory has not changed for provider 018d246d-1e01-4168-9128-598c5501111b based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Mar  1 05:10:20 np0005634532 nova_compute[257049]: 2026-03-01 10:10:20.798 257053 DEBUG oslo_concurrency.lockutils [None req-336fe8d9-86f8-4543-8cad-0cd5ffcc2e1e 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.600s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Mar  1 05:10:20 np0005634532 nova_compute[257049]: 2026-03-01 10:10:20.799 257053 DEBUG nova.compute.manager [None req-336fe8d9-86f8-4543-8cad-0cd5ffcc2e1e 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] [instance: c0990d1e-9a20-4bc5-8ee5-7f06cc9e139c] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Mar  1 05:10:20 np0005634532 nova_compute[257049]: 2026-03-01 10:10:20.857 257053 DEBUG nova.compute.manager [None req-336fe8d9-86f8-4543-8cad-0cd5ffcc2e1e 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] [instance: c0990d1e-9a20-4bc5-8ee5-7f06cc9e139c] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Mar  1 05:10:20 np0005634532 nova_compute[257049]: 2026-03-01 10:10:20.858 257053 DEBUG nova.network.neutron [None req-336fe8d9-86f8-4543-8cad-0cd5ffcc2e1e 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] [instance: c0990d1e-9a20-4bc5-8ee5-7f06cc9e139c] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Mar  1 05:10:20 np0005634532 nova_compute[257049]: 2026-03-01 10:10:20.883 257053 INFO nova.virt.libvirt.driver [None req-336fe8d9-86f8-4543-8cad-0cd5ffcc2e1e 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] [instance: c0990d1e-9a20-4bc5-8ee5-7f06cc9e139c] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Mar  1 05:10:20 np0005634532 nova_compute[257049]: 2026-03-01 10:10:20.912 257053 DEBUG nova.compute.manager [None req-336fe8d9-86f8-4543-8cad-0cd5ffcc2e1e 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] [instance: c0990d1e-9a20-4bc5-8ee5-7f06cc9e139c] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Mar  1 05:10:21 np0005634532 nova_compute[257049]: 2026-03-01 10:10:21.007 257053 DEBUG nova.compute.manager [None req-336fe8d9-86f8-4543-8cad-0cd5ffcc2e1e 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] [instance: c0990d1e-9a20-4bc5-8ee5-7f06cc9e139c] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Mar  1 05:10:21 np0005634532 nova_compute[257049]: 2026-03-01 10:10:21.008 257053 DEBUG nova.virt.libvirt.driver [None req-336fe8d9-86f8-4543-8cad-0cd5ffcc2e1e 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] [instance: c0990d1e-9a20-4bc5-8ee5-7f06cc9e139c] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Mar  1 05:10:21 np0005634532 nova_compute[257049]: 2026-03-01 10:10:21.008 257053 INFO nova.virt.libvirt.driver [None req-336fe8d9-86f8-4543-8cad-0cd5ffcc2e1e 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] [instance: c0990d1e-9a20-4bc5-8ee5-7f06cc9e139c] Creating image(s)#033[00m
Mar  1 05:10:21 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v836: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 1023 B/s wr, 0 op/s
Mar  1 05:10:21 np0005634532 nova_compute[257049]: 2026-03-01 10:10:21.074 257053 DEBUG nova.storage.rbd_utils [None req-336fe8d9-86f8-4543-8cad-0cd5ffcc2e1e 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] rbd image c0990d1e-9a20-4bc5-8ee5-7f06cc9e139c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Mar  1 05:10:21 np0005634532 nova_compute[257049]: 2026-03-01 10:10:21.106 257053 DEBUG nova.storage.rbd_utils [None req-336fe8d9-86f8-4543-8cad-0cd5ffcc2e1e 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] rbd image c0990d1e-9a20-4bc5-8ee5-7f06cc9e139c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Mar  1 05:10:21 np0005634532 nova_compute[257049]: 2026-03-01 10:10:21.139 257053 DEBUG nova.storage.rbd_utils [None req-336fe8d9-86f8-4543-8cad-0cd5ffcc2e1e 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] rbd image c0990d1e-9a20-4bc5-8ee5-7f06cc9e139c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Mar  1 05:10:21 np0005634532 nova_compute[257049]: 2026-03-01 10:10:21.144 257053 DEBUG oslo_concurrency.processutils [None req-336fe8d9-86f8-4543-8cad-0cd5ffcc2e1e 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/d41046c43044bf8997bc5f9ade85627ba841861d --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Mar  1 05:10:21 np0005634532 nova_compute[257049]: 2026-03-01 10:10:21.168 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:10:21 np0005634532 nova_compute[257049]: 2026-03-01 10:10:21.171 257053 DEBUG nova.policy [None req-336fe8d9-86f8-4543-8cad-0cd5ffcc2e1e 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '054b4e3fa290475c906614f7e45d128f', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'aa1916e2334f470ea8eeda213ef84cc5', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Mar  1 05:10:21 np0005634532 nova_compute[257049]: 2026-03-01 10:10:21.228 257053 DEBUG oslo_concurrency.processutils [None req-336fe8d9-86f8-4543-8cad-0cd5ffcc2e1e 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/d41046c43044bf8997bc5f9ade85627ba841861d --force-share --output=json" returned: 0 in 0.084s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Mar  1 05:10:21 np0005634532 nova_compute[257049]: 2026-03-01 10:10:21.228 257053 DEBUG oslo_concurrency.lockutils [None req-336fe8d9-86f8-4543-8cad-0cd5ffcc2e1e 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Acquiring lock "d41046c43044bf8997bc5f9ade85627ba841861d" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Mar  1 05:10:21 np0005634532 nova_compute[257049]: 2026-03-01 10:10:21.229 257053 DEBUG oslo_concurrency.lockutils [None req-336fe8d9-86f8-4543-8cad-0cd5ffcc2e1e 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Lock "d41046c43044bf8997bc5f9ade85627ba841861d" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Mar  1 05:10:21 np0005634532 nova_compute[257049]: 2026-03-01 10:10:21.229 257053 DEBUG oslo_concurrency.lockutils [None req-336fe8d9-86f8-4543-8cad-0cd5ffcc2e1e 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Lock "d41046c43044bf8997bc5f9ade85627ba841861d" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Mar  1 05:10:21 np0005634532 nova_compute[257049]: 2026-03-01 10:10:21.255 257053 DEBUG nova.storage.rbd_utils [None req-336fe8d9-86f8-4543-8cad-0cd5ffcc2e1e 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] rbd image c0990d1e-9a20-4bc5-8ee5-7f06cc9e139c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Mar  1 05:10:21 np0005634532 nova_compute[257049]: 2026-03-01 10:10:21.258 257053 DEBUG oslo_concurrency.processutils [None req-336fe8d9-86f8-4543-8cad-0cd5ffcc2e1e 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/d41046c43044bf8997bc5f9ade85627ba841861d c0990d1e-9a20-4bc5-8ee5-7f06cc9e139c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Mar  1 05:10:21 np0005634532 nova_compute[257049]: 2026-03-01 10:10:21.555 257053 DEBUG oslo_concurrency.processutils [None req-336fe8d9-86f8-4543-8cad-0cd5ffcc2e1e 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/d41046c43044bf8997bc5f9ade85627ba841861d c0990d1e-9a20-4bc5-8ee5-7f06cc9e139c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.297s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Mar  1 05:10:21 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:10:21 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:10:21 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:10:21.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:10:21 np0005634532 nova_compute[257049]: 2026-03-01 10:10:21.636 257053 DEBUG nova.storage.rbd_utils [None req-336fe8d9-86f8-4543-8cad-0cd5ffcc2e1e 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] resizing rbd image c0990d1e-9a20-4bc5-8ee5-7f06cc9e139c_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Mar  1 05:10:21 np0005634532 nova_compute[257049]: 2026-03-01 10:10:21.754 257053 DEBUG nova.objects.instance [None req-336fe8d9-86f8-4543-8cad-0cd5ffcc2e1e 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Lazy-loading 'migration_context' on Instance uuid c0990d1e-9a20-4bc5-8ee5-7f06cc9e139c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Mar  1 05:10:21 np0005634532 nova_compute[257049]: 2026-03-01 10:10:21.764 257053 DEBUG nova.network.neutron [None req-336fe8d9-86f8-4543-8cad-0cd5ffcc2e1e 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] [instance: c0990d1e-9a20-4bc5-8ee5-7f06cc9e139c] Successfully created port: b6ca0203-b551-4cae-b162-715da216fc4a _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Mar  1 05:10:21 np0005634532 nova_compute[257049]: 2026-03-01 10:10:21.769 257053 DEBUG nova.virt.libvirt.driver [None req-336fe8d9-86f8-4543-8cad-0cd5ffcc2e1e 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] [instance: c0990d1e-9a20-4bc5-8ee5-7f06cc9e139c] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Mar  1 05:10:21 np0005634532 nova_compute[257049]: 2026-03-01 10:10:21.770 257053 DEBUG nova.virt.libvirt.driver [None req-336fe8d9-86f8-4543-8cad-0cd5ffcc2e1e 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] [instance: c0990d1e-9a20-4bc5-8ee5-7f06cc9e139c] Ensure instance console log exists: /var/lib/nova/instances/c0990d1e-9a20-4bc5-8ee5-7f06cc9e139c/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Mar  1 05:10:21 np0005634532 nova_compute[257049]: 2026-03-01 10:10:21.770 257053 DEBUG oslo_concurrency.lockutils [None req-336fe8d9-86f8-4543-8cad-0cd5ffcc2e1e 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Mar  1 05:10:21 np0005634532 nova_compute[257049]: 2026-03-01 10:10:21.771 257053 DEBUG oslo_concurrency.lockutils [None req-336fe8d9-86f8-4543-8cad-0cd5ffcc2e1e 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Mar  1 05:10:21 np0005634532 nova_compute[257049]: 2026-03-01 10:10:21.771 257053 DEBUG oslo_concurrency.lockutils [None req-336fe8d9-86f8-4543-8cad-0cd5ffcc2e1e 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Mar  1 05:10:21 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:10:21 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:10:21 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:10:21.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:10:21 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:10:23 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v837: 353 pgs: 353 active+clean; 121 MiB data, 302 MiB used, 60 GiB / 60 GiB avail; 2.5 KiB/s rd, 1023 B/s wr, 0 op/s
Mar  1 05:10:23 np0005634532 nova_compute[257049]: 2026-03-01 10:10:23.479 257053 DEBUG nova.network.neutron [None req-336fe8d9-86f8-4543-8cad-0cd5ffcc2e1e 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] [instance: c0990d1e-9a20-4bc5-8ee5-7f06cc9e139c] Successfully updated port: b6ca0203-b551-4cae-b162-715da216fc4a _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Mar  1 05:10:23 np0005634532 nova_compute[257049]: 2026-03-01 10:10:23.498 257053 DEBUG oslo_concurrency.lockutils [None req-336fe8d9-86f8-4543-8cad-0cd5ffcc2e1e 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Acquiring lock "refresh_cache-c0990d1e-9a20-4bc5-8ee5-7f06cc9e139c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Mar  1 05:10:23 np0005634532 nova_compute[257049]: 2026-03-01 10:10:23.498 257053 DEBUG oslo_concurrency.lockutils [None req-336fe8d9-86f8-4543-8cad-0cd5ffcc2e1e 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Acquired lock "refresh_cache-c0990d1e-9a20-4bc5-8ee5-7f06cc9e139c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Mar  1 05:10:23 np0005634532 nova_compute[257049]: 2026-03-01 10:10:23.499 257053 DEBUG nova.network.neutron [None req-336fe8d9-86f8-4543-8cad-0cd5ffcc2e1e 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] [instance: c0990d1e-9a20-4bc5-8ee5-7f06cc9e139c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Mar  1 05:10:23 np0005634532 nova_compute[257049]: 2026-03-01 10:10:23.566 257053 DEBUG nova.compute.manager [req-be4fbbb5-b2df-4577-88c0-ee2333c63a32 req-85629f2c-ebe1-4a66-91a0-9ed718b2da40 4172bfcb2ac44ccc905f3929f41a6ec0 b5baddd982154c5d8ca5431b102a7ecd - - default default] [instance: c0990d1e-9a20-4bc5-8ee5-7f06cc9e139c] Received event network-changed-b6ca0203-b551-4cae-b162-715da216fc4a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Mar  1 05:10:23 np0005634532 nova_compute[257049]: 2026-03-01 10:10:23.567 257053 DEBUG nova.compute.manager [req-be4fbbb5-b2df-4577-88c0-ee2333c63a32 req-85629f2c-ebe1-4a66-91a0-9ed718b2da40 4172bfcb2ac44ccc905f3929f41a6ec0 b5baddd982154c5d8ca5431b102a7ecd - - default default] [instance: c0990d1e-9a20-4bc5-8ee5-7f06cc9e139c] Refreshing instance network info cache due to event network-changed-b6ca0203-b551-4cae-b162-715da216fc4a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Mar  1 05:10:23 np0005634532 nova_compute[257049]: 2026-03-01 10:10:23.567 257053 DEBUG oslo_concurrency.lockutils [req-be4fbbb5-b2df-4577-88c0-ee2333c63a32 req-85629f2c-ebe1-4a66-91a0-9ed718b2da40 4172bfcb2ac44ccc905f3929f41a6ec0 b5baddd982154c5d8ca5431b102a7ecd - - default default] Acquiring lock "refresh_cache-c0990d1e-9a20-4bc5-8ee5-7f06cc9e139c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Mar  1 05:10:23 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:10:23 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:10:23 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:10:23.583 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:10:23 np0005634532 nova_compute[257049]: 2026-03-01 10:10:23.654 257053 DEBUG nova.network.neutron [None req-336fe8d9-86f8-4543-8cad-0cd5ffcc2e1e 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] [instance: c0990d1e-9a20-4bc5-8ee5-7f06cc9e139c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Mar  1 05:10:23 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-haproxy-nfs-cephfs-compute-0-wdbjdw[99156]: [WARNING] 059/101023 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 2 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Mar  1 05:10:23 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:10:23 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:10:23 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:10:23.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:10:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:10:23.883 167541 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Mar  1 05:10:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:10:23.883 167541 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Mar  1 05:10:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:10:23.883 167541 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Mar  1 05:10:24 np0005634532 nova_compute[257049]: 2026-03-01 10:10:24.703 257053 DEBUG nova.network.neutron [None req-336fe8d9-86f8-4543-8cad-0cd5ffcc2e1e 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] [instance: c0990d1e-9a20-4bc5-8ee5-7f06cc9e139c] Updating instance_info_cache with network_info: [{"id": "b6ca0203-b551-4cae-b162-715da216fc4a", "address": "fa:16:3e:22:7a:92", "network": {"id": "7e0ffeca-1584-4482-b69c-90e1af931e6d", "bridge": "br-int", "label": "tempest-network-smoke--630207128", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.27", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aa1916e2334f470ea8eeda213ef84cc5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb6ca0203-b5", "ovs_interfaceid": "b6ca0203-b551-4cae-b162-715da216fc4a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Mar  1 05:10:24 np0005634532 nova_compute[257049]: 2026-03-01 10:10:24.729 257053 DEBUG oslo_concurrency.lockutils [None req-336fe8d9-86f8-4543-8cad-0cd5ffcc2e1e 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Releasing lock "refresh_cache-c0990d1e-9a20-4bc5-8ee5-7f06cc9e139c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Mar  1 05:10:24 np0005634532 nova_compute[257049]: 2026-03-01 10:10:24.730 257053 DEBUG nova.compute.manager [None req-336fe8d9-86f8-4543-8cad-0cd5ffcc2e1e 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] [instance: c0990d1e-9a20-4bc5-8ee5-7f06cc9e139c] Instance network_info: |[{"id": "b6ca0203-b551-4cae-b162-715da216fc4a", "address": "fa:16:3e:22:7a:92", "network": {"id": "7e0ffeca-1584-4482-b69c-90e1af931e6d", "bridge": "br-int", "label": "tempest-network-smoke--630207128", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.27", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aa1916e2334f470ea8eeda213ef84cc5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb6ca0203-b5", "ovs_interfaceid": "b6ca0203-b551-4cae-b162-715da216fc4a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Mar  1 05:10:24 np0005634532 nova_compute[257049]: 2026-03-01 10:10:24.731 257053 DEBUG oslo_concurrency.lockutils [req-be4fbbb5-b2df-4577-88c0-ee2333c63a32 req-85629f2c-ebe1-4a66-91a0-9ed718b2da40 4172bfcb2ac44ccc905f3929f41a6ec0 b5baddd982154c5d8ca5431b102a7ecd - - default default] Acquired lock "refresh_cache-c0990d1e-9a20-4bc5-8ee5-7f06cc9e139c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Mar  1 05:10:24 np0005634532 nova_compute[257049]: 2026-03-01 10:10:24.732 257053 DEBUG nova.network.neutron [req-be4fbbb5-b2df-4577-88c0-ee2333c63a32 req-85629f2c-ebe1-4a66-91a0-9ed718b2da40 4172bfcb2ac44ccc905f3929f41a6ec0 b5baddd982154c5d8ca5431b102a7ecd - - default default] [instance: c0990d1e-9a20-4bc5-8ee5-7f06cc9e139c] Refreshing network info cache for port b6ca0203-b551-4cae-b162-715da216fc4a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Mar  1 05:10:24 np0005634532 nova_compute[257049]: 2026-03-01 10:10:24.737 257053 DEBUG nova.virt.libvirt.driver [None req-336fe8d9-86f8-4543-8cad-0cd5ffcc2e1e 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] [instance: c0990d1e-9a20-4bc5-8ee5-7f06cc9e139c] Start _get_guest_xml network_info=[{"id": "b6ca0203-b551-4cae-b162-715da216fc4a", "address": "fa:16:3e:22:7a:92", "network": {"id": "7e0ffeca-1584-4482-b69c-90e1af931e6d", "bridge": "br-int", "label": "tempest-network-smoke--630207128", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.27", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aa1916e2334f470ea8eeda213ef84cc5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb6ca0203-b5", "ovs_interfaceid": "b6ca0203-b551-4cae-b162-715da216fc4a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-03-01T10:04:37Z,direct_url=<?>,disk_format='qcow2',id=07f64171-cfd1-4482-a545-07063cf7c3f2,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4d09211c005246538db05e74184b7e61',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-03-01T10:04:40Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'boot_index': 0, 'guest_format': None, 'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'encryption_format': None, 'image_id': '07f64171-cfd1-4482-a545-07063cf7c3f2'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Mar  1 05:10:24 np0005634532 nova_compute[257049]: 2026-03-01 10:10:24.745 257053 WARNING nova.virt.libvirt.driver [None req-336fe8d9-86f8-4543-8cad-0cd5ffcc2e1e 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Mar  1 05:10:24 np0005634532 nova_compute[257049]: 2026-03-01 10:10:24.751 257053 DEBUG nova.virt.libvirt.host [None req-336fe8d9-86f8-4543-8cad-0cd5ffcc2e1e 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Mar  1 05:10:24 np0005634532 nova_compute[257049]: 2026-03-01 10:10:24.752 257053 DEBUG nova.virt.libvirt.host [None req-336fe8d9-86f8-4543-8cad-0cd5ffcc2e1e 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Mar  1 05:10:24 np0005634532 nova_compute[257049]: 2026-03-01 10:10:24.760 257053 DEBUG nova.virt.libvirt.host [None req-336fe8d9-86f8-4543-8cad-0cd5ffcc2e1e 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Mar  1 05:10:24 np0005634532 nova_compute[257049]: 2026-03-01 10:10:24.760 257053 DEBUG nova.virt.libvirt.host [None req-336fe8d9-86f8-4543-8cad-0cd5ffcc2e1e 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Mar  1 05:10:24 np0005634532 nova_compute[257049]: 2026-03-01 10:10:24.761 257053 DEBUG nova.virt.libvirt.driver [None req-336fe8d9-86f8-4543-8cad-0cd5ffcc2e1e 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Mar  1 05:10:24 np0005634532 nova_compute[257049]: 2026-03-01 10:10:24.761 257053 DEBUG nova.virt.hardware [None req-336fe8d9-86f8-4543-8cad-0cd5ffcc2e1e 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-03-01T10:04:35Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='47cd4c38-4c43-414c-bd62-23cc1dc66486',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-03-01T10:04:37Z,direct_url=<?>,disk_format='qcow2',id=07f64171-cfd1-4482-a545-07063cf7c3f2,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4d09211c005246538db05e74184b7e61',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-03-01T10:04:40Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Mar  1 05:10:24 np0005634532 nova_compute[257049]: 2026-03-01 10:10:24.762 257053 DEBUG nova.virt.hardware [None req-336fe8d9-86f8-4543-8cad-0cd5ffcc2e1e 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Mar  1 05:10:24 np0005634532 nova_compute[257049]: 2026-03-01 10:10:24.763 257053 DEBUG nova.virt.hardware [None req-336fe8d9-86f8-4543-8cad-0cd5ffcc2e1e 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Mar  1 05:10:24 np0005634532 nova_compute[257049]: 2026-03-01 10:10:24.763 257053 DEBUG nova.virt.hardware [None req-336fe8d9-86f8-4543-8cad-0cd5ffcc2e1e 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Mar  1 05:10:24 np0005634532 nova_compute[257049]: 2026-03-01 10:10:24.764 257053 DEBUG nova.virt.hardware [None req-336fe8d9-86f8-4543-8cad-0cd5ffcc2e1e 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Mar  1 05:10:24 np0005634532 nova_compute[257049]: 2026-03-01 10:10:24.764 257053 DEBUG nova.virt.hardware [None req-336fe8d9-86f8-4543-8cad-0cd5ffcc2e1e 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Mar  1 05:10:24 np0005634532 nova_compute[257049]: 2026-03-01 10:10:24.764 257053 DEBUG nova.virt.hardware [None req-336fe8d9-86f8-4543-8cad-0cd5ffcc2e1e 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Mar  1 05:10:24 np0005634532 nova_compute[257049]: 2026-03-01 10:10:24.765 257053 DEBUG nova.virt.hardware [None req-336fe8d9-86f8-4543-8cad-0cd5ffcc2e1e 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Mar  1 05:10:24 np0005634532 nova_compute[257049]: 2026-03-01 10:10:24.765 257053 DEBUG nova.virt.hardware [None req-336fe8d9-86f8-4543-8cad-0cd5ffcc2e1e 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Mar  1 05:10:24 np0005634532 nova_compute[257049]: 2026-03-01 10:10:24.765 257053 DEBUG nova.virt.hardware [None req-336fe8d9-86f8-4543-8cad-0cd5ffcc2e1e 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Mar  1 05:10:24 np0005634532 nova_compute[257049]: 2026-03-01 10:10:24.766 257053 DEBUG nova.virt.hardware [None req-336fe8d9-86f8-4543-8cad-0cd5ffcc2e1e 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Mar  1 05:10:24 np0005634532 nova_compute[257049]: 2026-03-01 10:10:24.771 257053 DEBUG oslo_concurrency.processutils [None req-336fe8d9-86f8-4543-8cad-0cd5ffcc2e1e 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Mar  1 05:10:24 np0005634532 nova_compute[257049]: 2026-03-01 10:10:24.935 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:10:25 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v838: 353 pgs: 353 active+clean; 167 MiB data, 323 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Mar  1 05:10:25 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Mar  1 05:10:25 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1774548360' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Mar  1 05:10:25 np0005634532 nova_compute[257049]: 2026-03-01 10:10:25.292 257053 DEBUG oslo_concurrency.processutils [None req-336fe8d9-86f8-4543-8cad-0cd5ffcc2e1e 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.521s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Mar  1 05:10:25 np0005634532 nova_compute[257049]: 2026-03-01 10:10:25.322 257053 DEBUG nova.storage.rbd_utils [None req-336fe8d9-86f8-4543-8cad-0cd5ffcc2e1e 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] rbd image c0990d1e-9a20-4bc5-8ee5-7f06cc9e139c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Mar  1 05:10:25 np0005634532 nova_compute[257049]: 2026-03-01 10:10:25.326 257053 DEBUG oslo_concurrency.processutils [None req-336fe8d9-86f8-4543-8cad-0cd5ffcc2e1e 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Mar  1 05:10:25 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:10:25 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:10:25 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:10:25.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:10:25 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Mar  1 05:10:25 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2564829884' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Mar  1 05:10:25 np0005634532 nova_compute[257049]: 2026-03-01 10:10:25.736 257053 DEBUG oslo_concurrency.processutils [None req-336fe8d9-86f8-4543-8cad-0cd5ffcc2e1e 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.410s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Mar  1 05:10:25 np0005634532 nova_compute[257049]: 2026-03-01 10:10:25.740 257053 DEBUG nova.virt.libvirt.vif [None req-336fe8d9-86f8-4543-8cad-0cd5ffcc2e1e 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-03-01T10:10:19Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-107361223',display_name='tempest-TestNetworkBasicOps-server-107361223',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-107361223',id=7,image_ref='07f64171-cfd1-4482-a545-07063cf7c3f2',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNGnRv9f8Rpo1Z1vKL2qTmTKuztkPVelmJaPG0CWGJkNdT1keURNrHkBoaiVZ0iCwWk6E9iQSe5i/05ZctbClMeti2Rw/85SJiCemfIG6Atsx/t91JwSYKQU6uqmfeGRKQ==',key_name='tempest-TestNetworkBasicOps-2055693174',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='aa1916e2334f470ea8eeda213ef84cc5',ramdisk_id='',reservation_id='r-aafzqwef',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='07f64171-cfd1-4482-a545-07063cf7c3f2',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1700707940',owner_user_name='tempest-TestNetworkBasicOps-1700707940-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-03-01T10:10:20Z,user_data=None,user_id='054b4e3fa290475c906614f7e45d128f',uuid=c0990d1e-9a20-4bc5-8ee5-7f06cc9e139c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b6ca0203-b551-4cae-b162-715da216fc4a", "address": "fa:16:3e:22:7a:92", "network": {"id": "7e0ffeca-1584-4482-b69c-90e1af931e6d", "bridge": "br-int", "label": "tempest-network-smoke--630207128", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.27", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aa1916e2334f470ea8eeda213ef84cc5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb6ca0203-b5", "ovs_interfaceid": "b6ca0203-b551-4cae-b162-715da216fc4a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Mar  1 05:10:25 np0005634532 nova_compute[257049]: 2026-03-01 10:10:25.740 257053 DEBUG nova.network.os_vif_util [None req-336fe8d9-86f8-4543-8cad-0cd5ffcc2e1e 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Converting VIF {"id": "b6ca0203-b551-4cae-b162-715da216fc4a", "address": "fa:16:3e:22:7a:92", "network": {"id": "7e0ffeca-1584-4482-b69c-90e1af931e6d", "bridge": "br-int", "label": "tempest-network-smoke--630207128", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.27", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aa1916e2334f470ea8eeda213ef84cc5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb6ca0203-b5", "ovs_interfaceid": "b6ca0203-b551-4cae-b162-715da216fc4a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Mar  1 05:10:25 np0005634532 nova_compute[257049]: 2026-03-01 10:10:25.742 257053 DEBUG nova.network.os_vif_util [None req-336fe8d9-86f8-4543-8cad-0cd5ffcc2e1e 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:22:7a:92,bridge_name='br-int',has_traffic_filtering=True,id=b6ca0203-b551-4cae-b162-715da216fc4a,network=Network(7e0ffeca-1584-4482-b69c-90e1af931e6d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb6ca0203-b5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Mar  1 05:10:25 np0005634532 nova_compute[257049]: 2026-03-01 10:10:25.745 257053 DEBUG nova.objects.instance [None req-336fe8d9-86f8-4543-8cad-0cd5ffcc2e1e 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Lazy-loading 'pci_devices' on Instance uuid c0990d1e-9a20-4bc5-8ee5-7f06cc9e139c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Mar  1 05:10:25 np0005634532 nova_compute[257049]: 2026-03-01 10:10:25.784 257053 DEBUG nova.virt.libvirt.driver [None req-336fe8d9-86f8-4543-8cad-0cd5ffcc2e1e 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] [instance: c0990d1e-9a20-4bc5-8ee5-7f06cc9e139c] End _get_guest_xml xml=<domain type="kvm">
Mar  1 05:10:25 np0005634532 nova_compute[257049]:  <uuid>c0990d1e-9a20-4bc5-8ee5-7f06cc9e139c</uuid>
Mar  1 05:10:25 np0005634532 nova_compute[257049]:  <name>instance-00000007</name>
Mar  1 05:10:25 np0005634532 nova_compute[257049]:  <memory>131072</memory>
Mar  1 05:10:25 np0005634532 nova_compute[257049]:  <vcpu>1</vcpu>
Mar  1 05:10:25 np0005634532 nova_compute[257049]:  <metadata>
Mar  1 05:10:25 np0005634532 nova_compute[257049]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Mar  1 05:10:25 np0005634532 nova_compute[257049]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Mar  1 05:10:25 np0005634532 nova_compute[257049]:      <nova:name>tempest-TestNetworkBasicOps-server-107361223</nova:name>
Mar  1 05:10:25 np0005634532 nova_compute[257049]:      <nova:creationTime>2026-03-01 10:10:24</nova:creationTime>
Mar  1 05:10:25 np0005634532 nova_compute[257049]:      <nova:flavor name="m1.nano">
Mar  1 05:10:25 np0005634532 nova_compute[257049]:        <nova:memory>128</nova:memory>
Mar  1 05:10:25 np0005634532 nova_compute[257049]:        <nova:disk>1</nova:disk>
Mar  1 05:10:25 np0005634532 nova_compute[257049]:        <nova:swap>0</nova:swap>
Mar  1 05:10:25 np0005634532 nova_compute[257049]:        <nova:ephemeral>0</nova:ephemeral>
Mar  1 05:10:25 np0005634532 nova_compute[257049]:        <nova:vcpus>1</nova:vcpus>
Mar  1 05:10:25 np0005634532 nova_compute[257049]:      </nova:flavor>
Mar  1 05:10:25 np0005634532 nova_compute[257049]:      <nova:owner>
Mar  1 05:10:25 np0005634532 nova_compute[257049]:        <nova:user uuid="054b4e3fa290475c906614f7e45d128f">tempest-TestNetworkBasicOps-1700707940-project-member</nova:user>
Mar  1 05:10:25 np0005634532 nova_compute[257049]:        <nova:project uuid="aa1916e2334f470ea8eeda213ef84cc5">tempest-TestNetworkBasicOps-1700707940</nova:project>
Mar  1 05:10:25 np0005634532 nova_compute[257049]:      </nova:owner>
Mar  1 05:10:25 np0005634532 nova_compute[257049]:      <nova:root type="image" uuid="07f64171-cfd1-4482-a545-07063cf7c3f2"/>
Mar  1 05:10:25 np0005634532 nova_compute[257049]:      <nova:ports>
Mar  1 05:10:25 np0005634532 nova_compute[257049]:        <nova:port uuid="b6ca0203-b551-4cae-b162-715da216fc4a">
Mar  1 05:10:25 np0005634532 nova_compute[257049]:          <nova:ip type="fixed" address="10.100.0.27" ipVersion="4"/>
Mar  1 05:10:25 np0005634532 nova_compute[257049]:        </nova:port>
Mar  1 05:10:25 np0005634532 nova_compute[257049]:      </nova:ports>
Mar  1 05:10:25 np0005634532 nova_compute[257049]:    </nova:instance>
Mar  1 05:10:25 np0005634532 nova_compute[257049]:  </metadata>
Mar  1 05:10:25 np0005634532 nova_compute[257049]:  <sysinfo type="smbios">
Mar  1 05:10:25 np0005634532 nova_compute[257049]:    <system>
Mar  1 05:10:25 np0005634532 nova_compute[257049]:      <entry name="manufacturer">RDO</entry>
Mar  1 05:10:25 np0005634532 nova_compute[257049]:      <entry name="product">OpenStack Compute</entry>
Mar  1 05:10:25 np0005634532 nova_compute[257049]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Mar  1 05:10:25 np0005634532 nova_compute[257049]:      <entry name="serial">c0990d1e-9a20-4bc5-8ee5-7f06cc9e139c</entry>
Mar  1 05:10:25 np0005634532 nova_compute[257049]:      <entry name="uuid">c0990d1e-9a20-4bc5-8ee5-7f06cc9e139c</entry>
Mar  1 05:10:25 np0005634532 nova_compute[257049]:      <entry name="family">Virtual Machine</entry>
Mar  1 05:10:25 np0005634532 nova_compute[257049]:    </system>
Mar  1 05:10:25 np0005634532 nova_compute[257049]:  </sysinfo>
Mar  1 05:10:25 np0005634532 nova_compute[257049]:  <os>
Mar  1 05:10:25 np0005634532 nova_compute[257049]:    <type arch="x86_64" machine="q35">hvm</type>
Mar  1 05:10:25 np0005634532 nova_compute[257049]:    <boot dev="hd"/>
Mar  1 05:10:25 np0005634532 nova_compute[257049]:    <smbios mode="sysinfo"/>
Mar  1 05:10:25 np0005634532 nova_compute[257049]:  </os>
Mar  1 05:10:25 np0005634532 nova_compute[257049]:  <features>
Mar  1 05:10:25 np0005634532 nova_compute[257049]:    <acpi/>
Mar  1 05:10:25 np0005634532 nova_compute[257049]:    <apic/>
Mar  1 05:10:25 np0005634532 nova_compute[257049]:    <vmcoreinfo/>
Mar  1 05:10:25 np0005634532 nova_compute[257049]:  </features>
Mar  1 05:10:25 np0005634532 nova_compute[257049]:  <clock offset="utc">
Mar  1 05:10:25 np0005634532 nova_compute[257049]:    <timer name="pit" tickpolicy="delay"/>
Mar  1 05:10:25 np0005634532 nova_compute[257049]:    <timer name="rtc" tickpolicy="catchup"/>
Mar  1 05:10:25 np0005634532 nova_compute[257049]:    <timer name="hpet" present="no"/>
Mar  1 05:10:25 np0005634532 nova_compute[257049]:  </clock>
Mar  1 05:10:25 np0005634532 nova_compute[257049]:  <cpu mode="host-model" match="exact">
Mar  1 05:10:25 np0005634532 nova_compute[257049]:    <topology sockets="1" cores="1" threads="1"/>
Mar  1 05:10:25 np0005634532 nova_compute[257049]:  </cpu>
Mar  1 05:10:25 np0005634532 nova_compute[257049]:  <devices>
Mar  1 05:10:25 np0005634532 nova_compute[257049]:    <disk type="network" device="disk">
Mar  1 05:10:25 np0005634532 nova_compute[257049]:      <driver type="raw" cache="none"/>
Mar  1 05:10:25 np0005634532 nova_compute[257049]:      <source protocol="rbd" name="vms/c0990d1e-9a20-4bc5-8ee5-7f06cc9e139c_disk">
Mar  1 05:10:25 np0005634532 nova_compute[257049]:        <host name="192.168.122.100" port="6789"/>
Mar  1 05:10:25 np0005634532 nova_compute[257049]:        <host name="192.168.122.102" port="6789"/>
Mar  1 05:10:25 np0005634532 nova_compute[257049]:        <host name="192.168.122.101" port="6789"/>
Mar  1 05:10:25 np0005634532 nova_compute[257049]:      </source>
Mar  1 05:10:25 np0005634532 nova_compute[257049]:      <auth username="openstack">
Mar  1 05:10:25 np0005634532 nova_compute[257049]:        <secret type="ceph" uuid="437b1e74-f995-5d64-af1d-257ce01d77ab"/>
Mar  1 05:10:25 np0005634532 nova_compute[257049]:      </auth>
Mar  1 05:10:25 np0005634532 nova_compute[257049]:      <target dev="vda" bus="virtio"/>
Mar  1 05:10:25 np0005634532 nova_compute[257049]:    </disk>
Mar  1 05:10:25 np0005634532 nova_compute[257049]:    <disk type="network" device="cdrom">
Mar  1 05:10:25 np0005634532 nova_compute[257049]:      <driver type="raw" cache="none"/>
Mar  1 05:10:25 np0005634532 nova_compute[257049]:      <source protocol="rbd" name="vms/c0990d1e-9a20-4bc5-8ee5-7f06cc9e139c_disk.config">
Mar  1 05:10:25 np0005634532 nova_compute[257049]:        <host name="192.168.122.100" port="6789"/>
Mar  1 05:10:25 np0005634532 nova_compute[257049]:        <host name="192.168.122.102" port="6789"/>
Mar  1 05:10:25 np0005634532 nova_compute[257049]:        <host name="192.168.122.101" port="6789"/>
Mar  1 05:10:25 np0005634532 nova_compute[257049]:      </source>
Mar  1 05:10:25 np0005634532 nova_compute[257049]:      <auth username="openstack">
Mar  1 05:10:25 np0005634532 nova_compute[257049]:        <secret type="ceph" uuid="437b1e74-f995-5d64-af1d-257ce01d77ab"/>
Mar  1 05:10:25 np0005634532 nova_compute[257049]:      </auth>
Mar  1 05:10:25 np0005634532 nova_compute[257049]:      <target dev="sda" bus="sata"/>
Mar  1 05:10:25 np0005634532 nova_compute[257049]:    </disk>
Mar  1 05:10:25 np0005634532 nova_compute[257049]:    <interface type="ethernet">
Mar  1 05:10:25 np0005634532 nova_compute[257049]:      <mac address="fa:16:3e:22:7a:92"/>
Mar  1 05:10:25 np0005634532 nova_compute[257049]:      <model type="virtio"/>
Mar  1 05:10:25 np0005634532 nova_compute[257049]:      <driver name="vhost" rx_queue_size="512"/>
Mar  1 05:10:25 np0005634532 nova_compute[257049]:      <mtu size="1442"/>
Mar  1 05:10:25 np0005634532 nova_compute[257049]:      <target dev="tapb6ca0203-b5"/>
Mar  1 05:10:25 np0005634532 nova_compute[257049]:    </interface>
Mar  1 05:10:25 np0005634532 nova_compute[257049]:    <serial type="pty">
Mar  1 05:10:25 np0005634532 nova_compute[257049]:      <log file="/var/lib/nova/instances/c0990d1e-9a20-4bc5-8ee5-7f06cc9e139c/console.log" append="off"/>
Mar  1 05:10:25 np0005634532 nova_compute[257049]:    </serial>
Mar  1 05:10:25 np0005634532 nova_compute[257049]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Mar  1 05:10:25 np0005634532 nova_compute[257049]:    <video>
Mar  1 05:10:25 np0005634532 nova_compute[257049]:      <model type="virtio"/>
Mar  1 05:10:25 np0005634532 nova_compute[257049]:    </video>
Mar  1 05:10:25 np0005634532 nova_compute[257049]:    <input type="tablet" bus="usb"/>
Mar  1 05:10:25 np0005634532 nova_compute[257049]:    <rng model="virtio">
Mar  1 05:10:25 np0005634532 nova_compute[257049]:      <backend model="random">/dev/urandom</backend>
Mar  1 05:10:25 np0005634532 nova_compute[257049]:    </rng>
Mar  1 05:10:25 np0005634532 nova_compute[257049]:    <controller type="pci" model="pcie-root"/>
Mar  1 05:10:25 np0005634532 nova_compute[257049]:    <controller type="pci" model="pcie-root-port"/>
Mar  1 05:10:25 np0005634532 nova_compute[257049]:    <controller type="pci" model="pcie-root-port"/>
Mar  1 05:10:25 np0005634532 nova_compute[257049]:    <controller type="pci" model="pcie-root-port"/>
Mar  1 05:10:25 np0005634532 nova_compute[257049]:    <controller type="pci" model="pcie-root-port"/>
Mar  1 05:10:25 np0005634532 nova_compute[257049]:    <controller type="pci" model="pcie-root-port"/>
Mar  1 05:10:25 np0005634532 nova_compute[257049]:    <controller type="pci" model="pcie-root-port"/>
Mar  1 05:10:25 np0005634532 nova_compute[257049]:    <controller type="pci" model="pcie-root-port"/>
Mar  1 05:10:25 np0005634532 nova_compute[257049]:    <controller type="pci" model="pcie-root-port"/>
Mar  1 05:10:25 np0005634532 nova_compute[257049]:    <controller type="pci" model="pcie-root-port"/>
Mar  1 05:10:25 np0005634532 nova_compute[257049]:    <controller type="pci" model="pcie-root-port"/>
Mar  1 05:10:25 np0005634532 nova_compute[257049]:    <controller type="pci" model="pcie-root-port"/>
Mar  1 05:10:25 np0005634532 nova_compute[257049]:    <controller type="pci" model="pcie-root-port"/>
Mar  1 05:10:25 np0005634532 nova_compute[257049]:    <controller type="pci" model="pcie-root-port"/>
Mar  1 05:10:25 np0005634532 nova_compute[257049]:    <controller type="pci" model="pcie-root-port"/>
Mar  1 05:10:25 np0005634532 nova_compute[257049]:    <controller type="pci" model="pcie-root-port"/>
Mar  1 05:10:25 np0005634532 nova_compute[257049]:    <controller type="pci" model="pcie-root-port"/>
Mar  1 05:10:25 np0005634532 nova_compute[257049]:    <controller type="pci" model="pcie-root-port"/>
Mar  1 05:10:25 np0005634532 nova_compute[257049]:    <controller type="pci" model="pcie-root-port"/>
Mar  1 05:10:25 np0005634532 nova_compute[257049]:    <controller type="pci" model="pcie-root-port"/>
Mar  1 05:10:25 np0005634532 nova_compute[257049]:    <controller type="pci" model="pcie-root-port"/>
Mar  1 05:10:25 np0005634532 nova_compute[257049]:    <controller type="pci" model="pcie-root-port"/>
Mar  1 05:10:25 np0005634532 nova_compute[257049]:    <controller type="pci" model="pcie-root-port"/>
Mar  1 05:10:25 np0005634532 nova_compute[257049]:    <controller type="pci" model="pcie-root-port"/>
Mar  1 05:10:25 np0005634532 nova_compute[257049]:    <controller type="pci" model="pcie-root-port"/>
Mar  1 05:10:25 np0005634532 nova_compute[257049]:    <controller type="usb" index="0"/>
Mar  1 05:10:25 np0005634532 nova_compute[257049]:    <memballoon model="virtio">
Mar  1 05:10:25 np0005634532 nova_compute[257049]:      <stats period="10"/>
Mar  1 05:10:25 np0005634532 nova_compute[257049]:    </memballoon>
Mar  1 05:10:25 np0005634532 nova_compute[257049]:  </devices>
Mar  1 05:10:25 np0005634532 nova_compute[257049]: </domain>
Mar  1 05:10:25 np0005634532 nova_compute[257049]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Mar  1 05:10:25 np0005634532 nova_compute[257049]: 2026-03-01 10:10:25.785 257053 DEBUG nova.compute.manager [None req-336fe8d9-86f8-4543-8cad-0cd5ffcc2e1e 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] [instance: c0990d1e-9a20-4bc5-8ee5-7f06cc9e139c] Preparing to wait for external event network-vif-plugged-b6ca0203-b551-4cae-b162-715da216fc4a prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Mar  1 05:10:25 np0005634532 nova_compute[257049]: 2026-03-01 10:10:25.785 257053 DEBUG oslo_concurrency.lockutils [None req-336fe8d9-86f8-4543-8cad-0cd5ffcc2e1e 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Acquiring lock "c0990d1e-9a20-4bc5-8ee5-7f06cc9e139c-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Mar  1 05:10:25 np0005634532 nova_compute[257049]: 2026-03-01 10:10:25.786 257053 DEBUG oslo_concurrency.lockutils [None req-336fe8d9-86f8-4543-8cad-0cd5ffcc2e1e 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Lock "c0990d1e-9a20-4bc5-8ee5-7f06cc9e139c-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Mar  1 05:10:25 np0005634532 nova_compute[257049]: 2026-03-01 10:10:25.786 257053 DEBUG oslo_concurrency.lockutils [None req-336fe8d9-86f8-4543-8cad-0cd5ffcc2e1e 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Lock "c0990d1e-9a20-4bc5-8ee5-7f06cc9e139c-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
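Note the ordering here: the compute manager registers its waiter for network-vif-plugged before it actually plugs the VIF, so Neutron's callback cannot arrive unobserved, and the per-instance "-events" lock makes the create-or-get of the event record atomic. A rough sketch of the same oslo.concurrency pattern (function and registry names hypothetical, not Nova's actual code):

    from oslo_concurrency import lockutils

    def prepare_for_event(events, instance_uuid, event_name):
        # Create-or-get the waiter exactly once per (instance, event),
        # serialized by a named lock like the one seen in the log above.
        with lockutils.lock(f'{instance_uuid}-events'):
            return events.setdefault((instance_uuid, event_name), object())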
Mar  1 05:10:25 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:10:25 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:10:25 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:10:25.786 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:10:25 np0005634532 nova_compute[257049]: 2026-03-01 10:10:25.788 257053 DEBUG nova.virt.libvirt.vif [None req-336fe8d9-86f8-4543-8cad-0cd5ffcc2e1e 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-03-01T10:10:19Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-107361223',display_name='tempest-TestNetworkBasicOps-server-107361223',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-107361223',id=7,image_ref='07f64171-cfd1-4482-a545-07063cf7c3f2',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNGnRv9f8Rpo1Z1vKL2qTmTKuztkPVelmJaPG0CWGJkNdT1keURNrHkBoaiVZ0iCwWk6E9iQSe5i/05ZctbClMeti2Rw/85SJiCemfIG6Atsx/t91JwSYKQU6uqmfeGRKQ==',key_name='tempest-TestNetworkBasicOps-2055693174',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='aa1916e2334f470ea8eeda213ef84cc5',ramdisk_id='',reservation_id='r-aafzqwef',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='07f64171-cfd1-4482-a545-07063cf7c3f2',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1700707940',owner_user_name='tempest-TestNetworkBasicOps-1700707940-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-03-01T10:10:20Z,user_data=None,user_id='054b4e3fa290475c906614f7e45d128f',uuid=c0990d1e-9a20-4bc5-8ee5-7f06cc9e139c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b6ca0203-b551-4cae-b162-715da216fc4a", "address": "fa:16:3e:22:7a:92", "network": {"id": "7e0ffeca-1584-4482-b69c-90e1af931e6d", "bridge": "br-int", "label": "tempest-network-smoke--630207128", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.27", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aa1916e2334f470ea8eeda213ef84cc5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb6ca0203-b5", "ovs_interfaceid": "b6ca0203-b551-4cae-b162-715da216fc4a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Mar  1 05:10:25 np0005634532 nova_compute[257049]: 2026-03-01 10:10:25.788 257053 DEBUG nova.network.os_vif_util [None req-336fe8d9-86f8-4543-8cad-0cd5ffcc2e1e 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Converting VIF {"id": "b6ca0203-b551-4cae-b162-715da216fc4a", "address": "fa:16:3e:22:7a:92", "network": {"id": "7e0ffeca-1584-4482-b69c-90e1af931e6d", "bridge": "br-int", "label": "tempest-network-smoke--630207128", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.27", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aa1916e2334f470ea8eeda213ef84cc5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb6ca0203-b5", "ovs_interfaceid": "b6ca0203-b551-4cae-b162-715da216fc4a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Mar  1 05:10:25 np0005634532 nova_compute[257049]: 2026-03-01 10:10:25.789 257053 DEBUG nova.network.os_vif_util [None req-336fe8d9-86f8-4543-8cad-0cd5ffcc2e1e 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:22:7a:92,bridge_name='br-int',has_traffic_filtering=True,id=b6ca0203-b551-4cae-b162-715da216fc4a,network=Network(7e0ffeca-1584-4482-b69c-90e1af931e6d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb6ca0203-b5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Mar  1 05:10:25 np0005634532 nova_compute[257049]: 2026-03-01 10:10:25.789 257053 DEBUG os_vif [None req-336fe8d9-86f8-4543-8cad-0cd5ffcc2e1e 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:22:7a:92,bridge_name='br-int',has_traffic_filtering=True,id=b6ca0203-b551-4cae-b162-715da216fc4a,network=Network(7e0ffeca-1584-4482-b69c-90e1af931e6d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb6ca0203-b5') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Mar  1 05:10:25 np0005634532 nova_compute[257049]: 2026-03-01 10:10:25.791 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:10:25 np0005634532 nova_compute[257049]: 2026-03-01 10:10:25.791 257053 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Mar  1 05:10:25 np0005634532 nova_compute[257049]: 2026-03-01 10:10:25.792 257053 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Mar  1 05:10:25 np0005634532 nova_compute[257049]: 2026-03-01 10:10:25.797 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:10:25 np0005634532 nova_compute[257049]: 2026-03-01 10:10:25.797 257053 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb6ca0203-b5, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Mar  1 05:10:25 np0005634532 nova_compute[257049]: 2026-03-01 10:10:25.798 257053 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb6ca0203-b5, col_values=(('external_ids', {'iface-id': 'b6ca0203-b551-4cae-b162-715da216fc4a', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:22:7a:92', 'vm-uuid': 'c0990d1e-9a20-4bc5-8ee5-7f06cc9e139c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Mar  1 05:10:25 np0005634532 NetworkManager[49996]: <info>  [1772359825.8011] manager: (tapb6ca0203-b5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/31)
Mar  1 05:10:25 np0005634532 nova_compute[257049]: 2026-03-01 10:10:25.800 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:10:25 np0005634532 nova_compute[257049]: 2026-03-01 10:10:25.805 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Mar  1 05:10:25 np0005634532 nova_compute[257049]: 2026-03-01 10:10:25.809 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:10:25 np0005634532 nova_compute[257049]: 2026-03-01 10:10:25.810 257053 INFO os_vif [None req-336fe8d9-86f8-4543-8cad-0cd5ffcc2e1e 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:22:7a:92,bridge_name='br-int',has_traffic_filtering=True,id=b6ca0203-b551-4cae-b162-715da216fc4a,network=Network(7e0ffeca-1584-4482-b69c-90e1af931e6d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb6ca0203-b5')#033[00m
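The plug itself is just two OVSDB transactions against the local switch: an idempotent AddBridgeCommand for br-int (a no-op here, "Transaction caused no change") and then AddPortCommand plus a DbSetCommand stamping the Interface's external_ids with the Neutron port UUID, which is the key ovn-controller matches on further down. A roughly equivalent sketch with ovsdbapp, assuming the default local ovsdb-server socket path:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server(
        'unix:/run/openvswitch/db.sock', 'Open_vSwitch')  # assumed socket path
    api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))

    # Mirror the AddPortCommand + DbSetCommand pair from the log above.
    with api.transaction(check_error=True) as txn:
        txn.add(api.add_port('br-int', 'tapb6ca0203-b5', may_exist=True))
        txn.add(api.db_set(
            'Interface', 'tapb6ca0203-b5',
            ('external_ids', {
                'iface-id': 'b6ca0203-b551-4cae-b162-715da216fc4a',
                'attached-mac': 'fa:16:3e:22:7a:92'})))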
Mar  1 05:10:25 np0005634532 nova_compute[257049]: 2026-03-01 10:10:25.856 257053 DEBUG nova.virt.libvirt.driver [None req-336fe8d9-86f8-4543-8cad-0cd5ffcc2e1e 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Mar  1 05:10:25 np0005634532 nova_compute[257049]: 2026-03-01 10:10:25.856 257053 DEBUG nova.virt.libvirt.driver [None req-336fe8d9-86f8-4543-8cad-0cd5ffcc2e1e 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Mar  1 05:10:25 np0005634532 nova_compute[257049]: 2026-03-01 10:10:25.857 257053 DEBUG nova.virt.libvirt.driver [None req-336fe8d9-86f8-4543-8cad-0cd5ffcc2e1e 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] No VIF found with MAC fa:16:3e:22:7a:92, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Mar  1 05:10:25 np0005634532 nova_compute[257049]: 2026-03-01 10:10:25.858 257053 INFO nova.virt.libvirt.driver [None req-336fe8d9-86f8-4543-8cad-0cd5ffcc2e1e 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] [instance: c0990d1e-9a20-4bc5-8ee5-7f06cc9e139c] Using config drive#033[00m
Mar  1 05:10:25 np0005634532 nova_compute[257049]: 2026-03-01 10:10:25.899 257053 DEBUG nova.storage.rbd_utils [None req-336fe8d9-86f8-4543-8cad-0cd5ffcc2e1e 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] rbd image c0990d1e-9a20-4bc5-8ee5-7f06cc9e139c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Mar  1 05:10:26 np0005634532 nova_compute[257049]: 2026-03-01 10:10:26.468 257053 DEBUG nova.network.neutron [req-be4fbbb5-b2df-4577-88c0-ee2333c63a32 req-85629f2c-ebe1-4a66-91a0-9ed718b2da40 4172bfcb2ac44ccc905f3929f41a6ec0 b5baddd982154c5d8ca5431b102a7ecd - - default default] [instance: c0990d1e-9a20-4bc5-8ee5-7f06cc9e139c] Updated VIF entry in instance network info cache for port b6ca0203-b551-4cae-b162-715da216fc4a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Mar  1 05:10:26 np0005634532 nova_compute[257049]: 2026-03-01 10:10:26.469 257053 DEBUG nova.network.neutron [req-be4fbbb5-b2df-4577-88c0-ee2333c63a32 req-85629f2c-ebe1-4a66-91a0-9ed718b2da40 4172bfcb2ac44ccc905f3929f41a6ec0 b5baddd982154c5d8ca5431b102a7ecd - - default default] [instance: c0990d1e-9a20-4bc5-8ee5-7f06cc9e139c] Updating instance_info_cache with network_info: [{"id": "b6ca0203-b551-4cae-b162-715da216fc4a", "address": "fa:16:3e:22:7a:92", "network": {"id": "7e0ffeca-1584-4482-b69c-90e1af931e6d", "bridge": "br-int", "label": "tempest-network-smoke--630207128", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.27", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aa1916e2334f470ea8eeda213ef84cc5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb6ca0203-b5", "ovs_interfaceid": "b6ca0203-b551-4cae-b162-715da216fc4a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Mar  1 05:10:26 np0005634532 nova_compute[257049]: 2026-03-01 10:10:26.487 257053 DEBUG oslo_concurrency.lockutils [req-be4fbbb5-b2df-4577-88c0-ee2333c63a32 req-85629f2c-ebe1-4a66-91a0-9ed718b2da40 4172bfcb2ac44ccc905f3929f41a6ec0 b5baddd982154c5d8ca5431b102a7ecd - - default default] Releasing lock "refresh_cache-c0990d1e-9a20-4bc5-8ee5-7f06cc9e139c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Mar  1 05:10:26 np0005634532 nova_compute[257049]: 2026-03-01 10:10:26.589 257053 INFO nova.virt.libvirt.driver [None req-336fe8d9-86f8-4543-8cad-0cd5ffcc2e1e 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] [instance: c0990d1e-9a20-4bc5-8ee5-7f06cc9e139c] Creating config drive at /var/lib/nova/instances/c0990d1e-9a20-4bc5-8ee5-7f06cc9e139c/disk.config#033[00m
Mar  1 05:10:26 np0005634532 nova_compute[257049]: 2026-03-01 10:10:26.594 257053 DEBUG oslo_concurrency.processutils [None req-336fe8d9-86f8-4543-8cad-0cd5ffcc2e1e 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/c0990d1e-9a20-4bc5-8ee5-7f06cc9e139c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpbth_5iln execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Mar  1 05:10:26 np0005634532 nova_compute[257049]: 2026-03-01 10:10:26.724 257053 DEBUG oslo_concurrency.processutils [None req-336fe8d9-86f8-4543-8cad-0cd5ffcc2e1e 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/c0990d1e-9a20-4bc5-8ee5-7f06cc9e139c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmpbth_5iln" returned: 0 in 0.130s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Mar  1 05:10:26 np0005634532 nova_compute[257049]: 2026-03-01 10:10:26.761 257053 DEBUG nova.storage.rbd_utils [None req-336fe8d9-86f8-4543-8cad-0cd5ffcc2e1e 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] rbd image c0990d1e-9a20-4bc5-8ee5-7f06cc9e139c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Mar  1 05:10:26 np0005634532 nova_compute[257049]: 2026-03-01 10:10:26.765 257053 DEBUG oslo_concurrency.processutils [None req-336fe8d9-86f8-4543-8cad-0cd5ffcc2e1e 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/c0990d1e-9a20-4bc5-8ee5-7f06cc9e139c/disk.config c0990d1e-9a20-4bc5-8ee5-7f06cc9e139c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Mar  1 05:10:26 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:10:27 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: ::ffff:192.168.122.100 - - [01/Mar/2026:10:10:27] "GET /metrics HTTP/1.1" 200 48473 "" "Prometheus/2.51.0"
Mar  1 05:10:27 np0005634532 ceph-mgr[76134]: [prometheus INFO cherrypy.access.140604152982112] ::ffff:192.168.122.100 - - [01/Mar/2026:10:10:27] "GET /metrics HTTP/1.1" 200 48473 "" "Prometheus/2.51.0"
Mar  1 05:10:27 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v839: 353 pgs: 353 active+clean; 167 MiB data, 323 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Mar  1 05:10:27 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:10:27.229Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Mar  1 05:10:27 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:10:27.230Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
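Unrelated to the instance build, the co-located Alertmanager is failing to deliver alerts to the Ceph dashboard webhook receivers on compute-1 and compute-2 (TCP 8443 dials time out). A minimal reachability probe that mirrors the failing dial, as a sketch:

    import socket

    # Same endpoint the alertmanager line above reports as unreachable.
    try:
        socket.create_connection(('192.168.122.102', 8443), timeout=3).close()
        print('webhook endpoint reachable')
    except OSError as exc:
        print('webhook endpoint unreachable:', exc)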
Mar  1 05:10:27 np0005634532 nova_compute[257049]: 2026-03-01 10:10:27.461 257053 DEBUG oslo_concurrency.processutils [None req-336fe8d9-86f8-4543-8cad-0cd5ffcc2e1e 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/c0990d1e-9a20-4bc5-8ee5-7f06cc9e139c/disk.config c0990d1e-9a20-4bc5-8ee5-7f06cc9e139c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.696s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Mar  1 05:10:27 np0005634532 nova_compute[257049]: 2026-03-01 10:10:27.463 257053 INFO nova.virt.libvirt.driver [None req-336fe8d9-86f8-4543-8cad-0cd5ffcc2e1e 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] [instance: c0990d1e-9a20-4bc5-8ee5-7f06cc9e139c] Deleting local config drive /var/lib/nova/instances/c0990d1e-9a20-4bc5-8ee5-7f06cc9e139c/disk.config because it was imported into RBD.#033[00m
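The config drive is built locally with mkisofs, imported into the 'vms' pool as c0990d1e-9a20-4bc5-8ee5-7f06cc9e139c_disk.config, and the local copy is then removed. A hedged check that the image landed, using the python-rbd bindings with the same client id the import used:

    import rados
    import rbd

    # Connects as client.openstack, matching the --id openstack in the log.
    with rados.Rados(conffile='/etc/ceph/ceph.conf', rados_id='openstack') as cluster:
        with cluster.open_ioctx('vms') as ioctx:
            images = rbd.RBD().list(ioctx)
            print([name for name in images if name.endswith('_disk.config')])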
Mar  1 05:10:27 np0005634532 systemd[1]: Starting libvirt secret daemon...
Mar  1 05:10:27 np0005634532 systemd[1]: Started libvirt secret daemon.
Mar  1 05:10:27 np0005634532 kernel: tapb6ca0203-b5: entered promiscuous mode
Mar  1 05:10:27 np0005634532 ovn_controller[157082]: 2026-03-01T10:10:27Z|00037|binding|INFO|Claiming lport b6ca0203-b551-4cae-b162-715da216fc4a for this chassis.
Mar  1 05:10:27 np0005634532 ovn_controller[157082]: 2026-03-01T10:10:27Z|00038|binding|INFO|b6ca0203-b551-4cae-b162-715da216fc4a: Claiming fa:16:3e:22:7a:92 10.100.0.27
Mar  1 05:10:27 np0005634532 nova_compute[257049]: 2026-03-01 10:10:27.562 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:10:27 np0005634532 NetworkManager[49996]: <info>  [1772359827.5646] manager: (tapb6ca0203-b5): new Tun device (/org/freedesktop/NetworkManager/Devices/32)
Mar  1 05:10:27 np0005634532 nova_compute[257049]: 2026-03-01 10:10:27.568 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:10:27 np0005634532 nova_compute[257049]: 2026-03-01 10:10:27.569 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:10:27 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:10:27.576 167541 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:22:7a:92 10.100.0.27'], port_security=['fa:16:3e:22:7a:92 10.100.0.27'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.27/28', 'neutron:device_id': 'c0990d1e-9a20-4bc5-8ee5-7f06cc9e139c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7e0ffeca-1584-4482-b69c-90e1af931e6d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'aa1916e2334f470ea8eeda213ef84cc5', 'neutron:revision_number': '2', 'neutron:security_group_ids': '3a7ebc37-7074-4152-ab9b-7f7a14d43ed4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=05e76e2a-a17a-46ce-8cc1-0042eb93d617, chassis=[<ovs.db.idl.Row object at 0x7f611def4670>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f611def4670>], logical_port=b6ca0203-b551-4cae-b162-715da216fc4a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Mar  1 05:10:27 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:10:27.578 167541 INFO neutron.agent.ovn.metadata.agent [-] Port b6ca0203-b551-4cae-b162-715da216fc4a in datapath 7e0ffeca-1584-4482-b69c-90e1af931e6d bound to our chassis#033[00m
Mar  1 05:10:27 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:10:27.581 167541 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7e0ffeca-1584-4482-b69c-90e1af931e6d#033[00m
Mar  1 05:10:27 np0005634532 nova_compute[257049]: 2026-03-01 10:10:27.585 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:10:27 np0005634532 ovn_controller[157082]: 2026-03-01T10:10:27Z|00039|binding|INFO|Setting lport b6ca0203-b551-4cae-b162-715da216fc4a ovn-installed in OVS
Mar  1 05:10:27 np0005634532 ovn_controller[157082]: 2026-03-01T10:10:27Z|00040|binding|INFO|Setting lport b6ca0203-b551-4cae-b162-715da216fc4a up in Southbound
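With the tap device in place, ovn-controller claims the logical port for this chassis, marks it ovn-installed in OVS, and flips it up in the Southbound database; that up transition is what ultimately lets Neutron emit the network-vif-plugged event Nova is waiting on. A sketch for watching the same state from the SB side (the SB endpoint is assumed, it is not shown in this log):

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.ovn_southbound import impl_idl

    SB = 'tcp:192.168.122.100:6642'   # assumed OVN Southbound address
    idl = connection.OvsdbIdl.from_server(SB, 'OVN_Southbound')
    sb = impl_idl.OvnSbApiIdlImpl(connection.Connection(idl=idl, timeout=10))
    for row in sb.db_list('Port_Binding',
                          columns=['logical_port', 'up']).execute(check_error=True):
        print(row['logical_port'], row['up'])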
Mar  1 05:10:27 np0005634532 nova_compute[257049]: 2026-03-01 10:10:27.586 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:10:27 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:10:27 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:10:27 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:10:27.590 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:10:27 np0005634532 systemd-machined[221390]: New machine qemu-2-instance-00000007.
Mar  1 05:10:27 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:10:27.591 262878 DEBUG oslo.privsep.daemon [-] privsep: reply[7625acd9-b6df-4955-82e4-7717d21fa868]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Mar  1 05:10:27 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:10:27.592 167541 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap7e0ffeca-11 in ovnmeta-7e0ffeca-1584-4482-b69c-90e1af931e6d namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Mar  1 05:10:27 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:10:27.595 262878 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap7e0ffeca-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Mar  1 05:10:27 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:10:27.595 262878 DEBUG oslo.privsep.daemon [-] privsep: reply[67f6d330-596a-457a-9e4f-1643e7649722]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Mar  1 05:10:27 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:10:27.596 262878 DEBUG oslo.privsep.daemon [-] privsep: reply[0743f944-0927-4fad-aac6-cea597f8a623]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Mar  1 05:10:27 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:10:27.606 167914 DEBUG oslo.privsep.daemon [-] privsep: reply[5688063b-7564-4ee7-84fa-ab8de45c516d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Mar  1 05:10:27 np0005634532 systemd[1]: Started Virtual Machine qemu-2-instance-00000007.
Mar  1 05:10:27 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:10:27.620 262878 DEBUG oslo.privsep.daemon [-] privsep: reply[215c7cf5-39dd-415d-b3c1-6a4fc1e62073]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Mar  1 05:10:27 np0005634532 systemd-udevd[268774]: Network interface NamePolicy= disabled on kernel command line.
Mar  1 05:10:27 np0005634532 NetworkManager[49996]: <info>  [1772359827.6493] device (tapb6ca0203-b5): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Mar  1 05:10:27 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:10:27.648 262940 DEBUG oslo.privsep.daemon [-] privsep: reply[fe3096bd-9d62-4e0f-8184-d9b025390d26]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Mar  1 05:10:27 np0005634532 NetworkManager[49996]: <info>  [1772359827.6501] device (tapb6ca0203-b5): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Mar  1 05:10:27 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:10:27.654 262878 DEBUG oslo.privsep.daemon [-] privsep: reply[2d399375-2882-4933-b65c-e44bb96ee86f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Mar  1 05:10:27 np0005634532 NetworkManager[49996]: <info>  [1772359827.6551] manager: (tap7e0ffeca-10): new Veth device (/org/freedesktop/NetworkManager/Devices/33)
Mar  1 05:10:27 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:10:27.682 262940 DEBUG oslo.privsep.daemon [-] privsep: reply[50d20dff-466e-4529-b55a-f804c44d76c6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Mar  1 05:10:27 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:10:27.685 262940 DEBUG oslo.privsep.daemon [-] privsep: reply[83fc3fc7-1bcc-4840-8b4f-956d63bd47d7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Mar  1 05:10:27 np0005634532 NetworkManager[49996]: <info>  [1772359827.7011] device (tap7e0ffeca-10): carrier: link connected
Mar  1 05:10:27 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:10:27.703 262940 DEBUG oslo.privsep.daemon [-] privsep: reply[5340a893-c761-44be-af4e-4279cfdb984d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Mar  1 05:10:27 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:10:27.718 262878 DEBUG oslo.privsep.daemon [-] privsep: reply[8cea0b2f-e814-422b-9ac7-13d2fef92646]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7e0ffeca-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a5:b8:ca'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 16], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 418429, 'reachable_time': 35108, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 268801, 'error': None, 'target': 'ovnmeta-7e0ffeca-1584-4482-b69c-90e1af931e6d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Mar  1 05:10:27 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:10:27.732 262878 DEBUG oslo.privsep.daemon [-] privsep: reply[81548f7b-9d5c-4be7-8672-38414e6cf04f]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fea5:b8ca'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 418429, 'tstamp': 418429}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 268802, 'error': None, 'target': 'ovnmeta-7e0ffeca-1584-4482-b69c-90e1af931e6d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Mar  1 05:10:27 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:10:27.746 262878 DEBUG oslo.privsep.daemon [-] privsep: reply[1aade3ec-47c8-45a0-9a46-135a2c225fd6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7e0ffeca-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a5:b8:ca'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 16], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 418429, 'reachable_time': 35108, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 268804, 'error': None, 'target': 'ovnmeta-7e0ffeca-1584-4482-b69c-90e1af931e6d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
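The two RTM_NEWLINK dumps above are the agent reading back the veth it just created: tap7e0ffeca-11 sits inside the ovnmeta-7e0ffeca-... namespace (note the 'target' field in the netlink header), while its peer tap7e0ffeca-10 is about to be patched into br-int below. The same inspection can be done directly with pyroute2, the library these privsep calls wrap; a sketch, assuming it is run as root on the compute host:

    from pyroute2 import NetNS, netns

    ns_name = 'ovnmeta-7e0ffeca-1584-4482-b69c-90e1af931e6d'
    print(ns_name in netns.listnetns())          # namespace was provisioned?
    with NetNS(ns_name) as ns:
        # List interface names inside the metadata namespace.
        print([link.get_attr('IFLA_IFNAME') for link in ns.get_links()])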
Mar  1 05:10:27 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:10:27.776 262878 DEBUG oslo.privsep.daemon [-] privsep: reply[6ec31414-f6bf-4aa9-bd5f-5da5c207b93f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Mar  1 05:10:27 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:10:27 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 05:10:27 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:10:27.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 05:10:27 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:10:27.832 262878 DEBUG oslo.privsep.daemon [-] privsep: reply[ded6ede1-ae10-452d-b5f4-864675978569]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Mar  1 05:10:27 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:10:27.834 167541 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7e0ffeca-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Mar  1 05:10:27 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:10:27.834 167541 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Mar  1 05:10:27 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:10:27.835 167541 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7e0ffeca-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Mar  1 05:10:27 np0005634532 nova_compute[257049]: 2026-03-01 10:10:27.837 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:10:27 np0005634532 NetworkManager[49996]: <info>  [1772359827.8379] manager: (tap7e0ffeca-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/34)
Mar  1 05:10:27 np0005634532 kernel: tap7e0ffeca-10: entered promiscuous mode
Mar  1 05:10:27 np0005634532 nova_compute[257049]: 2026-03-01 10:10:27.843 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:10:27 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:10:27.847 167541 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7e0ffeca-10, col_values=(('external_ids', {'iface-id': '3f5083d2-a61a-4c37-b498-88e9fbab50e2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Mar  1 05:10:27 np0005634532 nova_compute[257049]: 2026-03-01 10:10:27.848 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:10:27 np0005634532 ovn_controller[157082]: 2026-03-01T10:10:27Z|00041|binding|INFO|Releasing lport 3f5083d2-a61a-4c37-b498-88e9fbab50e2 from this chassis (sb_readonly=0)
Mar  1 05:10:27 np0005634532 nova_compute[257049]: 2026-03-01 10:10:27.858 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:10:27 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:10:27.861 167541 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/7e0ffeca-1584-4482-b69c-90e1af931e6d.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/7e0ffeca-1584-4482-b69c-90e1af931e6d.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Mar  1 05:10:27 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:10:27.863 262878 DEBUG oslo.privsep.daemon [-] privsep: reply[952f54e7-531b-44a2-9c9c-5792bea1acd7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Mar  1 05:10:27 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:10:27.863 167541 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Mar  1 05:10:27 np0005634532 ovn_metadata_agent[167536]: global
Mar  1 05:10:27 np0005634532 ovn_metadata_agent[167536]:    log         /dev/log local0 debug
Mar  1 05:10:27 np0005634532 ovn_metadata_agent[167536]:    log-tag     haproxy-metadata-proxy-7e0ffeca-1584-4482-b69c-90e1af931e6d
Mar  1 05:10:27 np0005634532 ovn_metadata_agent[167536]:    user        root
Mar  1 05:10:27 np0005634532 ovn_metadata_agent[167536]:    group       root
Mar  1 05:10:27 np0005634532 ovn_metadata_agent[167536]:    maxconn     1024
Mar  1 05:10:27 np0005634532 ovn_metadata_agent[167536]:    pidfile     /var/lib/neutron/external/pids/7e0ffeca-1584-4482-b69c-90e1af931e6d.pid.haproxy
Mar  1 05:10:27 np0005634532 ovn_metadata_agent[167536]:    daemon
Mar  1 05:10:27 np0005634532 ovn_metadata_agent[167536]: 
Mar  1 05:10:27 np0005634532 ovn_metadata_agent[167536]: defaults
Mar  1 05:10:27 np0005634532 ovn_metadata_agent[167536]:    log global
Mar  1 05:10:27 np0005634532 ovn_metadata_agent[167536]:    mode http
Mar  1 05:10:27 np0005634532 ovn_metadata_agent[167536]:    option httplog
Mar  1 05:10:27 np0005634532 ovn_metadata_agent[167536]:    option dontlognull
Mar  1 05:10:27 np0005634532 ovn_metadata_agent[167536]:    option http-server-close
Mar  1 05:10:27 np0005634532 ovn_metadata_agent[167536]:    option forwardfor
Mar  1 05:10:27 np0005634532 ovn_metadata_agent[167536]:    retries                 3
Mar  1 05:10:27 np0005634532 ovn_metadata_agent[167536]:    timeout http-request    30s
Mar  1 05:10:27 np0005634532 ovn_metadata_agent[167536]:    timeout connect         30s
Mar  1 05:10:27 np0005634532 ovn_metadata_agent[167536]:    timeout client          32s
Mar  1 05:10:27 np0005634532 ovn_metadata_agent[167536]:    timeout server          32s
Mar  1 05:10:27 np0005634532 ovn_metadata_agent[167536]:    timeout http-keep-alive 30s
Mar  1 05:10:27 np0005634532 ovn_metadata_agent[167536]: 
Mar  1 05:10:27 np0005634532 ovn_metadata_agent[167536]: 
Mar  1 05:10:27 np0005634532 ovn_metadata_agent[167536]: listen listener
Mar  1 05:10:27 np0005634532 ovn_metadata_agent[167536]:    bind 169.254.169.254:80
Mar  1 05:10:27 np0005634532 ovn_metadata_agent[167536]:    server metadata /var/lib/neutron/metadata_proxy
Mar  1 05:10:27 np0005634532 ovn_metadata_agent[167536]:    http-request add-header X-OVN-Network-ID 7e0ffeca-1584-4482-b69c-90e1af931e6d
Mar  1 05:10:27 np0005634532 ovn_metadata_agent[167536]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Mar  1 05:10:27 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:10:27.864 167541 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-7e0ffeca-1584-4482-b69c-90e1af931e6d', 'env', 'PROCESS_TAG=haproxy-7e0ffeca-1584-4482-b69c-90e1af931e6d', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/7e0ffeca-1584-4482-b69c-90e1af931e6d.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Mar  1 05:10:28 np0005634532 podman[268837]: 2026-03-01 10:10:28.20503859 +0000 UTC m=+0.051778239 container create 723330e469a8e32f84f1fc3b816b12c1260155f03c3cf2471b9e72a78b0ed275 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a5baae9e21dffd42c66f546b0b62f95c6af9f409d05e01e7483de42d5dd2a382, name=neutron-haproxy-ovnmeta-7e0ffeca-1584-4482-b69c-90e1af931e6d, org.label-schema.build-date=20260223, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Mar  1 05:10:28 np0005634532 systemd[1]: Started libpod-conmon-723330e469a8e32f84f1fc3b816b12c1260155f03c3cf2471b9e72a78b0ed275.scope.
Mar  1 05:10:28 np0005634532 systemd[1]: Started libcrun container.
Mar  1 05:10:28 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/467509a7bc617377231adcc69c908aaf6958ddb69ea5b53a0eec845118e1d9fa/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Mar  1 05:10:28 np0005634532 podman[268837]: 2026-03-01 10:10:28.174972168 +0000 UTC m=+0.021711827 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a5baae9e21dffd42c66f546b0b62f95c6af9f409d05e01e7483de42d5dd2a382
Mar  1 05:10:28 np0005634532 podman[268837]: 2026-03-01 10:10:28.278811349 +0000 UTC m=+0.125550988 container init 723330e469a8e32f84f1fc3b816b12c1260155f03c3cf2471b9e72a78b0ed275 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a5baae9e21dffd42c66f546b0b62f95c6af9f409d05e01e7483de42d5dd2a382, name=neutron-haproxy-ovnmeta-7e0ffeca-1584-4482-b69c-90e1af931e6d, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.license=GPLv2)
Mar  1 05:10:28 np0005634532 podman[268837]: 2026-03-01 10:10:28.282363277 +0000 UTC m=+0.129102886 container start 723330e469a8e32f84f1fc3b816b12c1260155f03c3cf2471b9e72a78b0ed275 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a5baae9e21dffd42c66f546b0b62f95c6af9f409d05e01e7483de42d5dd2a382, name=neutron-haproxy-ovnmeta-7e0ffeca-1584-4482-b69c-90e1af931e6d, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223, io.buildah.version=1.43.0)
Mar  1 05:10:28 np0005634532 neutron-haproxy-ovnmeta-7e0ffeca-1584-4482-b69c-90e1af931e6d[268854]: [NOTICE]   (268876) : New worker (268888) forked
Mar  1 05:10:28 np0005634532 neutron-haproxy-ovnmeta-7e0ffeca-1584-4482-b69c-90e1af931e6d[268854]: [NOTICE]   (268876) : Loading success.
Mar  1 05:10:28 np0005634532 nova_compute[257049]: 2026-03-01 10:10:28.421 257053 DEBUG nova.virt.driver [None req-dccfbb97-79ac-418c-ab3a-fd3d119141c0 - - - - - -] Emitting event <LifecycleEvent: 1772359828.4200833, c0990d1e-9a20-4bc5-8ee5-7f06cc9e139c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Mar  1 05:10:28 np0005634532 nova_compute[257049]: 2026-03-01 10:10:28.421 257053 INFO nova.compute.manager [None req-dccfbb97-79ac-418c-ab3a-fd3d119141c0 - - - - - -] [instance: c0990d1e-9a20-4bc5-8ee5-7f06cc9e139c] VM Started (Lifecycle Event)#033[00m
Mar  1 05:10:28 np0005634532 nova_compute[257049]: 2026-03-01 10:10:28.443 257053 DEBUG nova.compute.manager [None req-dccfbb97-79ac-418c-ab3a-fd3d119141c0 - - - - - -] [instance: c0990d1e-9a20-4bc5-8ee5-7f06cc9e139c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Mar  1 05:10:28 np0005634532 nova_compute[257049]: 2026-03-01 10:10:28.446 257053 DEBUG nova.virt.driver [None req-dccfbb97-79ac-418c-ab3a-fd3d119141c0 - - - - - -] Emitting event <LifecycleEvent: 1772359828.4207788, c0990d1e-9a20-4bc5-8ee5-7f06cc9e139c => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Mar  1 05:10:28 np0005634532 nova_compute[257049]: 2026-03-01 10:10:28.446 257053 INFO nova.compute.manager [None req-dccfbb97-79ac-418c-ab3a-fd3d119141c0 - - - - - -] [instance: c0990d1e-9a20-4bc5-8ee5-7f06cc9e139c] VM Paused (Lifecycle Event)#033[00m
Mar  1 05:10:28 np0005634532 nova_compute[257049]: 2026-03-01 10:10:28.465 257053 DEBUG nova.compute.manager [None req-dccfbb97-79ac-418c-ab3a-fd3d119141c0 - - - - - -] [instance: c0990d1e-9a20-4bc5-8ee5-7f06cc9e139c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Mar  1 05:10:28 np0005634532 nova_compute[257049]: 2026-03-01 10:10:28.468 257053 DEBUG nova.compute.manager [None req-dccfbb97-79ac-418c-ab3a-fd3d119141c0 - - - - - -] [instance: c0990d1e-9a20-4bc5-8ee5-7f06cc9e139c] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Mar  1 05:10:28 np0005634532 nova_compute[257049]: 2026-03-01 10:10:28.488 257053 INFO nova.compute.manager [None req-dccfbb97-79ac-418c-ab3a-fd3d119141c0 - - - - - -] [instance: c0990d1e-9a20-4bc5-8ee5-7f06cc9e139c] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Mar  1 05:10:28 np0005634532 systemd[1]: ceph-437b1e74-f995-5d64-af1d-257ce01d77ab@nfs.cephfs.2.0.compute-0.ljexyw.service: Scheduled restart job, restart counter is at 12.
Mar  1 05:10:28 np0005634532 systemd[1]: Stopped Ceph nfs.cephfs.2.0.compute-0.ljexyw for 437b1e74-f995-5d64-af1d-257ce01d77ab.
Mar  1 05:10:28 np0005634532 systemd[1]: ceph-437b1e74-f995-5d64-af1d-257ce01d77ab@nfs.cephfs.2.0.compute-0.ljexyw.service: Consumed 1.175s CPU time.
Mar  1 05:10:28 np0005634532 systemd[1]: Starting Ceph nfs.cephfs.2.0.compute-0.ljexyw for 437b1e74-f995-5d64-af1d-257ce01d77ab...
Mar  1 05:10:28 np0005634532 nova_compute[257049]: 2026-03-01 10:10:28.646 257053 DEBUG nova.compute.manager [req-f5ba9471-fa82-4ed9-a787-489b435d8c97 req-1fc31326-4bdb-4bf6-ab3f-5b66bc54407c 4172bfcb2ac44ccc905f3929f41a6ec0 b5baddd982154c5d8ca5431b102a7ecd - - default default] [instance: c0990d1e-9a20-4bc5-8ee5-7f06cc9e139c] Received event network-vif-plugged-b6ca0203-b551-4cae-b162-715da216fc4a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Mar  1 05:10:28 np0005634532 nova_compute[257049]: 2026-03-01 10:10:28.647 257053 DEBUG oslo_concurrency.lockutils [req-f5ba9471-fa82-4ed9-a787-489b435d8c97 req-1fc31326-4bdb-4bf6-ab3f-5b66bc54407c 4172bfcb2ac44ccc905f3929f41a6ec0 b5baddd982154c5d8ca5431b102a7ecd - - default default] Acquiring lock "c0990d1e-9a20-4bc5-8ee5-7f06cc9e139c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Mar  1 05:10:28 np0005634532 nova_compute[257049]: 2026-03-01 10:10:28.648 257053 DEBUG oslo_concurrency.lockutils [req-f5ba9471-fa82-4ed9-a787-489b435d8c97 req-1fc31326-4bdb-4bf6-ab3f-5b66bc54407c 4172bfcb2ac44ccc905f3929f41a6ec0 b5baddd982154c5d8ca5431b102a7ecd - - default default] Lock "c0990d1e-9a20-4bc5-8ee5-7f06cc9e139c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Mar  1 05:10:28 np0005634532 nova_compute[257049]: 2026-03-01 10:10:28.648 257053 DEBUG oslo_concurrency.lockutils [req-f5ba9471-fa82-4ed9-a787-489b435d8c97 req-1fc31326-4bdb-4bf6-ab3f-5b66bc54407c 4172bfcb2ac44ccc905f3929f41a6ec0 b5baddd982154c5d8ca5431b102a7ecd - - default default] Lock "c0990d1e-9a20-4bc5-8ee5-7f06cc9e139c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Mar  1 05:10:28 np0005634532 nova_compute[257049]: 2026-03-01 10:10:28.648 257053 DEBUG nova.compute.manager [req-f5ba9471-fa82-4ed9-a787-489b435d8c97 req-1fc31326-4bdb-4bf6-ab3f-5b66bc54407c 4172bfcb2ac44ccc905f3929f41a6ec0 b5baddd982154c5d8ca5431b102a7ecd - - default default] [instance: c0990d1e-9a20-4bc5-8ee5-7f06cc9e139c] Processing event network-vif-plugged-b6ca0203-b551-4cae-b162-715da216fc4a _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Mar  1 05:10:28 np0005634532 nova_compute[257049]: 2026-03-01 10:10:28.649 257053 DEBUG nova.compute.manager [None req-336fe8d9-86f8-4543-8cad-0cd5ffcc2e1e 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] [instance: c0990d1e-9a20-4bc5-8ee5-7f06cc9e139c] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Mar  1 05:10:28 np0005634532 nova_compute[257049]: 2026-03-01 10:10:28.652 257053 DEBUG nova.virt.driver [None req-dccfbb97-79ac-418c-ab3a-fd3d119141c0 - - - - - -] Emitting event <LifecycleEvent: 1772359828.6524825, c0990d1e-9a20-4bc5-8ee5-7f06cc9e139c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Mar  1 05:10:28 np0005634532 nova_compute[257049]: 2026-03-01 10:10:28.653 257053 INFO nova.compute.manager [None req-dccfbb97-79ac-418c-ab3a-fd3d119141c0 - - - - - -] [instance: c0990d1e-9a20-4bc5-8ee5-7f06cc9e139c] VM Resumed (Lifecycle Event)#033[00m
Mar  1 05:10:28 np0005634532 nova_compute[257049]: 2026-03-01 10:10:28.655 257053 DEBUG nova.virt.libvirt.driver [None req-336fe8d9-86f8-4543-8cad-0cd5ffcc2e1e 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] [instance: c0990d1e-9a20-4bc5-8ee5-7f06cc9e139c] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Mar  1 05:10:28 np0005634532 nova_compute[257049]: 2026-03-01 10:10:28.657 257053 INFO nova.virt.libvirt.driver [-] [instance: c0990d1e-9a20-4bc5-8ee5-7f06cc9e139c] Instance spawned successfully.#033[00m
Mar  1 05:10:28 np0005634532 nova_compute[257049]: 2026-03-01 10:10:28.658 257053 DEBUG nova.virt.libvirt.driver [None req-336fe8d9-86f8-4543-8cad-0cd5ffcc2e1e 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] [instance: c0990d1e-9a20-4bc5-8ee5-7f06cc9e139c] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Mar  1 05:10:28 np0005634532 nova_compute[257049]: 2026-03-01 10:10:28.679 257053 DEBUG nova.compute.manager [None req-dccfbb97-79ac-418c-ab3a-fd3d119141c0 - - - - - -] [instance: c0990d1e-9a20-4bc5-8ee5-7f06cc9e139c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Mar  1 05:10:28 np0005634532 nova_compute[257049]: 2026-03-01 10:10:28.687 257053 DEBUG nova.compute.manager [None req-dccfbb97-79ac-418c-ab3a-fd3d119141c0 - - - - - -] [instance: c0990d1e-9a20-4bc5-8ee5-7f06cc9e139c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Mar  1 05:10:28 np0005634532 nova_compute[257049]: 2026-03-01 10:10:28.690 257053 DEBUG nova.virt.libvirt.driver [None req-336fe8d9-86f8-4543-8cad-0cd5ffcc2e1e 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] [instance: c0990d1e-9a20-4bc5-8ee5-7f06cc9e139c] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Mar  1 05:10:28 np0005634532 nova_compute[257049]: 2026-03-01 10:10:28.690 257053 DEBUG nova.virt.libvirt.driver [None req-336fe8d9-86f8-4543-8cad-0cd5ffcc2e1e 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] [instance: c0990d1e-9a20-4bc5-8ee5-7f06cc9e139c] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Mar  1 05:10:28 np0005634532 nova_compute[257049]: 2026-03-01 10:10:28.691 257053 DEBUG nova.virt.libvirt.driver [None req-336fe8d9-86f8-4543-8cad-0cd5ffcc2e1e 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] [instance: c0990d1e-9a20-4bc5-8ee5-7f06cc9e139c] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Mar  1 05:10:28 np0005634532 nova_compute[257049]: 2026-03-01 10:10:28.691 257053 DEBUG nova.virt.libvirt.driver [None req-336fe8d9-86f8-4543-8cad-0cd5ffcc2e1e 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] [instance: c0990d1e-9a20-4bc5-8ee5-7f06cc9e139c] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Mar  1 05:10:28 np0005634532 nova_compute[257049]: 2026-03-01 10:10:28.692 257053 DEBUG nova.virt.libvirt.driver [None req-336fe8d9-86f8-4543-8cad-0cd5ffcc2e1e 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] [instance: c0990d1e-9a20-4bc5-8ee5-7f06cc9e139c] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Mar  1 05:10:28 np0005634532 nova_compute[257049]: 2026-03-01 10:10:28.692 257053 DEBUG nova.virt.libvirt.driver [None req-336fe8d9-86f8-4543-8cad-0cd5ffcc2e1e 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] [instance: c0990d1e-9a20-4bc5-8ee5-7f06cc9e139c] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Mar  1 05:10:28 np0005634532 nova_compute[257049]: 2026-03-01 10:10:28.723 257053 INFO nova.compute.manager [None req-dccfbb97-79ac-418c-ab3a-fd3d119141c0 - - - - - -] [instance: c0990d1e-9a20-4bc5-8ee5-7f06cc9e139c] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Mar  1 05:10:28 np0005634532 podman[268956]: 2026-03-01 10:10:28.745935622 +0000 UTC m=+0.038708496 container create 2394c8dd853dfd8ff8bfec80584248f07e107aca9cd3cc363301fbd2b26f4c6f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Mar  1 05:10:28 np0005634532 nova_compute[257049]: 2026-03-01 10:10:28.754 257053 INFO nova.compute.manager [None req-336fe8d9-86f8-4543-8cad-0cd5ffcc2e1e 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] [instance: c0990d1e-9a20-4bc5-8ee5-7f06cc9e139c] Took 7.75 seconds to spawn the instance on the hypervisor.#033[00m
Mar  1 05:10:28 np0005634532 nova_compute[257049]: 2026-03-01 10:10:28.754 257053 DEBUG nova.compute.manager [None req-336fe8d9-86f8-4543-8cad-0cd5ffcc2e1e 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] [instance: c0990d1e-9a20-4bc5-8ee5-7f06cc9e139c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Mar  1 05:10:28 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/986eaac5c6a689e872d237e9473f17c90630a58def21f55fa6bac967b365e478/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Mar  1 05:10:28 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/986eaac5c6a689e872d237e9473f17c90630a58def21f55fa6bac967b365e478/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Mar  1 05:10:28 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/986eaac5c6a689e872d237e9473f17c90630a58def21f55fa6bac967b365e478/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Mar  1 05:10:28 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/986eaac5c6a689e872d237e9473f17c90630a58def21f55fa6bac967b365e478/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.2.0.compute-0.ljexyw-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Mar  1 05:10:28 np0005634532 nova_compute[257049]: 2026-03-01 10:10:28.810 257053 INFO nova.compute.manager [None req-336fe8d9-86f8-4543-8cad-0cd5ffcc2e1e 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] [instance: c0990d1e-9a20-4bc5-8ee5-7f06cc9e139c] Took 8.63 seconds to build instance.#033[00m
Mar  1 05:10:28 np0005634532 podman[268956]: 2026-03-01 10:10:28.815373994 +0000 UTC m=+0.108146888 container init 2394c8dd853dfd8ff8bfec80584248f07e107aca9cd3cc363301fbd2b26f4c6f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1)
Mar  1 05:10:28 np0005634532 podman[268956]: 2026-03-01 10:10:28.8200581 +0000 UTC m=+0.112830974 container start 2394c8dd853dfd8ff8bfec80584248f07e107aca9cd3cc363301fbd2b26f4c6f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Mar  1 05:10:28 np0005634532 bash[268956]: 2394c8dd853dfd8ff8bfec80584248f07e107aca9cd3cc363301fbd2b26f4c6f
Mar  1 05:10:28 np0005634532 podman[268956]: 2026-03-01 10:10:28.730422419 +0000 UTC m=+0.023195303 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Mar  1 05:10:28 np0005634532 nova_compute[257049]: 2026-03-01 10:10:28.825 257053 DEBUG oslo_concurrency.lockutils [None req-336fe8d9-86f8-4543-8cad-0cd5ffcc2e1e 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Lock "c0990d1e-9a20-4bc5-8ee5-7f06cc9e139c" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.705s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Mar  1 05:10:28 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[268971]: 01/03/2026 10:10:28 : epoch 69a41094 : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Mar  1 05:10:28 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[268971]: 01/03/2026 10:10:28 : epoch 69a41094 : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Mar  1 05:10:28 np0005634532 systemd[1]: Started Ceph nfs.cephfs.2.0.compute-0.ljexyw for 437b1e74-f995-5d64-af1d-257ce01d77ab.
Mar  1 05:10:28 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[268971]: 01/03/2026 10:10:28 : epoch 69a41094 : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Mar  1 05:10:28 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[268971]: 01/03/2026 10:10:28 : epoch 69a41094 : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Mar  1 05:10:28 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[268971]: 01/03/2026 10:10:28 : epoch 69a41094 : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Mar  1 05:10:28 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[268971]: 01/03/2026 10:10:28 : epoch 69a41094 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Mar  1 05:10:28 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[268971]: 01/03/2026 10:10:28 : epoch 69a41094 : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Mar  1 05:10:28 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[268971]: 01/03/2026 10:10:28 : epoch 69a41094 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Mar  1 05:10:29 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v840: 353 pgs: 353 active+clean; 167 MiB data, 323 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s rd, 1.8 MiB/s wr, 38 op/s
Mar  1 05:10:29 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:10:29 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:10:29 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:10:29.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:10:29 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:10:29 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:10:29 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:10:29.790 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:10:29 np0005634532 nova_compute[257049]: 2026-03-01 10:10:29.938 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:10:30 np0005634532 nova_compute[257049]: 2026-03-01 10:10:30.799 257053 DEBUG nova.compute.manager [req-a9cac984-9f89-43e2-ad58-b5836e758168 req-0a8e3cb5-8b23-49d4-8f9e-bda9dfa123ab 4172bfcb2ac44ccc905f3929f41a6ec0 b5baddd982154c5d8ca5431b102a7ecd - - default default] [instance: c0990d1e-9a20-4bc5-8ee5-7f06cc9e139c] Received event network-vif-plugged-b6ca0203-b551-4cae-b162-715da216fc4a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Mar  1 05:10:30 np0005634532 nova_compute[257049]: 2026-03-01 10:10:30.801 257053 DEBUG oslo_concurrency.lockutils [req-a9cac984-9f89-43e2-ad58-b5836e758168 req-0a8e3cb5-8b23-49d4-8f9e-bda9dfa123ab 4172bfcb2ac44ccc905f3929f41a6ec0 b5baddd982154c5d8ca5431b102a7ecd - - default default] Acquiring lock "c0990d1e-9a20-4bc5-8ee5-7f06cc9e139c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Mar  1 05:10:30 np0005634532 nova_compute[257049]: 2026-03-01 10:10:30.801 257053 DEBUG oslo_concurrency.lockutils [req-a9cac984-9f89-43e2-ad58-b5836e758168 req-0a8e3cb5-8b23-49d4-8f9e-bda9dfa123ab 4172bfcb2ac44ccc905f3929f41a6ec0 b5baddd982154c5d8ca5431b102a7ecd - - default default] Lock "c0990d1e-9a20-4bc5-8ee5-7f06cc9e139c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Mar  1 05:10:30 np0005634532 nova_compute[257049]: 2026-03-01 10:10:30.801 257053 DEBUG oslo_concurrency.lockutils [req-a9cac984-9f89-43e2-ad58-b5836e758168 req-0a8e3cb5-8b23-49d4-8f9e-bda9dfa123ab 4172bfcb2ac44ccc905f3929f41a6ec0 b5baddd982154c5d8ca5431b102a7ecd - - default default] Lock "c0990d1e-9a20-4bc5-8ee5-7f06cc9e139c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Mar  1 05:10:30 np0005634532 nova_compute[257049]: 2026-03-01 10:10:30.802 257053 DEBUG nova.compute.manager [req-a9cac984-9f89-43e2-ad58-b5836e758168 req-0a8e3cb5-8b23-49d4-8f9e-bda9dfa123ab 4172bfcb2ac44ccc905f3929f41a6ec0 b5baddd982154c5d8ca5431b102a7ecd - - default default] [instance: c0990d1e-9a20-4bc5-8ee5-7f06cc9e139c] No waiting events found dispatching network-vif-plugged-b6ca0203-b551-4cae-b162-715da216fc4a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Mar  1 05:10:30 np0005634532 nova_compute[257049]: 2026-03-01 10:10:30.802 257053 WARNING nova.compute.manager [req-a9cac984-9f89-43e2-ad58-b5836e758168 req-0a8e3cb5-8b23-49d4-8f9e-bda9dfa123ab 4172bfcb2ac44ccc905f3929f41a6ec0 b5baddd982154c5d8ca5431b102a7ecd - - default default] [instance: c0990d1e-9a20-4bc5-8ee5-7f06cc9e139c] Received unexpected event network-vif-plugged-b6ca0203-b551-4cae-b162-715da216fc4a for instance with vm_state active and task_state None.#033[00m
Mar  1 05:10:30 np0005634532 nova_compute[257049]: 2026-03-01 10:10:30.803 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:10:31 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v841: 353 pgs: 353 active+clean; 167 MiB data, 323 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 37 op/s
Mar  1 05:10:31 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-haproxy-nfs-cephfs-compute-0-wdbjdw[99156]: [WARNING] 059/101031 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Mar  1 05:10:31 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:10:31 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000024s ======
Mar  1 05:10:31 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:10:31.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Mar  1 05:10:31 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:10:31 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:10:31 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:10:31.793 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:10:31 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:10:32 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Mar  1 05:10:32 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Mar  1 05:10:33 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v842: 353 pgs: 353 active+clean; 167 MiB data, 323 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 37 op/s
Mar  1 05:10:33 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:10:33 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:10:33 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:10:33.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:10:33 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:10:33 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:10:33 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:10:33.796 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:10:34 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[268971]: 01/03/2026 10:10:34 : epoch 69a41094 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Mar  1 05:10:34 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[268971]: 01/03/2026 10:10:34 : epoch 69a41094 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Mar  1 05:10:34 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[268971]: 01/03/2026 10:10:34 : epoch 69a41094 : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Mar  1 05:10:34 np0005634532 nova_compute[257049]: 2026-03-01 10:10:34.975 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:10:35 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v843: 353 pgs: 353 active+clean; 167 MiB data, 323 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 103 op/s
Mar  1 05:10:35 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:10:35 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:10:35 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:10:35.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:10:35 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:10:35 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:10:35 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:10:35.798 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:10:35 np0005634532 nova_compute[257049]: 2026-03-01 10:10:35.804 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:10:36 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:10:36 np0005634532 nova_compute[257049]: 2026-03-01 10:10:36.977 257053 DEBUG oslo_service.periodic_task [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Mar  1 05:10:36 np0005634532 nova_compute[257049]: 2026-03-01 10:10:36.977 257053 DEBUG nova.compute.manager [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Mar  1 05:10:37 np0005634532 ceph-mgr[76134]: [prometheus INFO cherrypy.access.140604152982112] ::ffff:192.168.122.100 - - [01/Mar/2026:10:10:37] "GET /metrics HTTP/1.1" 200 48473 "" "Prometheus/2.51.0"
Mar  1 05:10:37 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: ::ffff:192.168.122.100 - - [01/Mar/2026:10:10:37] "GET /metrics HTTP/1.1" 200 48473 "" "Prometheus/2.51.0"
Mar  1 05:10:37 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v844: 353 pgs: 353 active+clean; 167 MiB data, 323 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 16 KiB/s wr, 75 op/s
Mar  1 05:10:37 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:10:37.231Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Mar  1 05:10:37 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:10:37.231Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Mar  1 05:10:37 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:10:37 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000024s ======
Mar  1 05:10:37 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:10:37.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Mar  1 05:10:37 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:10:37 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:10:37 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:10:37.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:10:37 np0005634532 nova_compute[257049]: 2026-03-01 10:10:37.990 257053 DEBUG oslo_service.periodic_task [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Mar  1 05:10:37 np0005634532 nova_compute[257049]: 2026-03-01 10:10:37.991 257053 DEBUG oslo_service.periodic_task [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Mar  1 05:10:37 np0005634532 nova_compute[257049]: 2026-03-01 10:10:37.991 257053 DEBUG nova.compute.manager [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Mar  1 05:10:38 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[268971]: 01/03/2026 10:10:38 : epoch 69a41094 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Mar  1 05:10:38 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[268971]: 01/03/2026 10:10:38 : epoch 69a41094 : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Mar  1 05:10:38 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[268971]: 01/03/2026 10:10:38 : epoch 69a41094 : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Mar  1 05:10:38 np0005634532 podman[269050]: 2026-03-01 10:10:38.404641034 +0000 UTC m=+0.097927426 container health_status eba5e7c5bf1cb0694498c3b5d0f9b901dec36f2482ec40aef98fed1e15d9f18d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:bf1f44b6bfa655654f556ae3adca2582892e72550d916a5b0a8dbacb0e210bbe, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '08b9467e1b7e95537191fb2fa6825d176e65d7d6be048e9a0925db064969bbde-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:bf1f44b6bfa655654f556ae3adca2582892e72550d916a5b0a8dbacb0e210bbe', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, org.label-schema.build-date=20260223)
Mar  1 05:10:39 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v845: 353 pgs: 353 active+clean; 167 MiB data, 323 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 17 KiB/s wr, 77 op/s
Mar  1 05:10:39 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:10:39 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:10:39 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:10:39.607 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:10:39 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-haproxy-nfs-cephfs-compute-0-wdbjdw[99156]: [WARNING] 059/101039 (4) : Server backend/nfs.cephfs.0 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 0 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Mar  1 05:10:39 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-haproxy-nfs-cephfs-compute-0-wdbjdw[99156]: [NOTICE] 059/101039 (4) : haproxy version is 2.3.17-d1c9119
Mar  1 05:10:39 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-haproxy-nfs-cephfs-compute-0-wdbjdw[99156]: [NOTICE] 059/101039 (4) : path to executable is /usr/local/sbin/haproxy
Mar  1 05:10:39 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-haproxy-nfs-cephfs-compute-0-wdbjdw[99156]: [ALERT] 059/101039 (4) : backend 'backend' has no server available!
Mar  1 05:10:39 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:10:39 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:10:39 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:10:39.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:10:39 np0005634532 nova_compute[257049]: 2026-03-01 10:10:39.977 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:10:40 np0005634532 ovn_controller[157082]: 2026-03-01T10:10:40Z|00006|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:22:7a:92 10.100.0.27
Mar  1 05:10:40 np0005634532 ovn_controller[157082]: 2026-03-01T10:10:40Z|00007|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:22:7a:92 10.100.0.27
Mar  1 05:10:40 np0005634532 nova_compute[257049]: 2026-03-01 10:10:40.806 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:10:40 np0005634532 nova_compute[257049]: 2026-03-01 10:10:40.978 257053 DEBUG oslo_service.periodic_task [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Mar  1 05:10:40 np0005634532 nova_compute[257049]: 2026-03-01 10:10:40.978 257053 DEBUG oslo_service.periodic_task [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Mar  1 05:10:41 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v846: 353 pgs: 353 active+clean; 167 MiB data, 323 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.0 KiB/s wr, 67 op/s
Mar  1 05:10:41 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:10:41 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:10:41 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:10:41.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:10:41 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:10:41 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:10:41 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:10:41.806 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:10:41 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:10:41 np0005634532 nova_compute[257049]: 2026-03-01 10:10:41.976 257053 DEBUG oslo_service.periodic_task [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Mar  1 05:10:41 np0005634532 nova_compute[257049]: 2026-03-01 10:10:41.977 257053 DEBUG oslo_service.periodic_task [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Mar  1 05:10:42 np0005634532 nova_compute[257049]: 2026-03-01 10:10:42.984 257053 DEBUG oslo_service.periodic_task [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Mar  1 05:10:42 np0005634532 nova_compute[257049]: 2026-03-01 10:10:42.985 257053 DEBUG oslo_service.periodic_task [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Mar  1 05:10:42 np0005634532 nova_compute[257049]: 2026-03-01 10:10:42.985 257053 DEBUG oslo_service.periodic_task [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Mar  1 05:10:43 np0005634532 nova_compute[257049]: 2026-03-01 10:10:43.018 257053 DEBUG oslo_concurrency.lockutils [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Mar  1 05:10:43 np0005634532 nova_compute[257049]: 2026-03-01 10:10:43.018 257053 DEBUG oslo_concurrency.lockutils [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Mar  1 05:10:43 np0005634532 nova_compute[257049]: 2026-03-01 10:10:43.019 257053 DEBUG oslo_concurrency.lockutils [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Mar  1 05:10:43 np0005634532 nova_compute[257049]: 2026-03-01 10:10:43.019 257053 DEBUG nova.compute.resource_tracker [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Mar  1 05:10:43 np0005634532 nova_compute[257049]: 2026-03-01 10:10:43.019 257053 DEBUG oslo_concurrency.processutils [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Mar  1 05:10:43 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v847: 353 pgs: 353 active+clean; 167 MiB data, 323 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.0 KiB/s wr, 67 op/s
Mar  1 05:10:43 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Mar  1 05:10:43 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1763637527' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Mar  1 05:10:43 np0005634532 nova_compute[257049]: 2026-03-01 10:10:43.473 257053 DEBUG oslo_concurrency.processutils [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Mar  1 05:10:43 np0005634532 nova_compute[257049]: 2026-03-01 10:10:43.535 257053 DEBUG nova.virt.libvirt.driver [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] skipping disk for instance-00000007 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Mar  1 05:10:43 np0005634532 nova_compute[257049]: 2026-03-01 10:10:43.535 257053 DEBUG nova.virt.libvirt.driver [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] skipping disk for instance-00000007 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Mar  1 05:10:43 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:10:43 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:10:43 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:10:43.612 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:10:43 np0005634532 nova_compute[257049]: 2026-03-01 10:10:43.684 257053 WARNING nova.virt.libvirt.driver [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Mar  1 05:10:43 np0005634532 nova_compute[257049]: 2026-03-01 10:10:43.685 257053 DEBUG nova.compute.resource_tracker [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4391MB free_disk=59.921722412109375GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Mar  1 05:10:43 np0005634532 nova_compute[257049]: 2026-03-01 10:10:43.686 257053 DEBUG oslo_concurrency.lockutils [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Mar  1 05:10:43 np0005634532 nova_compute[257049]: 2026-03-01 10:10:43.686 257053 DEBUG oslo_concurrency.lockutils [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Mar  1 05:10:43 np0005634532 nova_compute[257049]: 2026-03-01 10:10:43.782 257053 DEBUG nova.compute.resource_tracker [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Instance c0990d1e-9a20-4bc5-8ee5-7f06cc9e139c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Mar  1 05:10:43 np0005634532 nova_compute[257049]: 2026-03-01 10:10:43.782 257053 DEBUG nova.compute.resource_tracker [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Mar  1 05:10:43 np0005634532 nova_compute[257049]: 2026-03-01 10:10:43.782 257053 DEBUG nova.compute.resource_tracker [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Mar  1 05:10:43 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:10:43 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:10:43 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:10:43.808 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:10:43 np0005634532 nova_compute[257049]: 2026-03-01 10:10:43.859 257053 DEBUG oslo_concurrency.processutils [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Mar  1 05:10:44 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[268971]: 01/03/2026 10:10:44 : epoch 69a41094 : compute-0 : ganesha.nfsd-2[main] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Mar  1 05:10:44 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[268971]: 01/03/2026 10:10:44 : epoch 69a41094 : compute-0 : ganesha.nfsd-2[main] main :NFS STARTUP :WARN :No export entries found in configuration file !!!
Mar  1 05:10:44 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[268971]: 01/03/2026 10:10:44 : epoch 69a41094 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:24): Unknown block (RADOS_URLS)
Mar  1 05:10:44 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[268971]: 01/03/2026 10:10:44 : epoch 69a41094 : compute-0 : ganesha.nfsd-2[main] config_errs_to_log :CONFIG :WARN :Config File (/etc/ganesha/ganesha.conf:29): Unknown block (RGW)
Mar  1 05:10:44 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[268971]: 01/03/2026 10:10:44 : epoch 69a41094 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :CAP_SYS_RESOURCE was successfully removed for proper quota management in FSAL
Mar  1 05:10:44 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[268971]: 01/03/2026 10:10:44 : epoch 69a41094 : compute-0 : ganesha.nfsd-2[main] lower_my_caps :NFS STARTUP :EVENT :currently set capabilities are: cap_chown,cap_dac_override,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_net_bind_service,cap_sys_chroot,cap_setfcap=ep
Mar  1 05:10:44 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[268971]: 01/03/2026 10:10:44 : epoch 69a41094 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_pkginit :DBUS :CRIT :dbus_bus_get failed (Failed to connect to socket /run/dbus/system_bus_socket: No such file or directory)
Mar  1 05:10:44 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[268971]: 01/03/2026 10:10:44 : epoch 69a41094 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Mar  1 05:10:44 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[268971]: 01/03/2026 10:10:44 : epoch 69a41094 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Mar  1 05:10:44 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[268971]: 01/03/2026 10:10:44 : epoch 69a41094 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Mar  1 05:10:44 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[268971]: 01/03/2026 10:10:44 : epoch 69a41094 : compute-0 : ganesha.nfsd-2[main] nfs_Init_svc :DISP :CRIT :Cannot acquire credentials for principal nfs
Mar  1 05:10:44 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[268971]: 01/03/2026 10:10:44 : epoch 69a41094 : compute-0 : ganesha.nfsd-2[main] gsh_dbus_register_path :DBUS :CRIT :dbus_connection_register_object_path called with no DBUS connection
Mar  1 05:10:44 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[268971]: 01/03/2026 10:10:44 : epoch 69a41094 : compute-0 : ganesha.nfsd-2[main] nfs_Init_admin_thread :NFS CB :EVENT :Admin thread initialized
Mar  1 05:10:44 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[268971]: 01/03/2026 10:10:44 : epoch 69a41094 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :EVENT :Callback creds directory (/var/run/ganesha) already exists
Mar  1 05:10:44 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[268971]: 01/03/2026 10:10:44 : epoch 69a41094 : compute-0 : ganesha.nfsd-2[main] find_keytab_entry :NFS CB :WARN :Configuration file does not specify default realm while getting default realm name
Mar  1 05:10:44 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[268971]: 01/03/2026 10:10:44 : epoch 69a41094 : compute-0 : ganesha.nfsd-2[main] gssd_refresh_krb5_machine_credential :NFS CB :CRIT :ERROR: gssd_refresh_krb5_machine_credential: no usable keytab entry found in keytab /etc/krb5.keytab for connection with host localhost
Mar  1 05:10:44 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[268971]: 01/03/2026 10:10:44 : epoch 69a41094 : compute-0 : ganesha.nfsd-2[main] nfs_rpc_cb_init_ccache :NFS STARTUP :WARN :gssd_refresh_krb5_machine_credential failed (-1765328160:0)
Mar  1 05:10:44 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[268971]: 01/03/2026 10:10:44 : epoch 69a41094 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :Starting delayed executor.
Mar  1 05:10:44 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[268971]: 01/03/2026 10:10:44 : epoch 69a41094 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :gsh_dbusthread was started successfully
Mar  1 05:10:44 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[268971]: 01/03/2026 10:10:44 : epoch 69a41094 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :CRIT :DBUS not initialized, service thread exiting
Mar  1 05:10:44 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[268971]: 01/03/2026 10:10:44 : epoch 69a41094 : compute-0 : ganesha.nfsd-2[dbus] gsh_dbus_thread :DBUS :EVENT :shutdown
Mar  1 05:10:44 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[268971]: 01/03/2026 10:10:44 : epoch 69a41094 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :admin thread was started successfully
Mar  1 05:10:44 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[268971]: 01/03/2026 10:10:44 : epoch 69a41094 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :reaper thread was started successfully
Mar  1 05:10:44 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[268971]: 01/03/2026 10:10:44 : epoch 69a41094 : compute-0 : ganesha.nfsd-2[main] nfs_Start_threads :THREAD :EVENT :General fridge was started successfully
Mar  1 05:10:44 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[268971]: 01/03/2026 10:10:44 : epoch 69a41094 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Mar  1 05:10:44 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[268971]: 01/03/2026 10:10:44 : epoch 69a41094 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :             NFS SERVER INITIALIZED
Mar  1 05:10:44 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[268971]: 01/03/2026 10:10:44 : epoch 69a41094 : compute-0 : ganesha.nfsd-2[main] nfs_start :NFS STARTUP :EVENT :-------------------------------------------------
Mar  1 05:10:44 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[268971]: 01/03/2026 10:10:44 : epoch 69a41094 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Mar  1 05:10:44 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Mar  1 05:10:44 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2272524691' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Mar  1 05:10:44 np0005634532 nova_compute[257049]: 2026-03-01 10:10:44.284 257053 DEBUG oslo_concurrency.processutils [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.425s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Mar  1 05:10:44 np0005634532 nova_compute[257049]: 2026-03-01 10:10:44.291 257053 DEBUG nova.compute.provider_tree [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Inventory has not changed in ProviderTree for provider: 018d246d-1e01-4168-9128-598c5501111b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Mar  1 05:10:44 np0005634532 nova_compute[257049]: 2026-03-01 10:10:44.309 257053 DEBUG nova.scheduler.client.report [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Inventory has not changed for provider 018d246d-1e01-4168-9128-598c5501111b based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Mar  1 05:10:44 np0005634532 nova_compute[257049]: 2026-03-01 10:10:44.335 257053 DEBUG nova.compute.resource_tracker [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Mar  1 05:10:44 np0005634532 nova_compute[257049]: 2026-03-01 10:10:44.335 257053 DEBUG oslo_concurrency.lockutils [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.649s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Mar  1 05:10:44 np0005634532 podman[269136]: 2026-03-01 10:10:44.386202046 +0000 UTC m=+0.079208414 container health_status 1df4dec302d8b40bce93714986cfc892519e8a31436399020d0034ae17ac64d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a5baae9e21dffd42c66f546b0b62f95c6af9f409d05e01e7483de42d5dd2a382, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '08b9467e1b7e95537191fb2fa6825d176e65d7d6be048e9a0925db064969bbde-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a5baae9e21dffd42c66f546b0b62f95c6af9f409d05e01e7483de42d5dd2a382', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0)
Mar  1 05:10:44 np0005634532 nova_compute[257049]: 2026-03-01 10:10:44.979 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:10:45 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v848: 353 pgs: 353 active+clean; 200 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.1 MiB/s wr, 126 op/s
Mar  1 05:10:45 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[268971]: 01/03/2026 10:10:45 : epoch 69a41094 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37f8000df0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:10:45 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:10:45 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000024s ======
Mar  1 05:10:45 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:10:45.614 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Mar  1 05:10:45 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[268971]: 01/03/2026 10:10:45 : epoch 69a41094 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37f00012c0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:10:45 np0005634532 nova_compute[257049]: 2026-03-01 10:10:45.809 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:10:45 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:10:45 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 05:10:45 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:10:45.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 05:10:45 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[268971]: 01/03/2026 10:10:45 : epoch 69a41094 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37d8000e00 fd 42 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:10:45 np0005634532 nova_compute[257049]: 2026-03-01 10:10:45.977 257053 DEBUG oslo_service.periodic_task [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Mar  1 05:10:45 np0005634532 nova_compute[257049]: 2026-03-01 10:10:45.977 257053 DEBUG nova.compute.manager [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Mar  1 05:10:45 np0005634532 nova_compute[257049]: 2026-03-01 10:10:45.977 257053 DEBUG nova.compute.manager [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Mar  1 05:10:46 np0005634532 nova_compute[257049]: 2026-03-01 10:10:46.456 257053 DEBUG oslo_concurrency.lockutils [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Acquiring lock "refresh_cache-c0990d1e-9a20-4bc5-8ee5-7f06cc9e139c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Mar  1 05:10:46 np0005634532 nova_compute[257049]: 2026-03-01 10:10:46.457 257053 DEBUG oslo_concurrency.lockutils [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Acquired lock "refresh_cache-c0990d1e-9a20-4bc5-8ee5-7f06cc9e139c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Mar  1 05:10:46 np0005634532 nova_compute[257049]: 2026-03-01 10:10:46.457 257053 DEBUG nova.network.neutron [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] [instance: c0990d1e-9a20-4bc5-8ee5-7f06cc9e139c] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Mar  1 05:10:46 np0005634532 nova_compute[257049]: 2026-03-01 10:10:46.457 257053 DEBUG nova.objects.instance [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Lazy-loading 'info_cache' on Instance uuid c0990d1e-9a20-4bc5-8ee5-7f06cc9e139c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Mar  1 05:10:46 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:10:47 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: ::ffff:192.168.122.100 - - [01/Mar/2026:10:10:47] "GET /metrics HTTP/1.1" 200 48473 "" "Prometheus/2.51.0"
Mar  1 05:10:47 np0005634532 ceph-mgr[76134]: [prometheus INFO cherrypy.access.140604152982112] ::ffff:192.168.122.100 - - [01/Mar/2026:10:10:47] "GET /metrics HTTP/1.1" 200 48473 "" "Prometheus/2.51.0"
Mar  1 05:10:47 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v849: 353 pgs: 353 active+clean; 200 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 211 KiB/s rd, 2.1 MiB/s wr, 60 op/s
Mar  1 05:10:47 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[268971]: 01/03/2026 10:10:47 : epoch 69a41094 : compute-0 : ganesha.nfsd-2[reaper] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Mar  1 05:10:47 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[268971]: 01/03/2026 10:10:47 : epoch 69a41094 : compute-0 : ganesha.nfsd-2[reaper] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Mar  1 05:10:47 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:10:47.232Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Mar  1 05:10:47 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[268971]: 01/03/2026 10:10:47 : epoch 69a41094 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37d0000b60 fd 42 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:10:47 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Mar  1 05:10:47 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Mar  1 05:10:47 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Mar  1 05:10:47 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Mar  1 05:10:47 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:10:47 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:10:47 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:10:47.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:10:47 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Mar  1 05:10:47 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Mar  1 05:10:47 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Mar  1 05:10:47 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Mar  1 05:10:47 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-haproxy-nfs-cephfs-compute-0-wdbjdw[99156]: [WARNING] 059/101047 (4) : Server backend/nfs.cephfs.2 is UP, reason: Layer4 check passed, check duration: 0ms. 1 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Mar  1 05:10:47 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[268971]: 01/03/2026 10:10:47 : epoch 69a41094 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37f8002010 fd 42 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:10:47 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:10:47 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 05:10:47 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:10:47.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 05:10:47 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[268971]: 01/03/2026 10:10:47 : epoch 69a41094 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37f00012c0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:10:48 np0005634532 nova_compute[257049]: 2026-03-01 10:10:48.040 257053 DEBUG nova.network.neutron [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] [instance: c0990d1e-9a20-4bc5-8ee5-7f06cc9e139c] Updating instance_info_cache with network_info: [{"id": "b6ca0203-b551-4cae-b162-715da216fc4a", "address": "fa:16:3e:22:7a:92", "network": {"id": "7e0ffeca-1584-4482-b69c-90e1af931e6d", "bridge": "br-int", "label": "tempest-network-smoke--630207128", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.27", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aa1916e2334f470ea8eeda213ef84cc5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb6ca0203-b5", "ovs_interfaceid": "b6ca0203-b551-4cae-b162-715da216fc4a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Mar  1 05:10:48 np0005634532 nova_compute[257049]: 2026-03-01 10:10:48.061 257053 DEBUG oslo_concurrency.lockutils [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Releasing lock "refresh_cache-c0990d1e-9a20-4bc5-8ee5-7f06cc9e139c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Mar  1 05:10:48 np0005634532 nova_compute[257049]: 2026-03-01 10:10:48.061 257053 DEBUG nova.compute.manager [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] [instance: c0990d1e-9a20-4bc5-8ee5-7f06cc9e139c] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Mar  1 05:10:48 np0005634532 nova_compute[257049]: 2026-03-01 10:10:48.062 257053 DEBUG oslo_service.periodic_task [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Mar  1 05:10:48 np0005634532 nova_compute[257049]: 2026-03-01 10:10:48.063 257053 DEBUG nova.compute.manager [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Mar  1 05:10:48 np0005634532 nova_compute[257049]: 2026-03-01 10:10:48.080 257053 DEBUG nova.compute.manager [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Mar  1 05:10:49 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v850: 353 pgs: 353 active+clean; 200 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 212 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Mar  1 05:10:49 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[268971]: 01/03/2026 10:10:49 : epoch 69a41094 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37d8001920 fd 42 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:10:49 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:10:49 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:10:49 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:10:49.621 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:10:49 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[268971]: 01/03/2026 10:10:49 : epoch 69a41094 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37d00016a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:10:49 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:10:49 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:10:49 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:10:49.816 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:10:49 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[268971]: 01/03/2026 10:10:49 : epoch 69a41094 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37f8002010 fd 42 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:10:50 np0005634532 nova_compute[257049]: 2026-03-01 10:10:50.018 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:10:50 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[268971]: 01/03/2026 10:10:50 : epoch 69a41094 : compute-0 : ganesha.nfsd-2[reaper] nfs_lift_grace_locked :STATE :EVENT :NFS Server Now NOT IN GRACE
Mar  1 05:10:50 np0005634532 nova_compute[257049]: 2026-03-01 10:10:50.812 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:10:51 np0005634532 nova_compute[257049]: 2026-03-01 10:10:51.075 257053 DEBUG oslo_service.periodic_task [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Mar  1 05:10:51 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v851: 353 pgs: 353 active+clean; 200 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 208 KiB/s rd, 2.1 MiB/s wr, 61 op/s
Mar  1 05:10:51 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[268971]: 01/03/2026 10:10:51 : epoch 69a41094 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37f00023e0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:10:51 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:10:51 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:10:51 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:10:51.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:10:51 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[268971]: 01/03/2026 10:10:51 : epoch 69a41094 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37f00023e0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:10:51 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:10:51 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000024s ======
Mar  1 05:10:51 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:10:51.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Mar  1 05:10:51 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[268971]: 01/03/2026 10:10:51 : epoch 69a41094 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37d00016a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:10:51 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:10:53 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v852: 353 pgs: 353 active+clean; 200 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 208 KiB/s rd, 2.1 MiB/s wr, 61 op/s
Mar  1 05:10:53 np0005634532 nova_compute[257049]: 2026-03-01 10:10:53.423 257053 DEBUG oslo_concurrency.lockutils [None req-fbcaa3b1-0831-4f7f-9b5b-bcff4120e568 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Acquiring lock "c0990d1e-9a20-4bc5-8ee5-7f06cc9e139c" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Mar  1 05:10:53 np0005634532 nova_compute[257049]: 2026-03-01 10:10:53.424 257053 DEBUG oslo_concurrency.lockutils [None req-fbcaa3b1-0831-4f7f-9b5b-bcff4120e568 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Lock "c0990d1e-9a20-4bc5-8ee5-7f06cc9e139c" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Mar  1 05:10:53 np0005634532 nova_compute[257049]: 2026-03-01 10:10:53.424 257053 DEBUG oslo_concurrency.lockutils [None req-fbcaa3b1-0831-4f7f-9b5b-bcff4120e568 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Acquiring lock "c0990d1e-9a20-4bc5-8ee5-7f06cc9e139c-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Mar  1 05:10:53 np0005634532 nova_compute[257049]: 2026-03-01 10:10:53.425 257053 DEBUG oslo_concurrency.lockutils [None req-fbcaa3b1-0831-4f7f-9b5b-bcff4120e568 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Lock "c0990d1e-9a20-4bc5-8ee5-7f06cc9e139c-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Mar  1 05:10:53 np0005634532 nova_compute[257049]: 2026-03-01 10:10:53.425 257053 DEBUG oslo_concurrency.lockutils [None req-fbcaa3b1-0831-4f7f-9b5b-bcff4120e568 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Lock "c0990d1e-9a20-4bc5-8ee5-7f06cc9e139c-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Mar  1 05:10:53 np0005634532 nova_compute[257049]: 2026-03-01 10:10:53.427 257053 INFO nova.compute.manager [None req-fbcaa3b1-0831-4f7f-9b5b-bcff4120e568 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] [instance: c0990d1e-9a20-4bc5-8ee5-7f06cc9e139c] Terminating instance#033[00m
Mar  1 05:10:53 np0005634532 nova_compute[257049]: 2026-03-01 10:10:53.428 257053 DEBUG nova.compute.manager [None req-fbcaa3b1-0831-4f7f-9b5b-bcff4120e568 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] [instance: c0990d1e-9a20-4bc5-8ee5-7f06cc9e139c] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Mar  1 05:10:53 np0005634532 kernel: tapb6ca0203-b5 (unregistering): left promiscuous mode
Mar  1 05:10:53 np0005634532 NetworkManager[49996]: <info>  [1772359853.4911] device (tapb6ca0203-b5): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Mar  1 05:10:53 np0005634532 ovn_controller[157082]: 2026-03-01T10:10:53Z|00042|binding|INFO|Releasing lport b6ca0203-b551-4cae-b162-715da216fc4a from this chassis (sb_readonly=0)
Mar  1 05:10:53 np0005634532 ovn_controller[157082]: 2026-03-01T10:10:53Z|00043|binding|INFO|Setting lport b6ca0203-b551-4cae-b162-715da216fc4a down in Southbound
Mar  1 05:10:53 np0005634532 ovn_controller[157082]: 2026-03-01T10:10:53Z|00044|binding|INFO|Removing iface tapb6ca0203-b5 ovn-installed in OVS
Mar  1 05:10:53 np0005634532 nova_compute[257049]: 2026-03-01 10:10:53.495 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:10:53 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[268971]: 01/03/2026 10:10:53 : epoch 69a41094 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37f80021b0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:10:53 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-haproxy-nfs-cephfs-compute-0-wdbjdw[99156]: [WARNING] 059/101053 (4) : Server backend/nfs.cephfs.1 is UP, reason: Layer4 check passed, check duration: 0ms. 2 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
Mar  1 05:10:53 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:10:53.504 167541 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:22:7a:92 10.100.0.27'], port_security=['fa:16:3e:22:7a:92 10.100.0.27'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.27/28', 'neutron:device_id': 'c0990d1e-9a20-4bc5-8ee5-7f06cc9e139c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7e0ffeca-1584-4482-b69c-90e1af931e6d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'aa1916e2334f470ea8eeda213ef84cc5', 'neutron:revision_number': '4', 'neutron:security_group_ids': '3a7ebc37-7074-4152-ab9b-7f7a14d43ed4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=05e76e2a-a17a-46ce-8cc1-0042eb93d617, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f611def4670>], logical_port=b6ca0203-b551-4cae-b162-715da216fc4a) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f611def4670>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Mar  1 05:10:53 np0005634532 nova_compute[257049]: 2026-03-01 10:10:53.506 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:10:53 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:10:53.507 167541 INFO neutron.agent.ovn.metadata.agent [-] Port b6ca0203-b551-4cae-b162-715da216fc4a in datapath 7e0ffeca-1584-4482-b69c-90e1af931e6d unbound from our chassis#033[00m
Mar  1 05:10:53 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:10:53.509 167541 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7e0ffeca-1584-4482-b69c-90e1af931e6d, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Mar  1 05:10:53 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:10:53.510 262878 DEBUG oslo.privsep.daemon [-] privsep: reply[7d2afecf-a6f5-47d6-87fc-a8b83e28fc10]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Mar  1 05:10:53 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:10:53.511 167541 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-7e0ffeca-1584-4482-b69c-90e1af931e6d namespace which is not needed anymore#033[00m
Mar  1 05:10:53 np0005634532 systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000007.scope: Deactivated successfully.
Mar  1 05:10:53 np0005634532 systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000007.scope: Consumed 12.459s CPU time.
Mar  1 05:10:53 np0005634532 systemd-machined[221390]: Machine qemu-2-instance-00000007 terminated.
Mar  1 05:10:53 np0005634532 neutron-haproxy-ovnmeta-7e0ffeca-1584-4482-b69c-90e1af931e6d[268854]: [NOTICE]   (268876) : haproxy version is 2.8.14-c23fe91
Mar  1 05:10:53 np0005634532 neutron-haproxy-ovnmeta-7e0ffeca-1584-4482-b69c-90e1af931e6d[268854]: [NOTICE]   (268876) : path to executable is /usr/sbin/haproxy
Mar  1 05:10:53 np0005634532 neutron-haproxy-ovnmeta-7e0ffeca-1584-4482-b69c-90e1af931e6d[268854]: [WARNING]  (268876) : Exiting Master process...
Mar  1 05:10:53 np0005634532 neutron-haproxy-ovnmeta-7e0ffeca-1584-4482-b69c-90e1af931e6d[268854]: [WARNING]  (268876) : Exiting Master process...
Mar  1 05:10:53 np0005634532 neutron-haproxy-ovnmeta-7e0ffeca-1584-4482-b69c-90e1af931e6d[268854]: [ALERT]    (268876) : Current worker (268888) exited with code 143 (Terminated)
Mar  1 05:10:53 np0005634532 neutron-haproxy-ovnmeta-7e0ffeca-1584-4482-b69c-90e1af931e6d[268854]: [WARNING]  (268876) : All workers exited. Exiting... (0)
Mar  1 05:10:53 np0005634532 systemd[1]: libpod-723330e469a8e32f84f1fc3b816b12c1260155f03c3cf2471b9e72a78b0ed275.scope: Deactivated successfully.
Mar  1 05:10:53 np0005634532 podman[269194]: 2026-03-01 10:10:53.626917118 +0000 UTC m=+0.040385077 container died 723330e469a8e32f84f1fc3b816b12c1260155f03c3cf2471b9e72a78b0ed275 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a5baae9e21dffd42c66f546b0b62f95c6af9f409d05e01e7483de42d5dd2a382, name=neutron-haproxy-ovnmeta-7e0ffeca-1584-4482-b69c-90e1af931e6d, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223, io.buildah.version=1.43.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Mar  1 05:10:53 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:10:53 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:10:53 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:10:53.627 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:10:53 np0005634532 NetworkManager[49996]: <info>  [1772359853.6462] manager: (tapb6ca0203-b5): new Tun device (/org/freedesktop/NetworkManager/Devices/35)
Mar  1 05:10:53 np0005634532 nova_compute[257049]: 2026-03-01 10:10:53.652 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:10:53 np0005634532 nova_compute[257049]: 2026-03-01 10:10:53.658 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:10:53 np0005634532 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-723330e469a8e32f84f1fc3b816b12c1260155f03c3cf2471b9e72a78b0ed275-userdata-shm.mount: Deactivated successfully.
Mar  1 05:10:53 np0005634532 nova_compute[257049]: 2026-03-01 10:10:53.666 257053 INFO nova.virt.libvirt.driver [-] [instance: c0990d1e-9a20-4bc5-8ee5-7f06cc9e139c] Instance destroyed successfully.#033[00m
Mar  1 05:10:53 np0005634532 nova_compute[257049]: 2026-03-01 10:10:53.667 257053 DEBUG nova.objects.instance [None req-fbcaa3b1-0831-4f7f-9b5b-bcff4120e568 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Lazy-loading 'resources' on Instance uuid c0990d1e-9a20-4bc5-8ee5-7f06cc9e139c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Mar  1 05:10:53 np0005634532 systemd[1]: var-lib-containers-storage-overlay-467509a7bc617377231adcc69c908aaf6958ddb69ea5b53a0eec845118e1d9fa-merged.mount: Deactivated successfully.
Mar  1 05:10:53 np0005634532 podman[269194]: 2026-03-01 10:10:53.684900788 +0000 UTC m=+0.098368707 container cleanup 723330e469a8e32f84f1fc3b816b12c1260155f03c3cf2471b9e72a78b0ed275 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a5baae9e21dffd42c66f546b0b62f95c6af9f409d05e01e7483de42d5dd2a382, name=neutron-haproxy-ovnmeta-7e0ffeca-1584-4482-b69c-90e1af931e6d, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true)
Mar  1 05:10:53 np0005634532 nova_compute[257049]: 2026-03-01 10:10:53.687 257053 DEBUG nova.virt.libvirt.vif [None req-fbcaa3b1-0831-4f7f-9b5b-bcff4120e568 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-03-01T10:10:19Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-107361223',display_name='tempest-TestNetworkBasicOps-server-107361223',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-107361223',id=7,image_ref='07f64171-cfd1-4482-a545-07063cf7c3f2',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNGnRv9f8Rpo1Z1vKL2qTmTKuztkPVelmJaPG0CWGJkNdT1keURNrHkBoaiVZ0iCwWk6E9iQSe5i/05ZctbClMeti2Rw/85SJiCemfIG6Atsx/t91JwSYKQU6uqmfeGRKQ==',key_name='tempest-TestNetworkBasicOps-2055693174',keypairs=<?>,launch_index=0,launched_at=2026-03-01T10:10:28Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='aa1916e2334f470ea8eeda213ef84cc5',ramdisk_id='',reservation_id='r-aafzqwef',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='07f64171-cfd1-4482-a545-07063cf7c3f2',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1700707940',owner_user_name='tempest-TestNetworkBasicOps-1700707940-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-03-01T10:10:28Z,user_data=None,user_id='054b4e3fa290475c906614f7e45d128f',uuid=c0990d1e-9a20-4bc5-8ee5-7f06cc9e139c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b6ca0203-b551-4cae-b162-715da216fc4a", "address": "fa:16:3e:22:7a:92", "network": {"id": "7e0ffeca-1584-4482-b69c-90e1af931e6d", "bridge": "br-int", "label": "tempest-network-smoke--630207128", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.27", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aa1916e2334f470ea8eeda213ef84cc5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb6ca0203-b5", "ovs_interfaceid": "b6ca0203-b551-4cae-b162-715da216fc4a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Mar  1 05:10:53 np0005634532 nova_compute[257049]: 2026-03-01 10:10:53.688 257053 DEBUG nova.network.os_vif_util [None req-fbcaa3b1-0831-4f7f-9b5b-bcff4120e568 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Converting VIF {"id": "b6ca0203-b551-4cae-b162-715da216fc4a", "address": "fa:16:3e:22:7a:92", "network": {"id": "7e0ffeca-1584-4482-b69c-90e1af931e6d", "bridge": "br-int", "label": "tempest-network-smoke--630207128", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.27", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aa1916e2334f470ea8eeda213ef84cc5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb6ca0203-b5", "ovs_interfaceid": "b6ca0203-b551-4cae-b162-715da216fc4a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Mar  1 05:10:53 np0005634532 nova_compute[257049]: 2026-03-01 10:10:53.688 257053 DEBUG nova.network.os_vif_util [None req-fbcaa3b1-0831-4f7f-9b5b-bcff4120e568 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:22:7a:92,bridge_name='br-int',has_traffic_filtering=True,id=b6ca0203-b551-4cae-b162-715da216fc4a,network=Network(7e0ffeca-1584-4482-b69c-90e1af931e6d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb6ca0203-b5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Mar  1 05:10:53 np0005634532 nova_compute[257049]: 2026-03-01 10:10:53.689 257053 DEBUG os_vif [None req-fbcaa3b1-0831-4f7f-9b5b-bcff4120e568 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:22:7a:92,bridge_name='br-int',has_traffic_filtering=True,id=b6ca0203-b551-4cae-b162-715da216fc4a,network=Network(7e0ffeca-1584-4482-b69c-90e1af931e6d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb6ca0203-b5') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Mar  1 05:10:53 np0005634532 nova_compute[257049]: 2026-03-01 10:10:53.690 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:10:53 np0005634532 systemd[1]: libpod-conmon-723330e469a8e32f84f1fc3b816b12c1260155f03c3cf2471b9e72a78b0ed275.scope: Deactivated successfully.
Mar  1 05:10:53 np0005634532 nova_compute[257049]: 2026-03-01 10:10:53.690 257053 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb6ca0203-b5, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Mar  1 05:10:53 np0005634532 nova_compute[257049]: 2026-03-01 10:10:53.692 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:10:53 np0005634532 nova_compute[257049]: 2026-03-01 10:10:53.694 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:10:53 np0005634532 nova_compute[257049]: 2026-03-01 10:10:53.698 257053 INFO os_vif [None req-fbcaa3b1-0831-4f7f-9b5b-bcff4120e568 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:22:7a:92,bridge_name='br-int',has_traffic_filtering=True,id=b6ca0203-b551-4cae-b162-715da216fc4a,network=Network(7e0ffeca-1584-4482-b69c-90e1af931e6d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb6ca0203-b5')#033[00m
Mar  1 05:10:53 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[268971]: 01/03/2026 10:10:53 : epoch 69a41094 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37d8002240 fd 42 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:10:53 np0005634532 podman[269232]: 2026-03-01 10:10:53.755815747 +0000 UTC m=+0.049107302 container remove 723330e469a8e32f84f1fc3b816b12c1260155f03c3cf2471b9e72a78b0ed275 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a5baae9e21dffd42c66f546b0b62f95c6af9f409d05e01e7483de42d5dd2a382, name=neutron-haproxy-ovnmeta-7e0ffeca-1584-4482-b69c-90e1af931e6d, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Mar  1 05:10:53 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:10:53.760 262878 DEBUG oslo.privsep.daemon [-] privsep: reply[263f41ff-79ed-4d85-af96-e8dc6fd9258b]: (4, ('Sun Mar  1 10:10:53 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-7e0ffeca-1584-4482-b69c-90e1af931e6d (723330e469a8e32f84f1fc3b816b12c1260155f03c3cf2471b9e72a78b0ed275)\n723330e469a8e32f84f1fc3b816b12c1260155f03c3cf2471b9e72a78b0ed275\nSun Mar  1 10:10:53 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-7e0ffeca-1584-4482-b69c-90e1af931e6d (723330e469a8e32f84f1fc3b816b12c1260155f03c3cf2471b9e72a78b0ed275)\n723330e469a8e32f84f1fc3b816b12c1260155f03c3cf2471b9e72a78b0ed275\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Mar  1 05:10:53 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:10:53.762 262878 DEBUG oslo.privsep.daemon [-] privsep: reply[fb9f5b1e-1634-4892-a717-5bd5fdb741e8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Mar  1 05:10:53 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:10:53.763 167541 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7e0ffeca-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Mar  1 05:10:53 np0005634532 kernel: tap7e0ffeca-10: left promiscuous mode
Mar  1 05:10:53 np0005634532 nova_compute[257049]: 2026-03-01 10:10:53.765 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:10:53 np0005634532 nova_compute[257049]: 2026-03-01 10:10:53.770 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:10:53 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:10:53.773 262878 DEBUG oslo.privsep.daemon [-] privsep: reply[f390c2a2-a114-4744-b93e-d0cc669dc2cd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Mar  1 05:10:53 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:10:53.788 262878 DEBUG oslo.privsep.daemon [-] privsep: reply[f7afd3cf-fced-44e8-a4ee-6a8c1d949019]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Mar  1 05:10:53 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:10:53.789 262878 DEBUG oslo.privsep.daemon [-] privsep: reply[187d33f8-c1be-4775-a3ec-37983b079946]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Mar  1 05:10:53 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:10:53.800 262878 DEBUG oslo.privsep.daemon [-] privsep: reply[c78af5d5-ee54-4876-b576-5b9011ff7e3a]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 418423, 'reachable_time': 37823, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 269265, 'error': None, 'target': 'ovnmeta-7e0ffeca-1584-4482-b69c-90e1af931e6d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Mar  1 05:10:53 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:10:53.802 167914 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-7e0ffeca-1584-4482-b69c-90e1af931e6d deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Mar  1 05:10:53 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:10:53.802 167914 DEBUG oslo.privsep.daemon [-] privsep: reply[ef7645d0-00a8-4eae-ac75-5cbe1573d485]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Mar  1 05:10:53 np0005634532 systemd[1]: run-netns-ovnmeta\x2d7e0ffeca\x2d1584\x2d4482\x2db69c\x2d90e1af931e6d.mount: Deactivated successfully.
Mar  1 05:10:53 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:10:53 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000024s ======
Mar  1 05:10:53 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:10:53.819 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Mar  1 05:10:53 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[268971]: 01/03/2026 10:10:53 : epoch 69a41094 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37f00023e0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:10:54 np0005634532 nova_compute[257049]: 2026-03-01 10:10:54.109 257053 INFO nova.virt.libvirt.driver [None req-fbcaa3b1-0831-4f7f-9b5b-bcff4120e568 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] [instance: c0990d1e-9a20-4bc5-8ee5-7f06cc9e139c] Deleting instance files /var/lib/nova/instances/c0990d1e-9a20-4bc5-8ee5-7f06cc9e139c_del#033[00m
Mar  1 05:10:54 np0005634532 nova_compute[257049]: 2026-03-01 10:10:54.110 257053 INFO nova.virt.libvirt.driver [None req-fbcaa3b1-0831-4f7f-9b5b-bcff4120e568 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] [instance: c0990d1e-9a20-4bc5-8ee5-7f06cc9e139c] Deletion of /var/lib/nova/instances/c0990d1e-9a20-4bc5-8ee5-7f06cc9e139c_del complete#033[00m
Mar  1 05:10:54 np0005634532 nova_compute[257049]: 2026-03-01 10:10:54.181 257053 INFO nova.compute.manager [None req-fbcaa3b1-0831-4f7f-9b5b-bcff4120e568 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] [instance: c0990d1e-9a20-4bc5-8ee5-7f06cc9e139c] Took 0.75 seconds to destroy the instance on the hypervisor.#033[00m
Mar  1 05:10:54 np0005634532 nova_compute[257049]: 2026-03-01 10:10:54.181 257053 DEBUG oslo.service.loopingcall [None req-fbcaa3b1-0831-4f7f-9b5b-bcff4120e568 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Mar  1 05:10:54 np0005634532 nova_compute[257049]: 2026-03-01 10:10:54.181 257053 DEBUG nova.compute.manager [-] [instance: c0990d1e-9a20-4bc5-8ee5-7f06cc9e139c] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Mar  1 05:10:54 np0005634532 nova_compute[257049]: 2026-03-01 10:10:54.181 257053 DEBUG nova.network.neutron [-] [instance: c0990d1e-9a20-4bc5-8ee5-7f06cc9e139c] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Mar  1 05:10:54 np0005634532 nova_compute[257049]: 2026-03-01 10:10:54.878 257053 DEBUG nova.compute.manager [req-f1810eed-4e59-4ce3-a3a6-57a5b1246da9 req-d5bc2c22-e6d2-4a76-9a53-45c0f95938a0 4172bfcb2ac44ccc905f3929f41a6ec0 b5baddd982154c5d8ca5431b102a7ecd - - default default] [instance: c0990d1e-9a20-4bc5-8ee5-7f06cc9e139c] Received event network-vif-unplugged-b6ca0203-b551-4cae-b162-715da216fc4a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Mar  1 05:10:54 np0005634532 nova_compute[257049]: 2026-03-01 10:10:54.878 257053 DEBUG oslo_concurrency.lockutils [req-f1810eed-4e59-4ce3-a3a6-57a5b1246da9 req-d5bc2c22-e6d2-4a76-9a53-45c0f95938a0 4172bfcb2ac44ccc905f3929f41a6ec0 b5baddd982154c5d8ca5431b102a7ecd - - default default] Acquiring lock "c0990d1e-9a20-4bc5-8ee5-7f06cc9e139c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Mar  1 05:10:54 np0005634532 nova_compute[257049]: 2026-03-01 10:10:54.878 257053 DEBUG oslo_concurrency.lockutils [req-f1810eed-4e59-4ce3-a3a6-57a5b1246da9 req-d5bc2c22-e6d2-4a76-9a53-45c0f95938a0 4172bfcb2ac44ccc905f3929f41a6ec0 b5baddd982154c5d8ca5431b102a7ecd - - default default] Lock "c0990d1e-9a20-4bc5-8ee5-7f06cc9e139c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Mar  1 05:10:54 np0005634532 nova_compute[257049]: 2026-03-01 10:10:54.878 257053 DEBUG oslo_concurrency.lockutils [req-f1810eed-4e59-4ce3-a3a6-57a5b1246da9 req-d5bc2c22-e6d2-4a76-9a53-45c0f95938a0 4172bfcb2ac44ccc905f3929f41a6ec0 b5baddd982154c5d8ca5431b102a7ecd - - default default] Lock "c0990d1e-9a20-4bc5-8ee5-7f06cc9e139c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Mar  1 05:10:54 np0005634532 nova_compute[257049]: 2026-03-01 10:10:54.879 257053 DEBUG nova.compute.manager [req-f1810eed-4e59-4ce3-a3a6-57a5b1246da9 req-d5bc2c22-e6d2-4a76-9a53-45c0f95938a0 4172bfcb2ac44ccc905f3929f41a6ec0 b5baddd982154c5d8ca5431b102a7ecd - - default default] [instance: c0990d1e-9a20-4bc5-8ee5-7f06cc9e139c] No waiting events found dispatching network-vif-unplugged-b6ca0203-b551-4cae-b162-715da216fc4a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Mar  1 05:10:54 np0005634532 nova_compute[257049]: 2026-03-01 10:10:54.879 257053 DEBUG nova.compute.manager [req-f1810eed-4e59-4ce3-a3a6-57a5b1246da9 req-d5bc2c22-e6d2-4a76-9a53-45c0f95938a0 4172bfcb2ac44ccc905f3929f41a6ec0 b5baddd982154c5d8ca5431b102a7ecd - - default default] [instance: c0990d1e-9a20-4bc5-8ee5-7f06cc9e139c] Received event network-vif-unplugged-b6ca0203-b551-4cae-b162-715da216fc4a for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Mar  1 05:10:54 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:10:54.978 167541 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=8, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ba:77:84', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'd2:e0:96:ea:56:89'}, ipsec=False) old=SB_Global(nb_cfg=7) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Mar  1 05:10:54 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:10:54.978 167541 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Mar  1 05:10:54 np0005634532 nova_compute[257049]: 2026-03-01 10:10:54.979 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:10:55 np0005634532 nova_compute[257049]: 2026-03-01 10:10:55.021 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:10:55 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v853: 353 pgs: 353 active+clean; 200 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 213 KiB/s rd, 2.1 MiB/s wr, 67 op/s
Mar  1 05:10:55 np0005634532 ceph-mon[75825]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Mar  1 05:10:55 np0005634532 ceph-mon[75825]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 1800.0 total, 600.0 interval
Cumulative writes: 5768 writes, 25K keys, 5768 commit groups, 1.0 writes per commit group, ingest: 0.05 GB, 0.03 MB/s
Cumulative WAL: 5768 writes, 5768 syncs, 1.00 writes per sync, written: 0.05 GB, 0.03 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 1574 writes, 6936 keys, 1574 commit groups, 1.0 writes per commit group, ingest: 11.26 MB, 0.02 MB/s
Interval WAL: 1574 writes, 1574 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0    121.2      0.34              0.09        14    0.024       0      0       0.0       0.0
  L6      1/0   12.14 MB   0.0      0.2     0.0      0.2       0.2      0.0       0.0   4.2    199.1    170.3      0.99              0.34        13    0.076     67K   6898       0.0       0.0
 Sum      1/0   12.14 MB   0.0      0.2     0.0      0.2       0.2      0.1       0.0   5.2    148.7    157.9      1.33              0.43        27    0.049     67K   6898       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   8.4    214.5    215.8      0.41              0.17        12    0.034     34K   3080       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Low      0/0    0.00 KB   0.0      0.2     0.0      0.2       0.2      0.0       0.0   0.0    199.1    170.3      0.99              0.34        13    0.076     67K   6898       0.0       0.0
High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0    122.2      0.33              0.09        13    0.026       0      0       0.0       0.0
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     18.2      0.00              0.00         1    0.003       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 1800.0 total, 600.0 interval
Flush(GB): cumulative 0.040, interval 0.010
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.20 GB write, 0.12 MB/s write, 0.19 GB read, 0.11 MB/s read, 1.3 seconds
Interval compaction: 0.09 GB write, 0.15 MB/s write, 0.09 GB read, 0.15 MB/s read, 0.4 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x563d94b81350#2 capacity: 304.00 MB usage: 14.48 MB table_size: 0 occupancy: 18446744073709551615 collections: 4 last_copies: 0 last_secs: 0.000139 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(812,13.94 MB,4.58574%) FilterBlock(28,199.55 KB,0.064102%) IndexBlock(28,353.61 KB,0.113593%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **
Mar  1 05:10:55 np0005634532 nova_compute[257049]: 2026-03-01 10:10:55.154 257053 DEBUG nova.network.neutron [-] [instance: c0990d1e-9a20-4bc5-8ee5-7f06cc9e139c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Mar  1 05:10:55 np0005634532 nova_compute[257049]: 2026-03-01 10:10:55.170 257053 INFO nova.compute.manager [-] [instance: c0990d1e-9a20-4bc5-8ee5-7f06cc9e139c] Took 0.99 seconds to deallocate network for instance.#033[00m
Mar  1 05:10:55 np0005634532 nova_compute[257049]: 2026-03-01 10:10:55.220 257053 DEBUG oslo_concurrency.lockutils [None req-fbcaa3b1-0831-4f7f-9b5b-bcff4120e568 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Mar  1 05:10:55 np0005634532 nova_compute[257049]: 2026-03-01 10:10:55.220 257053 DEBUG oslo_concurrency.lockutils [None req-fbcaa3b1-0831-4f7f-9b5b-bcff4120e568 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Mar  1 05:10:55 np0005634532 nova_compute[257049]: 2026-03-01 10:10:55.225 257053 DEBUG nova.compute.manager [req-a9980470-8c6a-4e02-9216-d21ff927eaf3 req-bf122934-0860-426f-ae9d-ce272f00dad7 4172bfcb2ac44ccc905f3929f41a6ec0 b5baddd982154c5d8ca5431b102a7ecd - - default default] [instance: c0990d1e-9a20-4bc5-8ee5-7f06cc9e139c] Received event network-vif-deleted-b6ca0203-b551-4cae-b162-715da216fc4a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Mar  1 05:10:55 np0005634532 nova_compute[257049]: 2026-03-01 10:10:55.275 257053 DEBUG oslo_concurrency.processutils [None req-fbcaa3b1-0831-4f7f-9b5b-bcff4120e568 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Mar  1 05:10:55 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[268971]: 01/03/2026 10:10:55 : epoch 69a41094 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37d00016a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:10:55 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:10:55 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:10:55 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:10:55.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:10:55 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Mar  1 05:10:55 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/875888843' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Mar  1 05:10:55 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[268971]: 01/03/2026 10:10:55 : epoch 69a41094 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37f8002350 fd 42 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:10:55 np0005634532 nova_compute[257049]: 2026-03-01 10:10:55.738 257053 DEBUG oslo_concurrency.processutils [None req-fbcaa3b1-0831-4f7f-9b5b-bcff4120e568 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Mar  1 05:10:55 np0005634532 nova_compute[257049]: 2026-03-01 10:10:55.743 257053 DEBUG nova.compute.provider_tree [None req-fbcaa3b1-0831-4f7f-9b5b-bcff4120e568 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Inventory has not changed in ProviderTree for provider: 018d246d-1e01-4168-9128-598c5501111b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Mar  1 05:10:55 np0005634532 nova_compute[257049]: 2026-03-01 10:10:55.761 257053 DEBUG nova.scheduler.client.report [None req-fbcaa3b1-0831-4f7f-9b5b-bcff4120e568 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Inventory has not changed for provider 018d246d-1e01-4168-9128-598c5501111b based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Mar  1 05:10:55 np0005634532 nova_compute[257049]: 2026-03-01 10:10:55.786 257053 DEBUG oslo_concurrency.lockutils [None req-fbcaa3b1-0831-4f7f-9b5b-bcff4120e568 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.565s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Mar  1 05:10:55 np0005634532 nova_compute[257049]: 2026-03-01 10:10:55.809 257053 INFO nova.scheduler.client.report [None req-fbcaa3b1-0831-4f7f-9b5b-bcff4120e568 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Deleted allocations for instance c0990d1e-9a20-4bc5-8ee5-7f06cc9e139c#033[00m
Mar  1 05:10:55 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:10:55 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:10:55 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:10:55.821 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:10:55 np0005634532 nova_compute[257049]: 2026-03-01 10:10:55.858 257053 DEBUG oslo_concurrency.lockutils [None req-fbcaa3b1-0831-4f7f-9b5b-bcff4120e568 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Lock "c0990d1e-9a20-4bc5-8ee5-7f06cc9e139c" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.434s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Mar  1 05:10:55 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[268971]: 01/03/2026 10:10:55 : epoch 69a41094 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37d8002240 fd 42 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:10:56 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:10:56 np0005634532 nova_compute[257049]: 2026-03-01 10:10:56.960 257053 DEBUG nova.compute.manager [req-5c930125-22db-4d0e-aa17-2a6a7f1b07cf req-8ab9b7fa-c90e-4eca-afdc-e282833869fe 4172bfcb2ac44ccc905f3929f41a6ec0 b5baddd982154c5d8ca5431b102a7ecd - - default default] [instance: c0990d1e-9a20-4bc5-8ee5-7f06cc9e139c] Received event network-vif-plugged-b6ca0203-b551-4cae-b162-715da216fc4a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Mar  1 05:10:56 np0005634532 nova_compute[257049]: 2026-03-01 10:10:56.960 257053 DEBUG oslo_concurrency.lockutils [req-5c930125-22db-4d0e-aa17-2a6a7f1b07cf req-8ab9b7fa-c90e-4eca-afdc-e282833869fe 4172bfcb2ac44ccc905f3929f41a6ec0 b5baddd982154c5d8ca5431b102a7ecd - - default default] Acquiring lock "c0990d1e-9a20-4bc5-8ee5-7f06cc9e139c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Mar  1 05:10:56 np0005634532 nova_compute[257049]: 2026-03-01 10:10:56.961 257053 DEBUG oslo_concurrency.lockutils [req-5c930125-22db-4d0e-aa17-2a6a7f1b07cf req-8ab9b7fa-c90e-4eca-afdc-e282833869fe 4172bfcb2ac44ccc905f3929f41a6ec0 b5baddd982154c5d8ca5431b102a7ecd - - default default] Lock "c0990d1e-9a20-4bc5-8ee5-7f06cc9e139c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Mar  1 05:10:56 np0005634532 nova_compute[257049]: 2026-03-01 10:10:56.961 257053 DEBUG oslo_concurrency.lockutils [req-5c930125-22db-4d0e-aa17-2a6a7f1b07cf req-8ab9b7fa-c90e-4eca-afdc-e282833869fe 4172bfcb2ac44ccc905f3929f41a6ec0 b5baddd982154c5d8ca5431b102a7ecd - - default default] Lock "c0990d1e-9a20-4bc5-8ee5-7f06cc9e139c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Mar  1 05:10:56 np0005634532 nova_compute[257049]: 2026-03-01 10:10:56.961 257053 DEBUG nova.compute.manager [req-5c930125-22db-4d0e-aa17-2a6a7f1b07cf req-8ab9b7fa-c90e-4eca-afdc-e282833869fe 4172bfcb2ac44ccc905f3929f41a6ec0 b5baddd982154c5d8ca5431b102a7ecd - - default default] [instance: c0990d1e-9a20-4bc5-8ee5-7f06cc9e139c] No waiting events found dispatching network-vif-plugged-b6ca0203-b551-4cae-b162-715da216fc4a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Mar  1 05:10:56 np0005634532 nova_compute[257049]: 2026-03-01 10:10:56.962 257053 WARNING nova.compute.manager [req-5c930125-22db-4d0e-aa17-2a6a7f1b07cf req-8ab9b7fa-c90e-4eca-afdc-e282833869fe 4172bfcb2ac44ccc905f3929f41a6ec0 b5baddd982154c5d8ca5431b102a7ecd - - default default] [instance: c0990d1e-9a20-4bc5-8ee5-7f06cc9e139c] Received unexpected event network-vif-plugged-b6ca0203-b551-4cae-b162-715da216fc4a for instance with vm_state deleted and task_state None.#033[00m
Mar  1 05:10:56 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:10:56.981 167541 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=90b7dc66-b984-4d8b-9541-ddde79c5f544, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '8'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Mar  1 05:10:57 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: ::ffff:192.168.122.100 - - [01/Mar/2026:10:10:57] "GET /metrics HTTP/1.1" 200 48478 "" "Prometheus/2.51.0"
Mar  1 05:10:57 np0005634532 ceph-mgr[76134]: [prometheus INFO cherrypy.access.140604152982112] ::ffff:192.168.122.100 - - [01/Mar/2026:10:10:57] "GET /metrics HTTP/1.1" 200 48478 "" "Prometheus/2.51.0"
Mar  1 05:10:57 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v854: 353 pgs: 353 active+clean; 200 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 6.2 KiB/s rd, 16 KiB/s wr, 8 op/s
Mar  1 05:10:57 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:10:57.233Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Mar  1 05:10:57 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[268971]: 01/03/2026 10:10:57 : epoch 69a41094 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37f00023e0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:10:57 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:10:57 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000024s ======
Mar  1 05:10:57 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:10:57.631 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Mar  1 05:10:57 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[268971]: 01/03/2026 10:10:57 : epoch 69a41094 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37d0002b10 fd 42 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:10:57 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:10:57 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 05:10:57 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:10:57.824 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 05:10:57 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[268971]: 01/03/2026 10:10:57 : epoch 69a41094 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37f8009990 fd 42 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:10:58 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Mar  1 05:10:58 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4166792547' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Mar  1 05:10:58 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Mar  1 05:10:58 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4166792547' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Mar  1 05:10:58 np0005634532 nova_compute[257049]: 2026-03-01 10:10:58.679 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:10:58 np0005634532 nova_compute[257049]: 2026-03-01 10:10:58.693 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:10:59 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v855: 353 pgs: 353 active+clean; 121 MiB data, 304 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 27 KiB/s wr, 31 op/s
Mar  1 05:10:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[268971]: 01/03/2026 10:10:59 : epoch 69a41094 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37d8002240 fd 42 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:10:59 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:10:59 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:10:59 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:10:59.635 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:10:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[268971]: 01/03/2026 10:10:59 : epoch 69a41094 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37f00023e0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:10:59 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:10:59 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000024s ======
Mar  1 05:10:59 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:10:59.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Mar  1 05:10:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[268971]: 01/03/2026 10:10:59 : epoch 69a41094 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37d0002b10 fd 42 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:11:00 np0005634532 nova_compute[257049]: 2026-03-01 10:11:00.023 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:11:01 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v856: 353 pgs: 353 active+clean; 121 MiB data, 304 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 13 KiB/s wr, 29 op/s
Mar  1 05:11:01 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[268971]: 01/03/2026 10:11:01 : epoch 69a41094 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37f8009990 fd 42 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:11:01 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:11:01 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:11:01 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:11:01.637 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:11:01 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[268971]: 01/03/2026 10:11:01 : epoch 69a41094 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37d8003590 fd 42 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:11:01 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:11:01 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 05:11:01 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:11:01.828 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 05:11:01 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[268971]: 01/03/2026 10:11:01 : epoch 69a41094 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37f00023e0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:11:01 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:11:02 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Mar  1 05:11:02 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Mar  1 05:11:03 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v857: 353 pgs: 353 active+clean; 121 MiB data, 304 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 13 KiB/s wr, 29 op/s
Mar  1 05:11:03 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[268971]: 01/03/2026 10:11:03 : epoch 69a41094 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37d0002b10 fd 42 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:11:03 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:11:03 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:11:03 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:11:03.639 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:11:03 np0005634532 nova_compute[257049]: 2026-03-01 10:11:03.696 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:11:03 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[268971]: 01/03/2026 10:11:03 : epoch 69a41094 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37f800a6a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:11:03 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:11:03 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 05:11:03 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:11:03.830 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 05:11:03 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[268971]: 01/03/2026 10:11:03 : epoch 69a41094 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37d8003590 fd 42 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:11:05 np0005634532 nova_compute[257049]: 2026-03-01 10:11:05.025 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:11:05 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v858: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 38 KiB/s rd, 14 KiB/s wr, 57 op/s
Mar  1 05:11:05 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[268971]: 01/03/2026 10:11:05 : epoch 69a41094 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37f00038d0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:11:05 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:11:05 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 05:11:05 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:11:05.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 05:11:05 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[268971]: 01/03/2026 10:11:05 : epoch 69a41094 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37d0003c10 fd 42 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:11:05 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:11:05 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 05:11:05 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:11:05.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 05:11:05 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[268971]: 01/03/2026 10:11:05 : epoch 69a41094 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37f800a6a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:11:06 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:11:07 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: ::ffff:192.168.122.100 - - [01/Mar/2026:10:11:07] "GET /metrics HTTP/1.1" 200 48473 "" "Prometheus/2.51.0"
Mar  1 05:11:07 np0005634532 ceph-mgr[76134]: [prometheus INFO cherrypy.access.140604152982112] ::ffff:192.168.122.100 - - [01/Mar/2026:10:11:07] "GET /metrics HTTP/1.1" 200 48473 "" "Prometheus/2.51.0"
Mar  1 05:11:07 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v859: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 12 KiB/s wr, 50 op/s
Mar  1 05:11:07 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:11:07.235Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Mar  1 05:11:07 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[268971]: 01/03/2026 10:11:07 : epoch 69a41094 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37d8003590 fd 42 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:11:07 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:11:07 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:11:07 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:11:07.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:11:07 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[268971]: 01/03/2026 10:11:07 : epoch 69a41094 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37f00038d0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:11:07 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:11:07 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:11:07 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:11:07.834 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:11:07 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[268971]: 01/03/2026 10:11:07 : epoch 69a41094 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37d0003c10 fd 42 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:11:08 np0005634532 nova_compute[257049]: 2026-03-01 10:11:08.664 257053 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772359853.6636305, c0990d1e-9a20-4bc5-8ee5-7f06cc9e139c => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Mar  1 05:11:08 np0005634532 nova_compute[257049]: 2026-03-01 10:11:08.665 257053 INFO nova.compute.manager [-] [instance: c0990d1e-9a20-4bc5-8ee5-7f06cc9e139c] VM Stopped (Lifecycle Event)#033[00m
Mar  1 05:11:08 np0005634532 nova_compute[257049]: 2026-03-01 10:11:08.695 257053 DEBUG nova.compute.manager [None req-aa3accbd-ecf0-425a-9f23-04c01f5311c4 - - - - - -] [instance: c0990d1e-9a20-4bc5-8ee5-7f06cc9e139c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Mar  1 05:11:08 np0005634532 nova_compute[257049]: 2026-03-01 10:11:08.698 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:11:09 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v860: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 12 KiB/s wr, 50 op/s
Mar  1 05:11:09 np0005634532 podman[269330]: 2026-03-01 10:11:09.36442907 +0000 UTC m=+0.058217197 container health_status eba5e7c5bf1cb0694498c3b5d0f9b901dec36f2482ec40aef98fed1e15d9f18d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:bf1f44b6bfa655654f556ae3adca2582892e72550d916a5b0a8dbacb0e210bbe, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '08b9467e1b7e95537191fb2fa6825d176e65d7d6be048e9a0925db064969bbde-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:bf1f44b6bfa655654f556ae3adca2582892e72550d916a5b0a8dbacb0e210bbe', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Mar  1 05:11:09 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[268971]: 01/03/2026 10:11:09 : epoch 69a41094 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37f800a6a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:11:09 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:11:09 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:11:09 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:11:09.648 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:11:09 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[268971]: 01/03/2026 10:11:09 : epoch 69a41094 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37d8003590 fd 42 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:11:09 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:11:09 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:11:09 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:11:09.837 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:11:09 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[268971]: 01/03/2026 10:11:09 : epoch 69a41094 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37f00038d0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:11:10 np0005634532 nova_compute[257049]: 2026-03-01 10:11:10.027 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:11:11 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v861: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Mar  1 05:11:11 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[268971]: 01/03/2026 10:11:11 : epoch 69a41094 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37d0003c10 fd 42 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:11:11 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:11:11 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:11:11 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:11:11.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:11:11 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[268971]: 01/03/2026 10:11:11 : epoch 69a41094 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37f800a6a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:11:11 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:11:11 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 05:11:11 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:11:11.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 05:11:11 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[268971]: 01/03/2026 10:11:11 : epoch 69a41094 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37d8003590 fd 42 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:11:11 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:11:13 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v862: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Mar  1 05:11:13 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[268971]: 01/03/2026 10:11:13 : epoch 69a41094 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37f00038d0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:11:13 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:11:13 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 05:11:13 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:11:13.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 05:11:13 np0005634532 nova_compute[257049]: 2026-03-01 10:11:13.701 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:11:13 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[268971]: 01/03/2026 10:11:13 : epoch 69a41094 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37d0003c10 fd 42 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:11:13 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:11:13 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:11:13 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:11:13.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:11:13 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[268971]: 01/03/2026 10:11:13 : epoch 69a41094 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37f800a6a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:11:15 np0005634532 nova_compute[257049]: 2026-03-01 10:11:15.030 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:11:15 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v863: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Mar  1 05:11:15 np0005634532 nova_compute[257049]: 2026-03-01 10:11:15.166 257053 DEBUG oslo_concurrency.lockutils [None req-d63543cf-60ec-419e-a872-508d42e36320 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Acquiring lock "f4629c49-d4bd-45fc-8ff5-bf640dc7426b" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Mar  1 05:11:15 np0005634532 nova_compute[257049]: 2026-03-01 10:11:15.166 257053 DEBUG oslo_concurrency.lockutils [None req-d63543cf-60ec-419e-a872-508d42e36320 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Lock "f4629c49-d4bd-45fc-8ff5-bf640dc7426b" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Mar  1 05:11:15 np0005634532 nova_compute[257049]: 2026-03-01 10:11:15.183 257053 DEBUG nova.compute.manager [None req-d63543cf-60ec-419e-a872-508d42e36320 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] [instance: f4629c49-d4bd-45fc-8ff5-bf640dc7426b] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Mar  1 05:11:15 np0005634532 nova_compute[257049]: 2026-03-01 10:11:15.249 257053 DEBUG oslo_concurrency.lockutils [None req-d63543cf-60ec-419e-a872-508d42e36320 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Mar  1 05:11:15 np0005634532 nova_compute[257049]: 2026-03-01 10:11:15.250 257053 DEBUG oslo_concurrency.lockutils [None req-d63543cf-60ec-419e-a872-508d42e36320 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Mar  1 05:11:15 np0005634532 nova_compute[257049]: 2026-03-01 10:11:15.255 257053 DEBUG nova.virt.hardware [None req-d63543cf-60ec-419e-a872-508d42e36320 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Mar  1 05:11:15 np0005634532 nova_compute[257049]: 2026-03-01 10:11:15.256 257053 INFO nova.compute.claims [None req-d63543cf-60ec-419e-a872-508d42e36320 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] [instance: f4629c49-d4bd-45fc-8ff5-bf640dc7426b] Claim successful on node compute-0.ctlplane.example.com#033[00m
Mar  1 05:11:15 np0005634532 nova_compute[257049]: 2026-03-01 10:11:15.338 257053 DEBUG oslo_concurrency.processutils [None req-d63543cf-60ec-419e-a872-508d42e36320 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Mar  1 05:11:15 np0005634532 podman[269362]: 2026-03-01 10:11:15.372967206 +0000 UTC m=+0.063000635 container health_status 1df4dec302d8b40bce93714986cfc892519e8a31436399020d0034ae17ac64d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a5baae9e21dffd42c66f546b0b62f95c6af9f409d05e01e7483de42d5dd2a382, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '08b9467e1b7e95537191fb2fa6825d176e65d7d6be048e9a0925db064969bbde-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a5baae9e21dffd42c66f546b0b62f95c6af9f409d05e01e7483de42d5dd2a382', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Mar  1 05:11:15 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[268971]: 01/03/2026 10:11:15 : epoch 69a41094 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37d8003590 fd 42 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:11:15 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:11:15 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000024s ======
Mar  1 05:11:15 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:11:15.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Mar  1 05:11:15 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[268971]: 01/03/2026 10:11:15 : epoch 69a41094 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37f00038d0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:11:15 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Mar  1 05:11:15 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1859268929' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Mar  1 05:11:15 np0005634532 nova_compute[257049]: 2026-03-01 10:11:15.773 257053 DEBUG oslo_concurrency.processutils [None req-d63543cf-60ec-419e-a872-508d42e36320 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.435s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Mar  1 05:11:15 np0005634532 nova_compute[257049]: 2026-03-01 10:11:15.778 257053 DEBUG nova.compute.provider_tree [None req-d63543cf-60ec-419e-a872-508d42e36320 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Inventory has not changed in ProviderTree for provider: 018d246d-1e01-4168-9128-598c5501111b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Mar  1 05:11:15 np0005634532 nova_compute[257049]: 2026-03-01 10:11:15.796 257053 DEBUG nova.scheduler.client.report [None req-d63543cf-60ec-419e-a872-508d42e36320 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Inventory has not changed for provider 018d246d-1e01-4168-9128-598c5501111b based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
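From this inventory, the capacity placement actually schedules against is (total - reserved) * allocation_ratio per resource class. A quick check of the numbers reported above:

    # Effective schedulable capacity derived from the logged inventory:
    #   capacity = (total - reserved) * allocation_ratio
    inventory = {
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        cap = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, cap)  # MEMORY_MB 7167.0, VCPU 32.0, DISK_GB 52.2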
Mar  1 05:11:15 np0005634532 nova_compute[257049]: 2026-03-01 10:11:15.815 257053 DEBUG oslo_concurrency.lockutils [None req-d63543cf-60ec-419e-a872-508d42e36320 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.566s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Mar  1 05:11:15 np0005634532 nova_compute[257049]: 2026-03-01 10:11:15.816 257053 DEBUG nova.compute.manager [None req-d63543cf-60ec-419e-a872-508d42e36320 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] [instance: f4629c49-d4bd-45fc-8ff5-bf640dc7426b] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Mar  1 05:11:15 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:11:15 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 05:11:15 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:11:15.840 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 05:11:15 np0005634532 nova_compute[257049]: 2026-03-01 10:11:15.859 257053 DEBUG nova.compute.manager [None req-d63543cf-60ec-419e-a872-508d42e36320 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] [instance: f4629c49-d4bd-45fc-8ff5-bf640dc7426b] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Mar  1 05:11:15 np0005634532 nova_compute[257049]: 2026-03-01 10:11:15.859 257053 DEBUG nova.network.neutron [None req-d63543cf-60ec-419e-a872-508d42e36320 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] [instance: f4629c49-d4bd-45fc-8ff5-bf640dc7426b] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Mar  1 05:11:15 np0005634532 nova_compute[257049]: 2026-03-01 10:11:15.876 257053 INFO nova.virt.libvirt.driver [None req-d63543cf-60ec-419e-a872-508d42e36320 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] [instance: f4629c49-d4bd-45fc-8ff5-bf640dc7426b] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Mar  1 05:11:15 np0005634532 nova_compute[257049]: 2026-03-01 10:11:15.893 257053 DEBUG nova.compute.manager [None req-d63543cf-60ec-419e-a872-508d42e36320 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] [instance: f4629c49-d4bd-45fc-8ff5-bf640dc7426b] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Mar  1 05:11:15 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[268971]: 01/03/2026 10:11:15 : epoch 69a41094 : compute-0 : ganesha.nfsd-2[svc_10] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37f00038d0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:11:15 np0005634532 nova_compute[257049]: 2026-03-01 10:11:15.981 257053 DEBUG nova.compute.manager [None req-d63543cf-60ec-419e-a872-508d42e36320 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] [instance: f4629c49-d4bd-45fc-8ff5-bf640dc7426b] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Mar  1 05:11:15 np0005634532 nova_compute[257049]: 2026-03-01 10:11:15.982 257053 DEBUG nova.virt.libvirt.driver [None req-d63543cf-60ec-419e-a872-508d42e36320 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] [instance: f4629c49-d4bd-45fc-8ff5-bf640dc7426b] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Mar  1 05:11:15 np0005634532 nova_compute[257049]: 2026-03-01 10:11:15.983 257053 INFO nova.virt.libvirt.driver [None req-d63543cf-60ec-419e-a872-508d42e36320 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] [instance: f4629c49-d4bd-45fc-8ff5-bf640dc7426b] Creating image(s)#033[00m
Mar  1 05:11:16 np0005634532 nova_compute[257049]: 2026-03-01 10:11:16.007 257053 DEBUG nova.storage.rbd_utils [None req-d63543cf-60ec-419e-a872-508d42e36320 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] rbd image f4629c49-d4bd-45fc-8ff5-bf640dc7426b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Mar  1 05:11:16 np0005634532 nova_compute[257049]: 2026-03-01 10:11:16.032 257053 DEBUG nova.storage.rbd_utils [None req-d63543cf-60ec-419e-a872-508d42e36320 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] rbd image f4629c49-d4bd-45fc-8ff5-bf640dc7426b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Mar  1 05:11:16 np0005634532 nova_compute[257049]: 2026-03-01 10:11:16.058 257053 DEBUG nova.storage.rbd_utils [None req-d63543cf-60ec-419e-a872-508d42e36320 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] rbd image f4629c49-d4bd-45fc-8ff5-bf640dc7426b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Mar  1 05:11:16 np0005634532 nova_compute[257049]: 2026-03-01 10:11:16.061 257053 DEBUG oslo_concurrency.processutils [None req-d63543cf-60ec-419e-a872-508d42e36320 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/d41046c43044bf8997bc5f9ade85627ba841861d --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Mar  1 05:11:16 np0005634532 nova_compute[257049]: 2026-03-01 10:11:16.125 257053 DEBUG oslo_concurrency.processutils [None req-d63543cf-60ec-419e-a872-508d42e36320 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/d41046c43044bf8997bc5f9ade85627ba841861d --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Mar  1 05:11:16 np0005634532 nova_compute[257049]: 2026-03-01 10:11:16.126 257053 DEBUG oslo_concurrency.lockutils [None req-d63543cf-60ec-419e-a872-508d42e36320 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Acquiring lock "d41046c43044bf8997bc5f9ade85627ba841861d" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Mar  1 05:11:16 np0005634532 nova_compute[257049]: 2026-03-01 10:11:16.126 257053 DEBUG oslo_concurrency.lockutils [None req-d63543cf-60ec-419e-a872-508d42e36320 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Lock "d41046c43044bf8997bc5f9ade85627ba841861d" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Mar  1 05:11:16 np0005634532 nova_compute[257049]: 2026-03-01 10:11:16.126 257053 DEBUG oslo_concurrency.lockutils [None req-d63543cf-60ec-419e-a872-508d42e36320 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Lock "d41046c43044bf8997bc5f9ade85627ba841861d" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Mar  1 05:11:16 np0005634532 nova_compute[257049]: 2026-03-01 10:11:16.148 257053 DEBUG nova.storage.rbd_utils [None req-d63543cf-60ec-419e-a872-508d42e36320 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] rbd image f4629c49-d4bd-45fc-8ff5-bf640dc7426b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Mar  1 05:11:16 np0005634532 nova_compute[257049]: 2026-03-01 10:11:16.152 257053 DEBUG oslo_concurrency.processutils [None req-d63543cf-60ec-419e-a872-508d42e36320 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/d41046c43044bf8997bc5f9ade85627ba841861d f4629c49-d4bd-45fc-8ff5-bf640dc7426b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Mar  1 05:11:16 np0005634532 nova_compute[257049]: 2026-03-01 10:11:16.454 257053 DEBUG oslo_concurrency.processutils [None req-d63543cf-60ec-419e-a872-508d42e36320 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/d41046c43044bf8997bc5f9ade85627ba841861d f4629c49-d4bd-45fc-8ff5-bf640dc7426b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.302s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Mar  1 05:11:16 np0005634532 nova_compute[257049]: 2026-03-01 10:11:16.528 257053 DEBUG nova.storage.rbd_utils [None req-d63543cf-60ec-419e-a872-508d42e36320 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] resizing rbd image f4629c49-d4bd-45fc-8ff5-bf640dc7426b_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
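The lines above show the whole RBD-backed image path for this boot: probe the cached base image with qemu-img, import it into the `vms` pool as a format-2 image, then grow it to the flavor's 1 GiB root disk. An illustrative shell-out sketch of those three steps, with the paths and image name copied from the log (nova drives them through oslo.concurrency and nova.storage.rbd_utils rather than directly like this):

    import subprocess

    base = "/var/lib/nova/instances/_base/d41046c43044bf8997bc5f9ade85627ba841861d"
    disk = "f4629c49-d4bd-45fc-8ff5-bf640dc7426b_disk"

    # 1. Inspect the cached base image (same flags as the logged command).
    subprocess.check_call(
        ["qemu-img", "info", base, "--force-share", "--output=json"])
    # 2. Import it into the 'vms' pool as a format-2 RBD image.
    subprocess.check_call(
        ["rbd", "import", "--pool", "vms", base, disk, "--image-format=2",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"])
    # 3. Grow it to the flavor root disk: 1G, matching the logged
    #    resize to 1073741824 bytes.
    subprocess.check_call(
        ["rbd", "resize", "--pool", "vms", "--image", disk, "--size", "1G",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"])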
Mar  1 05:11:16 np0005634532 nova_compute[257049]: 2026-03-01 10:11:16.637 257053 DEBUG nova.objects.instance [None req-d63543cf-60ec-419e-a872-508d42e36320 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Lazy-loading 'migration_context' on Instance uuid f4629c49-d4bd-45fc-8ff5-bf640dc7426b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Mar  1 05:11:16 np0005634532 nova_compute[257049]: 2026-03-01 10:11:16.651 257053 DEBUG nova.virt.libvirt.driver [None req-d63543cf-60ec-419e-a872-508d42e36320 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] [instance: f4629c49-d4bd-45fc-8ff5-bf640dc7426b] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Mar  1 05:11:16 np0005634532 nova_compute[257049]: 2026-03-01 10:11:16.651 257053 DEBUG nova.virt.libvirt.driver [None req-d63543cf-60ec-419e-a872-508d42e36320 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] [instance: f4629c49-d4bd-45fc-8ff5-bf640dc7426b] Ensure instance console log exists: /var/lib/nova/instances/f4629c49-d4bd-45fc-8ff5-bf640dc7426b/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Mar  1 05:11:16 np0005634532 nova_compute[257049]: 2026-03-01 10:11:16.652 257053 DEBUG oslo_concurrency.lockutils [None req-d63543cf-60ec-419e-a872-508d42e36320 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Mar  1 05:11:16 np0005634532 nova_compute[257049]: 2026-03-01 10:11:16.652 257053 DEBUG oslo_concurrency.lockutils [None req-d63543cf-60ec-419e-a872-508d42e36320 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Mar  1 05:11:16 np0005634532 nova_compute[257049]: 2026-03-01 10:11:16.652 257053 DEBUG oslo_concurrency.lockutils [None req-d63543cf-60ec-419e-a872-508d42e36320 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Mar  1 05:11:16 np0005634532 nova_compute[257049]: 2026-03-01 10:11:16.669 257053 DEBUG nova.policy [None req-d63543cf-60ec-419e-a872-508d42e36320 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '054b4e3fa290475c906614f7e45d128f', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'aa1916e2334f470ea8eeda213ef84cc5', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Mar  1 05:11:16 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:11:17 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: ::ffff:192.168.122.100 - - [01/Mar/2026:10:11:17] "GET /metrics HTTP/1.1" 200 48473 "" "Prometheus/2.51.0"
Mar  1 05:11:17 np0005634532 ceph-mgr[76134]: [prometheus INFO cherrypy.access.140604152982112] ::ffff:192.168.122.100 - - [01/Mar/2026:10:11:17] "GET /metrics HTTP/1.1" 200 48473 "" "Prometheus/2.51.0"
Mar  1 05:11:17 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v864: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Mar  1 05:11:17 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:11:17.236Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Mar  1 05:11:17 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[268971]: 01/03/2026 10:11:17 : epoch 69a41094 : compute-0 : ganesha.nfsd-2[svc_4] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37f800a6a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:11:17 np0005634532 ceph-mgr[76134]: [balancer INFO root] Optimize plan auto_2026-03-01_10:11:17
Mar  1 05:11:17 np0005634532 ceph-mgr[76134]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Mar  1 05:11:17 np0005634532 ceph-mgr[76134]: [balancer INFO root] do_upmap
Mar  1 05:11:17 np0005634532 ceph-mgr[76134]: [balancer INFO root] pools ['.nfs', '.rgw.root', 'images', 'backups', 'cephfs.cephfs.meta', 'volumes', 'default.rgw.control', 'cephfs.cephfs.data', 'default.rgw.meta', '.mgr', 'default.rgw.log', 'vms']
Mar  1 05:11:17 np0005634532 ceph-mgr[76134]: [balancer INFO root] prepared 0/10 upmap changes
Mar  1 05:11:17 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Mar  1 05:11:17 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Mar  1 05:11:17 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Mar  1 05:11:17 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Mar  1 05:11:17 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Mar  1 05:11:17 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Mar  1 05:11:17 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:11:17 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 05:11:17 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:11:17.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 05:11:17 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Mar  1 05:11:17 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Mar  1 05:11:17 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[268971]: 01/03/2026 10:11:17 : epoch 69a41094 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37d8003590 fd 42 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:11:17 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:11:17 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:11:17 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:11:17.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
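These anonymous `HEAD /` requests arriving every couple of seconds from 192.168.122.100/.102 look like load-balancer health probes. The beast access-log layout (client, user, timestamp, request, status, bytes, latency) is regular enough to parse; a small sketch against the line above:

    import re

    line = ('beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous '
            '[01/Mar/2026:10:11:17.843 +0000] "HEAD / HTTP/1.0" 200 0 - - - '
            'latency=0.000000000s')
    m = re.search(
        r'(?P<addr>\d+\.\d+\.\d+\.\d+) - (?P<user>\S+) '
        r'\[(?P<ts>[^\]]+)\] "(?P<req>[^"]+)" (?P<status>\d+) (?P<bytes>\d+)'
        r'.*latency=(?P<latency>[\d.]+)s', line)
    print(m.group("status"), m.group("latency"))  # 200 0.000000000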
Mar  1 05:11:17 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[268971]: 01/03/2026 10:11:17 : epoch 69a41094 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37c4000b60 fd 42 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:11:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] _maybe_adjust
Mar  1 05:11:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:11:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Mar  1 05:11:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:11:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Mar  1 05:11:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:11:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Mar  1 05:11:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:11:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Mar  1 05:11:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:11:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Mar  1 05:11:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:11:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Mar  1 05:11:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:11:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Mar  1 05:11:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:11:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Mar  1 05:11:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:11:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Mar  1 05:11:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:11:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Mar  1 05:11:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:11:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Mar  1 05:11:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:11:17 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
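The pg targets printed in this autoscaler pass are capacity_ratio x bias x a cluster constant, which works out to 300 here; that is consistent with mon_target_pg_per_osd=100 across three OSDs backing the 60 GiB cluster, though the OSD count is an inference, not logged in this window. Checking the three non-zero pools above:

    # pg_target ~= capacity_ratio * bias * K, with K = 300 in this cluster
    # (assumed: 3 OSDs * mon_target_pg_per_osd=100).
    K = 300
    pools = [
        (".mgr",               7.185749983720779e-06, 1.0),
        ("images",             0.000665858301588852,  1.0),
        ("cephfs.cephfs.meta", 5.087256625643029e-07, 4.0),
    ]
    for name, ratio, bias in pools:
        # Reproduces the "pg target" values logged for each pool above.
        print(name, ratio * bias * K)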
Mar  1 05:11:18 np0005634532 nova_compute[257049]: 2026-03-01 10:11:18.251 257053 DEBUG nova.network.neutron [None req-d63543cf-60ec-419e-a872-508d42e36320 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] [instance: f4629c49-d4bd-45fc-8ff5-bf640dc7426b] Successfully updated port: 50a9155a-611b-4578-bf54-f7b987efbf4d _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Mar  1 05:11:18 np0005634532 nova_compute[257049]: 2026-03-01 10:11:18.266 257053 DEBUG oslo_concurrency.lockutils [None req-d63543cf-60ec-419e-a872-508d42e36320 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Acquiring lock "refresh_cache-f4629c49-d4bd-45fc-8ff5-bf640dc7426b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Mar  1 05:11:18 np0005634532 nova_compute[257049]: 2026-03-01 10:11:18.266 257053 DEBUG oslo_concurrency.lockutils [None req-d63543cf-60ec-419e-a872-508d42e36320 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Acquired lock "refresh_cache-f4629c49-d4bd-45fc-8ff5-bf640dc7426b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Mar  1 05:11:18 np0005634532 nova_compute[257049]: 2026-03-01 10:11:18.266 257053 DEBUG nova.network.neutron [None req-d63543cf-60ec-419e-a872-508d42e36320 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] [instance: f4629c49-d4bd-45fc-8ff5-bf640dc7426b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Mar  1 05:11:18 np0005634532 nova_compute[257049]: 2026-03-01 10:11:18.349 257053 DEBUG nova.compute.manager [req-78c26c18-7643-4443-a7e1-e3527ba9d27a req-0336a521-4fcb-42ec-af53-d83f3ce10001 4172bfcb2ac44ccc905f3929f41a6ec0 b5baddd982154c5d8ca5431b102a7ecd - - default default] [instance: f4629c49-d4bd-45fc-8ff5-bf640dc7426b] Received event network-changed-50a9155a-611b-4578-bf54-f7b987efbf4d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Mar  1 05:11:18 np0005634532 nova_compute[257049]: 2026-03-01 10:11:18.349 257053 DEBUG nova.compute.manager [req-78c26c18-7643-4443-a7e1-e3527ba9d27a req-0336a521-4fcb-42ec-af53-d83f3ce10001 4172bfcb2ac44ccc905f3929f41a6ec0 b5baddd982154c5d8ca5431b102a7ecd - - default default] [instance: f4629c49-d4bd-45fc-8ff5-bf640dc7426b] Refreshing instance network info cache due to event network-changed-50a9155a-611b-4578-bf54-f7b987efbf4d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Mar  1 05:11:18 np0005634532 nova_compute[257049]: 2026-03-01 10:11:18.349 257053 DEBUG oslo_concurrency.lockutils [req-78c26c18-7643-4443-a7e1-e3527ba9d27a req-0336a521-4fcb-42ec-af53-d83f3ce10001 4172bfcb2ac44ccc905f3929f41a6ec0 b5baddd982154c5d8ca5431b102a7ecd - - default default] Acquiring lock "refresh_cache-f4629c49-d4bd-45fc-8ff5-bf640dc7426b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Mar  1 05:11:18 np0005634532 nova_compute[257049]: 2026-03-01 10:11:18.408 257053 DEBUG nova.network.neutron [None req-d63543cf-60ec-419e-a872-508d42e36320 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] [instance: f4629c49-d4bd-45fc-8ff5-bf640dc7426b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Mar  1 05:11:18 np0005634532 nova_compute[257049]: 2026-03-01 10:11:18.703 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:11:19 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Mar  1 05:11:19 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Mar  1 05:11:19 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Mar  1 05:11:19 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Mar  1 05:11:19 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Mar  1 05:11:19 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 05:11:19 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Mar  1 05:11:19 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 05:11:19 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Mar  1 05:11:19 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Mar  1 05:11:19 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Mar  1 05:11:19 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Mar  1 05:11:19 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Mar  1 05:11:19 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Mar  1 05:11:19 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v865: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Mar  1 05:11:19 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Mar  1 05:11:19 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 05:11:19 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 05:11:19 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Mar  1 05:11:19 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Mar  1 05:11:19 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: vms, start_after=
Mar  1 05:11:19 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: volumes, start_after=
Mar  1 05:11:19 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: backups, start_after=
Mar  1 05:11:19 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: images, start_after=
Mar  1 05:11:19 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Mar  1 05:11:19 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: vms, start_after=
Mar  1 05:11:19 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: volumes, start_after=
Mar  1 05:11:19 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: backups, start_after=
Mar  1 05:11:19 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: images, start_after=
Mar  1 05:11:19 np0005634532 podman[269776]: 2026-03-01 10:11:19.499785029 +0000 UTC m=+0.033986679 container create c9965c5282a9f7b07717944fc07cfd186a5d3a2842f54a9bb1ac9cb787d89d0d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_bohr, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Mar  1 05:11:19 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[268971]: 01/03/2026 10:11:19 : epoch 69a41094 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37d4000b60 fd 42 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:11:19 np0005634532 systemd[1]: Started libpod-conmon-c9965c5282a9f7b07717944fc07cfd186a5d3a2842f54a9bb1ac9cb787d89d0d.scope.
Mar  1 05:11:19 np0005634532 systemd[1]: Started libcrun container.
Mar  1 05:11:19 np0005634532 podman[269776]: 2026-03-01 10:11:19.559850011 +0000 UTC m=+0.094051661 container init c9965c5282a9f7b07717944fc07cfd186a5d3a2842f54a9bb1ac9cb787d89d0d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_bohr, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Mar  1 05:11:19 np0005634532 podman[269776]: 2026-03-01 10:11:19.564643829 +0000 UTC m=+0.098845459 container start c9965c5282a9f7b07717944fc07cfd186a5d3a2842f54a9bb1ac9cb787d89d0d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_bohr, io.buildah.version=1.40.1, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Mar  1 05:11:19 np0005634532 practical_bohr[269792]: 167 167
Mar  1 05:11:19 np0005634532 podman[269776]: 2026-03-01 10:11:19.56953831 +0000 UTC m=+0.103739960 container attach c9965c5282a9f7b07717944fc07cfd186a5d3a2842f54a9bb1ac9cb787d89d0d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_bohr, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Mar  1 05:11:19 np0005634532 systemd[1]: libpod-c9965c5282a9f7b07717944fc07cfd186a5d3a2842f54a9bb1ac9cb787d89d0d.scope: Deactivated successfully.
Mar  1 05:11:19 np0005634532 podman[269776]: 2026-03-01 10:11:19.569778255 +0000 UTC m=+0.103979885 container died c9965c5282a9f7b07717944fc07cfd186a5d3a2842f54a9bb1ac9cb787d89d0d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_bohr, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Mar  1 05:11:19 np0005634532 podman[269776]: 2026-03-01 10:11:19.482739509 +0000 UTC m=+0.016941169 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Mar  1 05:11:19 np0005634532 systemd[1]: var-lib-containers-storage-overlay-b34620280e54fa86cdd28f3ae023ac7fee803710053dd05f7a30727a9fabfa9a-merged.mount: Deactivated successfully.
Mar  1 05:11:19 np0005634532 podman[269776]: 2026-03-01 10:11:19.597748885 +0000 UTC m=+0.131950505 container remove c9965c5282a9f7b07717944fc07cfd186a5d3a2842f54a9bb1ac9cb787d89d0d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=practical_bohr, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Mar  1 05:11:19 np0005634532 systemd[1]: libpod-conmon-c9965c5282a9f7b07717944fc07cfd186a5d3a2842f54a9bb1ac9cb787d89d0d.scope: Deactivated successfully.
Mar  1 05:11:19 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:11:19 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:11:19 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:11:19.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:11:19 np0005634532 podman[269816]: 2026-03-01 10:11:19.709353468 +0000 UTC m=+0.038932301 container create d93ffbc4dda7043444b8924c9f400ab26ed3a4210735b67c241edbed1c996321 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_murdock, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Mar  1 05:11:19 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[268971]: 01/03/2026 10:11:19 : epoch 69a41094 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37f800a6a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:11:19 np0005634532 nova_compute[257049]: 2026-03-01 10:11:19.745 257053 DEBUG nova.network.neutron [None req-d63543cf-60ec-419e-a872-508d42e36320 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] [instance: f4629c49-d4bd-45fc-8ff5-bf640dc7426b] Updating instance_info_cache with network_info: [{"id": "50a9155a-611b-4578-bf54-f7b987efbf4d", "address": "fa:16:3e:e6:c5:22", "network": {"id": "537268c9-9cf2-4b21-8842-a79772874e8d", "bridge": "br-int", "label": "tempest-network-smoke--1332539177", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aa1916e2334f470ea8eeda213ef84cc5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap50a9155a-61", "ovs_interfaceid": "50a9155a-611b-4578-bf54-f7b987efbf4d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Mar  1 05:11:19 np0005634532 systemd[1]: Started libpod-conmon-d93ffbc4dda7043444b8924c9f400ab26ed3a4210735b67c241edbed1c996321.scope.
Mar  1 05:11:19 np0005634532 nova_compute[257049]: 2026-03-01 10:11:19.769 257053 DEBUG oslo_concurrency.lockutils [None req-d63543cf-60ec-419e-a872-508d42e36320 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Releasing lock "refresh_cache-f4629c49-d4bd-45fc-8ff5-bf640dc7426b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Mar  1 05:11:19 np0005634532 nova_compute[257049]: 2026-03-01 10:11:19.770 257053 DEBUG nova.compute.manager [None req-d63543cf-60ec-419e-a872-508d42e36320 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] [instance: f4629c49-d4bd-45fc-8ff5-bf640dc7426b] Instance network_info: |[{"id": "50a9155a-611b-4578-bf54-f7b987efbf4d", "address": "fa:16:3e:e6:c5:22", "network": {"id": "537268c9-9cf2-4b21-8842-a79772874e8d", "bridge": "br-int", "label": "tempest-network-smoke--1332539177", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aa1916e2334f470ea8eeda213ef84cc5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap50a9155a-61", "ovs_interfaceid": "50a9155a-611b-4578-bf54-f7b987efbf4d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Mar  1 05:11:19 np0005634532 nova_compute[257049]: 2026-03-01 10:11:19.770 257053 DEBUG oslo_concurrency.lockutils [req-78c26c18-7643-4443-a7e1-e3527ba9d27a req-0336a521-4fcb-42ec-af53-d83f3ce10001 4172bfcb2ac44ccc905f3929f41a6ec0 b5baddd982154c5d8ca5431b102a7ecd - - default default] Acquired lock "refresh_cache-f4629c49-d4bd-45fc-8ff5-bf640dc7426b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Mar  1 05:11:19 np0005634532 nova_compute[257049]: 2026-03-01 10:11:19.771 257053 DEBUG nova.network.neutron [req-78c26c18-7643-4443-a7e1-e3527ba9d27a req-0336a521-4fcb-42ec-af53-d83f3ce10001 4172bfcb2ac44ccc905f3929f41a6ec0 b5baddd982154c5d8ca5431b102a7ecd - - default default] [instance: f4629c49-d4bd-45fc-8ff5-bf640dc7426b] Refreshing network info cache for port 50a9155a-611b-4578-bf54-f7b987efbf4d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Mar  1 05:11:19 np0005634532 nova_compute[257049]: 2026-03-01 10:11:19.776 257053 DEBUG nova.virt.libvirt.driver [None req-d63543cf-60ec-419e-a872-508d42e36320 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] [instance: f4629c49-d4bd-45fc-8ff5-bf640dc7426b] Start _get_guest_xml network_info=[{"id": "50a9155a-611b-4578-bf54-f7b987efbf4d", "address": "fa:16:3e:e6:c5:22", "network": {"id": "537268c9-9cf2-4b21-8842-a79772874e8d", "bridge": "br-int", "label": "tempest-network-smoke--1332539177", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aa1916e2334f470ea8eeda213ef84cc5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap50a9155a-61", "ovs_interfaceid": "50a9155a-611b-4578-bf54-f7b987efbf4d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-03-01T10:04:37Z,direct_url=<?>,disk_format='qcow2',id=07f64171-cfd1-4482-a545-07063cf7c3f2,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4d09211c005246538db05e74184b7e61',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-03-01T10:04:40Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'boot_index': 0, 'guest_format': None, 'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'encryption_format': None, 'image_id': '07f64171-cfd1-4482-a545-07063cf7c3f2'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Mar  1 05:11:19 np0005634532 systemd[1]: Started libcrun container.
Mar  1 05:11:19 np0005634532 nova_compute[257049]: 2026-03-01 10:11:19.781 257053 WARNING nova.virt.libvirt.driver [None req-d63543cf-60ec-419e-a872-508d42e36320 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Mar  1 05:11:19 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e8f64f3af9df2ee7cf9f165ebef5032cf6702663c2bad02dacffcd3eebed7bf/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Mar  1 05:11:19 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e8f64f3af9df2ee7cf9f165ebef5032cf6702663c2bad02dacffcd3eebed7bf/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Mar  1 05:11:19 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e8f64f3af9df2ee7cf9f165ebef5032cf6702663c2bad02dacffcd3eebed7bf/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Mar  1 05:11:19 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e8f64f3af9df2ee7cf9f165ebef5032cf6702663c2bad02dacffcd3eebed7bf/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Mar  1 05:11:19 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e8f64f3af9df2ee7cf9f165ebef5032cf6702663c2bad02dacffcd3eebed7bf/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Mar  1 05:11:19 np0005634532 podman[269816]: 2026-03-01 10:11:19.692185705 +0000 UTC m=+0.021764578 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Mar  1 05:11:19 np0005634532 nova_compute[257049]: 2026-03-01 10:11:19.787 257053 DEBUG nova.virt.libvirt.host [None req-d63543cf-60ec-419e-a872-508d42e36320 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Mar  1 05:11:19 np0005634532 nova_compute[257049]: 2026-03-01 10:11:19.789 257053 DEBUG nova.virt.libvirt.host [None req-d63543cf-60ec-419e-a872-508d42e36320 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Mar  1 05:11:19 np0005634532 nova_compute[257049]: 2026-03-01 10:11:19.799 257053 DEBUG nova.virt.libvirt.host [None req-d63543cf-60ec-419e-a872-508d42e36320 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Mar  1 05:11:19 np0005634532 nova_compute[257049]: 2026-03-01 10:11:19.800 257053 DEBUG nova.virt.libvirt.host [None req-d63543cf-60ec-419e-a872-508d42e36320 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Mar  1 05:11:19 np0005634532 nova_compute[257049]: 2026-03-01 10:11:19.800 257053 DEBUG nova.virt.libvirt.driver [None req-d63543cf-60ec-419e-a872-508d42e36320 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Mar  1 05:11:19 np0005634532 nova_compute[257049]: 2026-03-01 10:11:19.801 257053 DEBUG nova.virt.hardware [None req-d63543cf-60ec-419e-a872-508d42e36320 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-03-01T10:04:35Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='47cd4c38-4c43-414c-bd62-23cc1dc66486',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-03-01T10:04:37Z,direct_url=<?>,disk_format='qcow2',id=07f64171-cfd1-4482-a545-07063cf7c3f2,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4d09211c005246538db05e74184b7e61',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-03-01T10:04:40Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Mar  1 05:11:19 np0005634532 nova_compute[257049]: 2026-03-01 10:11:19.802 257053 DEBUG nova.virt.hardware [None req-d63543cf-60ec-419e-a872-508d42e36320 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Mar  1 05:11:19 np0005634532 nova_compute[257049]: 2026-03-01 10:11:19.802 257053 DEBUG nova.virt.hardware [None req-d63543cf-60ec-419e-a872-508d42e36320 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Mar  1 05:11:19 np0005634532 nova_compute[257049]: 2026-03-01 10:11:19.803 257053 DEBUG nova.virt.hardware [None req-d63543cf-60ec-419e-a872-508d42e36320 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Mar  1 05:11:19 np0005634532 nova_compute[257049]: 2026-03-01 10:11:19.803 257053 DEBUG nova.virt.hardware [None req-d63543cf-60ec-419e-a872-508d42e36320 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Mar  1 05:11:19 np0005634532 nova_compute[257049]: 2026-03-01 10:11:19.804 257053 DEBUG nova.virt.hardware [None req-d63543cf-60ec-419e-a872-508d42e36320 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Mar  1 05:11:19 np0005634532 nova_compute[257049]: 2026-03-01 10:11:19.804 257053 DEBUG nova.virt.hardware [None req-d63543cf-60ec-419e-a872-508d42e36320 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Mar  1 05:11:19 np0005634532 nova_compute[257049]: 2026-03-01 10:11:19.806 257053 DEBUG nova.virt.hardware [None req-d63543cf-60ec-419e-a872-508d42e36320 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Mar  1 05:11:19 np0005634532 podman[269816]: 2026-03-01 10:11:19.804763052 +0000 UTC m=+0.134341885 container init d93ffbc4dda7043444b8924c9f400ab26ed3a4210735b67c241edbed1c996321 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_murdock, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Mar  1 05:11:19 np0005634532 nova_compute[257049]: 2026-03-01 10:11:19.806 257053 DEBUG nova.virt.hardware [None req-d63543cf-60ec-419e-a872-508d42e36320 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Mar  1 05:11:19 np0005634532 nova_compute[257049]: 2026-03-01 10:11:19.807 257053 DEBUG nova.virt.hardware [None req-d63543cf-60ec-419e-a872-508d42e36320 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Mar  1 05:11:19 np0005634532 nova_compute[257049]: 2026-03-01 10:11:19.807 257053 DEBUG nova.virt.hardware [None req-d63543cf-60ec-419e-a872-508d42e36320 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
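The nova.virt.hardware lines above trace the topology search: with no flavor or image constraints (limits and preferences all 0:0:0), the maxima default to 65536 and the solver enumerates factorizations of the vCPU count. A minimal sketch of that enumeration, assuming the simplified rule that sockets*cores*threads must equal the vCPU count and each factor must stay within its maximum (names below are illustrative, not nova's internals, which add preference sorting and NUMA handling):

```python
# Minimal sketch: enumerate (sockets, cores, threads) triples whose product
# equals the vCPU count, bounded by per-dimension maxima. Illustrative only.
from typing import List, Tuple

def possible_topologies(vcpus: int,
                        max_sockets: int = 65536,
                        max_cores: int = 65536,
                        max_threads: int = 65536) -> List[Tuple[int, int, int]]:
    topologies = []
    for sockets in range(1, min(vcpus, max_sockets) + 1):
        if vcpus % sockets:
            continue
        per_socket = vcpus // sockets
        for cores in range(1, min(per_socket, max_cores) + 1):
            if per_socket % cores:
                continue
            threads = per_socket // cores
            if threads <= max_threads:
                topologies.append((sockets, cores, threads))
    return topologies

# For the 1-vCPU flavor above this yields a single candidate, matching the
# "Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)]" line.
print(possible_topologies(1))  # [(1, 1, 1)]
```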
Mar  1 05:11:19 np0005634532 podman[269816]: 2026-03-01 10:11:19.812595215 +0000 UTC m=+0.142174038 container start d93ffbc4dda7043444b8924c9f400ab26ed3a4210735b67c241edbed1c996321 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_murdock, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Mar  1 05:11:19 np0005634532 nova_compute[257049]: 2026-03-01 10:11:19.813 257053 DEBUG oslo_concurrency.processutils [None req-d63543cf-60ec-419e-a872-508d42e36320 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Mar  1 05:11:19 np0005634532 podman[269816]: 2026-03-01 10:11:19.81564568 +0000 UTC m=+0.145224513 container attach d93ffbc4dda7043444b8924c9f400ab26ed3a4210735b67c241edbed1c996321 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_murdock, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Mar  1 05:11:19 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:11:19 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:11:19 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:11:19.845 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
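The radosgw "beast" line is an access-log record: client address, user, timestamp, request line, status, byte count and latency. A small regex sketch for pulling the interesting fields out of lines in this shape (the pattern is written against this one sample, so treat it as illustrative):

```python
# Regex sketch matching the beast access-log line above; captures the
# client IP, user, timestamp, request line, HTTP status and latency.
import re

BEAST = re.compile(
    r'beast: \S+: (?P<client>\S+) - (?P<user>\S+) '
    r'\[(?P<ts>[^\]]+)\] "(?P<request>[^"]+)" (?P<status>\d+) '
    r'.*latency=(?P<latency>[\d.]+)s'
)

line = ('beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous '
        '[01/Mar/2026:10:11:19.845 +0000] "HEAD / HTTP/1.0" 200 0 '
        '- - - latency=0.000000000s')
m = BEAST.search(line)
print(m.group("client"), m.group("status"), m.group("latency"))
# 192.168.122.100 200 0.000000000
```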
Mar  1 05:11:19 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[268971]: 01/03/2026 10:11:19 : epoch 69a41094 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37d8003590 fd 42 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:11:20 np0005634532 nova_compute[257049]: 2026-03-01 10:11:20.032 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:11:20 np0005634532 vibrant_murdock[269833]: --> passed data devices: 0 physical, 1 LVM
Mar  1 05:11:20 np0005634532 vibrant_murdock[269833]: --> All data devices are unavailable
Mar  1 05:11:20 np0005634532 systemd[1]: libpod-d93ffbc4dda7043444b8924c9f400ab26ed3a4210735b67c241edbed1c996321.scope: Deactivated successfully.
Mar  1 05:11:20 np0005634532 podman[269816]: 2026-03-01 10:11:20.160857225 +0000 UTC m=+0.490436048 container died d93ffbc4dda7043444b8924c9f400ab26ed3a4210735b67c241edbed1c996321 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_murdock, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Mar  1 05:11:20 np0005634532 systemd[1]: var-lib-containers-storage-overlay-6e8f64f3af9df2ee7cf9f165ebef5032cf6702663c2bad02dacffcd3eebed7bf-merged.mount: Deactivated successfully.
Mar  1 05:11:20 np0005634532 podman[269816]: 2026-03-01 10:11:20.198159125 +0000 UTC m=+0.527737958 container remove d93ffbc4dda7043444b8924c9f400ab26ed3a4210735b67c241edbed1c996321 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_murdock, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS)
Mar  1 05:11:20 np0005634532 systemd[1]: libpod-conmon-d93ffbc4dda7043444b8924c9f400ab26ed3a4210735b67c241edbed1c996321.scope: Deactivated successfully.
Mar  1 05:11:20 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Mar  1 05:11:20 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1971207653' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Mar  1 05:11:20 np0005634532 nova_compute[257049]: 2026-03-01 10:11:20.275 257053 DEBUG oslo_concurrency.processutils [None req-d63543cf-60ec-419e-a872-508d42e36320 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
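The two processutils lines bracket a blocking subprocess call: nova shells out to the ceph CLI and parses the JSON monitor map to build the RBD host list that later appears as <host> elements in the guest XML. A hedged equivalent using only the standard library (command and conf path taken from the log; error handling simplified):

```python
# Sketch of the "ceph mon dump --format=json" call that oslo_concurrency
# logs above; returns the monitor addresses nova embeds in the disk XML.
import json
import subprocess

def get_mon_addrs(conf="/etc/ceph/ceph.conf", user="openstack"):
    out = subprocess.run(
        ["ceph", "mon", "dump", "--format=json", "--id", user, "--conf", conf],
        capture_output=True, check=True, text=True,
    ).stdout
    mon_dump = json.loads(out)
    # Each mon entry carries an addr like "192.168.122.100:6789/0";
    # strip the trailing nonce to get host:port.
    return [mon["addr"].split("/")[0] for mon in mon_dump.get("mons", [])]
```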
Mar  1 05:11:20 np0005634532 nova_compute[257049]: 2026-03-01 10:11:20.310 257053 DEBUG nova.storage.rbd_utils [None req-d63543cf-60ec-419e-a872-508d42e36320 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] rbd image f4629c49-d4bd-45fc-8ff5-bf640dc7426b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Mar  1 05:11:20 np0005634532 nova_compute[257049]: 2026-03-01 10:11:20.317 257053 DEBUG oslo_concurrency.processutils [None req-d63543cf-60ec-419e-a872-508d42e36320 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Mar  1 05:11:20 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Mar  1 05:11:20 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/891388687' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Mar  1 05:11:20 np0005634532 podman[270014]: 2026-03-01 10:11:20.786014616 +0000 UTC m=+0.041078965 container create 3e859329fbcc82f1553973188c37e502eaae24d1f58899fe8b04471595fca01d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_keldysh, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Mar  1 05:11:20 np0005634532 nova_compute[257049]: 2026-03-01 10:11:20.786 257053 DEBUG oslo_concurrency.processutils [None req-d63543cf-60ec-419e-a872-508d42e36320 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Mar  1 05:11:20 np0005634532 nova_compute[257049]: 2026-03-01 10:11:20.788 257053 DEBUG nova.virt.libvirt.vif [None req-d63543cf-60ec-419e-a872-508d42e36320 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-03-01T10:11:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-739639784',display_name='tempest-TestNetworkBasicOps-server-739639784',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-739639784',id=8,image_ref='07f64171-cfd1-4482-a545-07063cf7c3f2',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDb1VGOs/rpxghjXi/MSfe0VNmIpUqy/oQAOp9XU9R8FcuyAMhZa3gtHzRKl1X1xHJ8dMhnFfevv8xcbXp+9/mp7kfPp12Jpwn9Fj99Twlc5F2oAHf5zU6m2bsDY9XibDg==',key_name='tempest-TestNetworkBasicOps-678481847',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='aa1916e2334f470ea8eeda213ef84cc5',ramdisk_id='',reservation_id='r-z4ej7vf2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='07f64171-cfd1-4482-a545-07063cf7c3f2',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1700707940',owner_user_name='tempest-TestNetworkBasicOps-1700707940-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-03-01T10:11:15Z,user_data=None,user_id='054b4e3fa290475c906614f7e45d128f',uuid=f4629c49-d4bd-45fc-8ff5-bf640dc7426b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "50a9155a-611b-4578-bf54-f7b987efbf4d", "address": "fa:16:3e:e6:c5:22", "network": {"id": "537268c9-9cf2-4b21-8842-a79772874e8d", "bridge": "br-int", "label": "tempest-network-smoke--1332539177", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aa1916e2334f470ea8eeda213ef84cc5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap50a9155a-61", "ovs_interfaceid": "50a9155a-611b-4578-bf54-f7b987efbf4d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Mar  1 05:11:20 np0005634532 nova_compute[257049]: 2026-03-01 10:11:20.788 257053 DEBUG nova.network.os_vif_util [None req-d63543cf-60ec-419e-a872-508d42e36320 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Converting VIF {"id": "50a9155a-611b-4578-bf54-f7b987efbf4d", "address": "fa:16:3e:e6:c5:22", "network": {"id": "537268c9-9cf2-4b21-8842-a79772874e8d", "bridge": "br-int", "label": "tempest-network-smoke--1332539177", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aa1916e2334f470ea8eeda213ef84cc5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap50a9155a-61", "ovs_interfaceid": "50a9155a-611b-4578-bf54-f7b987efbf4d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Mar  1 05:11:20 np0005634532 nova_compute[257049]: 2026-03-01 10:11:20.789 257053 DEBUG nova.network.os_vif_util [None req-d63543cf-60ec-419e-a872-508d42e36320 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e6:c5:22,bridge_name='br-int',has_traffic_filtering=True,id=50a9155a-611b-4578-bf54-f7b987efbf4d,network=Network(537268c9-9cf2-4b21-8842-a79772874e8d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap50a9155a-61') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
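The Converting/Converted pair shows nova_to_osvif_vif translating the Neutron port dictionary into a typed os-vif object. A rough sketch of that mapping using a plain dataclass rather than the real os_vif model classes (field selection mirrors the VIFOpenVSwitch repr in the log; this is not the os_vif API):

```python
# Illustrative translation of the Neutron VIF dict into a typed object,
# mirroring the VIFOpenVSwitch fields printed above. Hypothetical dataclass;
# the real conversion lives in nova.network.os_vif_util.
from dataclasses import dataclass

@dataclass
class OVSVif:
    id: str
    address: str
    bridge_name: str
    vif_name: str
    has_traffic_filtering: bool
    preserve_on_delete: bool
    active: bool

def neutron_vif_to_ovs(vif: dict) -> OVSVif:
    details = vif.get("details", {})
    return OVSVif(
        id=vif["id"],
        address=vif["address"],
        bridge_name=details.get("bridge_name", "br-int"),
        vif_name=vif["devname"],
        has_traffic_filtering=details.get("port_filter", False),
        preserve_on_delete=vif.get("preserve_on_delete", False),
        active=vif.get("active", False),
    )
```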
Mar  1 05:11:20 np0005634532 nova_compute[257049]: 2026-03-01 10:11:20.790 257053 DEBUG nova.objects.instance [None req-d63543cf-60ec-419e-a872-508d42e36320 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Lazy-loading 'pci_devices' on Instance uuid f4629c49-d4bd-45fc-8ff5-bf640dc7426b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Mar  1 05:11:20 np0005634532 nova_compute[257049]: 2026-03-01 10:11:20.808 257053 DEBUG nova.virt.libvirt.driver [None req-d63543cf-60ec-419e-a872-508d42e36320 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] [instance: f4629c49-d4bd-45fc-8ff5-bf640dc7426b] End _get_guest_xml xml=<domain type="kvm">
Mar  1 05:11:20 np0005634532 nova_compute[257049]:  <uuid>f4629c49-d4bd-45fc-8ff5-bf640dc7426b</uuid>
Mar  1 05:11:20 np0005634532 nova_compute[257049]:  <name>instance-00000008</name>
Mar  1 05:11:20 np0005634532 nova_compute[257049]:  <memory>131072</memory>
Mar  1 05:11:20 np0005634532 nova_compute[257049]:  <vcpu>1</vcpu>
Mar  1 05:11:20 np0005634532 nova_compute[257049]:  <metadata>
Mar  1 05:11:20 np0005634532 nova_compute[257049]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Mar  1 05:11:20 np0005634532 nova_compute[257049]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Mar  1 05:11:20 np0005634532 nova_compute[257049]:      <nova:name>tempest-TestNetworkBasicOps-server-739639784</nova:name>
Mar  1 05:11:20 np0005634532 nova_compute[257049]:      <nova:creationTime>2026-03-01 10:11:19</nova:creationTime>
Mar  1 05:11:20 np0005634532 nova_compute[257049]:      <nova:flavor name="m1.nano">
Mar  1 05:11:20 np0005634532 nova_compute[257049]:        <nova:memory>128</nova:memory>
Mar  1 05:11:20 np0005634532 nova_compute[257049]:        <nova:disk>1</nova:disk>
Mar  1 05:11:20 np0005634532 nova_compute[257049]:        <nova:swap>0</nova:swap>
Mar  1 05:11:20 np0005634532 nova_compute[257049]:        <nova:ephemeral>0</nova:ephemeral>
Mar  1 05:11:20 np0005634532 nova_compute[257049]:        <nova:vcpus>1</nova:vcpus>
Mar  1 05:11:20 np0005634532 nova_compute[257049]:      </nova:flavor>
Mar  1 05:11:20 np0005634532 nova_compute[257049]:      <nova:owner>
Mar  1 05:11:20 np0005634532 nova_compute[257049]:        <nova:user uuid="054b4e3fa290475c906614f7e45d128f">tempest-TestNetworkBasicOps-1700707940-project-member</nova:user>
Mar  1 05:11:20 np0005634532 nova_compute[257049]:        <nova:project uuid="aa1916e2334f470ea8eeda213ef84cc5">tempest-TestNetworkBasicOps-1700707940</nova:project>
Mar  1 05:11:20 np0005634532 nova_compute[257049]:      </nova:owner>
Mar  1 05:11:20 np0005634532 nova_compute[257049]:      <nova:root type="image" uuid="07f64171-cfd1-4482-a545-07063cf7c3f2"/>
Mar  1 05:11:20 np0005634532 nova_compute[257049]:      <nova:ports>
Mar  1 05:11:20 np0005634532 nova_compute[257049]:        <nova:port uuid="50a9155a-611b-4578-bf54-f7b987efbf4d">
Mar  1 05:11:20 np0005634532 nova_compute[257049]:          <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Mar  1 05:11:20 np0005634532 nova_compute[257049]:        </nova:port>
Mar  1 05:11:20 np0005634532 nova_compute[257049]:      </nova:ports>
Mar  1 05:11:20 np0005634532 nova_compute[257049]:    </nova:instance>
Mar  1 05:11:20 np0005634532 nova_compute[257049]:  </metadata>
Mar  1 05:11:20 np0005634532 nova_compute[257049]:  <sysinfo type="smbios">
Mar  1 05:11:20 np0005634532 nova_compute[257049]:    <system>
Mar  1 05:11:20 np0005634532 nova_compute[257049]:      <entry name="manufacturer">RDO</entry>
Mar  1 05:11:20 np0005634532 nova_compute[257049]:      <entry name="product">OpenStack Compute</entry>
Mar  1 05:11:20 np0005634532 nova_compute[257049]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Mar  1 05:11:20 np0005634532 nova_compute[257049]:      <entry name="serial">f4629c49-d4bd-45fc-8ff5-bf640dc7426b</entry>
Mar  1 05:11:20 np0005634532 nova_compute[257049]:      <entry name="uuid">f4629c49-d4bd-45fc-8ff5-bf640dc7426b</entry>
Mar  1 05:11:20 np0005634532 nova_compute[257049]:      <entry name="family">Virtual Machine</entry>
Mar  1 05:11:20 np0005634532 nova_compute[257049]:    </system>
Mar  1 05:11:20 np0005634532 nova_compute[257049]:  </sysinfo>
Mar  1 05:11:20 np0005634532 nova_compute[257049]:  <os>
Mar  1 05:11:20 np0005634532 nova_compute[257049]:    <type arch="x86_64" machine="q35">hvm</type>
Mar  1 05:11:20 np0005634532 nova_compute[257049]:    <boot dev="hd"/>
Mar  1 05:11:20 np0005634532 nova_compute[257049]:    <smbios mode="sysinfo"/>
Mar  1 05:11:20 np0005634532 nova_compute[257049]:  </os>
Mar  1 05:11:20 np0005634532 nova_compute[257049]:  <features>
Mar  1 05:11:20 np0005634532 nova_compute[257049]:    <acpi/>
Mar  1 05:11:20 np0005634532 nova_compute[257049]:    <apic/>
Mar  1 05:11:20 np0005634532 nova_compute[257049]:    <vmcoreinfo/>
Mar  1 05:11:20 np0005634532 nova_compute[257049]:  </features>
Mar  1 05:11:20 np0005634532 nova_compute[257049]:  <clock offset="utc">
Mar  1 05:11:20 np0005634532 nova_compute[257049]:    <timer name="pit" tickpolicy="delay"/>
Mar  1 05:11:20 np0005634532 nova_compute[257049]:    <timer name="rtc" tickpolicy="catchup"/>
Mar  1 05:11:20 np0005634532 nova_compute[257049]:    <timer name="hpet" present="no"/>
Mar  1 05:11:20 np0005634532 nova_compute[257049]:  </clock>
Mar  1 05:11:20 np0005634532 nova_compute[257049]:  <cpu mode="host-model" match="exact">
Mar  1 05:11:20 np0005634532 nova_compute[257049]:    <topology sockets="1" cores="1" threads="1"/>
Mar  1 05:11:20 np0005634532 nova_compute[257049]:  </cpu>
Mar  1 05:11:20 np0005634532 nova_compute[257049]:  <devices>
Mar  1 05:11:20 np0005634532 nova_compute[257049]:    <disk type="network" device="disk">
Mar  1 05:11:20 np0005634532 nova_compute[257049]:      <driver type="raw" cache="none"/>
Mar  1 05:11:20 np0005634532 nova_compute[257049]:      <source protocol="rbd" name="vms/f4629c49-d4bd-45fc-8ff5-bf640dc7426b_disk">
Mar  1 05:11:20 np0005634532 nova_compute[257049]:        <host name="192.168.122.100" port="6789"/>
Mar  1 05:11:20 np0005634532 nova_compute[257049]:        <host name="192.168.122.102" port="6789"/>
Mar  1 05:11:20 np0005634532 nova_compute[257049]:        <host name="192.168.122.101" port="6789"/>
Mar  1 05:11:20 np0005634532 nova_compute[257049]:      </source>
Mar  1 05:11:20 np0005634532 nova_compute[257049]:      <auth username="openstack">
Mar  1 05:11:20 np0005634532 nova_compute[257049]:        <secret type="ceph" uuid="437b1e74-f995-5d64-af1d-257ce01d77ab"/>
Mar  1 05:11:20 np0005634532 nova_compute[257049]:      </auth>
Mar  1 05:11:20 np0005634532 nova_compute[257049]:      <target dev="vda" bus="virtio"/>
Mar  1 05:11:20 np0005634532 nova_compute[257049]:    </disk>
Mar  1 05:11:20 np0005634532 nova_compute[257049]:    <disk type="network" device="cdrom">
Mar  1 05:11:20 np0005634532 nova_compute[257049]:      <driver type="raw" cache="none"/>
Mar  1 05:11:20 np0005634532 nova_compute[257049]:      <source protocol="rbd" name="vms/f4629c49-d4bd-45fc-8ff5-bf640dc7426b_disk.config">
Mar  1 05:11:20 np0005634532 nova_compute[257049]:        <host name="192.168.122.100" port="6789"/>
Mar  1 05:11:20 np0005634532 nova_compute[257049]:        <host name="192.168.122.102" port="6789"/>
Mar  1 05:11:20 np0005634532 nova_compute[257049]:        <host name="192.168.122.101" port="6789"/>
Mar  1 05:11:20 np0005634532 nova_compute[257049]:      </source>
Mar  1 05:11:20 np0005634532 nova_compute[257049]:      <auth username="openstack">
Mar  1 05:11:20 np0005634532 nova_compute[257049]:        <secret type="ceph" uuid="437b1e74-f995-5d64-af1d-257ce01d77ab"/>
Mar  1 05:11:20 np0005634532 nova_compute[257049]:      </auth>
Mar  1 05:11:20 np0005634532 nova_compute[257049]:      <target dev="sda" bus="sata"/>
Mar  1 05:11:20 np0005634532 nova_compute[257049]:    </disk>
Mar  1 05:11:20 np0005634532 nova_compute[257049]:    <interface type="ethernet">
Mar  1 05:11:20 np0005634532 nova_compute[257049]:      <mac address="fa:16:3e:e6:c5:22"/>
Mar  1 05:11:20 np0005634532 nova_compute[257049]:      <model type="virtio"/>
Mar  1 05:11:20 np0005634532 nova_compute[257049]:      <driver name="vhost" rx_queue_size="512"/>
Mar  1 05:11:20 np0005634532 nova_compute[257049]:      <mtu size="1442"/>
Mar  1 05:11:20 np0005634532 nova_compute[257049]:      <target dev="tap50a9155a-61"/>
Mar  1 05:11:20 np0005634532 nova_compute[257049]:    </interface>
Mar  1 05:11:20 np0005634532 nova_compute[257049]:    <serial type="pty">
Mar  1 05:11:20 np0005634532 nova_compute[257049]:      <log file="/var/lib/nova/instances/f4629c49-d4bd-45fc-8ff5-bf640dc7426b/console.log" append="off"/>
Mar  1 05:11:20 np0005634532 nova_compute[257049]:    </serial>
Mar  1 05:11:20 np0005634532 nova_compute[257049]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Mar  1 05:11:20 np0005634532 nova_compute[257049]:    <video>
Mar  1 05:11:20 np0005634532 nova_compute[257049]:      <model type="virtio"/>
Mar  1 05:11:20 np0005634532 nova_compute[257049]:    </video>
Mar  1 05:11:20 np0005634532 nova_compute[257049]:    <input type="tablet" bus="usb"/>
Mar  1 05:11:20 np0005634532 nova_compute[257049]:    <rng model="virtio">
Mar  1 05:11:20 np0005634532 nova_compute[257049]:      <backend model="random">/dev/urandom</backend>
Mar  1 05:11:20 np0005634532 nova_compute[257049]:    </rng>
Mar  1 05:11:20 np0005634532 nova_compute[257049]:    <controller type="pci" model="pcie-root"/>
Mar  1 05:11:20 np0005634532 nova_compute[257049]:    <controller type="pci" model="pcie-root-port"/>
Mar  1 05:11:20 np0005634532 nova_compute[257049]:    <controller type="pci" model="pcie-root-port"/>
Mar  1 05:11:20 np0005634532 nova_compute[257049]:    <controller type="pci" model="pcie-root-port"/>
Mar  1 05:11:20 np0005634532 nova_compute[257049]:    <controller type="pci" model="pcie-root-port"/>
Mar  1 05:11:20 np0005634532 nova_compute[257049]:    <controller type="pci" model="pcie-root-port"/>
Mar  1 05:11:20 np0005634532 nova_compute[257049]:    <controller type="pci" model="pcie-root-port"/>
Mar  1 05:11:20 np0005634532 nova_compute[257049]:    <controller type="pci" model="pcie-root-port"/>
Mar  1 05:11:20 np0005634532 nova_compute[257049]:    <controller type="pci" model="pcie-root-port"/>
Mar  1 05:11:20 np0005634532 nova_compute[257049]:    <controller type="pci" model="pcie-root-port"/>
Mar  1 05:11:20 np0005634532 nova_compute[257049]:    <controller type="pci" model="pcie-root-port"/>
Mar  1 05:11:20 np0005634532 nova_compute[257049]:    <controller type="pci" model="pcie-root-port"/>
Mar  1 05:11:20 np0005634532 nova_compute[257049]:    <controller type="pci" model="pcie-root-port"/>
Mar  1 05:11:20 np0005634532 nova_compute[257049]:    <controller type="pci" model="pcie-root-port"/>
Mar  1 05:11:20 np0005634532 nova_compute[257049]:    <controller type="pci" model="pcie-root-port"/>
Mar  1 05:11:20 np0005634532 nova_compute[257049]:    <controller type="pci" model="pcie-root-port"/>
Mar  1 05:11:20 np0005634532 nova_compute[257049]:    <controller type="pci" model="pcie-root-port"/>
Mar  1 05:11:20 np0005634532 nova_compute[257049]:    <controller type="pci" model="pcie-root-port"/>
Mar  1 05:11:20 np0005634532 nova_compute[257049]:    <controller type="pci" model="pcie-root-port"/>
Mar  1 05:11:20 np0005634532 nova_compute[257049]:    <controller type="pci" model="pcie-root-port"/>
Mar  1 05:11:20 np0005634532 nova_compute[257049]:    <controller type="pci" model="pcie-root-port"/>
Mar  1 05:11:20 np0005634532 nova_compute[257049]:    <controller type="pci" model="pcie-root-port"/>
Mar  1 05:11:20 np0005634532 nova_compute[257049]:    <controller type="pci" model="pcie-root-port"/>
Mar  1 05:11:20 np0005634532 nova_compute[257049]:    <controller type="pci" model="pcie-root-port"/>
Mar  1 05:11:20 np0005634532 nova_compute[257049]:    <controller type="pci" model="pcie-root-port"/>
Mar  1 05:11:20 np0005634532 nova_compute[257049]:    <controller type="usb" index="0"/>
Mar  1 05:11:20 np0005634532 nova_compute[257049]:    <memballoon model="virtio">
Mar  1 05:11:20 np0005634532 nova_compute[257049]:      <stats period="10"/>
Mar  1 05:11:20 np0005634532 nova_compute[257049]:    </memballoon>
Mar  1 05:11:20 np0005634532 nova_compute[257049]:  </devices>
Mar  1 05:11:20 np0005634532 nova_compute[257049]: </domain>
Mar  1 05:11:20 np0005634532 nova_compute[257049]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
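After _get_guest_xml returns, the driver hands this document to libvirt. A minimal sketch of that hand-off with the libvirt Python bindings (nova wraps these calls in its own Host/Guest helpers with retries, event handling and createWithFlags; treat this as the bare-bones equivalent only):

```python
# Bare-bones equivalent of launching the domain XML logged above:
# persist the definition, then start the guest (instance-00000008).
import libvirt

def launch(xml: str) -> None:
    conn = libvirt.open("qemu:///system")
    try:
        dom = conn.defineXML(xml)   # write the persistent domain definition
        dom.create()                # boot the defined domain
    finally:
        conn.close()
```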
Mar  1 05:11:20 np0005634532 nova_compute[257049]: 2026-03-01 10:11:20.809 257053 DEBUG nova.compute.manager [None req-d63543cf-60ec-419e-a872-508d42e36320 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] [instance: f4629c49-d4bd-45fc-8ff5-bf640dc7426b] Preparing to wait for external event network-vif-plugged-50a9155a-611b-4578-bf54-f7b987efbf4d prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Mar  1 05:11:20 np0005634532 nova_compute[257049]: 2026-03-01 10:11:20.810 257053 DEBUG oslo_concurrency.lockutils [None req-d63543cf-60ec-419e-a872-508d42e36320 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Acquiring lock "f4629c49-d4bd-45fc-8ff5-bf640dc7426b-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Mar  1 05:11:20 np0005634532 nova_compute[257049]: 2026-03-01 10:11:20.810 257053 DEBUG oslo_concurrency.lockutils [None req-d63543cf-60ec-419e-a872-508d42e36320 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Lock "f4629c49-d4bd-45fc-8ff5-bf640dc7426b-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Mar  1 05:11:20 np0005634532 nova_compute[257049]: 2026-03-01 10:11:20.810 257053 DEBUG oslo_concurrency.lockutils [None req-d63543cf-60ec-419e-a872-508d42e36320 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Lock "f4629c49-d4bd-45fc-8ff5-bf640dc7426b-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Mar  1 05:11:20 np0005634532 nova_compute[257049]: 2026-03-01 10:11:20.812 257053 DEBUG nova.virt.libvirt.vif [None req-d63543cf-60ec-419e-a872-508d42e36320 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-03-01T10:11:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-739639784',display_name='tempest-TestNetworkBasicOps-server-739639784',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-739639784',id=8,image_ref='07f64171-cfd1-4482-a545-07063cf7c3f2',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDb1VGOs/rpxghjXi/MSfe0VNmIpUqy/oQAOp9XU9R8FcuyAMhZa3gtHzRKl1X1xHJ8dMhnFfevv8xcbXp+9/mp7kfPp12Jpwn9Fj99Twlc5F2oAHf5zU6m2bsDY9XibDg==',key_name='tempest-TestNetworkBasicOps-678481847',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='aa1916e2334f470ea8eeda213ef84cc5',ramdisk_id='',reservation_id='r-z4ej7vf2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='07f64171-cfd1-4482-a545-07063cf7c3f2',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1700707940',owner_user_name='tempest-TestNetworkBasicOps-1700707940-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-03-01T10:11:15Z,user_data=None,user_id='054b4e3fa290475c906614f7e45d128f',uuid=f4629c49-d4bd-45fc-8ff5-bf640dc7426b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "50a9155a-611b-4578-bf54-f7b987efbf4d", "address": "fa:16:3e:e6:c5:22", "network": {"id": "537268c9-9cf2-4b21-8842-a79772874e8d", "bridge": "br-int", "label": "tempest-network-smoke--1332539177", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aa1916e2334f470ea8eeda213ef84cc5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap50a9155a-61", "ovs_interfaceid": "50a9155a-611b-4578-bf54-f7b987efbf4d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Mar  1 05:11:20 np0005634532 nova_compute[257049]: 2026-03-01 10:11:20.813 257053 DEBUG nova.network.os_vif_util [None req-d63543cf-60ec-419e-a872-508d42e36320 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Converting VIF {"id": "50a9155a-611b-4578-bf54-f7b987efbf4d", "address": "fa:16:3e:e6:c5:22", "network": {"id": "537268c9-9cf2-4b21-8842-a79772874e8d", "bridge": "br-int", "label": "tempest-network-smoke--1332539177", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aa1916e2334f470ea8eeda213ef84cc5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap50a9155a-61", "ovs_interfaceid": "50a9155a-611b-4578-bf54-f7b987efbf4d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Mar  1 05:11:20 np0005634532 nova_compute[257049]: 2026-03-01 10:11:20.814 257053 DEBUG nova.network.os_vif_util [None req-d63543cf-60ec-419e-a872-508d42e36320 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e6:c5:22,bridge_name='br-int',has_traffic_filtering=True,id=50a9155a-611b-4578-bf54-f7b987efbf4d,network=Network(537268c9-9cf2-4b21-8842-a79772874e8d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap50a9155a-61') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Mar  1 05:11:20 np0005634532 nova_compute[257049]: 2026-03-01 10:11:20.814 257053 DEBUG os_vif [None req-d63543cf-60ec-419e-a872-508d42e36320 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e6:c5:22,bridge_name='br-int',has_traffic_filtering=True,id=50a9155a-611b-4578-bf54-f7b987efbf4d,network=Network(537268c9-9cf2-4b21-8842-a79772874e8d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap50a9155a-61') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Mar  1 05:11:20 np0005634532 nova_compute[257049]: 2026-03-01 10:11:20.815 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:11:20 np0005634532 nova_compute[257049]: 2026-03-01 10:11:20.816 257053 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Mar  1 05:11:20 np0005634532 nova_compute[257049]: 2026-03-01 10:11:20.817 257053 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Mar  1 05:11:20 np0005634532 nova_compute[257049]: 2026-03-01 10:11:20.821 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:11:20 np0005634532 nova_compute[257049]: 2026-03-01 10:11:20.821 257053 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap50a9155a-61, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Mar  1 05:11:20 np0005634532 systemd[1]: Started libpod-conmon-3e859329fbcc82f1553973188c37e502eaae24d1f58899fe8b04471595fca01d.scope.
Mar  1 05:11:20 np0005634532 nova_compute[257049]: 2026-03-01 10:11:20.822 257053 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap50a9155a-61, col_values=(('external_ids', {'iface-id': '50a9155a-611b-4578-bf54-f7b987efbf4d', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:e6:c5:22', 'vm-uuid': 'f4629c49-d4bd-45fc-8ff5-bf640dc7426b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Mar  1 05:11:20 np0005634532 nova_compute[257049]: 2026-03-01 10:11:20.824 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:11:20 np0005634532 NetworkManager[49996]: <info>  [1772359880.8251] manager: (tap50a9155a-61): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/36)
Mar  1 05:11:20 np0005634532 nova_compute[257049]: 2026-03-01 10:11:20.827 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Mar  1 05:11:20 np0005634532 nova_compute[257049]: 2026-03-01 10:11:20.830 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:11:20 np0005634532 nova_compute[257049]: 2026-03-01 10:11:20.831 257053 INFO os_vif [None req-d63543cf-60ec-419e-a872-508d42e36320 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e6:c5:22,bridge_name='br-int',has_traffic_filtering=True,id=50a9155a-611b-4578-bf54-f7b987efbf4d,network=Network(537268c9-9cf2-4b21-8842-a79772874e8d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap50a9155a-61')#033[00m
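The two ovsdbapp transactions (AddPortCommand, then DbSetCommand on the Interface row) are what "Successfully plugged vif" summarizes. The same effect expressed as an ovs-vsctl invocation, reconstructed from the column values in the log (a hedged equivalent; os-vif speaks OVSDB directly rather than shelling out):

```python
# ovs-vsctl equivalent of the two OVSDB transactions above: idempotently
# add the tap port to br-int, then stamp the Interface external_ids that
# let OVN bind the port to the Neutron port UUID.
import subprocess

subprocess.run(
    ["ovs-vsctl", "--may-exist", "add-port", "br-int", "tap50a9155a-61",
     "--", "set", "Interface", "tap50a9155a-61",
     "external_ids:iface-id=50a9155a-611b-4578-bf54-f7b987efbf4d",
     "external_ids:iface-status=active",
     "external_ids:attached-mac=fa:16:3e:e6:c5:22",
     "external_ids:vm-uuid=f4629c49-d4bd-45fc-8ff5-bf640dc7426b"],
    check=True,
)
```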
Mar  1 05:11:20 np0005634532 systemd[1]: Started libcrun container.
Mar  1 05:11:20 np0005634532 podman[270014]: 2026-03-01 10:11:20.859808985 +0000 UTC m=+0.114873364 container init 3e859329fbcc82f1553973188c37e502eaae24d1f58899fe8b04471595fca01d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_keldysh, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Mar  1 05:11:20 np0005634532 podman[270014]: 2026-03-01 10:11:20.768957675 +0000 UTC m=+0.024022024 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Mar  1 05:11:20 np0005634532 podman[270014]: 2026-03-01 10:11:20.866589472 +0000 UTC m=+0.121653811 container start 3e859329fbcc82f1553973188c37e502eaae24d1f58899fe8b04471595fca01d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_keldysh, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Mar  1 05:11:20 np0005634532 podman[270014]: 2026-03-01 10:11:20.869890684 +0000 UTC m=+0.124955023 container attach 3e859329fbcc82f1553973188c37e502eaae24d1f58899fe8b04471595fca01d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_keldysh, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Mar  1 05:11:20 np0005634532 systemd[1]: libpod-3e859329fbcc82f1553973188c37e502eaae24d1f58899fe8b04471595fca01d.scope: Deactivated successfully.
Mar  1 05:11:20 np0005634532 tender_keldysh[270033]: 167 167
Mar  1 05:11:20 np0005634532 conmon[270033]: conmon 3e859329fbcc82f15539 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-3e859329fbcc82f1553973188c37e502eaae24d1f58899fe8b04471595fca01d.scope/container/memory.events
Mar  1 05:11:20 np0005634532 podman[270014]: 2026-03-01 10:11:20.872928998 +0000 UTC m=+0.127993347 container died 3e859329fbcc82f1553973188c37e502eaae24d1f58899fe8b04471595fca01d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_keldysh, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Mar  1 05:11:20 np0005634532 systemd[1]: var-lib-containers-storage-overlay-a778d6fc99b808039f5d3c1b0e1ab8d77ed5c585e7a132c0dd3c403e6a23d1e4-merged.mount: Deactivated successfully.
Mar  1 05:11:20 np0005634532 podman[270014]: 2026-03-01 10:11:20.904891917 +0000 UTC m=+0.159956246 container remove 3e859329fbcc82f1553973188c37e502eaae24d1f58899fe8b04471595fca01d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=tender_keldysh, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Mar  1 05:11:20 np0005634532 systemd[1]: libpod-conmon-3e859329fbcc82f1553973188c37e502eaae24d1f58899fe8b04471595fca01d.scope: Deactivated successfully.
Mar  1 05:11:20 np0005634532 nova_compute[257049]: 2026-03-01 10:11:20.913 257053 DEBUG nova.virt.libvirt.driver [None req-d63543cf-60ec-419e-a872-508d42e36320 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Mar  1 05:11:20 np0005634532 nova_compute[257049]: 2026-03-01 10:11:20.914 257053 DEBUG nova.virt.libvirt.driver [None req-d63543cf-60ec-419e-a872-508d42e36320 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Mar  1 05:11:20 np0005634532 nova_compute[257049]: 2026-03-01 10:11:20.914 257053 DEBUG nova.virt.libvirt.driver [None req-d63543cf-60ec-419e-a872-508d42e36320 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] No VIF found with MAC fa:16:3e:e6:c5:22, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Mar  1 05:11:20 np0005634532 nova_compute[257049]: 2026-03-01 10:11:20.914 257053 INFO nova.virt.libvirt.driver [None req-d63543cf-60ec-419e-a872-508d42e36320 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] [instance: f4629c49-d4bd-45fc-8ff5-bf640dc7426b] Using config drive#033[00m
Mar  1 05:11:20 np0005634532 nova_compute[257049]: 2026-03-01 10:11:20.936 257053 DEBUG nova.storage.rbd_utils [None req-d63543cf-60ec-419e-a872-508d42e36320 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] rbd image f4629c49-d4bd-45fc-8ff5-bf640dc7426b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
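Both rbd_utils lines record the same probe: opening vms/<uuid>_disk.config and treating ImageNotFound as "does not exist", which is why a config-drive image gets built next. A hedged sketch with the python-rbd bindings (pool and image names taken from the log; nova keeps a long-lived connection in its RBDDriver rather than opening one per call):

```python
# Sketch of the existence probe behind "rbd image ... does not exist":
# open the image read-only and map ImageNotFound to False.
import rados
import rbd

def rbd_image_exists(name, pool="vms", conf="/etc/ceph/ceph.conf",
                     user="openstack"):
    cluster = rados.Rados(conffile=conf, rados_id=user)
    cluster.connect()
    ioctx = cluster.open_ioctx(pool)
    try:
        with rbd.Image(ioctx, name, read_only=True):
            return True
    except rbd.ImageNotFound:
        return False
    finally:
        ioctx.close()
        cluster.shutdown()
```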
Mar  1 05:11:21 np0005634532 podman[270079]: 2026-03-01 10:11:21.040052201 +0000 UTC m=+0.040018318 container create 90262cfc86a47891349084dc38593e1c1b0d62da1416c39001aded814f0324b2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_kepler, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Mar  1 05:11:21 np0005634532 systemd[1]: Started libpod-conmon-90262cfc86a47891349084dc38593e1c1b0d62da1416c39001aded814f0324b2.scope.
Mar  1 05:11:21 np0005634532 systemd[1]: Started libcrun container.
Mar  1 05:11:21 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0f74bd3323ad9c48cd0f303ce2aacb99863bc497356c7292ca1af5cda4c0479/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Mar  1 05:11:21 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0f74bd3323ad9c48cd0f303ce2aacb99863bc497356c7292ca1af5cda4c0479/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Mar  1 05:11:21 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0f74bd3323ad9c48cd0f303ce2aacb99863bc497356c7292ca1af5cda4c0479/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Mar  1 05:11:21 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b0f74bd3323ad9c48cd0f303ce2aacb99863bc497356c7292ca1af5cda4c0479/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Mar  1 05:11:21 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v866: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Mar  1 05:11:21 np0005634532 podman[270079]: 2026-03-01 10:11:21.10446104 +0000 UTC m=+0.104427157 container init 90262cfc86a47891349084dc38593e1c1b0d62da1416c39001aded814f0324b2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_kepler, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Mar  1 05:11:21 np0005634532 podman[270079]: 2026-03-01 10:11:21.115584884 +0000 UTC m=+0.115551021 container start 90262cfc86a47891349084dc38593e1c1b0d62da1416c39001aded814f0324b2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_kepler, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Mar  1 05:11:21 np0005634532 podman[270079]: 2026-03-01 10:11:21.023469762 +0000 UTC m=+0.023435879 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Mar  1 05:11:21 np0005634532 podman[270079]: 2026-03-01 10:11:21.120049644 +0000 UTC m=+0.120015741 container attach 90262cfc86a47891349084dc38593e1c1b0d62da1416c39001aded814f0324b2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_kepler, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Mar  1 05:11:21 np0005634532 infallible_kepler[270096]: {
Mar  1 05:11:21 np0005634532 infallible_kepler[270096]:    "0": [
Mar  1 05:11:21 np0005634532 infallible_kepler[270096]:        {
Mar  1 05:11:21 np0005634532 infallible_kepler[270096]:            "devices": [
Mar  1 05:11:21 np0005634532 infallible_kepler[270096]:                "/dev/loop3"
Mar  1 05:11:21 np0005634532 infallible_kepler[270096]:            ],
Mar  1 05:11:21 np0005634532 infallible_kepler[270096]:            "lv_name": "ceph_lv0",
Mar  1 05:11:21 np0005634532 infallible_kepler[270096]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Mar  1 05:11:21 np0005634532 infallible_kepler[270096]:            "lv_size": "21470642176",
Mar  1 05:11:21 np0005634532 infallible_kepler[270096]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=0RN1y3-WDwf-vmRn-3Uec-9cdU-WGcX-8Z6LEG,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=437b1e74-f995-5d64-af1d-257ce01d77ab,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=e5da778e-73b7-4ea1-8a91-750fe3f6aa68,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Mar  1 05:11:21 np0005634532 infallible_kepler[270096]:            "lv_uuid": "0RN1y3-WDwf-vmRn-3Uec-9cdU-WGcX-8Z6LEG",
Mar  1 05:11:21 np0005634532 infallible_kepler[270096]:            "name": "ceph_lv0",
Mar  1 05:11:21 np0005634532 infallible_kepler[270096]:            "path": "/dev/ceph_vg0/ceph_lv0",
Mar  1 05:11:21 np0005634532 infallible_kepler[270096]:            "tags": {
Mar  1 05:11:21 np0005634532 infallible_kepler[270096]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Mar  1 05:11:21 np0005634532 infallible_kepler[270096]:                "ceph.block_uuid": "0RN1y3-WDwf-vmRn-3Uec-9cdU-WGcX-8Z6LEG",
Mar  1 05:11:21 np0005634532 infallible_kepler[270096]:                "ceph.cephx_lockbox_secret": "",
Mar  1 05:11:21 np0005634532 infallible_kepler[270096]:                "ceph.cluster_fsid": "437b1e74-f995-5d64-af1d-257ce01d77ab",
Mar  1 05:11:21 np0005634532 infallible_kepler[270096]:                "ceph.cluster_name": "ceph",
Mar  1 05:11:21 np0005634532 infallible_kepler[270096]:                "ceph.crush_device_class": "",
Mar  1 05:11:21 np0005634532 infallible_kepler[270096]:                "ceph.encrypted": "0",
Mar  1 05:11:21 np0005634532 infallible_kepler[270096]:                "ceph.osd_fsid": "e5da778e-73b7-4ea1-8a91-750fe3f6aa68",
Mar  1 05:11:21 np0005634532 infallible_kepler[270096]:                "ceph.osd_id": "0",
Mar  1 05:11:21 np0005634532 infallible_kepler[270096]:                "ceph.osdspec_affinity": "default_drive_group",
Mar  1 05:11:21 np0005634532 infallible_kepler[270096]:                "ceph.type": "block",
Mar  1 05:11:21 np0005634532 infallible_kepler[270096]:                "ceph.vdo": "0",
Mar  1 05:11:21 np0005634532 infallible_kepler[270096]:                "ceph.with_tpm": "0"
Mar  1 05:11:21 np0005634532 infallible_kepler[270096]:            },
Mar  1 05:11:21 np0005634532 infallible_kepler[270096]:            "type": "block",
Mar  1 05:11:21 np0005634532 infallible_kepler[270096]:            "vg_name": "ceph_vg0"
Mar  1 05:11:21 np0005634532 infallible_kepler[270096]:        }
Mar  1 05:11:21 np0005634532 infallible_kepler[270096]:    ]
Mar  1 05:11:21 np0005634532 infallible_kepler[270096]: }
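
The JSON above is the per-OSD inventory format that ceph-volume lvm list --format json emits, run here by cephadm in a short-lived container: top-level keys are OSD ids, each mapping to the logical volumes backing that OSD, with the LVM tags present both as the raw comma-separated lv_tags string and pre-parsed under tags. A minimal sketch of pulling the useful fields out of such a report (the function and the idea of feeding it captured stdout are illustrative, not cephadm code):

    import json

    def summarize_osd_report(report_text):
        """Map OSD id -> backing devices, LV path, fsid (key names match the log)."""
        summary = {}
        for osd_id, volumes in json.loads(report_text).items():
            for vol in volumes:
                summary[osd_id] = {
                    "devices": vol["devices"],             # e.g. ["/dev/loop3"]
                    "lv_path": vol["lv_path"],             # /dev/ceph_vg0/ceph_lv0
                    "osd_fsid": vol["tags"]["ceph.osd_fsid"],
                    "encrypted": vol["tags"]["ceph.encrypted"] == "1",
                }
        return summary

Applied to the report above, this yields a single entry for OSD "0" backed by /dev/loop3.
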
Mar  1 05:11:21 np0005634532 systemd[1]: libpod-90262cfc86a47891349084dc38593e1c1b0d62da1416c39001aded814f0324b2.scope: Deactivated successfully.
Mar  1 05:11:21 np0005634532 podman[270079]: 2026-03-01 10:11:21.36273978 +0000 UTC m=+0.362705877 container died 90262cfc86a47891349084dc38593e1c1b0d62da1416c39001aded814f0324b2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_kepler, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Mar  1 05:11:21 np0005634532 systemd[1]: var-lib-containers-storage-overlay-b0f74bd3323ad9c48cd0f303ce2aacb99863bc497356c7292ca1af5cda4c0479-merged.mount: Deactivated successfully.
Mar  1 05:11:21 np0005634532 podman[270079]: 2026-03-01 10:11:21.400077771 +0000 UTC m=+0.400043868 container remove 90262cfc86a47891349084dc38593e1c1b0d62da1416c39001aded814f0324b2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=infallible_kepler, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Mar  1 05:11:21 np0005634532 systemd[1]: libpod-conmon-90262cfc86a47891349084dc38593e1c1b0d62da1416c39001aded814f0324b2.scope: Deactivated successfully.
Mar  1 05:11:21 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[268971]: 01/03/2026 10:11:21 : epoch 69a41094 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37c40016a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:11:21 np0005634532 nova_compute[257049]: 2026-03-01 10:11:21.548 257053 DEBUG nova.network.neutron [req-78c26c18-7643-4443-a7e1-e3527ba9d27a req-0336a521-4fcb-42ec-af53-d83f3ce10001 4172bfcb2ac44ccc905f3929f41a6ec0 b5baddd982154c5d8ca5431b102a7ecd - - default default] [instance: f4629c49-d4bd-45fc-8ff5-bf640dc7426b] Updated VIF entry in instance network info cache for port 50a9155a-611b-4578-bf54-f7b987efbf4d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Mar  1 05:11:21 np0005634532 nova_compute[257049]: 2026-03-01 10:11:21.549 257053 DEBUG nova.network.neutron [req-78c26c18-7643-4443-a7e1-e3527ba9d27a req-0336a521-4fcb-42ec-af53-d83f3ce10001 4172bfcb2ac44ccc905f3929f41a6ec0 b5baddd982154c5d8ca5431b102a7ecd - - default default] [instance: f4629c49-d4bd-45fc-8ff5-bf640dc7426b] Updating instance_info_cache with network_info: [{"id": "50a9155a-611b-4578-bf54-f7b987efbf4d", "address": "fa:16:3e:e6:c5:22", "network": {"id": "537268c9-9cf2-4b21-8842-a79772874e8d", "bridge": "br-int", "label": "tempest-network-smoke--1332539177", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aa1916e2334f470ea8eeda213ef84cc5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap50a9155a-61", "ovs_interfaceid": "50a9155a-611b-4578-bf54-f7b987efbf4d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Mar  1 05:11:21 np0005634532 nova_compute[257049]: 2026-03-01 10:11:21.579 257053 DEBUG oslo_concurrency.lockutils [req-78c26c18-7643-4443-a7e1-e3527ba9d27a req-0336a521-4fcb-42ec-af53-d83f3ce10001 4172bfcb2ac44ccc905f3929f41a6ec0 b5baddd982154c5d8ca5431b102a7ecd - - default default] Releasing lock "refresh_cache-f4629c49-d4bd-45fc-8ff5-bf640dc7426b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Mar  1 05:11:21 np0005634532 nova_compute[257049]: 2026-03-01 10:11:21.645 257053 INFO nova.virt.libvirt.driver [None req-d63543cf-60ec-419e-a872-508d42e36320 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] [instance: f4629c49-d4bd-45fc-8ff5-bf640dc7426b] Creating config drive at /var/lib/nova/instances/f4629c49-d4bd-45fc-8ff5-bf640dc7426b/disk.config#033[00m
Mar  1 05:11:21 np0005634532 nova_compute[257049]: 2026-03-01 10:11:21.647 257053 DEBUG oslo_concurrency.processutils [None req-d63543cf-60ec-419e-a872-508d42e36320 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/f4629c49-d4bd-45fc-8ff5-bf640dc7426b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp3unyzi4c execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Mar  1 05:11:21 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:11:21 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 05:11:21 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:11:21.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
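
The anonymous "HEAD / HTTP/1.0" requests logged by radosgw here and again a few lines below look like periodic load-balancer health probes: the beast lines show them arriving from 192.168.122.102 and .100 and answering 200 with an empty body in about a millisecond. The probe itself is trivial; a sketch, with the caveat that the log records the probers' source IPs but not the RGW listening address, so host and port below are assumptions:

    import http.client

    # Host and port are placeholders: the log does not show where RGW listens.
    conn = http.client.HTTPConnection("rgw.example.internal", 8080, timeout=2)
    conn.request("HEAD", "/")
    print(conn.getresponse().status)   # the logged probes return 200
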
Mar  1 05:11:21 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[268971]: 01/03/2026 10:11:21 : epoch 69a41094 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37d40016a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:11:21 np0005634532 nova_compute[257049]: 2026-03-01 10:11:21.765 257053 DEBUG oslo_concurrency.processutils [None req-d63543cf-60ec-419e-a872-508d42e36320 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/f4629c49-d4bd-45fc-8ff5-bf640dc7426b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp3unyzi4c" returned: 0 in 0.117s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Mar  1 05:11:21 np0005634532 nova_compute[257049]: 2026-03-01 10:11:21.789 257053 DEBUG nova.storage.rbd_utils [None req-d63543cf-60ec-419e-a872-508d42e36320 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] rbd image f4629c49-d4bd-45fc-8ff5-bf640dc7426b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Mar  1 05:11:21 np0005634532 nova_compute[257049]: 2026-03-01 10:11:21.793 257053 DEBUG oslo_concurrency.processutils [None req-d63543cf-60ec-419e-a872-508d42e36320 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/f4629c49-d4bd-45fc-8ff5-bf640dc7426b/disk.config f4629c49-d4bd-45fc-8ff5-bf640dc7426b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Mar  1 05:11:21 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:11:21 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:11:21 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:11:21.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:11:21 np0005634532 podman[270232]: 2026-03-01 10:11:21.860916209 +0000 UTC m=+0.034234966 container create 30e8a11802eb6c5514a20559c1aff79c4a2e7750f37ae24a87b7b1f1c1b69167 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_boyd, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0)
Mar  1 05:11:21 np0005634532 systemd[1]: Started libpod-conmon-30e8a11802eb6c5514a20559c1aff79c4a2e7750f37ae24a87b7b1f1c1b69167.scope.
Mar  1 05:11:21 np0005634532 systemd[1]: Started libcrun container.
Mar  1 05:11:21 np0005634532 podman[270232]: 2026-03-01 10:11:21.921679197 +0000 UTC m=+0.094997964 container init 30e8a11802eb6c5514a20559c1aff79c4a2e7750f37ae24a87b7b1f1c1b69167 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_boyd, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Mar  1 05:11:21 np0005634532 podman[270232]: 2026-03-01 10:11:21.92626172 +0000 UTC m=+0.099580467 container start 30e8a11802eb6c5514a20559c1aff79c4a2e7750f37ae24a87b7b1f1c1b69167 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_boyd, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Mar  1 05:11:21 np0005634532 compassionate_boyd[270267]: 167 167
Mar  1 05:11:21 np0005634532 systemd[1]: libpod-30e8a11802eb6c5514a20559c1aff79c4a2e7750f37ae24a87b7b1f1c1b69167.scope: Deactivated successfully.
Mar  1 05:11:21 np0005634532 podman[270232]: 2026-03-01 10:11:21.930589297 +0000 UTC m=+0.103908164 container attach 30e8a11802eb6c5514a20559c1aff79c4a2e7750f37ae24a87b7b1f1c1b69167 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_boyd, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0)
Mar  1 05:11:21 np0005634532 podman[270232]: 2026-03-01 10:11:21.930921255 +0000 UTC m=+0.104240012 container died 30e8a11802eb6c5514a20559c1aff79c4a2e7750f37ae24a87b7b1f1c1b69167 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_boyd, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Mar  1 05:11:21 np0005634532 nova_compute[257049]: 2026-03-01 10:11:21.936 257053 DEBUG oslo_concurrency.processutils [None req-d63543cf-60ec-419e-a872-508d42e36320 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/f4629c49-d4bd-45fc-8ff5-bf640dc7426b/disk.config f4629c49-d4bd-45fc-8ff5-bf640dc7426b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.143s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Mar  1 05:11:21 np0005634532 nova_compute[257049]: 2026-03-01 10:11:21.937 257053 INFO nova.virt.libvirt.driver [None req-d63543cf-60ec-419e-a872-508d42e36320 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] [instance: f4629c49-d4bd-45fc-8ff5-bf640dc7426b] Deleting local config drive /var/lib/nova/instances/f4629c49-d4bd-45fc-8ff5-bf640dc7426b/disk.config because it was imported into RBD.#033[00m
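
Taken together, the nova_compute lines above show the whole config-drive path for this instance: build the ISO locally with mkisofs (Joliet plus Rock Ridge, volume label config-2, which is the label cloud-init scans for), import it into the vms RBD pool as <instance-uuid>_disk.config, then delete the local copy. A condensed sketch of that sequence using only the flags and arguments visible in the log, not nova's internal code path:

    import subprocess

    def config_drive_to_rbd(instance_uuid, staging_dir, publisher):
        # Build the config-2 ISO the way the logged mkisofs call does.
        iso = f"/var/lib/nova/instances/{instance_uuid}/disk.config"
        subprocess.run(
            ["/usr/bin/mkisofs", "-o", iso,
             "-ldots", "-allow-lowercase", "-allow-multidot", "-l",
             "-publisher", publisher, "-quiet", "-J", "-r",
             "-V", "config-2", staging_dir],
            check=True)
        # Import into RBD; afterwards the local ISO can be removed,
        # exactly as the log reports.
        subprocess.run(
            ["rbd", "import", "--pool", "vms", iso,
             instance_uuid + "_disk.config",
             "--image-format=2", "--id", "openstack",
             "--conf", "/etc/ceph/ceph.conf"],
            check=True)
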
Mar  1 05:11:21 np0005634532 podman[270232]: 2026-03-01 10:11:21.846099993 +0000 UTC m=+0.019418770 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Mar  1 05:11:21 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[268971]: 01/03/2026 10:11:21 : epoch 69a41094 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37f800a6a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:11:21 np0005634532 systemd[1]: var-lib-containers-storage-overlay-bec18bd63fa7e4fcfe094753143e34e6348a7fea9fd0740af60fc229037c3e27-merged.mount: Deactivated successfully.
Mar  1 05:11:21 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:11:21 np0005634532 podman[270232]: 2026-03-01 10:11:21.964518204 +0000 UTC m=+0.137836961 container remove 30e8a11802eb6c5514a20559c1aff79c4a2e7750f37ae24a87b7b1f1c1b69167 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=compassionate_boyd, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Mar  1 05:11:21 np0005634532 systemd[1]: libpod-conmon-30e8a11802eb6c5514a20559c1aff79c4a2e7750f37ae24a87b7b1f1c1b69167.scope: Deactivated successfully.
Mar  1 05:11:21 np0005634532 kernel: tap50a9155a-61: entered promiscuous mode
Mar  1 05:11:21 np0005634532 ovn_controller[157082]: 2026-03-01T10:11:21Z|00045|binding|INFO|Claiming lport 50a9155a-611b-4578-bf54-f7b987efbf4d for this chassis.
Mar  1 05:11:21 np0005634532 ovn_controller[157082]: 2026-03-01T10:11:21Z|00046|binding|INFO|50a9155a-611b-4578-bf54-f7b987efbf4d: Claiming fa:16:3e:e6:c5:22 10.100.0.12
Mar  1 05:11:21 np0005634532 nova_compute[257049]: 2026-03-01 10:11:21.980 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:11:21 np0005634532 NetworkManager[49996]: <info>  [1772359881.9818] manager: (tap50a9155a-61): new Tun device (/org/freedesktop/NetworkManager/Devices/37)
Mar  1 05:11:21 np0005634532 nova_compute[257049]: 2026-03-01 10:11:21.984 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:11:21 np0005634532 nova_compute[257049]: 2026-03-01 10:11:21.987 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:11:21 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:11:21.998 167541 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e6:c5:22 10.100.0.12'], port_security=['fa:16:3e:e6:c5:22 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-TestNetworkBasicOps-1216249567', 'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'f4629c49-d4bd-45fc-8ff5-bf640dc7426b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-537268c9-9cf2-4b21-8842-a79772874e8d', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-TestNetworkBasicOps-1216249567', 'neutron:project_id': 'aa1916e2334f470ea8eeda213ef84cc5', 'neutron:revision_number': '2', 'neutron:security_group_ids': '57c385b8-e9ac-4d07-98e8-7eb05503eac5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=36f556cb-7cc8-4f26-a519-8e2b56d3b18f, chassis=[<ovs.db.idl.Row object at 0x7f611def4670>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f611def4670>], logical_port=50a9155a-611b-4578-bf54-f7b987efbf4d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Mar  1 05:11:21 np0005634532 systemd-machined[221390]: New machine qemu-3-instance-00000008.
Mar  1 05:11:21 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:11:21.999 167541 INFO neutron.agent.ovn.metadata.agent [-] Port 50a9155a-611b-4578-bf54-f7b987efbf4d in datapath 537268c9-9cf2-4b21-8842-a79772874e8d bound to our chassis#033[00m
Mar  1 05:11:21 np0005634532 systemd-udevd[270298]: Network interface NamePolicy= disabled on kernel command line.
Mar  1 05:11:22 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:11:22.000 167541 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 537268c9-9cf2-4b21-8842-a79772874e8d#033[00m
Mar  1 05:11:22 np0005634532 systemd[1]: Started Virtual Machine qemu-3-instance-00000008.
Mar  1 05:11:22 np0005634532 NetworkManager[49996]: <info>  [1772359882.0105] device (tap50a9155a-61): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Mar  1 05:11:22 np0005634532 NetworkManager[49996]: <info>  [1772359882.0112] device (tap50a9155a-61): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Mar  1 05:11:22 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:11:22.009 262878 DEBUG oslo.privsep.daemon [-] privsep: reply[1f739e8c-aeba-4341-ba0d-648c512248c9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Mar  1 05:11:22 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:11:22.011 167541 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap537268c9-91 in ovnmeta-537268c9-9cf2-4b21-8842-a79772874e8d namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Mar  1 05:11:22 np0005634532 ovn_controller[157082]: 2026-03-01T10:11:22Z|00047|binding|INFO|Setting lport 50a9155a-611b-4578-bf54-f7b987efbf4d ovn-installed in OVS
Mar  1 05:11:22 np0005634532 ovn_controller[157082]: 2026-03-01T10:11:22Z|00048|binding|INFO|Setting lport 50a9155a-611b-4578-bf54-f7b987efbf4d up in Southbound
Mar  1 05:11:22 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:11:22.014 262878 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap537268c9-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Mar  1 05:11:22 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:11:22.014 262878 DEBUG oslo.privsep.daemon [-] privsep: reply[f8c00fb9-ea47-4634-abf6-421c13bcbcc4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Mar  1 05:11:22 np0005634532 nova_compute[257049]: 2026-03-01 10:11:22.015 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:11:22 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:11:22.015 262878 DEBUG oslo.privsep.daemon [-] privsep: reply[f79dbb8a-2ecb-4aa1-870b-742e2ec2bb53]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Mar  1 05:11:22 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:11:22.024 167914 DEBUG oslo.privsep.daemon [-] privsep: reply[e920572e-df54-43fd-a229-58f9fdfaa74c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Mar  1 05:11:22 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:11:22.043 262878 DEBUG oslo.privsep.daemon [-] privsep: reply[4a1cb629-3f8f-4145-a84e-1abb7e76727e]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Mar  1 05:11:22 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:11:22.064 262940 DEBUG oslo.privsep.daemon [-] privsep: reply[d2a5e81a-7c2e-4ca1-bbc7-d037c68584b4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Mar  1 05:11:22 np0005634532 systemd-udevd[270301]: Network interface NamePolicy= disabled on kernel command line.
Mar  1 05:11:22 np0005634532 NetworkManager[49996]: <info>  [1772359882.0700] manager: (tap537268c9-90): new Veth device (/org/freedesktop/NetworkManager/Devices/38)
Mar  1 05:11:22 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:11:22.071 262878 DEBUG oslo.privsep.daemon [-] privsep: reply[54c22415-e2c2-47b3-ae8a-5dfd2dc3987e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Mar  1 05:11:22 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:11:22.095 262940 DEBUG oslo.privsep.daemon [-] privsep: reply[d5a750e4-3822-4a3a-910f-a2b8b959ee30]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Mar  1 05:11:22 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:11:22.098 262940 DEBUG oslo.privsep.daemon [-] privsep: reply[f5d711c3-8c85-4e50-a8b6-b0ecf3e42343]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Mar  1 05:11:22 np0005634532 podman[270316]: 2026-03-01 10:11:22.103649196 +0000 UTC m=+0.040276475 container create bed753924323e95cf5c5c5e66fc9a473feaeb2c07c77d6c9b46dcf17e5e60f1e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_mcnulty, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Mar  1 05:11:22 np0005634532 NetworkManager[49996]: <info>  [1772359882.1168] device (tap537268c9-90): carrier: link connected
Mar  1 05:11:22 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:11:22.121 262940 DEBUG oslo.privsep.daemon [-] privsep: reply[f4f4b921-0185-42a2-9cce-8317234e87a0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Mar  1 05:11:22 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:11:22.134 262878 DEBUG oslo.privsep.daemon [-] privsep: reply[d2f199f3-c153-4335-8caa-63bcd07b095c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap537268c9-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d6:f2:72'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 19], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 423870, 'reachable_time': 16698, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 270349, 'error': None, 'target': 'ovnmeta-537268c9-9cf2-4b21-8842-a79772874e8d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Mar  1 05:11:22 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:11:22.147 262878 DEBUG oslo.privsep.daemon [-] privsep: reply[79110879-fae5-47f1-b70b-94ffd3ed8205]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fed6:f272'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 423870, 'tstamp': 423870}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 270353, 'error': None, 'target': 'ovnmeta-537268c9-9cf2-4b21-8842-a79772874e8d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Mar  1 05:11:22 np0005634532 systemd[1]: Started libpod-conmon-bed753924323e95cf5c5c5e66fc9a473feaeb2c07c77d6c9b46dcf17e5e60f1e.scope.
Mar  1 05:11:22 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:11:22.160 262878 DEBUG oslo.privsep.daemon [-] privsep: reply[e6ffecf3-59a6-4d0b-86f1-c024e92141f8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap537268c9-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d6:f2:72'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 19], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 423870, 'reachable_time': 16698, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 152, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 152, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 270355, 'error': None, 'target': 'ovnmeta-537268c9-9cf2-4b21-8842-a79772874e8d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
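
The two oversized privsep replies above are netlink RTM_NEWLINK dumps for the veth leg tap537268c9-91 inside the ovnmeta-537268c9-... namespace (note the 'target' field in each message header); the agent reads them to confirm the link is up, has carrier, and carries the expected MAC. Assuming pyroute2 (the library underneath neutron's ip_lib) is available, an equivalent standalone query looks roughly like:

    from pyroute2 import NetNS

    # Namespace and interface names are taken from the log; this is a
    # sketch of the query, not neutron's own helper.
    with NetNS("ovnmeta-537268c9-9cf2-4b21-8842-a79772874e8d") as ns:
        idx = ns.link_lookup(ifname="tap537268c9-91")[0]
        link = ns.link("get", index=idx)[0]
        print(link.get_attr("IFLA_OPERSTATE"),   # 'UP' in the dumps above
              link.get_attr("IFLA_ADDRESS"))     # fa:16:3e:d6:f2:72
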
Mar  1 05:11:22 np0005634532 systemd[1]: Started libcrun container.
Mar  1 05:11:22 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94e4efb3174a927a57aaffdebb6b17fe1e3f2634e11eaf06cf114e6f52024a42/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Mar  1 05:11:22 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94e4efb3174a927a57aaffdebb6b17fe1e3f2634e11eaf06cf114e6f52024a42/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Mar  1 05:11:22 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94e4efb3174a927a57aaffdebb6b17fe1e3f2634e11eaf06cf114e6f52024a42/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Mar  1 05:11:22 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94e4efb3174a927a57aaffdebb6b17fe1e3f2634e11eaf06cf114e6f52024a42/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Mar  1 05:11:22 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:11:22.179 262878 DEBUG oslo.privsep.daemon [-] privsep: reply[eadef7ba-d1f8-4fbf-a089-fc688aea831b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Mar  1 05:11:22 np0005634532 podman[270316]: 2026-03-01 10:11:22.088666306 +0000 UTC m=+0.025293605 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Mar  1 05:11:22 np0005634532 podman[270316]: 2026-03-01 10:11:22.193666406 +0000 UTC m=+0.130293705 container init bed753924323e95cf5c5c5e66fc9a473feaeb2c07c77d6c9b46dcf17e5e60f1e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_mcnulty, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True)
Mar  1 05:11:22 np0005634532 podman[270316]: 2026-03-01 10:11:22.200151266 +0000 UTC m=+0.136778555 container start bed753924323e95cf5c5c5e66fc9a473feaeb2c07c77d6c9b46dcf17e5e60f1e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_mcnulty, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Mar  1 05:11:22 np0005634532 podman[270316]: 2026-03-01 10:11:22.203318274 +0000 UTC m=+0.139945583 container attach bed753924323e95cf5c5c5e66fc9a473feaeb2c07c77d6c9b46dcf17e5e60f1e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_mcnulty, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325)
Mar  1 05:11:22 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:11:22.225 262878 DEBUG oslo.privsep.daemon [-] privsep: reply[5a909b69-f02c-4d05-b6de-d35bc948dd8e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Mar  1 05:11:22 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:11:22.227 167541 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap537268c9-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Mar  1 05:11:22 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:11:22.227 167541 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Mar  1 05:11:22 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:11:22.228 167541 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap537268c9-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Mar  1 05:11:22 np0005634532 NetworkManager[49996]: <info>  [1772359882.2302] manager: (tap537268c9-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/39)
Mar  1 05:11:22 np0005634532 nova_compute[257049]: 2026-03-01 10:11:22.229 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:11:22 np0005634532 kernel: tap537268c9-90: entered promiscuous mode
Mar  1 05:11:22 np0005634532 nova_compute[257049]: 2026-03-01 10:11:22.231 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:11:22 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:11:22.232 167541 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap537268c9-90, col_values=(('external_ids', {'iface-id': '9a82f7b0-6cfc-4712-9a67-69f2742fdc81'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Mar  1 05:11:22 np0005634532 nova_compute[257049]: 2026-03-01 10:11:22.233 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:11:22 np0005634532 ovn_controller[157082]: 2026-03-01T10:11:22Z|00049|binding|INFO|Releasing lport 9a82f7b0-6cfc-4712-9a67-69f2742fdc81 from this chassis (sb_readonly=0)
Mar  1 05:11:22 np0005634532 nova_compute[257049]: 2026-03-01 10:11:22.234 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:11:22 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:11:22.234 167541 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/537268c9-9cf2-4b21-8842-a79772874e8d.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/537268c9-9cf2-4b21-8842-a79772874e8d.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Mar  1 05:11:22 np0005634532 nova_compute[257049]: 2026-03-01 10:11:22.239 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:11:22 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:11:22.239 262878 DEBUG oslo.privsep.daemon [-] privsep: reply[10c0e40f-9c0e-4878-b94c-d105ae766d67]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Mar  1 05:11:22 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:11:22.240 167541 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Mar  1 05:11:22 np0005634532 ovn_metadata_agent[167536]: global
Mar  1 05:11:22 np0005634532 ovn_metadata_agent[167536]:    log         /dev/log local0 debug
Mar  1 05:11:22 np0005634532 ovn_metadata_agent[167536]:    log-tag     haproxy-metadata-proxy-537268c9-9cf2-4b21-8842-a79772874e8d
Mar  1 05:11:22 np0005634532 ovn_metadata_agent[167536]:    user        root
Mar  1 05:11:22 np0005634532 ovn_metadata_agent[167536]:    group       root
Mar  1 05:11:22 np0005634532 ovn_metadata_agent[167536]:    maxconn     1024
Mar  1 05:11:22 np0005634532 ovn_metadata_agent[167536]:    pidfile     /var/lib/neutron/external/pids/537268c9-9cf2-4b21-8842-a79772874e8d.pid.haproxy
Mar  1 05:11:22 np0005634532 ovn_metadata_agent[167536]:    daemon
Mar  1 05:11:22 np0005634532 ovn_metadata_agent[167536]: 
Mar  1 05:11:22 np0005634532 ovn_metadata_agent[167536]: defaults
Mar  1 05:11:22 np0005634532 ovn_metadata_agent[167536]:    log global
Mar  1 05:11:22 np0005634532 ovn_metadata_agent[167536]:    mode http
Mar  1 05:11:22 np0005634532 ovn_metadata_agent[167536]:    option httplog
Mar  1 05:11:22 np0005634532 ovn_metadata_agent[167536]:    option dontlognull
Mar  1 05:11:22 np0005634532 ovn_metadata_agent[167536]:    option http-server-close
Mar  1 05:11:22 np0005634532 ovn_metadata_agent[167536]:    option forwardfor
Mar  1 05:11:22 np0005634532 ovn_metadata_agent[167536]:    retries                 3
Mar  1 05:11:22 np0005634532 ovn_metadata_agent[167536]:    timeout http-request    30s
Mar  1 05:11:22 np0005634532 ovn_metadata_agent[167536]:    timeout connect         30s
Mar  1 05:11:22 np0005634532 ovn_metadata_agent[167536]:    timeout client          32s
Mar  1 05:11:22 np0005634532 ovn_metadata_agent[167536]:    timeout server          32s
Mar  1 05:11:22 np0005634532 ovn_metadata_agent[167536]:    timeout http-keep-alive 30s
Mar  1 05:11:22 np0005634532 ovn_metadata_agent[167536]: 
Mar  1 05:11:22 np0005634532 ovn_metadata_agent[167536]: 
Mar  1 05:11:22 np0005634532 ovn_metadata_agent[167536]: listen listener
Mar  1 05:11:22 np0005634532 ovn_metadata_agent[167536]:    bind 169.254.169.254:80
Mar  1 05:11:22 np0005634532 ovn_metadata_agent[167536]:    server metadata /var/lib/neutron/metadata_proxy
Mar  1 05:11:22 np0005634532 ovn_metadata_agent[167536]:    http-request add-header X-OVN-Network-ID 537268c9-9cf2-4b21-8842-a79772874e8d
Mar  1 05:11:22 np0005634532 ovn_metadata_agent[167536]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
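
The rendered configuration binds the metadata VIP 169.254.169.254:80 inside the network namespace, forwards every request to the /var/lib/neutron/metadata_proxy unix socket, and stamps it with an X-OVN-Network-ID header so the metadata service can tell which datapath it arrived from. From a guest on this network the exchange is plain HTTP; a sketch (only meaningful from a vantage point this proxy serves, e.g. inside the instance):

    import urllib.request

    # Illustrative only: run from inside a guest on the tenant network,
    # where 169.254.169.254 is answered by this haproxy instance.
    url = "http://169.254.169.254/openstack/latest/meta_data.json"
    with urllib.request.urlopen(url, timeout=5) as resp:
        print(resp.read().decode())
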
Mar  1 05:11:22 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:11:22.242 167541 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-537268c9-9cf2-4b21-8842-a79772874e8d', 'env', 'PROCESS_TAG=haproxy-537268c9-9cf2-4b21-8842-a79772874e8d', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/537268c9-9cf2-4b21-8842-a79772874e8d.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Mar  1 05:11:22 np0005634532 podman[270421]: 2026-03-01 10:11:22.535681533 +0000 UTC m=+0.047025721 container create b2ae74d937566b24c82204104cad72c612a002061b8959dd7729fe540bdc86cc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a5baae9e21dffd42c66f546b0b62f95c6af9f409d05e01e7483de42d5dd2a382, name=neutron-haproxy-ovnmeta-537268c9-9cf2-4b21-8842-a79772874e8d, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0)
Mar  1 05:11:22 np0005634532 systemd[1]: Started libpod-conmon-b2ae74d937566b24c82204104cad72c612a002061b8959dd7729fe540bdc86cc.scope.
Mar  1 05:11:22 np0005634532 systemd[1]: Started libcrun container.
Mar  1 05:11:22 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/640aa1bccf6051cb4e1532d52156b4e350830b41e0552fd681f8a8e9c037b150/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Mar  1 05:11:22 np0005634532 nova_compute[257049]: 2026-03-01 10:11:22.605 257053 DEBUG nova.compute.manager [req-407abc40-f71d-4e27-aaf0-bb079c19dbf5 req-20f287a2-c4e7-4a85-811b-50958e70ad49 4172bfcb2ac44ccc905f3929f41a6ec0 b5baddd982154c5d8ca5431b102a7ecd - - default default] [instance: f4629c49-d4bd-45fc-8ff5-bf640dc7426b] Received event network-vif-plugged-50a9155a-611b-4578-bf54-f7b987efbf4d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Mar  1 05:11:22 np0005634532 nova_compute[257049]: 2026-03-01 10:11:22.606 257053 DEBUG oslo_concurrency.lockutils [req-407abc40-f71d-4e27-aaf0-bb079c19dbf5 req-20f287a2-c4e7-4a85-811b-50958e70ad49 4172bfcb2ac44ccc905f3929f41a6ec0 b5baddd982154c5d8ca5431b102a7ecd - - default default] Acquiring lock "f4629c49-d4bd-45fc-8ff5-bf640dc7426b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Mar  1 05:11:22 np0005634532 nova_compute[257049]: 2026-03-01 10:11:22.606 257053 DEBUG oslo_concurrency.lockutils [req-407abc40-f71d-4e27-aaf0-bb079c19dbf5 req-20f287a2-c4e7-4a85-811b-50958e70ad49 4172bfcb2ac44ccc905f3929f41a6ec0 b5baddd982154c5d8ca5431b102a7ecd - - default default] Lock "f4629c49-d4bd-45fc-8ff5-bf640dc7426b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Mar  1 05:11:22 np0005634532 nova_compute[257049]: 2026-03-01 10:11:22.607 257053 DEBUG oslo_concurrency.lockutils [req-407abc40-f71d-4e27-aaf0-bb079c19dbf5 req-20f287a2-c4e7-4a85-811b-50958e70ad49 4172bfcb2ac44ccc905f3929f41a6ec0 b5baddd982154c5d8ca5431b102a7ecd - - default default] Lock "f4629c49-d4bd-45fc-8ff5-bf640dc7426b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Mar  1 05:11:22 np0005634532 nova_compute[257049]: 2026-03-01 10:11:22.607 257053 DEBUG nova.compute.manager [req-407abc40-f71d-4e27-aaf0-bb079c19dbf5 req-20f287a2-c4e7-4a85-811b-50958e70ad49 4172bfcb2ac44ccc905f3929f41a6ec0 b5baddd982154c5d8ca5431b102a7ecd - - default default] [instance: f4629c49-d4bd-45fc-8ff5-bf640dc7426b] Processing event network-vif-plugged-50a9155a-611b-4578-bf54-f7b987efbf4d _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Mar  1 05:11:22 np0005634532 podman[270421]: 2026-03-01 10:11:22.513851274 +0000 UTC m=+0.025195492 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a5baae9e21dffd42c66f546b0b62f95c6af9f409d05e01e7483de42d5dd2a382
Mar  1 05:11:22 np0005634532 podman[270421]: 2026-03-01 10:11:22.61424481 +0000 UTC m=+0.125589028 container init b2ae74d937566b24c82204104cad72c612a002061b8959dd7729fe540bdc86cc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a5baae9e21dffd42c66f546b0b62f95c6af9f409d05e01e7483de42d5dd2a382, name=neutron-haproxy-ovnmeta-537268c9-9cf2-4b21-8842-a79772874e8d, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
Mar  1 05:11:22 np0005634532 podman[270421]: 2026-03-01 10:11:22.619188562 +0000 UTC m=+0.130532750 container start b2ae74d937566b24c82204104cad72c612a002061b8959dd7729fe540bdc86cc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a5baae9e21dffd42c66f546b0b62f95c6af9f409d05e01e7483de42d5dd2a382, name=neutron-haproxy-ovnmeta-537268c9-9cf2-4b21-8842-a79772874e8d, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, tcib_managed=true)
Mar  1 05:11:22 np0005634532 neutron-haproxy-ovnmeta-537268c9-9cf2-4b21-8842-a79772874e8d[270460]: [NOTICE]   (270470) : New worker (270473) forked
Mar  1 05:11:22 np0005634532 neutron-haproxy-ovnmeta-537268c9-9cf2-4b21-8842-a79772874e8d[270460]: [NOTICE]   (270470) : Loading success.
Mar  1 05:11:22 np0005634532 lvm[270520]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Mar  1 05:11:22 np0005634532 lvm[270520]: VG ceph_vg0 finished
Mar  1 05:11:22 np0005634532 wonderful_mcnulty[270356]: {}
Mar  1 05:11:22 np0005634532 nova_compute[257049]: 2026-03-01 10:11:22.812 257053 DEBUG nova.compute.manager [None req-d63543cf-60ec-419e-a872-508d42e36320 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] [instance: f4629c49-d4bd-45fc-8ff5-bf640dc7426b] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Mar  1 05:11:22 np0005634532 nova_compute[257049]: 2026-03-01 10:11:22.813 257053 DEBUG nova.virt.driver [None req-dccfbb97-79ac-418c-ab3a-fd3d119141c0 - - - - - -] Emitting event <LifecycleEvent: 1772359882.813023, f4629c49-d4bd-45fc-8ff5-bf640dc7426b => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Mar  1 05:11:22 np0005634532 nova_compute[257049]: 2026-03-01 10:11:22.813 257053 INFO nova.compute.manager [None req-dccfbb97-79ac-418c-ab3a-fd3d119141c0 - - - - - -] [instance: f4629c49-d4bd-45fc-8ff5-bf640dc7426b] VM Started (Lifecycle Event)#033[00m
Mar  1 05:11:22 np0005634532 nova_compute[257049]: 2026-03-01 10:11:22.817 257053 DEBUG nova.virt.libvirt.driver [None req-d63543cf-60ec-419e-a872-508d42e36320 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] [instance: f4629c49-d4bd-45fc-8ff5-bf640dc7426b] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Mar  1 05:11:22 np0005634532 nova_compute[257049]: 2026-03-01 10:11:22.820 257053 INFO nova.virt.libvirt.driver [-] [instance: f4629c49-d4bd-45fc-8ff5-bf640dc7426b] Instance spawned successfully.#033[00m
Mar  1 05:11:22 np0005634532 nova_compute[257049]: 2026-03-01 10:11:22.820 257053 DEBUG nova.virt.libvirt.driver [None req-d63543cf-60ec-419e-a872-508d42e36320 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] [instance: f4629c49-d4bd-45fc-8ff5-bf640dc7426b] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Mar  1 05:11:22 np0005634532 systemd[1]: libpod-bed753924323e95cf5c5c5e66fc9a473feaeb2c07c77d6c9b46dcf17e5e60f1e.scope: Deactivated successfully.
Mar  1 05:11:22 np0005634532 podman[270316]: 2026-03-01 10:11:22.825738337 +0000 UTC m=+0.762365616 container died bed753924323e95cf5c5c5e66fc9a473feaeb2c07c77d6c9b46dcf17e5e60f1e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_mcnulty, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Mar  1 05:11:22 np0005634532 nova_compute[257049]: 2026-03-01 10:11:22.850 257053 DEBUG nova.compute.manager [None req-dccfbb97-79ac-418c-ab3a-fd3d119141c0 - - - - - -] [instance: f4629c49-d4bd-45fc-8ff5-bf640dc7426b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Mar  1 05:11:22 np0005634532 nova_compute[257049]: 2026-03-01 10:11:22.856 257053 DEBUG nova.compute.manager [None req-dccfbb97-79ac-418c-ab3a-fd3d119141c0 - - - - - -] [instance: f4629c49-d4bd-45fc-8ff5-bf640dc7426b] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Mar  1 05:11:22 np0005634532 nova_compute[257049]: 2026-03-01 10:11:22.861 257053 DEBUG nova.virt.libvirt.driver [None req-d63543cf-60ec-419e-a872-508d42e36320 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] [instance: f4629c49-d4bd-45fc-8ff5-bf640dc7426b] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Mar  1 05:11:22 np0005634532 nova_compute[257049]: 2026-03-01 10:11:22.862 257053 DEBUG nova.virt.libvirt.driver [None req-d63543cf-60ec-419e-a872-508d42e36320 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] [instance: f4629c49-d4bd-45fc-8ff5-bf640dc7426b] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Mar  1 05:11:22 np0005634532 nova_compute[257049]: 2026-03-01 10:11:22.862 257053 DEBUG nova.virt.libvirt.driver [None req-d63543cf-60ec-419e-a872-508d42e36320 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] [instance: f4629c49-d4bd-45fc-8ff5-bf640dc7426b] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Mar  1 05:11:22 np0005634532 nova_compute[257049]: 2026-03-01 10:11:22.863 257053 DEBUG nova.virt.libvirt.driver [None req-d63543cf-60ec-419e-a872-508d42e36320 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] [instance: f4629c49-d4bd-45fc-8ff5-bf640dc7426b] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Mar  1 05:11:22 np0005634532 podman[270316]: 2026-03-01 10:11:22.863640452 +0000 UTC m=+0.800267731 container remove bed753924323e95cf5c5c5e66fc9a473feaeb2c07c77d6c9b46dcf17e5e60f1e (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_mcnulty, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Mar  1 05:11:22 np0005634532 nova_compute[257049]: 2026-03-01 10:11:22.865 257053 DEBUG nova.virt.libvirt.driver [None req-d63543cf-60ec-419e-a872-508d42e36320 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] [instance: f4629c49-d4bd-45fc-8ff5-bf640dc7426b] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Mar  1 05:11:22 np0005634532 nova_compute[257049]: 2026-03-01 10:11:22.865 257053 DEBUG nova.virt.libvirt.driver [None req-d63543cf-60ec-419e-a872-508d42e36320 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] [instance: f4629c49-d4bd-45fc-8ff5-bf640dc7426b] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Mar  1 05:11:22 np0005634532 systemd[1]: var-lib-containers-storage-overlay-94e4efb3174a927a57aaffdebb6b17fe1e3f2634e11eaf06cf114e6f52024a42-merged.mount: Deactivated successfully.
Mar  1 05:11:22 np0005634532 nova_compute[257049]: 2026-03-01 10:11:22.874 257053 INFO nova.compute.manager [None req-dccfbb97-79ac-418c-ab3a-fd3d119141c0 - - - - - -] [instance: f4629c49-d4bd-45fc-8ff5-bf640dc7426b] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Mar  1 05:11:22 np0005634532 nova_compute[257049]: 2026-03-01 10:11:22.875 257053 DEBUG nova.virt.driver [None req-dccfbb97-79ac-418c-ab3a-fd3d119141c0 - - - - - -] Emitting event <LifecycleEvent: 1772359882.8131561, f4629c49-d4bd-45fc-8ff5-bf640dc7426b => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Mar  1 05:11:22 np0005634532 nova_compute[257049]: 2026-03-01 10:11:22.875 257053 INFO nova.compute.manager [None req-dccfbb97-79ac-418c-ab3a-fd3d119141c0 - - - - - -] [instance: f4629c49-d4bd-45fc-8ff5-bf640dc7426b] VM Paused (Lifecycle Event)#033[00m
Mar  1 05:11:22 np0005634532 systemd[1]: libpod-conmon-bed753924323e95cf5c5c5e66fc9a473feaeb2c07c77d6c9b46dcf17e5e60f1e.scope: Deactivated successfully.
Mar  1 05:11:22 np0005634532 nova_compute[257049]: 2026-03-01 10:11:22.897 257053 DEBUG nova.compute.manager [None req-dccfbb97-79ac-418c-ab3a-fd3d119141c0 - - - - - -] [instance: f4629c49-d4bd-45fc-8ff5-bf640dc7426b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Mar  1 05:11:22 np0005634532 nova_compute[257049]: 2026-03-01 10:11:22.899 257053 DEBUG nova.virt.driver [None req-dccfbb97-79ac-418c-ab3a-fd3d119141c0 - - - - - -] Emitting event <LifecycleEvent: 1772359882.8168385, f4629c49-d4bd-45fc-8ff5-bf640dc7426b => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Mar  1 05:11:22 np0005634532 nova_compute[257049]: 2026-03-01 10:11:22.900 257053 INFO nova.compute.manager [None req-dccfbb97-79ac-418c-ab3a-fd3d119141c0 - - - - - -] [instance: f4629c49-d4bd-45fc-8ff5-bf640dc7426b] VM Resumed (Lifecycle Event)#033[00m
Mar  1 05:11:22 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Mar  1 05:11:22 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 05:11:22 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Mar  1 05:11:22 np0005634532 nova_compute[257049]: 2026-03-01 10:11:22.918 257053 DEBUG nova.compute.manager [None req-dccfbb97-79ac-418c-ab3a-fd3d119141c0 - - - - - -] [instance: f4629c49-d4bd-45fc-8ff5-bf640dc7426b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Mar  1 05:11:22 np0005634532 nova_compute[257049]: 2026-03-01 10:11:22.924 257053 INFO nova.compute.manager [None req-d63543cf-60ec-419e-a872-508d42e36320 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] [instance: f4629c49-d4bd-45fc-8ff5-bf640dc7426b] Took 6.94 seconds to spawn the instance on the hypervisor.#033[00m
Mar  1 05:11:22 np0005634532 nova_compute[257049]: 2026-03-01 10:11:22.924 257053 DEBUG nova.compute.manager [None req-d63543cf-60ec-419e-a872-508d42e36320 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] [instance: f4629c49-d4bd-45fc-8ff5-bf640dc7426b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Mar  1 05:11:22 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 05:11:22 np0005634532 nova_compute[257049]: 2026-03-01 10:11:22.926 257053 DEBUG nova.compute.manager [None req-dccfbb97-79ac-418c-ab3a-fd3d119141c0 - - - - - -] [instance: f4629c49-d4bd-45fc-8ff5-bf640dc7426b] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Mar  1 05:11:22 np0005634532 nova_compute[257049]: 2026-03-01 10:11:22.957 257053 INFO nova.compute.manager [None req-dccfbb97-79ac-418c-ab3a-fd3d119141c0 - - - - - -] [instance: f4629c49-d4bd-45fc-8ff5-bf640dc7426b] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Mar  1 05:11:22 np0005634532 nova_compute[257049]: 2026-03-01 10:11:22.987 257053 INFO nova.compute.manager [None req-d63543cf-60ec-419e-a872-508d42e36320 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] [instance: f4629c49-d4bd-45fc-8ff5-bf640dc7426b] Took 7.77 seconds to build instance.#033[00m
Mar  1 05:11:23 np0005634532 nova_compute[257049]: 2026-03-01 10:11:23.004 257053 DEBUG oslo_concurrency.lockutils [None req-d63543cf-60ec-419e-a872-508d42e36320 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Lock "f4629c49-d4bd-45fc-8ff5-bf640dc7426b" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.838s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Mar  1 05:11:23 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v867: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Mar  1 05:11:23 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[268971]: 01/03/2026 10:11:23 : epoch 69a41094 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37d8003590 fd 42 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:11:23 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:11:23 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:11:23 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:11:23.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:11:23 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[268971]: 01/03/2026 10:11:23 : epoch 69a41094 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37c40016a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:11:23 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:11:23 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:11:23 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:11:23.849 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:11:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:11:23.884 167541 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Mar  1 05:11:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:11:23.885 167541 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Mar  1 05:11:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:11:23.885 167541 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Mar  1 05:11:23 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 05:11:23 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 05:11:23 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[268971]: 01/03/2026 10:11:23 : epoch 69a41094 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37f800a6a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:11:24 np0005634532 nova_compute[257049]: 2026-03-01 10:11:24.682 257053 DEBUG nova.compute.manager [req-2c17a422-62ae-4190-99b3-063e6d70e6e9 req-05e2f782-9fc5-4062-8d8b-5defe6d090bd 4172bfcb2ac44ccc905f3929f41a6ec0 b5baddd982154c5d8ca5431b102a7ecd - - default default] [instance: f4629c49-d4bd-45fc-8ff5-bf640dc7426b] Received event network-vif-plugged-50a9155a-611b-4578-bf54-f7b987efbf4d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Mar  1 05:11:24 np0005634532 nova_compute[257049]: 2026-03-01 10:11:24.682 257053 DEBUG oslo_concurrency.lockutils [req-2c17a422-62ae-4190-99b3-063e6d70e6e9 req-05e2f782-9fc5-4062-8d8b-5defe6d090bd 4172bfcb2ac44ccc905f3929f41a6ec0 b5baddd982154c5d8ca5431b102a7ecd - - default default] Acquiring lock "f4629c49-d4bd-45fc-8ff5-bf640dc7426b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Mar  1 05:11:24 np0005634532 nova_compute[257049]: 2026-03-01 10:11:24.682 257053 DEBUG oslo_concurrency.lockutils [req-2c17a422-62ae-4190-99b3-063e6d70e6e9 req-05e2f782-9fc5-4062-8d8b-5defe6d090bd 4172bfcb2ac44ccc905f3929f41a6ec0 b5baddd982154c5d8ca5431b102a7ecd - - default default] Lock "f4629c49-d4bd-45fc-8ff5-bf640dc7426b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Mar  1 05:11:24 np0005634532 nova_compute[257049]: 2026-03-01 10:11:24.683 257053 DEBUG oslo_concurrency.lockutils [req-2c17a422-62ae-4190-99b3-063e6d70e6e9 req-05e2f782-9fc5-4062-8d8b-5defe6d090bd 4172bfcb2ac44ccc905f3929f41a6ec0 b5baddd982154c5d8ca5431b102a7ecd - - default default] Lock "f4629c49-d4bd-45fc-8ff5-bf640dc7426b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Mar  1 05:11:24 np0005634532 nova_compute[257049]: 2026-03-01 10:11:24.683 257053 DEBUG nova.compute.manager [req-2c17a422-62ae-4190-99b3-063e6d70e6e9 req-05e2f782-9fc5-4062-8d8b-5defe6d090bd 4172bfcb2ac44ccc905f3929f41a6ec0 b5baddd982154c5d8ca5431b102a7ecd - - default default] [instance: f4629c49-d4bd-45fc-8ff5-bf640dc7426b] No waiting events found dispatching network-vif-plugged-50a9155a-611b-4578-bf54-f7b987efbf4d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Mar  1 05:11:24 np0005634532 nova_compute[257049]: 2026-03-01 10:11:24.683 257053 WARNING nova.compute.manager [req-2c17a422-62ae-4190-99b3-063e6d70e6e9 req-05e2f782-9fc5-4062-8d8b-5defe6d090bd 4172bfcb2ac44ccc905f3929f41a6ec0 b5baddd982154c5d8ca5431b102a7ecd - - default default] [instance: f4629c49-d4bd-45fc-8ff5-bf640dc7426b] Received unexpected event network-vif-plugged-50a9155a-611b-4578-bf54-f7b987efbf4d for instance with vm_state active and task_state None.#033[00m
Mar  1 05:11:25 np0005634532 nova_compute[257049]: 2026-03-01 10:11:25.034 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:11:25 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v868: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 953 KiB/s rd, 1.8 MiB/s wr, 68 op/s
Mar  1 05:11:25 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[268971]: 01/03/2026 10:11:25 : epoch 69a41094 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37d40016a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:11:25 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:11:25 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 05:11:25 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:11:25.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 05:11:25 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[268971]: 01/03/2026 10:11:25 : epoch 69a41094 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37d8003590 fd 42 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:11:25 np0005634532 nova_compute[257049]: 2026-03-01 10:11:25.824 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:11:25 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:11:25 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:11:25 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:11:25.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:11:25 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[268971]: 01/03/2026 10:11:25 : epoch 69a41094 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37c40016a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:11:26 np0005634532 nova_compute[257049]: 2026-03-01 10:11:26.519 257053 DEBUG oslo_service.periodic_task [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Mar  1 05:11:26 np0005634532 nova_compute[257049]: 2026-03-01 10:11:26.540 257053 DEBUG nova.compute.manager [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Triggering sync for uuid f4629c49-d4bd-45fc-8ff5-bf640dc7426b _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Mar  1 05:11:26 np0005634532 nova_compute[257049]: 2026-03-01 10:11:26.541 257053 DEBUG oslo_concurrency.lockutils [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Acquiring lock "f4629c49-d4bd-45fc-8ff5-bf640dc7426b" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Mar  1 05:11:26 np0005634532 nova_compute[257049]: 2026-03-01 10:11:26.541 257053 DEBUG oslo_concurrency.lockutils [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Lock "f4629c49-d4bd-45fc-8ff5-bf640dc7426b" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Mar  1 05:11:26 np0005634532 nova_compute[257049]: 2026-03-01 10:11:26.565 257053 DEBUG oslo_concurrency.lockutils [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Lock "f4629c49-d4bd-45fc-8ff5-bf640dc7426b" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.024s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Mar  1 05:11:26 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:11:27 np0005634532 ceph-mgr[76134]: [prometheus INFO cherrypy.access.140604152982112] ::ffff:192.168.122.100 - - [01/Mar/2026:10:11:27] "GET /metrics HTTP/1.1" 200 48457 "" "Prometheus/2.51.0"
Mar  1 05:11:27 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: ::ffff:192.168.122.100 - - [01/Mar/2026:10:11:27] "GET /metrics HTTP/1.1" 200 48457 "" "Prometheus/2.51.0"
Mar  1 05:11:27 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v869: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 952 KiB/s rd, 1.8 MiB/s wr, 67 op/s
Mar  1 05:11:27 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:11:27.237Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Mar  1 05:11:27 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:11:27.238Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Mar  1 05:11:27 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[268971]: 01/03/2026 10:11:27 : epoch 69a41094 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37f800a6a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:11:27 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:11:27 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:11:27 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:11:27.672 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:11:27 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[268971]: 01/03/2026 10:11:27 : epoch 69a41094 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37d40016a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:11:27 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:11:27 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:11:27 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:11:27.853 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:11:27 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[268971]: 01/03/2026 10:11:27 : epoch 69a41094 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37d8003590 fd 42 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:11:28 np0005634532 ovn_controller[157082]: 2026-03-01T10:11:28Z|00050|binding|INFO|Releasing lport 9a82f7b0-6cfc-4712-9a67-69f2742fdc81 from this chassis (sb_readonly=0)
Mar  1 05:11:28 np0005634532 nova_compute[257049]: 2026-03-01 10:11:28.374 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:11:28 np0005634532 NetworkManager[49996]: <info>  [1772359888.3767] manager: (patch-provnet-010dfd9c-68bc-4d3c-a559-2f91b6433031-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/40)
Mar  1 05:11:28 np0005634532 NetworkManager[49996]: <info>  [1772359888.3782] manager: (patch-br-int-to-provnet-010dfd9c-68bc-4d3c-a559-2f91b6433031): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/41)
Mar  1 05:11:28 np0005634532 ovn_controller[157082]: 2026-03-01T10:11:28Z|00051|binding|INFO|Releasing lport 9a82f7b0-6cfc-4712-9a67-69f2742fdc81 from this chassis (sb_readonly=0)
Mar  1 05:11:28 np0005634532 nova_compute[257049]: 2026-03-01 10:11:28.394 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:11:28 np0005634532 nova_compute[257049]: 2026-03-01 10:11:28.398 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:11:29 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v870: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Mar  1 05:11:29 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[268971]: 01/03/2026 10:11:29 : epoch 69a41094 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37c4002b10 fd 42 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:11:29 np0005634532 nova_compute[257049]: 2026-03-01 10:11:29.643 257053 DEBUG nova.compute.manager [req-5fe4aa37-dbc3-4459-8807-89a23c369627 req-c25a52f9-41c3-47c5-b1ff-f1dd66103389 4172bfcb2ac44ccc905f3929f41a6ec0 b5baddd982154c5d8ca5431b102a7ecd - - default default] [instance: f4629c49-d4bd-45fc-8ff5-bf640dc7426b] Received event network-changed-50a9155a-611b-4578-bf54-f7b987efbf4d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Mar  1 05:11:29 np0005634532 nova_compute[257049]: 2026-03-01 10:11:29.644 257053 DEBUG nova.compute.manager [req-5fe4aa37-dbc3-4459-8807-89a23c369627 req-c25a52f9-41c3-47c5-b1ff-f1dd66103389 4172bfcb2ac44ccc905f3929f41a6ec0 b5baddd982154c5d8ca5431b102a7ecd - - default default] [instance: f4629c49-d4bd-45fc-8ff5-bf640dc7426b] Refreshing instance network info cache due to event network-changed-50a9155a-611b-4578-bf54-f7b987efbf4d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Mar  1 05:11:29 np0005634532 nova_compute[257049]: 2026-03-01 10:11:29.645 257053 DEBUG oslo_concurrency.lockutils [req-5fe4aa37-dbc3-4459-8807-89a23c369627 req-c25a52f9-41c3-47c5-b1ff-f1dd66103389 4172bfcb2ac44ccc905f3929f41a6ec0 b5baddd982154c5d8ca5431b102a7ecd - - default default] Acquiring lock "refresh_cache-f4629c49-d4bd-45fc-8ff5-bf640dc7426b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Mar  1 05:11:29 np0005634532 nova_compute[257049]: 2026-03-01 10:11:29.645 257053 DEBUG oslo_concurrency.lockutils [req-5fe4aa37-dbc3-4459-8807-89a23c369627 req-c25a52f9-41c3-47c5-b1ff-f1dd66103389 4172bfcb2ac44ccc905f3929f41a6ec0 b5baddd982154c5d8ca5431b102a7ecd - - default default] Acquired lock "refresh_cache-f4629c49-d4bd-45fc-8ff5-bf640dc7426b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Mar  1 05:11:29 np0005634532 nova_compute[257049]: 2026-03-01 10:11:29.646 257053 DEBUG nova.network.neutron [req-5fe4aa37-dbc3-4459-8807-89a23c369627 req-c25a52f9-41c3-47c5-b1ff-f1dd66103389 4172bfcb2ac44ccc905f3929f41a6ec0 b5baddd982154c5d8ca5431b102a7ecd - - default default] [instance: f4629c49-d4bd-45fc-8ff5-bf640dc7426b] Refreshing network info cache for port 50a9155a-611b-4578-bf54-f7b987efbf4d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Mar  1 05:11:29 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:11:29 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:11:29 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:11:29.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:11:29 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[268971]: 01/03/2026 10:11:29 : epoch 69a41094 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37f800a6a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:11:29 np0005634532 nova_compute[257049]: 2026-03-01 10:11:29.851 257053 DEBUG oslo_concurrency.lockutils [None req-697118e0-8681-4ba3-973d-a1ae268f6ffe 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Acquiring lock "f4629c49-d4bd-45fc-8ff5-bf640dc7426b" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Mar  1 05:11:29 np0005634532 nova_compute[257049]: 2026-03-01 10:11:29.852 257053 DEBUG oslo_concurrency.lockutils [None req-697118e0-8681-4ba3-973d-a1ae268f6ffe 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Lock "f4629c49-d4bd-45fc-8ff5-bf640dc7426b" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Mar  1 05:11:29 np0005634532 nova_compute[257049]: 2026-03-01 10:11:29.852 257053 DEBUG oslo_concurrency.lockutils [None req-697118e0-8681-4ba3-973d-a1ae268f6ffe 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Acquiring lock "f4629c49-d4bd-45fc-8ff5-bf640dc7426b-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Mar  1 05:11:29 np0005634532 nova_compute[257049]: 2026-03-01 10:11:29.852 257053 DEBUG oslo_concurrency.lockutils [None req-697118e0-8681-4ba3-973d-a1ae268f6ffe 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Lock "f4629c49-d4bd-45fc-8ff5-bf640dc7426b-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Mar  1 05:11:29 np0005634532 nova_compute[257049]: 2026-03-01 10:11:29.853 257053 DEBUG oslo_concurrency.lockutils [None req-697118e0-8681-4ba3-973d-a1ae268f6ffe 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Lock "f4629c49-d4bd-45fc-8ff5-bf640dc7426b-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Mar  1 05:11:29 np0005634532 nova_compute[257049]: 2026-03-01 10:11:29.854 257053 INFO nova.compute.manager [None req-697118e0-8681-4ba3-973d-a1ae268f6ffe 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] [instance: f4629c49-d4bd-45fc-8ff5-bf640dc7426b] Terminating instance#033[00m
Mar  1 05:11:29 np0005634532 nova_compute[257049]: 2026-03-01 10:11:29.854 257053 DEBUG nova.compute.manager [None req-697118e0-8681-4ba3-973d-a1ae268f6ffe 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] [instance: f4629c49-d4bd-45fc-8ff5-bf640dc7426b] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Mar  1 05:11:29 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:11:29 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:11:29 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:11:29.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:11:29 np0005634532 kernel: tap50a9155a-61 (unregistering): left promiscuous mode
Mar  1 05:11:29 np0005634532 NetworkManager[49996]: <info>  [1772359889.8920] device (tap50a9155a-61): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Mar  1 05:11:29 np0005634532 nova_compute[257049]: 2026-03-01 10:11:29.945 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:11:29 np0005634532 ovn_controller[157082]: 2026-03-01T10:11:29Z|00052|binding|INFO|Releasing lport 50a9155a-611b-4578-bf54-f7b987efbf4d from this chassis (sb_readonly=0)
Mar  1 05:11:29 np0005634532 ovn_controller[157082]: 2026-03-01T10:11:29Z|00053|binding|INFO|Setting lport 50a9155a-611b-4578-bf54-f7b987efbf4d down in Southbound
Mar  1 05:11:29 np0005634532 ovn_controller[157082]: 2026-03-01T10:11:29Z|00054|binding|INFO|Removing iface tap50a9155a-61 ovn-installed in OVS
Mar  1 05:11:29 np0005634532 nova_compute[257049]: 2026-03-01 10:11:29.947 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:11:29 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:11:29.953 167541 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e6:c5:22 10.100.0.12'], port_security=['fa:16:3e:e6:c5:22 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-TestNetworkBasicOps-1216249567', 'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'f4629c49-d4bd-45fc-8ff5-bf640dc7426b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-537268c9-9cf2-4b21-8842-a79772874e8d', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-TestNetworkBasicOps-1216249567', 'neutron:project_id': 'aa1916e2334f470ea8eeda213ef84cc5', 'neutron:revision_number': '4', 'neutron:security_group_ids': '57c385b8-e9ac-4d07-98e8-7eb05503eac5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.196'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=36f556cb-7cc8-4f26-a519-8e2b56d3b18f, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f611def4670>], logical_port=50a9155a-611b-4578-bf54-f7b987efbf4d) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f611def4670>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Mar  1 05:11:29 np0005634532 nova_compute[257049]: 2026-03-01 10:11:29.952 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:11:29 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[268971]: 01/03/2026 10:11:29 : epoch 69a41094 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37d4002b10 fd 42 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:11:29 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:11:29.954 167541 INFO neutron.agent.ovn.metadata.agent [-] Port 50a9155a-611b-4578-bf54-f7b987efbf4d in datapath 537268c9-9cf2-4b21-8842-a79772874e8d unbound from our chassis#033[00m
Mar  1 05:11:29 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:11:29.956 167541 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 537268c9-9cf2-4b21-8842-a79772874e8d, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Mar  1 05:11:29 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:11:29.957 262878 DEBUG oslo.privsep.daemon [-] privsep: reply[8a3af1c8-3333-4d0e-8289-7adb4cc90f40]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Mar  1 05:11:29 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:11:29.957 167541 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-537268c9-9cf2-4b21-8842-a79772874e8d namespace which is not needed anymore#033[00m
Mar  1 05:11:29 np0005634532 systemd[1]: machine-qemu\x2d3\x2dinstance\x2d00000008.scope: Deactivated successfully.
Mar  1 05:11:29 np0005634532 systemd[1]: machine-qemu\x2d3\x2dinstance\x2d00000008.scope: Consumed 8.025s CPU time.
Mar  1 05:11:29 np0005634532 systemd-machined[221390]: Machine qemu-3-instance-00000008 terminated.
Mar  1 05:11:30 np0005634532 nova_compute[257049]: 2026-03-01 10:11:30.036 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:11:30 np0005634532 neutron-haproxy-ovnmeta-537268c9-9cf2-4b21-8842-a79772874e8d[270460]: [NOTICE]   (270470) : haproxy version is 2.8.14-c23fe91
Mar  1 05:11:30 np0005634532 neutron-haproxy-ovnmeta-537268c9-9cf2-4b21-8842-a79772874e8d[270460]: [NOTICE]   (270470) : path to executable is /usr/sbin/haproxy
Mar  1 05:11:30 np0005634532 neutron-haproxy-ovnmeta-537268c9-9cf2-4b21-8842-a79772874e8d[270460]: [WARNING]  (270470) : Exiting Master process...
Mar  1 05:11:30 np0005634532 neutron-haproxy-ovnmeta-537268c9-9cf2-4b21-8842-a79772874e8d[270460]: [ALERT]    (270470) : Current worker (270473) exited with code 143 (Terminated)
Mar  1 05:11:30 np0005634532 neutron-haproxy-ovnmeta-537268c9-9cf2-4b21-8842-a79772874e8d[270460]: [WARNING]  (270470) : All workers exited. Exiting... (0)
Mar  1 05:11:30 np0005634532 systemd[1]: libpod-b2ae74d937566b24c82204104cad72c612a002061b8959dd7729fe540bdc86cc.scope: Deactivated successfully.
Mar  1 05:11:30 np0005634532 conmon[270460]: conmon b2ae74d937566b24c822 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b2ae74d937566b24c82204104cad72c612a002061b8959dd7729fe540bdc86cc.scope/container/memory.events
Mar  1 05:11:30 np0005634532 nova_compute[257049]: 2026-03-01 10:11:30.070 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:11:30 np0005634532 podman[270602]: 2026-03-01 10:11:30.071322698 +0000 UTC m=+0.042384897 container died b2ae74d937566b24c82204104cad72c612a002061b8959dd7729fe540bdc86cc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a5baae9e21dffd42c66f546b0b62f95c6af9f409d05e01e7483de42d5dd2a382, name=neutron-haproxy-ovnmeta-537268c9-9cf2-4b21-8842-a79772874e8d, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Mar  1 05:11:30 np0005634532 nova_compute[257049]: 2026-03-01 10:11:30.074 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:11:30 np0005634532 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b2ae74d937566b24c82204104cad72c612a002061b8959dd7729fe540bdc86cc-userdata-shm.mount: Deactivated successfully.
Mar  1 05:11:30 np0005634532 systemd[1]: var-lib-containers-storage-overlay-640aa1bccf6051cb4e1532d52156b4e350830b41e0552fd681f8a8e9c037b150-merged.mount: Deactivated successfully.
Mar  1 05:11:30 np0005634532 nova_compute[257049]: 2026-03-01 10:11:30.097 257053 INFO nova.virt.libvirt.driver [-] [instance: f4629c49-d4bd-45fc-8ff5-bf640dc7426b] Instance destroyed successfully.#033[00m
Mar  1 05:11:30 np0005634532 nova_compute[257049]: 2026-03-01 10:11:30.098 257053 DEBUG nova.objects.instance [None req-697118e0-8681-4ba3-973d-a1ae268f6ffe 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Lazy-loading 'resources' on Instance uuid f4629c49-d4bd-45fc-8ff5-bf640dc7426b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Mar  1 05:11:30 np0005634532 podman[270602]: 2026-03-01 10:11:30.10585903 +0000 UTC m=+0.076921229 container cleanup b2ae74d937566b24c82204104cad72c612a002061b8959dd7729fe540bdc86cc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a5baae9e21dffd42c66f546b0b62f95c6af9f409d05e01e7483de42d5dd2a382, name=neutron-haproxy-ovnmeta-537268c9-9cf2-4b21-8842-a79772874e8d, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image)
Mar  1 05:11:30 np0005634532 systemd[1]: libpod-conmon-b2ae74d937566b24c82204104cad72c612a002061b8959dd7729fe540bdc86cc.scope: Deactivated successfully.
Mar  1 05:11:30 np0005634532 nova_compute[257049]: 2026-03-01 10:11:30.110 257053 DEBUG nova.virt.libvirt.vif [None req-697118e0-8681-4ba3-973d-a1ae268f6ffe 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-03-01T10:11:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-739639784',display_name='tempest-TestNetworkBasicOps-server-739639784',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-739639784',id=8,image_ref='07f64171-cfd1-4482-a545-07063cf7c3f2',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDb1VGOs/rpxghjXi/MSfe0VNmIpUqy/oQAOp9XU9R8FcuyAMhZa3gtHzRKl1X1xHJ8dMhnFfevv8xcbXp+9/mp7kfPp12Jpwn9Fj99Twlc5F2oAHf5zU6m2bsDY9XibDg==',key_name='tempest-TestNetworkBasicOps-678481847',keypairs=<?>,launch_index=0,launched_at=2026-03-01T10:11:22Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='aa1916e2334f470ea8eeda213ef84cc5',ramdisk_id='',reservation_id='r-z4ej7vf2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='07f64171-cfd1-4482-a545-07063cf7c3f2',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1700707940',owner_user_name='tempest-TestNetworkBasicOps-1700707940-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-03-01T10:11:22Z,user_data=None,user_id='054b4e3fa290475c906614f7e45d128f',uuid=f4629c49-d4bd-45fc-8ff5-bf640dc7426b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "50a9155a-611b-4578-bf54-f7b987efbf4d", "address": "fa:16:3e:e6:c5:22", "network": {"id": "537268c9-9cf2-4b21-8842-a79772874e8d", "bridge": "br-int", "label": "tempest-network-smoke--1332539177", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aa1916e2334f470ea8eeda213ef84cc5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap50a9155a-61", "ovs_interfaceid": "50a9155a-611b-4578-bf54-f7b987efbf4d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Mar  1 05:11:30 np0005634532 nova_compute[257049]: 2026-03-01 10:11:30.111 257053 DEBUG nova.network.os_vif_util [None req-697118e0-8681-4ba3-973d-a1ae268f6ffe 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Converting VIF {"id": "50a9155a-611b-4578-bf54-f7b987efbf4d", "address": "fa:16:3e:e6:c5:22", "network": {"id": "537268c9-9cf2-4b21-8842-a79772874e8d", "bridge": "br-int", "label": "tempest-network-smoke--1332539177", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aa1916e2334f470ea8eeda213ef84cc5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap50a9155a-61", "ovs_interfaceid": "50a9155a-611b-4578-bf54-f7b987efbf4d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Mar  1 05:11:30 np0005634532 nova_compute[257049]: 2026-03-01 10:11:30.112 257053 DEBUG nova.network.os_vif_util [None req-697118e0-8681-4ba3-973d-a1ae268f6ffe 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e6:c5:22,bridge_name='br-int',has_traffic_filtering=True,id=50a9155a-611b-4578-bf54-f7b987efbf4d,network=Network(537268c9-9cf2-4b21-8842-a79772874e8d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap50a9155a-61') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Mar  1 05:11:30 np0005634532 nova_compute[257049]: 2026-03-01 10:11:30.112 257053 DEBUG os_vif [None req-697118e0-8681-4ba3-973d-a1ae268f6ffe 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e6:c5:22,bridge_name='br-int',has_traffic_filtering=True,id=50a9155a-611b-4578-bf54-f7b987efbf4d,network=Network(537268c9-9cf2-4b21-8842-a79772874e8d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap50a9155a-61') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Mar  1 05:11:30 np0005634532 nova_compute[257049]: 2026-03-01 10:11:30.113 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:11:30 np0005634532 nova_compute[257049]: 2026-03-01 10:11:30.114 257053 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap50a9155a-61, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Mar  1 05:11:30 np0005634532 nova_compute[257049]: 2026-03-01 10:11:30.115 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:11:30 np0005634532 nova_compute[257049]: 2026-03-01 10:11:30.116 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:11:30 np0005634532 nova_compute[257049]: 2026-03-01 10:11:30.118 257053 INFO os_vif [None req-697118e0-8681-4ba3-973d-a1ae268f6ffe 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e6:c5:22,bridge_name='br-int',has_traffic_filtering=True,id=50a9155a-611b-4578-bf54-f7b987efbf4d,network=Network(537268c9-9cf2-4b21-8842-a79772874e8d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap50a9155a-61')#033[00m
Mar  1 05:11:30 np0005634532 podman[270638]: 2026-03-01 10:11:30.154675244 +0000 UTC m=+0.032531604 container remove b2ae74d937566b24c82204104cad72c612a002061b8959dd7729fe540bdc86cc (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a5baae9e21dffd42c66f546b0b62f95c6af9f409d05e01e7483de42d5dd2a382, name=neutron-haproxy-ovnmeta-537268c9-9cf2-4b21-8842-a79772874e8d, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Mar  1 05:11:30 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:11:30.158 262878 DEBUG oslo.privsep.daemon [-] privsep: reply[3b4b6efa-9c9d-4a2e-90af-cb4a8d61c375]: (4, ('Sun Mar  1 10:11:30 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-537268c9-9cf2-4b21-8842-a79772874e8d (b2ae74d937566b24c82204104cad72c612a002061b8959dd7729fe540bdc86cc)\nb2ae74d937566b24c82204104cad72c612a002061b8959dd7729fe540bdc86cc\nSun Mar  1 10:11:30 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-537268c9-9cf2-4b21-8842-a79772874e8d (b2ae74d937566b24c82204104cad72c612a002061b8959dd7729fe540bdc86cc)\nb2ae74d937566b24c82204104cad72c612a002061b8959dd7729fe540bdc86cc\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Mar  1 05:11:30 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:11:30.161 262878 DEBUG oslo.privsep.daemon [-] privsep: reply[270190f8-6142-4fee-b319-b8771a16a244]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Mar  1 05:11:30 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:11:30.162 167541 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap537268c9-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Mar  1 05:11:30 np0005634532 nova_compute[257049]: 2026-03-01 10:11:30.163 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:11:30 np0005634532 kernel: tap537268c9-90: left promiscuous mode
Mar  1 05:11:30 np0005634532 nova_compute[257049]: 2026-03-01 10:11:30.168 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:11:30 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:11:30.171 262878 DEBUG oslo.privsep.daemon [-] privsep: reply[5760a52a-5dfc-4fac-966d-8e601572b6e1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Mar  1 05:11:30 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:11:30.185 262878 DEBUG oslo.privsep.daemon [-] privsep: reply[66268aa7-a5e1-4dea-a1e5-6a25f06e8e31]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Mar  1 05:11:30 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:11:30.186 262878 DEBUG oslo.privsep.daemon [-] privsep: reply[30607122-4150-439d-87b1-890513757cb4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Mar  1 05:11:30 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:11:30.198 262878 DEBUG oslo.privsep.daemon [-] privsep: reply[b8738c7e-a30f-4f89-8445-d81dea394641]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 423865, 'reachable_time': 22568, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 270672, 'error': None, 'target': 'ovnmeta-537268c9-9cf2-4b21-8842-a79772874e8d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Mar  1 05:11:30 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:11:30.200 167914 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-537268c9-9cf2-4b21-8842-a79772874e8d deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Mar  1 05:11:30 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:11:30.200 167914 DEBUG oslo.privsep.daemon [-] privsep: reply[24ed33cf-e154-4330-9718-2c492c205c57]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Mar  1 05:11:30 np0005634532 systemd[1]: run-netns-ovnmeta\x2d537268c9\x2d9cf2\x2d4b21\x2d8842\x2da79772874e8d.mount: Deactivated successfully.
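The three entries above complete the teardown of the ovnmeta-537268c9-9cf2-4b21-8842-a79772874e8d namespace: the privsep daemon runs remove_netns on behalf of the metadata agent, and systemd then reaps the now-empty /run/netns bind mount. A minimal sketch of the privileged call, assuming pyroute2 (which is what neutron's ip_lib wraps); the error handling is illustrative, not the neutron source:

    # Illustrative sketch only. Assumes pyroute2 is installed.
    from pyroute2 import netns

    def remove_namespace(name):
        try:
            netns.remove(name)  # detaches and unlinks /run/netns/<name>
        except FileNotFoundError:
            pass  # namespace already gone: treat as deleted

    remove_namespace('ovnmeta-537268c9-9cf2-4b21-8842-a79772874e8d')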
Mar  1 05:11:30 np0005634532 nova_compute[257049]: 2026-03-01 10:11:30.501 257053 INFO nova.virt.libvirt.driver [None req-697118e0-8681-4ba3-973d-a1ae268f6ffe 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] [instance: f4629c49-d4bd-45fc-8ff5-bf640dc7426b] Deleting instance files /var/lib/nova/instances/f4629c49-d4bd-45fc-8ff5-bf640dc7426b_del#033[00m
Mar  1 05:11:30 np0005634532 nova_compute[257049]: 2026-03-01 10:11:30.502 257053 INFO nova.virt.libvirt.driver [None req-697118e0-8681-4ba3-973d-a1ae268f6ffe 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] [instance: f4629c49-d4bd-45fc-8ff5-bf640dc7426b] Deletion of /var/lib/nova/instances/f4629c49-d4bd-45fc-8ff5-bf640dc7426b_del complete#033[00m
Mar  1 05:11:30 np0005634532 nova_compute[257049]: 2026-03-01 10:11:30.555 257053 INFO nova.compute.manager [None req-697118e0-8681-4ba3-973d-a1ae268f6ffe 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] [instance: f4629c49-d4bd-45fc-8ff5-bf640dc7426b] Took 0.70 seconds to destroy the instance on the hypervisor.#033[00m
Mar  1 05:11:30 np0005634532 nova_compute[257049]: 2026-03-01 10:11:30.556 257053 DEBUG oslo.service.loopingcall [None req-697118e0-8681-4ba3-973d-a1ae268f6ffe 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Mar  1 05:11:30 np0005634532 nova_compute[257049]: 2026-03-01 10:11:30.556 257053 DEBUG nova.compute.manager [-] [instance: f4629c49-d4bd-45fc-8ff5-bf640dc7426b] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Mar  1 05:11:30 np0005634532 nova_compute[257049]: 2026-03-01 10:11:30.556 257053 DEBUG nova.network.neutron [-] [instance: f4629c49-d4bd-45fc-8ff5-bf640dc7426b] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Mar  1 05:11:31 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v871: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Mar  1 05:11:31 np0005634532 nova_compute[257049]: 2026-03-01 10:11:31.245 257053 DEBUG nova.network.neutron [req-5fe4aa37-dbc3-4459-8807-89a23c369627 req-c25a52f9-41c3-47c5-b1ff-f1dd66103389 4172bfcb2ac44ccc905f3929f41a6ec0 b5baddd982154c5d8ca5431b102a7ecd - - default default] [instance: f4629c49-d4bd-45fc-8ff5-bf640dc7426b] Updated VIF entry in instance network info cache for port 50a9155a-611b-4578-bf54-f7b987efbf4d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Mar  1 05:11:31 np0005634532 nova_compute[257049]: 2026-03-01 10:11:31.246 257053 DEBUG nova.network.neutron [req-5fe4aa37-dbc3-4459-8807-89a23c369627 req-c25a52f9-41c3-47c5-b1ff-f1dd66103389 4172bfcb2ac44ccc905f3929f41a6ec0 b5baddd982154c5d8ca5431b102a7ecd - - default default] [instance: f4629c49-d4bd-45fc-8ff5-bf640dc7426b] Updating instance_info_cache with network_info: [{"id": "50a9155a-611b-4578-bf54-f7b987efbf4d", "address": "fa:16:3e:e6:c5:22", "network": {"id": "537268c9-9cf2-4b21-8842-a79772874e8d", "bridge": "br-int", "label": "tempest-network-smoke--1332539177", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.196", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aa1916e2334f470ea8eeda213ef84cc5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap50a9155a-61", "ovs_interfaceid": "50a9155a-611b-4578-bf54-f7b987efbf4d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Mar  1 05:11:31 np0005634532 nova_compute[257049]: 2026-03-01 10:11:31.266 257053 DEBUG oslo_concurrency.lockutils [req-5fe4aa37-dbc3-4459-8807-89a23c369627 req-c25a52f9-41c3-47c5-b1ff-f1dd66103389 4172bfcb2ac44ccc905f3929f41a6ec0 b5baddd982154c5d8ca5431b102a7ecd - - default default] Releasing lock "refresh_cache-f4629c49-d4bd-45fc-8ff5-bf640dc7426b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Mar  1 05:11:31 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[268971]: 01/03/2026 10:11:31 : epoch 69a41094 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37d8003590 fd 42 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:11:31 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:11:31 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 05:11:31 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:11:31.678 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
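The anonymous "HEAD / HTTP/1.0" requests from 192.168.122.100 and 192.168.122.102, which recur roughly every two seconds for the rest of this excerpt, look like load-balancer health probes against radosgw; each returns 200 with an empty body. A stdlib probe along the same lines (target host and port are assumptions, since the log does not record the listener address):

    # Hypothetical reproduction of the HEAD / health probe.
    # 'localhost' and port 8080 are assumptions for illustration.
    import http.client

    conn = http.client.HTTPConnection('localhost', 8080, timeout=2)
    conn.request('HEAD', '/')
    print(conn.getresponse().status)  # radosgw answers 200, no body
    conn.close()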
Mar  1 05:11:31 np0005634532 nova_compute[257049]: 2026-03-01 10:11:31.708 257053 DEBUG nova.compute.manager [req-b8916ae6-23fa-4ae7-93f8-03622f695f27 req-864ac7e9-18f6-4775-8927-8221f934d620 4172bfcb2ac44ccc905f3929f41a6ec0 b5baddd982154c5d8ca5431b102a7ecd - - default default] [instance: f4629c49-d4bd-45fc-8ff5-bf640dc7426b] Received event network-vif-unplugged-50a9155a-611b-4578-bf54-f7b987efbf4d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Mar  1 05:11:31 np0005634532 nova_compute[257049]: 2026-03-01 10:11:31.709 257053 DEBUG oslo_concurrency.lockutils [req-b8916ae6-23fa-4ae7-93f8-03622f695f27 req-864ac7e9-18f6-4775-8927-8221f934d620 4172bfcb2ac44ccc905f3929f41a6ec0 b5baddd982154c5d8ca5431b102a7ecd - - default default] Acquiring lock "f4629c49-d4bd-45fc-8ff5-bf640dc7426b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Mar  1 05:11:31 np0005634532 nova_compute[257049]: 2026-03-01 10:11:31.709 257053 DEBUG oslo_concurrency.lockutils [req-b8916ae6-23fa-4ae7-93f8-03622f695f27 req-864ac7e9-18f6-4775-8927-8221f934d620 4172bfcb2ac44ccc905f3929f41a6ec0 b5baddd982154c5d8ca5431b102a7ecd - - default default] Lock "f4629c49-d4bd-45fc-8ff5-bf640dc7426b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Mar  1 05:11:31 np0005634532 nova_compute[257049]: 2026-03-01 10:11:31.709 257053 DEBUG oslo_concurrency.lockutils [req-b8916ae6-23fa-4ae7-93f8-03622f695f27 req-864ac7e9-18f6-4775-8927-8221f934d620 4172bfcb2ac44ccc905f3929f41a6ec0 b5baddd982154c5d8ca5431b102a7ecd - - default default] Lock "f4629c49-d4bd-45fc-8ff5-bf640dc7426b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Mar  1 05:11:31 np0005634532 nova_compute[257049]: 2026-03-01 10:11:31.709 257053 DEBUG nova.compute.manager [req-b8916ae6-23fa-4ae7-93f8-03622f695f27 req-864ac7e9-18f6-4775-8927-8221f934d620 4172bfcb2ac44ccc905f3929f41a6ec0 b5baddd982154c5d8ca5431b102a7ecd - - default default] [instance: f4629c49-d4bd-45fc-8ff5-bf640dc7426b] No waiting events found dispatching network-vif-unplugged-50a9155a-611b-4578-bf54-f7b987efbf4d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Mar  1 05:11:31 np0005634532 nova_compute[257049]: 2026-03-01 10:11:31.709 257053 DEBUG nova.compute.manager [req-b8916ae6-23fa-4ae7-93f8-03622f695f27 req-864ac7e9-18f6-4775-8927-8221f934d620 4172bfcb2ac44ccc905f3929f41a6ec0 b5baddd982154c5d8ca5431b102a7ecd - - default default] [instance: f4629c49-d4bd-45fc-8ff5-bf640dc7426b] Received event network-vif-unplugged-50a9155a-611b-4578-bf54-f7b987efbf4d for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Mar  1 05:11:31 np0005634532 nova_compute[257049]: 2026-03-01 10:11:31.710 257053 DEBUG nova.compute.manager [req-b8916ae6-23fa-4ae7-93f8-03622f695f27 req-864ac7e9-18f6-4775-8927-8221f934d620 4172bfcb2ac44ccc905f3929f41a6ec0 b5baddd982154c5d8ca5431b102a7ecd - - default default] [instance: f4629c49-d4bd-45fc-8ff5-bf640dc7426b] Received event network-vif-plugged-50a9155a-611b-4578-bf54-f7b987efbf4d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Mar  1 05:11:31 np0005634532 nova_compute[257049]: 2026-03-01 10:11:31.710 257053 DEBUG oslo_concurrency.lockutils [req-b8916ae6-23fa-4ae7-93f8-03622f695f27 req-864ac7e9-18f6-4775-8927-8221f934d620 4172bfcb2ac44ccc905f3929f41a6ec0 b5baddd982154c5d8ca5431b102a7ecd - - default default] Acquiring lock "f4629c49-d4bd-45fc-8ff5-bf640dc7426b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Mar  1 05:11:31 np0005634532 nova_compute[257049]: 2026-03-01 10:11:31.710 257053 DEBUG oslo_concurrency.lockutils [req-b8916ae6-23fa-4ae7-93f8-03622f695f27 req-864ac7e9-18f6-4775-8927-8221f934d620 4172bfcb2ac44ccc905f3929f41a6ec0 b5baddd982154c5d8ca5431b102a7ecd - - default default] Lock "f4629c49-d4bd-45fc-8ff5-bf640dc7426b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Mar  1 05:11:31 np0005634532 nova_compute[257049]: 2026-03-01 10:11:31.710 257053 DEBUG oslo_concurrency.lockutils [req-b8916ae6-23fa-4ae7-93f8-03622f695f27 req-864ac7e9-18f6-4775-8927-8221f934d620 4172bfcb2ac44ccc905f3929f41a6ec0 b5baddd982154c5d8ca5431b102a7ecd - - default default] Lock "f4629c49-d4bd-45fc-8ff5-bf640dc7426b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Mar  1 05:11:31 np0005634532 nova_compute[257049]: 2026-03-01 10:11:31.710 257053 DEBUG nova.compute.manager [req-b8916ae6-23fa-4ae7-93f8-03622f695f27 req-864ac7e9-18f6-4775-8927-8221f934d620 4172bfcb2ac44ccc905f3929f41a6ec0 b5baddd982154c5d8ca5431b102a7ecd - - default default] [instance: f4629c49-d4bd-45fc-8ff5-bf640dc7426b] No waiting events found dispatching network-vif-plugged-50a9155a-611b-4578-bf54-f7b987efbf4d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Mar  1 05:11:31 np0005634532 nova_compute[257049]: 2026-03-01 10:11:31.711 257053 WARNING nova.compute.manager [req-b8916ae6-23fa-4ae7-93f8-03622f695f27 req-864ac7e9-18f6-4775-8927-8221f934d620 4172bfcb2ac44ccc905f3929f41a6ec0 b5baddd982154c5d8ca5431b102a7ecd - - default default] [instance: f4629c49-d4bd-45fc-8ff5-bf640dc7426b] Received unexpected event network-vif-plugged-50a9155a-611b-4578-bf54-f7b987efbf4d for instance with vm_state active and task_state deleting.#033[00m
Mar  1 05:11:31 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[268971]: 01/03/2026 10:11:31 : epoch 69a41094 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37c4002b10 fd 42 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:11:31 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:11:31 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:11:31 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:11:31.857 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:11:31 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[268971]: 01/03/2026 10:11:31 : epoch 69a41094 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37f800a6a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:11:31 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:11:32 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-crash-compute-0[81339]: ERROR:ceph-crash:Error scraping /var/lib/ceph/crash: [Errno 13] Permission denied: '/var/lib/ceph/crash'
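The ceph-crash entry above is a plain EACCES: the containerized crash agent cannot read /var/lib/ceph/crash, so the same error repeats on every scrape cycle. The failing step reduces to a directory scan of the sort below (a sketch, not the ceph-crash source):

    # Minimal sketch of the scrape that fails with [Errno 13].
    import os

    def scrape(path='/var/lib/ceph/crash'):
        try:
            return [e.name for e in os.scandir(path) if e.is_dir()]
        except PermissionError as exc:
            # matches the logged "[Errno 13] Permission denied" message
            print(f'ERROR:ceph-crash:Error scraping {path}: {exc}')
            return []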
Mar  1 05:11:32 np0005634532 nova_compute[257049]: 2026-03-01 10:11:32.478 257053 DEBUG nova.network.neutron [-] [instance: f4629c49-d4bd-45fc-8ff5-bf640dc7426b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Mar  1 05:11:32 np0005634532 nova_compute[257049]: 2026-03-01 10:11:32.508 257053 INFO nova.compute.manager [-] [instance: f4629c49-d4bd-45fc-8ff5-bf640dc7426b] Took 1.95 seconds to deallocate network for instance.#033[00m
Mar  1 05:11:32 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Mar  1 05:11:32 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Mar  1 05:11:32 np0005634532 nova_compute[257049]: 2026-03-01 10:11:32.570 257053 DEBUG oslo_concurrency.lockutils [None req-697118e0-8681-4ba3-973d-a1ae268f6ffe 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Mar  1 05:11:32 np0005634532 nova_compute[257049]: 2026-03-01 10:11:32.571 257053 DEBUG oslo_concurrency.lockutils [None req-697118e0-8681-4ba3-973d-a1ae268f6ffe 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Mar  1 05:11:32 np0005634532 nova_compute[257049]: 2026-03-01 10:11:32.627 257053 DEBUG oslo_concurrency.processutils [None req-697118e0-8681-4ba3-973d-a1ae268f6ffe 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Mar  1 05:11:33 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Mar  1 05:11:33 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1043256717' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Mar  1 05:11:33 np0005634532 nova_compute[257049]: 2026-03-01 10:11:33.042 257053 DEBUG oslo_concurrency.processutils [None req-697118e0-8681-4ba3-973d-a1ae268f6ffe 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.415s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
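For RBD-backed storage the libvirt driver sizes the disk pool by shelling out to exactly the command logged above, and it repeats the call on every resource-tracker pass (about 0.4 s each here). The same totals can be pulled by hand; the JSON key names below are assumed from ceph's usual df --format=json layout:

    # Re-running the command from the log and reading cluster totals.
    import json
    import subprocess

    out = subprocess.check_output(
        ['ceph', 'df', '--format=json',
         '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf'])
    stats = json.loads(out)['stats']
    print(stats['total_bytes'], stats['total_avail_bytes'])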
Mar  1 05:11:33 np0005634532 nova_compute[257049]: 2026-03-01 10:11:33.051 257053 DEBUG nova.compute.provider_tree [None req-697118e0-8681-4ba3-973d-a1ae268f6ffe 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Inventory has not changed in ProviderTree for provider: 018d246d-1e01-4168-9128-598c5501111b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Mar  1 05:11:33 np0005634532 nova_compute[257049]: 2026-03-01 10:11:33.068 257053 DEBUG nova.scheduler.client.report [None req-697118e0-8681-4ba3-973d-a1ae268f6ffe 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Inventory has not changed for provider 018d246d-1e01-4168-9128-598c5501111b based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
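Given that inventory, placement's schedulable capacity per resource class is (total - reserved) * allocation_ratio, so this host offers 32 VCPU, 7167 MB of RAM and about 52 GB of disk. The arithmetic worked against the values in the line above (a sketch of the formula, not placement's code):

    # Capacity arithmetic applied to the logged inventory.
    inventory = {
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        cap = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
        print(rc, cap)  # MEMORY_MB 7167.0, VCPU 32.0, DISK_GB 52.2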
Mar  1 05:11:33 np0005634532 nova_compute[257049]: 2026-03-01 10:11:33.100 257053 DEBUG oslo_concurrency.lockutils [None req-697118e0-8681-4ba3-973d-a1ae268f6ffe 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.529s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Mar  1 05:11:33 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v872: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Mar  1 05:11:33 np0005634532 nova_compute[257049]: 2026-03-01 10:11:33.141 257053 INFO nova.scheduler.client.report [None req-697118e0-8681-4ba3-973d-a1ae268f6ffe 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Deleted allocations for instance f4629c49-d4bd-45fc-8ff5-bf640dc7426b#033[00m
Mar  1 05:11:33 np0005634532 nova_compute[257049]: 2026-03-01 10:11:33.230 257053 DEBUG oslo_concurrency.lockutils [None req-697118e0-8681-4ba3-973d-a1ae268f6ffe 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Lock "f4629c49-d4bd-45fc-8ff5-bf640dc7426b" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.378s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Mar  1 05:11:33 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[268971]: 01/03/2026 10:11:33 : epoch 69a41094 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37d4002b10 fd 42 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:11:33 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:11:33 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:11:33 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:11:33.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:11:33 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[268971]: 01/03/2026 10:11:33 : epoch 69a41094 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37d80042a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:11:33 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:11:33 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:11:33 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:11:33.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:11:33 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[268971]: 01/03/2026 10:11:33 : epoch 69a41094 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37c4002b10 fd 42 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:11:35 np0005634532 nova_compute[257049]: 2026-03-01 10:11:35.037 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:11:35 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v873: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 100 op/s
Mar  1 05:11:35 np0005634532 nova_compute[257049]: 2026-03-01 10:11:35.115 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:11:35 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[268971]: 01/03/2026 10:11:35 : epoch 69a41094 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37f800a6a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:11:35 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:11:35 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000024s ======
Mar  1 05:11:35 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:11:35.684 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Mar  1 05:11:35 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[268971]: 01/03/2026 10:11:35 : epoch 69a41094 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37d4002b10 fd 42 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:11:35 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:11:35 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:11:35 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:11:35.861 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:11:35 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[268971]: 01/03/2026 10:11:35 : epoch 69a41094 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37d80042a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:11:36 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:11:37 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: ::ffff:192.168.122.100 - - [01/Mar/2026:10:11:37] "GET /metrics HTTP/1.1" 200 48475 "" "Prometheus/2.51.0"
Mar  1 05:11:37 np0005634532 ceph-mgr[76134]: [prometheus INFO cherrypy.access.140604152982112] ::ffff:192.168.122.100 - - [01/Mar/2026:10:11:37] "GET /metrics HTTP/1.1" 200 48475 "" "Prometheus/2.51.0"
Mar  1 05:11:37 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v874: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 1.0 MiB/s rd, 1.2 KiB/s wr, 59 op/s
Mar  1 05:11:37 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:11:37.238Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Mar  1 05:11:37 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:11:37.239Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
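Both dashboard webhook receivers are unreachable from this node: compute-1 fails at TCP connect (i/o timeout) and compute-2 exhausts the notification deadline, so every alert delivery is dropped after two attempts. A quick reachability check on the receiver port (hostnames taken from the log; the timeout value is arbitrary):

    # TCP reachability check for the alertmanager webhook receivers.
    import socket

    for host in ('compute-1.ctlplane.example.com',
                 'compute-2.ctlplane.example.com'):
        s = socket.socket()
        s.settimeout(2)
        try:
            s.connect((host, 8443))
            print(host, 'reachable')
        except OSError as exc:  # covers timeouts and connection refusals
            print(host, 'unreachable:', exc)
        finally:
            s.close()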
Mar  1 05:11:37 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[268971]: 01/03/2026 10:11:37 : epoch 69a41094 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37d80042a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:11:37 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:11:37 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:11:37 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:11:37.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:11:37 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[268971]: 01/03/2026 10:11:37 : epoch 69a41094 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37f800a6a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:11:37 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:11:37 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:11:37 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:11:37.863 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:11:37 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[268971]: 01/03/2026 10:11:37 : epoch 69a41094 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37d4003c10 fd 42 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:11:39 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v875: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 1.0 MiB/s rd, 1.2 KiB/s wr, 59 op/s
Mar  1 05:11:39 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[268971]: 01/03/2026 10:11:39 : epoch 69a41094 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37d4003c10 fd 42 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:11:39 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:11:39 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:11:39 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:11:39.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:11:39 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[268971]: 01/03/2026 10:11:39 : epoch 69a41094 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37c4003c10 fd 42 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:11:39 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:11:39 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:11:39 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:11:39.865 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:11:39 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[268971]: 01/03/2026 10:11:39 : epoch 69a41094 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37f800a6a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:11:39 np0005634532 nova_compute[257049]: 2026-03-01 10:11:39.976 257053 DEBUG oslo_service.periodic_task [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Mar  1 05:11:39 np0005634532 nova_compute[257049]: 2026-03-01 10:11:39.976 257053 DEBUG oslo_service.periodic_task [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Mar  1 05:11:39 np0005634532 nova_compute[257049]: 2026-03-01 10:11:39.976 257053 DEBUG nova.compute.manager [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Mar  1 05:11:40 np0005634532 nova_compute[257049]: 2026-03-01 10:11:40.038 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:11:40 np0005634532 nova_compute[257049]: 2026-03-01 10:11:40.117 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:11:40 np0005634532 podman[270735]: 2026-03-01 10:11:40.415478626 +0000 UTC m=+0.098777977 container health_status eba5e7c5bf1cb0694498c3b5d0f9b901dec36f2482ec40aef98fed1e15d9f18d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:bf1f44b6bfa655654f556ae3adca2582892e72550d916a5b0a8dbacb0e210bbe, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '08b9467e1b7e95537191fb2fa6825d176e65d7d6be048e9a0925db064969bbde-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:bf1f44b6bfa655654f556ae3adca2582892e72550d916a5b0a8dbacb0e210bbe', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20260223, maintainer=OpenStack Kubernetes Operator team)
Mar  1 05:11:41 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v876: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 26 op/s
Mar  1 05:11:41 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[268971]: 01/03/2026 10:11:41 : epoch 69a41094 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37d4003c10 fd 42 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:11:41 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:11:41 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:11:41 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:11:41.692 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:11:41 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[268971]: 01/03/2026 10:11:41 : epoch 69a41094 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37d4003c10 fd 42 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:11:41 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:11:41 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:11:41 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:11:41.866 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:11:41 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:11:41 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[268971]: 01/03/2026 10:11:41 : epoch 69a41094 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37c4003c10 fd 42 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:11:41 np0005634532 nova_compute[257049]: 2026-03-01 10:11:41.976 257053 DEBUG oslo_service.periodic_task [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Mar  1 05:11:42 np0005634532 nova_compute[257049]: 2026-03-01 10:11:42.976 257053 DEBUG oslo_service.periodic_task [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Mar  1 05:11:42 np0005634532 nova_compute[257049]: 2026-03-01 10:11:42.977 257053 DEBUG oslo_service.periodic_task [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Mar  1 05:11:43 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v877: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 26 op/s
Mar  1 05:11:43 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[268971]: 01/03/2026 10:11:43 : epoch 69a41094 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37f800a6a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:11:43 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:11:43 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:11:43 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:11:43.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:11:43 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[268971]: 01/03/2026 10:11:43 : epoch 69a41094 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37f800a6a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:11:43 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:11:43 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:11:43 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:11:43.868 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:11:43 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[268971]: 01/03/2026 10:11:43 : epoch 69a41094 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37f800a6a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:11:43 np0005634532 nova_compute[257049]: 2026-03-01 10:11:43.973 257053 DEBUG oslo_service.periodic_task [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Mar  1 05:11:44 np0005634532 nova_compute[257049]: 2026-03-01 10:11:44.976 257053 DEBUG oslo_service.periodic_task [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Mar  1 05:11:44 np0005634532 nova_compute[257049]: 2026-03-01 10:11:44.977 257053 DEBUG oslo_service.periodic_task [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Mar  1 05:11:45 np0005634532 nova_compute[257049]: 2026-03-01 10:11:45.007 257053 DEBUG oslo_concurrency.lockutils [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Mar  1 05:11:45 np0005634532 nova_compute[257049]: 2026-03-01 10:11:45.007 257053 DEBUG oslo_concurrency.lockutils [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Mar  1 05:11:45 np0005634532 nova_compute[257049]: 2026-03-01 10:11:45.008 257053 DEBUG oslo_concurrency.lockutils [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Mar  1 05:11:45 np0005634532 nova_compute[257049]: 2026-03-01 10:11:45.008 257053 DEBUG nova.compute.resource_tracker [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Mar  1 05:11:45 np0005634532 nova_compute[257049]: 2026-03-01 10:11:45.008 257053 DEBUG oslo_concurrency.processutils [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Mar  1 05:11:45 np0005634532 nova_compute[257049]: 2026-03-01 10:11:45.041 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:11:45 np0005634532 nova_compute[257049]: 2026-03-01 10:11:45.091 257053 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772359890.090241, f4629c49-d4bd-45fc-8ff5-bf640dc7426b => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Mar  1 05:11:45 np0005634532 nova_compute[257049]: 2026-03-01 10:11:45.092 257053 INFO nova.compute.manager [-] [instance: f4629c49-d4bd-45fc-8ff5-bf640dc7426b] VM Stopped (Lifecycle Event)#033[00m
Mar  1 05:11:45 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v878: 353 pgs: 353 active+clean; 88 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s rd, 1.8 MiB/s wr, 43 op/s
Mar  1 05:11:45 np0005634532 nova_compute[257049]: 2026-03-01 10:11:45.118 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:11:45 np0005634532 nova_compute[257049]: 2026-03-01 10:11:45.122 257053 DEBUG nova.compute.manager [None req-d36d53b8-32b0-4ad3-8ad7-ffce37f07eb3 - - - - - -] [instance: f4629c49-d4bd-45fc-8ff5-bf640dc7426b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Mar  1 05:11:45 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Mar  1 05:11:45 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3169817056' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Mar  1 05:11:45 np0005634532 nova_compute[257049]: 2026-03-01 10:11:45.454 257053 DEBUG oslo_concurrency.processutils [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Mar  1 05:11:45 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[268971]: 01/03/2026 10:11:45 : epoch 69a41094 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37f800a6a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:11:45 np0005634532 podman[270788]: 2026-03-01 10:11:45.550461186 +0000 UTC m=+0.062064482 container health_status 1df4dec302d8b40bce93714986cfc892519e8a31436399020d0034ae17ac64d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a5baae9e21dffd42c66f546b0b62f95c6af9f409d05e01e7483de42d5dd2a382, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '08b9467e1b7e95537191fb2fa6825d176e65d7d6be048e9a0925db064969bbde-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a5baae9e21dffd42c66f546b0b62f95c6af9f409d05e01e7483de42d5dd2a382', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true)
Mar  1 05:11:45 np0005634532 nova_compute[257049]: 2026-03-01 10:11:45.621 257053 WARNING nova.virt.libvirt.driver [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Mar  1 05:11:45 np0005634532 nova_compute[257049]: 2026-03-01 10:11:45.622 257053 DEBUG nova.compute.resource_tracker [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4542MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Mar  1 05:11:45 np0005634532 nova_compute[257049]: 2026-03-01 10:11:45.622 257053 DEBUG oslo_concurrency.lockutils [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Mar  1 05:11:45 np0005634532 nova_compute[257049]: 2026-03-01 10:11:45.622 257053 DEBUG oslo_concurrency.lockutils [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Mar  1 05:11:45 np0005634532 nova_compute[257049]: 2026-03-01 10:11:45.680 257053 DEBUG nova.compute.resource_tracker [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Mar  1 05:11:45 np0005634532 nova_compute[257049]: 2026-03-01 10:11:45.680 257053 DEBUG nova.compute.resource_tracker [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Mar  1 05:11:45 np0005634532 nova_compute[257049]: 2026-03-01 10:11:45.694 257053 DEBUG nova.scheduler.client.report [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Refreshing inventories for resource provider 018d246d-1e01-4168-9128-598c5501111b _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Mar  1 05:11:45 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:11:45 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 05:11:45 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:11:45.700 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 05:11:45 np0005634532 nova_compute[257049]: 2026-03-01 10:11:45.712 257053 DEBUG nova.scheduler.client.report [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Updating ProviderTree inventory for provider 018d246d-1e01-4168-9128-598c5501111b from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Mar  1 05:11:45 np0005634532 nova_compute[257049]: 2026-03-01 10:11:45.713 257053 DEBUG nova.compute.provider_tree [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Updating inventory in ProviderTree for provider 018d246d-1e01-4168-9128-598c5501111b with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Mar  1 05:11:45 np0005634532 nova_compute[257049]: 2026-03-01 10:11:45.733 257053 DEBUG nova.scheduler.client.report [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Refreshing aggregate associations for resource provider 018d246d-1e01-4168-9128-598c5501111b, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Mar  1 05:11:45 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[268971]: 01/03/2026 10:11:45 : epoch 69a41094 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37f800a6a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:11:45 np0005634532 nova_compute[257049]: 2026-03-01 10:11:45.769 257053 DEBUG nova.scheduler.client.report [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Refreshing trait associations for resource provider 018d246d-1e01-4168-9128-598c5501111b, traits: COMPUTE_SECURITY_TPM_1_2,COMPUTE_ACCELERATORS,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_F16C,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_SSE42,HW_CPU_X86_SVM,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_BMI2,HW_CPU_X86_BMI,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_SSSE3,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE2,HW_CPU_X86_CLMUL,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NODE,HW_CPU_X86_SSE,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_FMA3,HW_CPU_X86_AMD_SVM,COMPUTE_RESCUE_BFV,HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_ABM,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_TRUSTED_CERTS,COMPUTE_STORAGE_BUS_SATA,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_AESNI,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_MMX,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_AVX2,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_USB,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SHA,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_SSE41,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_SSE4A,COMPUTE_SECURITY_TPM_2_0,COMPUTE_STORAGE_BUS_IDE,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_PCNET _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Mar  1 05:11:45 np0005634532 nova_compute[257049]: 2026-03-01 10:11:45.788 257053 DEBUG oslo_concurrency.processutils [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Mar  1 05:11:45 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:11:45 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 05:11:45 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:11:45.870 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 05:11:45 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[268971]: 01/03/2026 10:11:45 : epoch 69a41094 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37d4003c10 fd 42 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:11:46 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Mar  1 05:11:46 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/732794384' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Mar  1 05:11:46 np0005634532 nova_compute[257049]: 2026-03-01 10:11:46.192 257053 DEBUG oslo_concurrency.processutils [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.404s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Mar  1 05:11:46 np0005634532 nova_compute[257049]: 2026-03-01 10:11:46.198 257053 DEBUG nova.compute.provider_tree [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Inventory has not changed in ProviderTree for provider: 018d246d-1e01-4168-9128-598c5501111b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Mar  1 05:11:46 np0005634532 nova_compute[257049]: 2026-03-01 10:11:46.214 257053 DEBUG nova.scheduler.client.report [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Inventory has not changed for provider 018d246d-1e01-4168-9128-598c5501111b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Mar  1 05:11:46 np0005634532 nova_compute[257049]: 2026-03-01 10:11:46.240 257053 DEBUG nova.compute.resource_tracker [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Mar  1 05:11:46 np0005634532 nova_compute[257049]: 2026-03-01 10:11:46.241 257053 DEBUG oslo_concurrency.lockutils [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.618s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
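Behind the resource-tracker update above, this node appears to size its DISK_GB inventory against the Ceph pool by shelling out to "ceph df" through oslo.concurrency (dispatched at 10:11:45.788, rc 0 after 0.404s). A hedged sketch of the same round trip; processutils.execute is the oslo helper named in the log, while the JSON fields read at the end are illustrative of current ceph output, not a schema guarantee:

    # Reproduce the "ceph df" call from the oslo_concurrency DEBUG lines.
    import json
    from oslo_concurrency import processutils

    out, _err = processutils.execute(
        'ceph', 'df', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')
    stats = json.loads(out)
    # Cluster-wide totals; per-pool figures live under stats['pools'].
    print(stats['stats']['total_bytes'], stats['stats']['total_avail_bytes'])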
Mar  1 05:11:46 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:11:47 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: ::ffff:192.168.122.100 - - [01/Mar/2026:10:11:47] "GET /metrics HTTP/1.1" 200 48475 "" "Prometheus/2.51.0"
Mar  1 05:11:47 np0005634532 ceph-mgr[76134]: [prometheus INFO cherrypy.access.140604152982112] ::ffff:192.168.122.100 - - [01/Mar/2026:10:11:47] "GET /metrics HTTP/1.1" 200 48475 "" "Prometheus/2.51.0"
Mar  1 05:11:47 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v879: 353 pgs: 353 active+clean; 88 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 8.7 KiB/s rd, 1.8 MiB/s wr, 16 op/s
Mar  1 05:11:47 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:11:47.241Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
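Both ceph-dashboard webhook receivers are unreachable from this Alertmanager: the dispatcher gives up after two attempts against compute-1 and compute-2 on port 8443, first with context deadline exceeded and later (10:12:07) with plain dial timeouts. A quick stdlib probe of the same endpoints, with a short timeout standing in for the dispatcher's deadline (the URL is copied from the log; any HTTP response at all, even an error status, would prove reachability):

    # Probe the webhook receivers Alertmanager keeps failing to notify.
    import urllib.request, urllib.error

    for host in ('compute-1', 'compute-2'):
        url = f'http://{host}.ctlplane.example.com:8443/api/prometheus_receiver'
        try:
            urllib.request.urlopen(
                urllib.request.Request(url, data=b'{}'), timeout=5)
            print(host, 'reachable')
        except urllib.error.HTTPError as exc:
            print(host, 'reachable (HTTP', exc.code, ')')
        except (urllib.error.URLError, OSError) as exc:
            print(host, 'unreachable:', exc)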
Mar  1 05:11:47 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Mar  1 05:11:47 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Mar  1 05:11:47 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[268971]: 01/03/2026 10:11:47 : epoch 69a41094 : compute-0 : ganesha.nfsd-2[svc_11] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37d80042a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:11:47 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Mar  1 05:11:47 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Mar  1 05:11:47 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Mar  1 05:11:47 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Mar  1 05:11:47 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Mar  1 05:11:47 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Mar  1 05:11:47 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:11:47 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:11:47 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:11:47.702 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:11:47 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[268971]: 01/03/2026 10:11:47 : epoch 69a41094 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37c4003c10 fd 42 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:11:47 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:11:47 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:11:47 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:11:47.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:11:47 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[268971]: 01/03/2026 10:11:47 : epoch 69a41094 : compute-0 : ganesha.nfsd-2[svc_12] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37d0001090 fd 42 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:11:49 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v880: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 32 op/s
Mar  1 05:11:49 np0005634532 nova_compute[257049]: 2026-03-01 10:11:49.241 257053 DEBUG oslo_service.periodic_task [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Mar  1 05:11:49 np0005634532 nova_compute[257049]: 2026-03-01 10:11:49.242 257053 DEBUG nova.compute.manager [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Mar  1 05:11:49 np0005634532 nova_compute[257049]: 2026-03-01 10:11:49.242 257053 DEBUG nova.compute.manager [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Mar  1 05:11:49 np0005634532 nova_compute[257049]: 2026-03-01 10:11:49.260 257053 DEBUG nova.compute.manager [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Mar  1 05:11:49 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[268971]: 01/03/2026 10:11:49 : epoch 69a41094 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37f800a6a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:11:49 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:11:49 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 05:11:49 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:11:49.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 05:11:49 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[268971]: 01/03/2026 10:11:49 : epoch 69a41094 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37d80042a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:11:49 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:11:49 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:11:49 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:11:49.874 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:11:49 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[268971]: 01/03/2026 10:11:49 : epoch 69a41094 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37f0001b90 fd 42 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:11:50 np0005634532 nova_compute[257049]: 2026-03-01 10:11:50.044 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:11:50 np0005634532 nova_compute[257049]: 2026-03-01 10:11:50.120 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:11:51 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v881: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 32 op/s
Mar  1 05:11:51 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[268971]: 01/03/2026 10:11:51 : epoch 69a41094 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37d0001090 fd 42 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:11:51 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:11:51 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:11:51 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:11:51.707 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:11:51 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[268971]: 01/03/2026 10:11:51 : epoch 69a41094 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37f800a6a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:11:51 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:11:51 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:11:51 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:11:51.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:11:51 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:11:51 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[268971]: 01/03/2026 10:11:51 : epoch 69a41094 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37d80042a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:11:53 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v882: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 32 op/s
Mar  1 05:11:53 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[268971]: 01/03/2026 10:11:53 : epoch 69a41094 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37f0001b90 fd 42 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:11:53 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-haproxy-nfs-cephfs-compute-0-wdbjdw[99156]: [WARNING] 059/101153 (4) : Server backend/nfs.cephfs.1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 1 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
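haproxy's verdict above is a layer-4 one: the TCP connect to the nfs.cephfs.1 backend was refused outright (check duration 0ms), leaving a single active server in the backend. The equivalent check is just a connect with a timeout, sketched below; both the target host and port 2049 (NFS) are placeholders, since the log names neither:

    # A layer-4 health check equivalent to haproxy's: can we complete a TCP
    # handshake before the timeout? ECONNREFUSED here is exactly the
    # "Connection refused" reason in the log line.
    import socket

    def l4_check(host, port, timeout=2.0):
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError as exc:
            print(f'{host}:{port} is DOWN ({exc})')
            return False

    l4_check('compute-1.ctlplane.example.com', 2049)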
Mar  1 05:11:53 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:11:53 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:11:53 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:11:53.711 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:11:53 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[268971]: 01/03/2026 10:11:53 : epoch 69a41094 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37d0001090 fd 42 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:11:53 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:11:53 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:11:53 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:11:53.878 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:11:53 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[268971]: 01/03/2026 10:11:53 : epoch 69a41094 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37f800a6a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:11:55 np0005634532 nova_compute[257049]: 2026-03-01 10:11:55.045 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:11:55 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v883: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Mar  1 05:11:55 np0005634532 nova_compute[257049]: 2026-03-01 10:11:55.122 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:11:55 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[268971]: 01/03/2026 10:11:55 : epoch 69a41094 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37d80042a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:11:55 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:11:55 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:11:55 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:11:55.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:11:55 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[268971]: 01/03/2026 10:11:55 : epoch 69a41094 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37f0002510 fd 42 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:11:55 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:11:55 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:11:55 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:11:55.879 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:11:55 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[268971]: 01/03/2026 10:11:55 : epoch 69a41094 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37d0001090 fd 42 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:11:56 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:11:56.471 167541 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=9, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ba:77:84', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'd2:e0:96:ea:56:89'}, ipsec=False) old=SB_Global(nb_cfg=8) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Mar  1 05:11:56 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:11:56.472 167541 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
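SB_Global.nb_cfg ticked from 8 to 9 and the metadata agent deliberately waits 6 seconds before acknowledging it in its Chassis_Private row; the matching write lands at 10:12:02 below (the DbSetCommand setting neutron:ovn-metadata-sb-cfg to '9'). Staggering the acks keeps a fleet of chassis from stampeding the southbound DB after every northd bump. A sketch of that delayed-ack pattern under stated assumptions, with a randomized bound that is illustrative rather than the agent's actual configuration:

    # Delayed nb_cfg acknowledgement: wait a splay, then write the config
    # sequence number back, mirroring the two log entries at 10:11:56 and
    # 10:12:02. The 10 s upper bound is an assumption.
    import random, threading

    def ack_nb_cfg(write_ack, nb_cfg, max_splay=10.0):
        delay = random.uniform(0, max_splay)
        print(f'Delaying updating chassis table for {delay:.0f} seconds')
        threading.Timer(delay, write_ack, args=(nb_cfg,)).start()

    ack_nb_cfg(lambda n: print('neutron:ovn-metadata-sb-cfg =', str(n)), 9)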
Mar  1 05:11:56 np0005634532 nova_compute[257049]: 2026-03-01 10:11:56.473 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:11:56 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:11:57 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: ::ffff:192.168.122.100 - - [01/Mar/2026:10:11:57] "GET /metrics HTTP/1.1" 200 48476 "" "Prometheus/2.51.0"
Mar  1 05:11:57 np0005634532 ceph-mgr[76134]: [prometheus INFO cherrypy.access.140604152982112] ::ffff:192.168.122.100 - - [01/Mar/2026:10:11:57] "GET /metrics HTTP/1.1" 200 48476 "" "Prometheus/2.51.0"
Mar  1 05:11:57 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v884: 353 pgs: 353 active+clean; 88 MiB data, 281 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 84 op/s
Mar  1 05:11:57 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:11:57.242Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Mar  1 05:11:57 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[268971]: 01/03/2026 10:11:57 : epoch 69a41094 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37f800a6a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:11:57 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:11:57 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 05:11:57 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:11:57.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 05:11:57 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[268971]: 01/03/2026 10:11:57 : epoch 69a41094 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37f800a6a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:11:57 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:11:57 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:11:57 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:11:57.882 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:11:57 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[268971]: 01/03/2026 10:11:57 : epoch 69a41094 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37f0002510 fd 42 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:11:58 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Mar  1 05:11:58 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2584087932' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Mar  1 05:11:58 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Mar  1 05:11:58 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2584087932' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Mar  1 05:11:59 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v885: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 110 op/s
Mar  1 05:11:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[268971]: 01/03/2026 10:11:59 : epoch 69a41094 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37d0003240 fd 42 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:11:59 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:11:59 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:11:59 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:11:59.719 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:11:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[268971]: 01/03/2026 10:11:59 : epoch 69a41094 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37d80042a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:11:59 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:11:59 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:11:59 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:11:59.884 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:11:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[268971]: 01/03/2026 10:11:59 : epoch 69a41094 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37f800a6a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:12:00 np0005634532 nova_compute[257049]: 2026-03-01 10:12:00.047 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:12:00 np0005634532 nova_compute[257049]: 2026-03-01 10:12:00.124 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:12:01 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v886: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.2 KiB/s wr, 95 op/s
Mar  1 05:12:01 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[268971]: 01/03/2026 10:12:01 : epoch 69a41094 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37f0003220 fd 42 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:12:01 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:12:01 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:12:01 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:12:01.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:12:01 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[268971]: 01/03/2026 10:12:01 : epoch 69a41094 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37d0003240 fd 42 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:12:01 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:12:01 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 05:12:01 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:12:01.886 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 05:12:01 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:12:01 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[268971]: 01/03/2026 10:12:01 : epoch 69a41094 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37d80042a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:12:02 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:12:02.476 167541 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=90b7dc66-b984-4d8b-9541-ddde79c5f544, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '9'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Mar  1 05:12:02 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Mar  1 05:12:02 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Mar  1 05:12:03 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v887: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.2 KiB/s wr, 95 op/s
Mar  1 05:12:03 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[268971]: 01/03/2026 10:12:03 : epoch 69a41094 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37f800a6a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:12:03 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:12:03 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:12:03 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:12:03.725 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:12:03 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[268971]: 01/03/2026 10:12:03 : epoch 69a41094 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37f0003220 fd 42 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:12:03 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:12:03 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 05:12:03 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:12:03.888 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 05:12:03 np0005634532 nova_compute[257049]: 2026-03-01 10:12:03.898 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:12:03 np0005634532 nova_compute[257049]: 2026-03-01 10:12:03.922 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:12:03 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[268971]: 01/03/2026 10:12:03 : epoch 69a41094 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37d0003240 fd 42 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:12:05 np0005634532 nova_compute[257049]: 2026-03-01 10:12:05.049 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:12:05 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v888: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.2 KiB/s wr, 95 op/s
Mar  1 05:12:05 np0005634532 nova_compute[257049]: 2026-03-01 10:12:05.126 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:12:05 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[268971]: 01/03/2026 10:12:05 : epoch 69a41094 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37d80042a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:12:05 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:12:05 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:12:05 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:12:05.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:12:05 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[268971]: 01/03/2026 10:12:05 : epoch 69a41094 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37f800a6c0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:12:05 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:12:05 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:12:05 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:12:05.890 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:12:05 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[268971]: 01/03/2026 10:12:05 : epoch 69a41094 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37f0003f30 fd 42 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:12:06 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:12:07 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: ::ffff:192.168.122.100 - - [01/Mar/2026:10:12:07] "GET /metrics HTTP/1.1" 200 48458 "" "Prometheus/2.51.0"
Mar  1 05:12:07 np0005634532 ceph-mgr[76134]: [prometheus INFO cherrypy.access.140604152982112] ::ffff:192.168.122.100 - - [01/Mar/2026:10:12:07] "GET /metrics HTTP/1.1" 200 48458 "" "Prometheus/2.51.0"
Mar  1 05:12:07 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v889: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 26 op/s
Mar  1 05:12:07 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:12:07.243Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Mar  1 05:12:07 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:12:07.243Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Mar  1 05:12:07 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:12:07.244Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Mar  1 05:12:07 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[268971]: 01/03/2026 10:12:07 : epoch 69a41094 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37d0003240 fd 42 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:12:07 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:12:07 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:12:07 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:12:07.731 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:12:07 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[268971]: 01/03/2026 10:12:07 : epoch 69a41094 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37d80042a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:12:07 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:12:07 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 05:12:07 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:12:07.892 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 05:12:07 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[268971]: 01/03/2026 10:12:07 : epoch 69a41094 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37f800a6e0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:12:09 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v890: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 26 op/s
Mar  1 05:12:09 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[268971]: 01/03/2026 10:12:09 : epoch 69a41094 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37f0003f30 fd 42 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:12:09 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:12:09 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:12:09 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:12:09.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:12:09 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[268971]: 01/03/2026 10:12:09 : epoch 69a41094 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37d0003240 fd 42 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:12:09 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:12:09 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 05:12:09 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:12:09.894 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 05:12:09 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[268971]: 01/03/2026 10:12:09 : epoch 69a41094 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37d80042a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:12:10 np0005634532 nova_compute[257049]: 2026-03-01 10:12:10.052 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:12:10 np0005634532 nova_compute[257049]: 2026-03-01 10:12:10.128 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:12:11 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v891: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 op/s
Mar  1 05:12:11 np0005634532 podman[270884]: 2026-03-01 10:12:11.431215443 +0000 UTC m=+0.108857087 container health_status eba5e7c5bf1cb0694498c3b5d0f9b901dec36f2482ec40aef98fed1e15d9f18d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:bf1f44b6bfa655654f556ae3adca2582892e72550d916a5b0a8dbacb0e210bbe, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.build-date=20260223, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '08b9467e1b7e95537191fb2fa6825d176e65d7d6be048e9a0925db064969bbde-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:bf1f44b6bfa655654f556ae3adca2582892e72550d916a5b0a8dbacb0e210bbe', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0)
Mar  1 05:12:11 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[268971]: 01/03/2026 10:12:11 : epoch 69a41094 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37f800a700 fd 42 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:12:11 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:12:11 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:12:11 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:12:11.736 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:12:11 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[268971]: 01/03/2026 10:12:11 : epoch 69a41094 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37f0003f30 fd 42 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:12:11 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:12:11 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 05:12:11 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:12:11.896 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 05:12:11 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:12:11 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[268971]: 01/03/2026 10:12:11 : epoch 69a41094 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37d0003240 fd 42 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:12:13 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v892: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 op/s
Mar  1 05:12:13 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[268971]: 01/03/2026 10:12:13 : epoch 69a41094 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37d80042a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:12:13 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:12:13 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:12:13 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:12:13.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:12:13 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[268971]: 01/03/2026 10:12:13 : epoch 69a41094 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37f800a720 fd 42 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:12:13 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:12:13 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:12:13 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:12:13.898 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:12:14 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[268971]: 01/03/2026 10:12:14 : epoch 69a41094 : compute-0 : ganesha.nfsd-2[svc_2] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37f0003f30 fd 42 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:12:15 np0005634532 nova_compute[257049]: 2026-03-01 10:12:15.053 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:12:15 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v893: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 170 B/s rd, 0 op/s
Mar  1 05:12:15 np0005634532 nova_compute[257049]: 2026-03-01 10:12:15.129 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:12:15 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[268971]: 01/03/2026 10:12:15 : epoch 69a41094 : compute-0 : ganesha.nfsd-2[svc_14] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37d0003240 fd 42 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:12:15 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[268971]: 01/03/2026 10:12:15 : epoch 69a41094 : compute-0 : ganesha.nfsd-2[svc_13] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37d80042a0 fd 42 proxy header rest len failed header rlen = % (will set dead)
Mar  1 05:12:15 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:12:15 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:12:15 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:12:15.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:12:16 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:12:16 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:12:16 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:12:16.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:12:16 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[268971]: 01/03/2026 10:12:16 : epoch 69a41094 : compute-0 : ganesha.nfsd-2[svc_7] rpc :TIRPC :EVENT :svc_vc_recv: 0x7f37f800a740 fd 42 proxy ignored for local
Mar  1 05:12:16 np0005634532 kernel: ganesha.nfsd[269124]: segfault at 50 ip 00007f387bac232e sp 00007f3806ffc210 error 4 in libntirpc.so.5.8[7f387baa7000+2c000] likely on CPU 0 (core 0, socket 0)
Mar  1 05:12:16 np0005634532 kernel: Code: 47 20 66 41 89 86 f2 00 00 00 41 bf 01 00 00 00 b9 40 00 00 00 e9 af fd ff ff 66 90 48 8b 85 f8 00 00 00 48 8b 40 08 4c 8b 28 <45> 8b 65 50 49 8b 75 68 41 8b be 28 02 00 00 b9 40 00 00 00 e8 29
Mar  1 05:12:16 np0005634532 systemd[1]: Started Process Core Dump (PID 270929/UID 0).
Mar  1 05:12:16 np0005634532 podman[270919]: 2026-03-01 10:12:16.361805982 +0000 UTC m=+0.051160933 container health_status 1df4dec302d8b40bce93714986cfc892519e8a31436399020d0034ae17ac64d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a5baae9e21dffd42c66f546b0b62f95c6af9f409d05e01e7483de42d5dd2a382, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '08b9467e1b7e95537191fb2fa6825d176e65d7d6be048e9a0925db064969bbde-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a5baae9e21dffd42c66f546b0b62f95c6af9f409d05e01e7483de42d5dd2a382', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
Mar  1 05:12:16 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:12:17 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: ::ffff:192.168.122.100 - - [01/Mar/2026:10:12:17] "GET /metrics HTTP/1.1" 200 48458 "" "Prometheus/2.51.0"
Mar  1 05:12:17 np0005634532 ceph-mgr[76134]: [prometheus INFO cherrypy.access.140604152982112] ::ffff:192.168.122.100 - - [01/Mar/2026:10:12:17] "GET /metrics HTTP/1.1" 200 48458 "" "Prometheus/2.51.0"
Mar  1 05:12:17 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v894: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 op/s
Mar  1 05:12:17 np0005634532 systemd-coredump[270933]: Process 268976 (ganesha.nfsd) of user 0 dumped core.
    Stack trace of thread 41:
    #0  0x00007f387bac232e n/a (/usr/lib64/libntirpc.so.5.8 + 0x2232e)
    ELF object binary architecture: AMD x86-64
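The kernel trap at 10:12:16 and the core dump above agree on where ganesha.nfsd died. The faulting data address is 0x50, a near-NULL dereference, and "error 4" decodes as a user-mode read of an unmapped page; the marked instruction bytes (<45> 8b 65 50) disassemble to a load from 0x50(%r13), with %r13 having just been fetched (4c 8b 28) as what turned out to be a NULL pointer. The two reported offsets into libntirpc are consistent once you notice the kernel prints the executable segment, mapped 0x7000 past the ELF base, while systemd-coredump reports the offset from the base itself:

    # Offset arithmetic for the libntirpc crash above.
    ip       = 0x7f387bac232e   # faulting RIP (kernel line)
    seg_base = 0x7f387baa7000   # exec segment start, "[7f387baa7000+2c000]"
    elf_base = ip - 0x2232e     # implied by coredump's "+ 0x2232e"

    assert elf_base == 0x7f387baa0000
    assert seg_base - elf_base == 0x7000   # segment sits at base+0x7000
    assert ip - seg_base == 0x1b32e        # same spot, segment-relative

    # Symbolizing (assumes libntirpc debuginfo is installed):
    #   addr2line -f -C -e /usr/lib64/libntirpc.so.5.8 0x2232e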
Mar  1 05:12:17 np0005634532 systemd[1]: systemd-coredump@12-270929-0.service: Deactivated successfully.
Mar  1 05:12:17 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:12:17.245Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Mar  1 05:12:17 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:12:17.245Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Mar  1 05:12:17 np0005634532 podman[270945]: 2026-03-01 10:12:17.251677612 +0000 UTC m=+0.037976228 container died 2394c8dd853dfd8ff8bfec80584248f07e107aca9cd3cc363301fbd2b26f4c6f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1)
Mar  1 05:12:17 np0005634532 systemd[1]: var-lib-containers-storage-overlay-986eaac5c6a689e872d237e9473f17c90630a58def21f55fa6bac967b365e478-merged.mount: Deactivated successfully.
Mar  1 05:12:17 np0005634532 podman[270945]: 2026-03-01 10:12:17.287595058 +0000 UTC m=+0.073893674 container remove 2394c8dd853dfd8ff8bfec80584248f07e107aca9cd3cc363301fbd2b26f4c6f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Mar  1 05:12:17 np0005634532 systemd[1]: ceph-437b1e74-f995-5d64-af1d-257ce01d77ab@nfs.cephfs.2.0.compute-0.ljexyw.service: Main process exited, code=exited, status=139/n/a
Mar  1 05:12:17 np0005634532 systemd[1]: ceph-437b1e74-f995-5d64-af1d-257ce01d77ab@nfs.cephfs.2.0.compute-0.ljexyw.service: Failed with result 'exit-code'.
Mar  1 05:12:17 np0005634532 systemd[1]: ceph-437b1e74-f995-5d64-af1d-257ce01d77ab@nfs.cephfs.2.0.compute-0.ljexyw.service: Consumed 1.076s CPU time.
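Status 139 in the unit result above is the shell-style encoding of a fatal signal, 128 + 11, i.e. SIGSEGV, matching the core dump a second earlier. Decoded:

    # Decode the exit status systemd recorded for the ganesha unit.
    import signal

    status = 139
    assert status - 128 == signal.SIGSEGV
    print(signal.Signals(status - 128).name)   # SIGSEGV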
Mar  1 05:12:17 np0005634532 ceph-mgr[76134]: [balancer INFO root] Optimize plan auto_2026-03-01_10:12:17
Mar  1 05:12:17 np0005634532 ceph-mgr[76134]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Mar  1 05:12:17 np0005634532 ceph-mgr[76134]: [balancer INFO root] do_upmap
Mar  1 05:12:17 np0005634532 ceph-mgr[76134]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'images', 'backups', 'vms', 'default.rgw.log', '.mgr', 'volumes', '.rgw.root', 'default.rgw.meta', 'cephfs.cephfs.data', '.nfs', 'default.rgw.control']
Mar  1 05:12:17 np0005634532 ceph-mgr[76134]: [balancer INFO root] prepared 0/10 upmap changes
Mar  1 05:12:17 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Mar  1 05:12:17 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Mar  1 05:12:17 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Mar  1 05:12:17 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Mar  1 05:12:17 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Mar  1 05:12:17 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Mar  1 05:12:17 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Mar  1 05:12:17 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Mar  1 05:12:17 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:12:17 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:12:17 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:12:17.902 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:12:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] _maybe_adjust
Mar  1 05:12:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:12:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Mar  1 05:12:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:12:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Mar  1 05:12:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:12:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Mar  1 05:12:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:12:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Mar  1 05:12:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:12:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Mar  1 05:12:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:12:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Mar  1 05:12:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:12:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Mar  1 05:12:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:12:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Mar  1 05:12:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:12:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Mar  1 05:12:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:12:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Mar  1 05:12:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:12:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Mar  1 05:12:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:12:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
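The autoscaler lines are reproducible arithmetic: each "pg target" is space_ratio * bias * N, and N = 300 fits every pool in this dump. That factor is consistent with the default mon_target_pg_per_osd (100) across the 3 OSDs this small cluster appears to have; it is inferred from the numbers, not stated in the log. A worked check:

pools = {                                   # (space_ratio, bias) from the log
    ".mgr":               (7.185749983720779e-06, 1.0),
    "images":             (0.000665858301588852,  1.0),
    "cephfs.cephfs.meta": (5.087256625643029e-07, 4.0),
}
N = 300
for name, (ratio, bias) in pools.items():
    print(name, ratio * bias * N)
# .mgr               -> 0.0021557249951162337 (quantized to 1 in the log)
# images             -> 0.19975749047665559   (quantized to 32)
# cephfs.cephfs.meta -> 0.0006104707950771635 (quantized to 16)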
Mar  1 05:12:18 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:12:18 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:12:18 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:12:18.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:12:19 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v895: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 op/s
Mar  1 05:12:19 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Mar  1 05:12:19 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: vms, start_after=
Mar  1 05:12:19 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Mar  1 05:12:19 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: vms, start_after=
Mar  1 05:12:19 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: volumes, start_after=
Mar  1 05:12:19 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: volumes, start_after=
Mar  1 05:12:19 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: backups, start_after=
Mar  1 05:12:19 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: backups, start_after=
Mar  1 05:12:19 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: images, start_after=
Mar  1 05:12:19 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: images, start_after=
Mar  1 05:12:19 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:12:19 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:12:19 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:12:19.904 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:12:20 np0005634532 nova_compute[257049]: 2026-03-01 10:12:20.131 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Mar  1 05:12:20 np0005634532 nova_compute[257049]: 2026-03-01 10:12:20.133 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Mar  1 05:12:20 np0005634532 nova_compute[257049]: 2026-03-01 10:12:20.133 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Mar  1 05:12:20 np0005634532 nova_compute[257049]: 2026-03-01 10:12:20.133 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Mar  1 05:12:20 np0005634532 ceph-osd[84309]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Mar  1 05:12:20 np0005634532 ceph-osd[84309]: rocksdb: [db/db_impl/db_impl.cc:1111]
    ** DB Stats **
    Uptime(secs): 1800.1 total, 600.0 interval
    Cumulative writes: 10K writes, 41K keys, 10K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.02 MB/s
    Cumulative WAL: 10K writes, 2837 syncs, 3.86 writes per sync, written: 0.03 GB, 0.02 MB/s
    Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
    Interval writes: 2214 writes, 6930 keys, 2214 commit groups, 1.0 writes per commit group, ingest: 6.65 MB, 0.01 MB/s
    Interval WAL: 2214 writes, 961 syncs, 2.30 writes per sync, written: 0.01 GB, 0.01 MB/s
    Interval stall: 00:00:0.000 H:M:S, 0.0 percent
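The "writes per sync" figures in the dump above are plain ratios, which makes the dump easy to sanity-check:

interval_writes, interval_syncs = 2214, 961          # from "Interval WAL"
cumulative_writes, cumulative_syncs = 10_000, 2837   # "10K writes, 2837 syncs"
print(round(interval_writes / interval_syncs, 2))     # 2.3  ("2.30 writes per sync")
print(round(cumulative_writes / cumulative_syncs, 2)) # 3.52; the log's 3.86 implies
                                                      # "10K" is a rounded display value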
Mar  1 05:12:20 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:12:20 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:12:20 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:12:20.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:12:20 np0005634532 nova_compute[257049]: 2026-03-01 10:12:20.862 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:12:20 np0005634532 nova_compute[257049]: 2026-03-01 10:12:20.864 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Mar  1 05:12:21 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v896: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 op/s
Mar  1 05:12:21 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-haproxy-nfs-cephfs-compute-0-wdbjdw[99156]: [WARNING] 059/101221 (4) : Server backend/nfs.cephfs.2 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 0 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
Mar  1 05:12:21 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-haproxy-nfs-cephfs-compute-0-wdbjdw[99156]: [ALERT] 059/101221 (4) : backend 'backend' has no server available!
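haproxy marked nfs.cephfs.2 DOWN on a Layer4 check: a plain TCP connect was refused, because the ganesha daemon behind it had just exited with the segfault logged above. A minimal equivalent probe; address and port are placeholders, the log does not include the backend's socket:

import socket

def layer4_check(host, port, timeout=2.0):
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return "UP"
    except OSError:        # ConnectionRefusedError is the exact case logged here
        return "DOWN"

print(layer4_check("127.0.0.1", 2049))  # 2049: conventional NFS port, assumed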
Mar  1 05:12:21 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:12:21 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000024s ======
Mar  1 05:12:21 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:12:21.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Mar  1 05:12:21 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:12:22 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:12:22 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:12:22 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:12:22.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:12:23 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v897: 353 pgs: 353 active+clean; 41 MiB data, 259 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 op/s
Mar  1 05:12:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:12:23.885 167541 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Mar  1 05:12:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:12:23.886 167541 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Mar  1 05:12:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:12:23.886 167541 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Mar  1 05:12:23 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:12:23 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:12:23 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:12:23.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:12:24 np0005634532 podman[271195]: 2026-03-01 10:12:24.40272471 +0000 UTC m=+0.057018587 container create 5b539ad59cf4efe79ad5358bcad504877a1fdd2c5aa031c2f4e0e2dd58fa4ec5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_euler, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Mar  1 05:12:24 np0005634532 systemd[1]: Started libpod-conmon-5b539ad59cf4efe79ad5358bcad504877a1fdd2c5aa031c2f4e0e2dd58fa4ec5.scope.
Mar  1 05:12:24 np0005634532 podman[271195]: 2026-03-01 10:12:24.380409929 +0000 UTC m=+0.034703827 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Mar  1 05:12:24 np0005634532 systemd[1]: Started libcrun container.
Mar  1 05:12:24 np0005634532 podman[271195]: 2026-03-01 10:12:24.497438256 +0000 UTC m=+0.151732213 container init 5b539ad59cf4efe79ad5358bcad504877a1fdd2c5aa031c2f4e0e2dd58fa4ec5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_euler, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Mar  1 05:12:24 np0005634532 podman[271195]: 2026-03-01 10:12:24.505428643 +0000 UTC m=+0.159722500 container start 5b539ad59cf4efe79ad5358bcad504877a1fdd2c5aa031c2f4e0e2dd58fa4ec5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_euler, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Mar  1 05:12:24 np0005634532 podman[271195]: 2026-03-01 10:12:24.509027292 +0000 UTC m=+0.163321189 container attach 5b539ad59cf4efe79ad5358bcad504877a1fdd2c5aa031c2f4e0e2dd58fa4ec5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_euler, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Mar  1 05:12:24 np0005634532 trusting_euler[271212]: 167 167
Mar  1 05:12:24 np0005634532 systemd[1]: libpod-5b539ad59cf4efe79ad5358bcad504877a1fdd2c5aa031c2f4e0e2dd58fa4ec5.scope: Deactivated successfully.
Mar  1 05:12:24 np0005634532 podman[271195]: 2026-03-01 10:12:24.513087802 +0000 UTC m=+0.167381659 container died 5b539ad59cf4efe79ad5358bcad504877a1fdd2c5aa031c2f4e0e2dd58fa4ec5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_euler, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Mar  1 05:12:24 np0005634532 systemd[1]: var-lib-containers-storage-overlay-1ee0bc7eff7aca8096a64b98875f4517b491f217ef6712a5c2d883a860013253-merged.mount: Deactivated successfully.
Mar  1 05:12:24 np0005634532 podman[271195]: 2026-03-01 10:12:24.550201958 +0000 UTC m=+0.204495855 container remove 5b539ad59cf4efe79ad5358bcad504877a1fdd2c5aa031c2f4e0e2dd58fa4ec5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=trusting_euler, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Mar  1 05:12:24 np0005634532 systemd[1]: libpod-conmon-5b539ad59cf4efe79ad5358bcad504877a1fdd2c5aa031c2f4e0e2dd58fa4ec5.scope: Deactivated successfully.
Mar  1 05:12:24 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:12:24 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:12:24 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:12:24.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:12:24 np0005634532 podman[271236]: 2026-03-01 10:12:24.662933308 +0000 UTC m=+0.039802022 container create b5ae826302beb073deeca5e68545104b90e553d5b886e7b936e8166cff3e7b1c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_goodall, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325)
Mar  1 05:12:24 np0005634532 systemd[1]: Started libpod-conmon-b5ae826302beb073deeca5e68545104b90e553d5b886e7b936e8166cff3e7b1c.scope.
Mar  1 05:12:24 np0005634532 systemd[1]: Started libcrun container.
Mar  1 05:12:24 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf62e31782c155c5b6bc09360d9fdf73bf5a391e921fb43a03c2f2354ad7eb4c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Mar  1 05:12:24 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf62e31782c155c5b6bc09360d9fdf73bf5a391e921fb43a03c2f2354ad7eb4c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Mar  1 05:12:24 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf62e31782c155c5b6bc09360d9fdf73bf5a391e921fb43a03c2f2354ad7eb4c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Mar  1 05:12:24 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bf62e31782c155c5b6bc09360d9fdf73bf5a391e921fb43a03c2f2354ad7eb4c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Mar  1 05:12:24 np0005634532 podman[271236]: 2026-03-01 10:12:24.646987865 +0000 UTC m=+0.023856609 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Mar  1 05:12:24 np0005634532 podman[271236]: 2026-03-01 10:12:24.754245501 +0000 UTC m=+0.131114225 container init b5ae826302beb073deeca5e68545104b90e553d5b886e7b936e8166cff3e7b1c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_goodall, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Mar  1 05:12:24 np0005634532 podman[271236]: 2026-03-01 10:12:24.758875305 +0000 UTC m=+0.135744029 container start b5ae826302beb073deeca5e68545104b90e553d5b886e7b936e8166cff3e7b1c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_goodall, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1)
Mar  1 05:12:24 np0005634532 podman[271236]: 2026-03-01 10:12:24.77407621 +0000 UTC m=+0.150944924 container attach b5ae826302beb073deeca5e68545104b90e553d5b886e7b936e8166cff3e7b1c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_goodall, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Mar  1 05:12:25 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v898: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Mar  1 05:12:25 np0005634532 musing_goodall[271252]: [
Mar  1 05:12:25 np0005634532 musing_goodall[271252]:    {
Mar  1 05:12:25 np0005634532 musing_goodall[271252]:        "available": false,
Mar  1 05:12:25 np0005634532 musing_goodall[271252]:        "being_replaced": false,
Mar  1 05:12:25 np0005634532 musing_goodall[271252]:        "ceph_device_lvm": false,
Mar  1 05:12:25 np0005634532 musing_goodall[271252]:        "device_id": "QEMU_DVD-ROM_QM00001",
Mar  1 05:12:25 np0005634532 musing_goodall[271252]:        "lsm_data": {},
Mar  1 05:12:25 np0005634532 musing_goodall[271252]:        "lvs": [],
Mar  1 05:12:25 np0005634532 musing_goodall[271252]:        "path": "/dev/sr0",
Mar  1 05:12:25 np0005634532 musing_goodall[271252]:        "rejected_reasons": [
Mar  1 05:12:25 np0005634532 musing_goodall[271252]:            "Insufficient space (<5GB)",
Mar  1 05:12:25 np0005634532 musing_goodall[271252]:            "Has a FileSystem"
Mar  1 05:12:25 np0005634532 musing_goodall[271252]:        ],
Mar  1 05:12:25 np0005634532 musing_goodall[271252]:        "sys_api": {
Mar  1 05:12:25 np0005634532 musing_goodall[271252]:            "actuators": null,
Mar  1 05:12:25 np0005634532 musing_goodall[271252]:            "device_nodes": [
Mar  1 05:12:25 np0005634532 musing_goodall[271252]:                "sr0"
Mar  1 05:12:25 np0005634532 musing_goodall[271252]:            ],
Mar  1 05:12:25 np0005634532 musing_goodall[271252]:            "devname": "sr0",
Mar  1 05:12:25 np0005634532 musing_goodall[271252]:            "human_readable_size": "482.00 KB",
Mar  1 05:12:25 np0005634532 musing_goodall[271252]:            "id_bus": "ata",
Mar  1 05:12:25 np0005634532 musing_goodall[271252]:            "model": "QEMU DVD-ROM",
Mar  1 05:12:25 np0005634532 musing_goodall[271252]:            "nr_requests": "2",
Mar  1 05:12:25 np0005634532 musing_goodall[271252]:            "parent": "/dev/sr0",
Mar  1 05:12:25 np0005634532 musing_goodall[271252]:            "partitions": {},
Mar  1 05:12:25 np0005634532 musing_goodall[271252]:            "path": "/dev/sr0",
Mar  1 05:12:25 np0005634532 musing_goodall[271252]:            "removable": "1",
Mar  1 05:12:25 np0005634532 musing_goodall[271252]:            "rev": "2.5+",
Mar  1 05:12:25 np0005634532 musing_goodall[271252]:            "ro": "0",
Mar  1 05:12:25 np0005634532 musing_goodall[271252]:            "rotational": "1",
Mar  1 05:12:25 np0005634532 musing_goodall[271252]:            "sas_address": "",
Mar  1 05:12:25 np0005634532 musing_goodall[271252]:            "sas_device_handle": "",
Mar  1 05:12:25 np0005634532 musing_goodall[271252]:            "scheduler_mode": "mq-deadline",
Mar  1 05:12:25 np0005634532 musing_goodall[271252]:            "sectors": 0,
Mar  1 05:12:25 np0005634532 musing_goodall[271252]:            "sectorsize": "2048",
Mar  1 05:12:25 np0005634532 musing_goodall[271252]:            "size": 493568.0,
Mar  1 05:12:25 np0005634532 musing_goodall[271252]:            "support_discard": "2048",
Mar  1 05:12:25 np0005634532 musing_goodall[271252]:            "type": "disk",
Mar  1 05:12:25 np0005634532 musing_goodall[271252]:            "vendor": "QEMU"
Mar  1 05:12:25 np0005634532 musing_goodall[271252]:        }
Mar  1 05:12:25 np0005634532 musing_goodall[271252]:    }
Mar  1 05:12:25 np0005634532 musing_goodall[271252]: ]
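The JSON block printed by the musing_goodall container is a ceph-volume inventory of the host's disks; cephadm then persists it via the "config-key set ... host.compute-0.devices.0" command a few lines below. A short sketch of consuming that output (the literal here is trimmed to the fields shown above):

import json

raw_output = '''[{"available": false, "path": "/dev/sr0",
                  "rejected_reasons": ["Insufficient space (<5GB)", "Has a FileSystem"]}]'''
for dev in json.loads(raw_output):
    state = "usable" if dev["available"] else "rejected: " + "; ".join(dev["rejected_reasons"])
    print(dev["path"], "->", state)
# /dev/sr0 -> rejected: Insufficient space (<5GB); Has a FileSystem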
Mar  1 05:12:25 np0005634532 systemd[1]: libpod-b5ae826302beb073deeca5e68545104b90e553d5b886e7b936e8166cff3e7b1c.scope: Deactivated successfully.
Mar  1 05:12:25 np0005634532 podman[271236]: 2026-03-01 10:12:25.535214203 +0000 UTC m=+0.912082957 container died b5ae826302beb073deeca5e68545104b90e553d5b886e7b936e8166cff3e7b1c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_goodall, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Mar  1 05:12:25 np0005634532 systemd[1]: var-lib-containers-storage-overlay-bf62e31782c155c5b6bc09360d9fdf73bf5a391e921fb43a03c2f2354ad7eb4c-merged.mount: Deactivated successfully.
Mar  1 05:12:25 np0005634532 podman[271236]: 2026-03-01 10:12:25.578892211 +0000 UTC m=+0.955760945 container remove b5ae826302beb073deeca5e68545104b90e553d5b886e7b936e8166cff3e7b1c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=musing_goodall, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Mar  1 05:12:25 np0005634532 systemd[1]: libpod-conmon-b5ae826302beb073deeca5e68545104b90e553d5b886e7b936e8166cff3e7b1c.scope: Deactivated successfully.
Mar  1 05:12:25 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Mar  1 05:12:25 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 05:12:25 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Mar  1 05:12:25 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 05:12:25 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Mar  1 05:12:25 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Mar  1 05:12:25 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Mar  1 05:12:25 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Mar  1 05:12:25 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Mar  1 05:12:25 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 05:12:25 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Mar  1 05:12:25 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 05:12:25 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Mar  1 05:12:25 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Mar  1 05:12:25 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Mar  1 05:12:25 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Mar  1 05:12:25 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Mar  1 05:12:25 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
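Every handle_command entry above is a JSON mon_command; the same calls can be issued from any client with suitable caps. A sketch reproducing the "osd blocklist ls" query seen at 05:12:17, assuming python-rados plus a readable /etc/ceph/ceph.conf and admin keyring:

import json
import rados

cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
cluster.connect()
ret, outbuf, outs = cluster.mon_command(
    json.dumps({"prefix": "osd blocklist ls", "format": "json"}), b"")
print(ret, outbuf.decode() or outs)
cluster.shutdown()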
Mar  1 05:12:25 np0005634532 nova_compute[257049]: 2026-03-01 10:12:25.864 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:12:25 np0005634532 nova_compute[257049]: 2026-03-01 10:12:25.867 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Mar  1 05:12:25 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:12:25 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 05:12:25 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:12:25.910 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 05:12:26 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 05:12:26 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 05:12:26 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Mar  1 05:12:26 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 05:12:26 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 05:12:26 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Mar  1 05:12:26 np0005634532 podman[272627]: 2026-03-01 10:12:26.31386755 +0000 UTC m=+0.104391896 container create 2a49fe2650575d622a87dce826b21faa2c72ac3b103f160479694e312fcc33fa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_proskuriakova, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Mar  1 05:12:26 np0005634532 podman[272627]: 2026-03-01 10:12:26.246624561 +0000 UTC m=+0.037148957 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Mar  1 05:12:26 np0005634532 systemd[1]: Started libpod-conmon-2a49fe2650575d622a87dce826b21faa2c72ac3b103f160479694e312fcc33fa.scope.
Mar  1 05:12:26 np0005634532 systemd[1]: Started libcrun container.
Mar  1 05:12:26 np0005634532 podman[272627]: 2026-03-01 10:12:26.609486992 +0000 UTC m=+0.400011328 container init 2a49fe2650575d622a87dce826b21faa2c72ac3b103f160479694e312fcc33fa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_proskuriakova, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Mar  1 05:12:26 np0005634532 podman[272627]: 2026-03-01 10:12:26.613968502 +0000 UTC m=+0.404492818 container start 2a49fe2650575d622a87dce826b21faa2c72ac3b103f160479694e312fcc33fa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_proskuriakova, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Mar  1 05:12:26 np0005634532 agitated_proskuriakova[272644]: 167 167
Mar  1 05:12:26 np0005634532 systemd[1]: libpod-2a49fe2650575d622a87dce826b21faa2c72ac3b103f160479694e312fcc33fa.scope: Deactivated successfully.
Mar  1 05:12:26 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:12:26 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 05:12:26 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:12:26.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 05:12:26 np0005634532 podman[272627]: 2026-03-01 10:12:26.721074614 +0000 UTC m=+0.511598950 container attach 2a49fe2650575d622a87dce826b21faa2c72ac3b103f160479694e312fcc33fa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_proskuriakova, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0)
Mar  1 05:12:26 np0005634532 podman[272627]: 2026-03-01 10:12:26.721845923 +0000 UTC m=+0.512370239 container died 2a49fe2650575d622a87dce826b21faa2c72ac3b103f160479694e312fcc33fa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_proskuriakova, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Mar  1 05:12:26 np0005634532 systemd[1]: var-lib-containers-storage-overlay-408e5fdad197699136110735054e20048509b7a49d156c27ad12a94e530e89d5-merged.mount: Deactivated successfully.
Mar  1 05:12:26 np0005634532 podman[272627]: 2026-03-01 10:12:26.762044975 +0000 UTC m=+0.552569281 container remove 2a49fe2650575d622a87dce826b21faa2c72ac3b103f160479694e312fcc33fa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_proskuriakova, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Mar  1 05:12:26 np0005634532 systemd[1]: libpod-conmon-2a49fe2650575d622a87dce826b21faa2c72ac3b103f160479694e312fcc33fa.scope: Deactivated successfully.
Mar  1 05:12:26 np0005634532 podman[272669]: 2026-03-01 10:12:26.89845776 +0000 UTC m=+0.048367404 container create 73e2aed0e58f416f89cdb07ccdb476e1504c8ced81df529b6c7ceed8c6d0bf4b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_driscoll, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Mar  1 05:12:26 np0005634532 systemd[1]: Started libpod-conmon-73e2aed0e58f416f89cdb07ccdb476e1504c8ced81df529b6c7ceed8c6d0bf4b.scope.
Mar  1 05:12:26 np0005634532 systemd[1]: Started libcrun container.
Mar  1 05:12:26 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e22d41f693dbf8ad15da79ab8770cbee95f09977e2c313ce693aaee7b03c948/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Mar  1 05:12:26 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e22d41f693dbf8ad15da79ab8770cbee95f09977e2c313ce693aaee7b03c948/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Mar  1 05:12:26 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e22d41f693dbf8ad15da79ab8770cbee95f09977e2c313ce693aaee7b03c948/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Mar  1 05:12:26 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e22d41f693dbf8ad15da79ab8770cbee95f09977e2c313ce693aaee7b03c948/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Mar  1 05:12:26 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:12:26 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e22d41f693dbf8ad15da79ab8770cbee95f09977e2c313ce693aaee7b03c948/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Mar  1 05:12:26 np0005634532 podman[272669]: 2026-03-01 10:12:26.876516078 +0000 UTC m=+0.026425722 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Mar  1 05:12:26 np0005634532 podman[272669]: 2026-03-01 10:12:26.987370583 +0000 UTC m=+0.137280237 container init 73e2aed0e58f416f89cdb07ccdb476e1504c8ced81df529b6c7ceed8c6d0bf4b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_driscoll, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Mar  1 05:12:27 np0005634532 podman[272669]: 2026-03-01 10:12:27.000153638 +0000 UTC m=+0.150063272 container start 73e2aed0e58f416f89cdb07ccdb476e1504c8ced81df529b6c7ceed8c6d0bf4b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_driscoll, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Mar  1 05:12:27 np0005634532 podman[272669]: 2026-03-01 10:12:27.00389659 +0000 UTC m=+0.153806214 container attach 73e2aed0e58f416f89cdb07ccdb476e1504c8ced81df529b6c7ceed8c6d0bf4b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_driscoll, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Mar  1 05:12:27 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: ::ffff:192.168.122.100 - - [01/Mar/2026:10:12:27] "GET /metrics HTTP/1.1" 200 48460 "" "Prometheus/2.51.0"
Mar  1 05:12:27 np0005634532 ceph-mgr[76134]: [prometheus INFO cherrypy.access.140604152982112] ::ffff:192.168.122.100 - - [01/Mar/2026:10:12:27] "GET /metrics HTTP/1.1" 200 48460 "" "Prometheus/2.51.0"
Mar  1 05:12:27 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v899: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Mar  1 05:12:27 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:12:27.246Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Mar  1 05:12:27 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:12:27.246Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Mar  1 05:12:27 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:12:27.247Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
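Alertmanager cannot deliver dashboard webhooks: POSTs to /api/prometheus_receiver on compute-1 and compute-2 time out, so those standby dashboards are unreachable on 8443. A throwaway stand-in receiver is enough to test reachability of that path from the alertmanager host; this is a diagnostic sketch, not the dashboard API:

from http.server import BaseHTTPRequestHandler, HTTPServer

class Receiver(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        print("alert payload:", self.rfile.read(length)[:200])
        self.send_response(200)
        self.end_headers()

HTTPServer(("0.0.0.0", 8443), Receiver).serve_forever()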
Mar  1 05:12:27 np0005634532 keen_driscoll[272685]: --> passed data devices: 0 physical, 1 LVM
Mar  1 05:12:27 np0005634532 keen_driscoll[272685]: --> All data devices are unavailable
Mar  1 05:12:27 np0005634532 systemd[1]: libpod-73e2aed0e58f416f89cdb07ccdb476e1504c8ced81df529b6c7ceed8c6d0bf4b.scope: Deactivated successfully.
Mar  1 05:12:27 np0005634532 podman[272669]: 2026-03-01 10:12:27.342123803 +0000 UTC m=+0.492033427 container died 73e2aed0e58f416f89cdb07ccdb476e1504c8ced81df529b6c7ceed8c6d0bf4b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_driscoll, io.buildah.version=1.40.1, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Mar  1 05:12:27 np0005634532 systemd[1]: var-lib-containers-storage-overlay-0e22d41f693dbf8ad15da79ab8770cbee95f09977e2c313ce693aaee7b03c948-merged.mount: Deactivated successfully.
Mar  1 05:12:27 np0005634532 podman[272669]: 2026-03-01 10:12:27.389278006 +0000 UTC m=+0.539187640 container remove 73e2aed0e58f416f89cdb07ccdb476e1504c8ced81df529b6c7ceed8c6d0bf4b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_driscoll, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid)
Mar  1 05:12:27 np0005634532 systemd[1]: libpod-conmon-73e2aed0e58f416f89cdb07ccdb476e1504c8ced81df529b6c7ceed8c6d0bf4b.scope: Deactivated successfully.
Mar  1 05:12:27 np0005634532 systemd[1]: ceph-437b1e74-f995-5d64-af1d-257ce01d77ab@nfs.cephfs.2.0.compute-0.ljexyw.service: Scheduled restart job, restart counter is at 13.
Mar  1 05:12:27 np0005634532 systemd[1]: Stopped Ceph nfs.cephfs.2.0.compute-0.ljexyw for 437b1e74-f995-5d64-af1d-257ce01d77ab.
Mar  1 05:12:27 np0005634532 systemd[1]: ceph-437b1e74-f995-5d64-af1d-257ce01d77ab@nfs.cephfs.2.0.compute-0.ljexyw.service: Consumed 1.076s CPU time.
Mar  1 05:12:27 np0005634532 systemd[1]: Starting Ceph nfs.cephfs.2.0.compute-0.ljexyw for 437b1e74-f995-5d64-af1d-257ce01d77ab...
Mar  1 05:12:27 np0005634532 podman[272809]: 2026-03-01 10:12:27.729813216 +0000 UTC m=+0.037233399 container create 2f9b37e0130c0cc03064ea231c9242d7bbbbeba52fa751385a16fea7b57e54bf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Mar  1 05:12:27 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/040afd4ec83027a7be8ba9ea4767df698d2f7d9369a9e3907bf3ed5b6447c7ec/merged/etc/ganesha supports timestamps until 2038 (0x7fffffff)
Mar  1 05:12:27 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/040afd4ec83027a7be8ba9ea4767df698d2f7d9369a9e3907bf3ed5b6447c7ec/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Mar  1 05:12:27 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/040afd4ec83027a7be8ba9ea4767df698d2f7d9369a9e3907bf3ed5b6447c7ec/merged/etc/ceph/keyring supports timestamps until 2038 (0x7fffffff)
Mar  1 05:12:27 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/040afd4ec83027a7be8ba9ea4767df698d2f7d9369a9e3907bf3ed5b6447c7ec/merged/var/lib/ceph/radosgw/ceph-nfs.cephfs.2.0.compute-0.ljexyw-rgw/keyring supports timestamps until 2038 (0x7fffffff)
Mar  1 05:12:27 np0005634532 podman[272809]: 2026-03-01 10:12:27.791311383 +0000 UTC m=+0.098731586 container init 2f9b37e0130c0cc03064ea231c9242d7bbbbeba52fa751385a16fea7b57e54bf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Mar  1 05:12:27 np0005634532 podman[272809]: 2026-03-01 10:12:27.79727959 +0000 UTC m=+0.104699783 container start 2f9b37e0130c0cc03064ea231c9242d7bbbbeba52fa751385a16fea7b57e54bf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=squid)
Mar  1 05:12:27 np0005634532 bash[272809]: 2f9b37e0130c0cc03064ea231c9242d7bbbbeba52fa751385a16fea7b57e54bf
Mar  1 05:12:27 np0005634532 podman[272809]: 2026-03-01 10:12:27.712074629 +0000 UTC m=+0.019494832 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Mar  1 05:12:27 np0005634532 systemd[1]: Started Ceph nfs.cephfs.2.0.compute-0.ljexyw for 437b1e74-f995-5d64-af1d-257ce01d77ab.
Mar  1 05:12:27 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:12:27 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] init_logging :LOG :NULL :LOG: Setting log level for all components to NIV_EVENT
Mar  1 05:12:27 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:12:27 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] main :MAIN :EVENT :ganesha.nfsd Starting: Ganesha Version 5.9
Mar  1 05:12:27 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:12:27 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_set_param_from_conf :NFS STARTUP :EVENT :Configuration file successfully parsed
Mar  1 05:12:27 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:12:27 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] monitoring_init :NFS STARTUP :EVENT :Init monitoring at 0.0.0.0:9587
Mar  1 05:12:27 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:12:27 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] fsal_init_fds_limit :MDCACHE LRU :EVENT :Setting the system-imposed limit on FDs to 1048576.
Mar  1 05:12:27 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:12:27 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :Initializing ID Mapper.
Mar  1 05:12:27 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:12:27 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:12:27 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:12:27.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:12:27 np0005634532 podman[272892]: 2026-03-01 10:12:27.929383569 +0000 UTC m=+0.036281876 container create c77962de87e7686943b241ef87b7bf24c5ffc633f4e6acca45287dc2124c735d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_elion, OSD_FLAVOR=default, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Mar  1 05:12:27 np0005634532 systemd[1]: Started libpod-conmon-c77962de87e7686943b241ef87b7bf24c5ffc633f4e6acca45287dc2124c735d.scope.
Mar  1 05:12:27 np0005634532 systemd[1]: Started libcrun container.
Mar  1 05:12:27 np0005634532 podman[272892]: 2026-03-01 10:12:27.993708046 +0000 UTC m=+0.100606353 container init c77962de87e7686943b241ef87b7bf24c5ffc633f4e6acca45287dc2124c735d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_elion, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Mar  1 05:12:28 np0005634532 podman[272892]: 2026-03-01 10:12:27.999909098 +0000 UTC m=+0.106807405 container start c77962de87e7686943b241ef87b7bf24c5ffc633f4e6acca45287dc2124c735d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_elion, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Mar  1 05:12:28 np0005634532 podman[272892]: 2026-03-01 10:12:28.002854761 +0000 UTC m=+0.109753068 container attach c77962de87e7686943b241ef87b7bf24c5ffc633f4e6acca45287dc2124c735d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_elion, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Mar  1 05:12:28 np0005634532 systemd[1]: libpod-c77962de87e7686943b241ef87b7bf24c5ffc633f4e6acca45287dc2124c735d.scope: Deactivated successfully.
Mar  1 05:12:28 np0005634532 awesome_elion[272908]: 167 167
Mar  1 05:12:28 np0005634532 conmon[272908]: conmon c77962de87e7686943b2 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c77962de87e7686943b241ef87b7bf24c5ffc633f4e6acca45287dc2124c735d.scope/container/memory.events
Mar  1 05:12:28 np0005634532 podman[272892]: 2026-03-01 10:12:28.005165748 +0000 UTC m=+0.112064055 container died c77962de87e7686943b241ef87b7bf24c5ffc633f4e6acca45287dc2124c735d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_elion, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Mar  1 05:12:28 np0005634532 podman[272892]: 2026-03-01 10:12:27.911540969 +0000 UTC m=+0.018439296 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Mar  1 05:12:28 np0005634532 systemd[1]: var-lib-containers-storage-overlay-80952ac00e51c31849508ef1aafe72110abc7e152bf9e386886df92394f2575b-merged.mount: Deactivated successfully.
Mar  1 05:12:28 np0005634532 podman[272892]: 2026-03-01 10:12:28.032443881 +0000 UTC m=+0.139342188 container remove c77962de87e7686943b241ef87b7bf24c5ffc633f4e6acca45287dc2124c735d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_elion, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Mar  1 05:12:28 np0005634532 systemd[1]: libpod-conmon-c77962de87e7686943b241ef87b7bf24c5ffc633f4e6acca45287dc2124c735d.scope: Deactivated successfully.
Mar  1 05:12:28 np0005634532 podman[272932]: 2026-03-01 10:12:28.150417101 +0000 UTC m=+0.037274551 container create a3a526c12d9fcdbe9359d638319433366037d1b778f2494ed455ae35be0a756f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_swartz, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Mar  1 05:12:28 np0005634532 systemd[1]: Started libpod-conmon-a3a526c12d9fcdbe9359d638319433366037d1b778f2494ed455ae35be0a756f.scope.
Mar  1 05:12:28 np0005634532 systemd[1]: Started libcrun container.
Mar  1 05:12:28 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c91931e84b9c260edc6b0e677c18e2fa0d845a07d2400b1bd109758d81ffbde1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Mar  1 05:12:28 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c91931e84b9c260edc6b0e677c18e2fa0d845a07d2400b1bd109758d81ffbde1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Mar  1 05:12:28 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c91931e84b9c260edc6b0e677c18e2fa0d845a07d2400b1bd109758d81ffbde1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Mar  1 05:12:28 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c91931e84b9c260edc6b0e677c18e2fa0d845a07d2400b1bd109758d81ffbde1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Mar  1 05:12:28 np0005634532 podman[272932]: 2026-03-01 10:12:28.205339905 +0000 UTC m=+0.092197385 container init a3a526c12d9fcdbe9359d638319433366037d1b778f2494ed455ae35be0a756f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_swartz, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/)
Mar  1 05:12:28 np0005634532 podman[272932]: 2026-03-01 10:12:28.210657707 +0000 UTC m=+0.097515157 container start a3a526c12d9fcdbe9359d638319433366037d1b778f2494ed455ae35be0a756f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_swartz, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Mar  1 05:12:28 np0005634532 podman[272932]: 2026-03-01 10:12:28.213800584 +0000 UTC m=+0.100658064 container attach a3a526c12d9fcdbe9359d638319433366037d1b778f2494ed455ae35be0a756f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_swartz, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Mar  1 05:12:28 np0005634532 podman[272932]: 2026-03-01 10:12:28.135879412 +0000 UTC m=+0.022736892 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Mar  1 05:12:28 np0005634532 jolly_swartz[272949]: {
Mar  1 05:12:28 np0005634532 jolly_swartz[272949]:    "0": [
Mar  1 05:12:28 np0005634532 jolly_swartz[272949]:        {
Mar  1 05:12:28 np0005634532 jolly_swartz[272949]:            "devices": [
Mar  1 05:12:28 np0005634532 jolly_swartz[272949]:                "/dev/loop3"
Mar  1 05:12:28 np0005634532 jolly_swartz[272949]:            ],
Mar  1 05:12:28 np0005634532 jolly_swartz[272949]:            "lv_name": "ceph_lv0",
Mar  1 05:12:28 np0005634532 jolly_swartz[272949]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Mar  1 05:12:28 np0005634532 jolly_swartz[272949]:            "lv_size": "21470642176",
Mar  1 05:12:28 np0005634532 jolly_swartz[272949]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=0RN1y3-WDwf-vmRn-3Uec-9cdU-WGcX-8Z6LEG,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=437b1e74-f995-5d64-af1d-257ce01d77ab,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=e5da778e-73b7-4ea1-8a91-750fe3f6aa68,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Mar  1 05:12:28 np0005634532 jolly_swartz[272949]:            "lv_uuid": "0RN1y3-WDwf-vmRn-3Uec-9cdU-WGcX-8Z6LEG",
Mar  1 05:12:28 np0005634532 jolly_swartz[272949]:            "name": "ceph_lv0",
Mar  1 05:12:28 np0005634532 jolly_swartz[272949]:            "path": "/dev/ceph_vg0/ceph_lv0",
Mar  1 05:12:28 np0005634532 jolly_swartz[272949]:            "tags": {
Mar  1 05:12:28 np0005634532 jolly_swartz[272949]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Mar  1 05:12:28 np0005634532 jolly_swartz[272949]:                "ceph.block_uuid": "0RN1y3-WDwf-vmRn-3Uec-9cdU-WGcX-8Z6LEG",
Mar  1 05:12:28 np0005634532 jolly_swartz[272949]:                "ceph.cephx_lockbox_secret": "",
Mar  1 05:12:28 np0005634532 jolly_swartz[272949]:                "ceph.cluster_fsid": "437b1e74-f995-5d64-af1d-257ce01d77ab",
Mar  1 05:12:28 np0005634532 jolly_swartz[272949]:                "ceph.cluster_name": "ceph",
Mar  1 05:12:28 np0005634532 jolly_swartz[272949]:                "ceph.crush_device_class": "",
Mar  1 05:12:28 np0005634532 jolly_swartz[272949]:                "ceph.encrypted": "0",
Mar  1 05:12:28 np0005634532 jolly_swartz[272949]:                "ceph.osd_fsid": "e5da778e-73b7-4ea1-8a91-750fe3f6aa68",
Mar  1 05:12:28 np0005634532 jolly_swartz[272949]:                "ceph.osd_id": "0",
Mar  1 05:12:28 np0005634532 jolly_swartz[272949]:                "ceph.osdspec_affinity": "default_drive_group",
Mar  1 05:12:28 np0005634532 jolly_swartz[272949]:                "ceph.type": "block",
Mar  1 05:12:28 np0005634532 jolly_swartz[272949]:                "ceph.vdo": "0",
Mar  1 05:12:28 np0005634532 jolly_swartz[272949]:                "ceph.with_tpm": "0"
Mar  1 05:12:28 np0005634532 jolly_swartz[272949]:            },
Mar  1 05:12:28 np0005634532 jolly_swartz[272949]:            "type": "block",
Mar  1 05:12:28 np0005634532 jolly_swartz[272949]:            "vg_name": "ceph_vg0"
Mar  1 05:12:28 np0005634532 jolly_swartz[272949]:        }
Mar  1 05:12:28 np0005634532 jolly_swartz[272949]:    ]
Mar  1 05:12:28 np0005634532 jolly_swartz[272949]: }
Mar  1 05:12:28 np0005634532 systemd[1]: libpod-a3a526c12d9fcdbe9359d638319433366037d1b778f2494ed455ae35be0a756f.scope: Deactivated successfully.
Mar  1 05:12:28 np0005634532 podman[272932]: 2026-03-01 10:12:28.475698535 +0000 UTC m=+0.362555985 container died a3a526c12d9fcdbe9359d638319433366037d1b778f2494ed455ae35be0a756f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_swartz, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Mar  1 05:12:28 np0005634532 systemd[1]: var-lib-containers-storage-overlay-c91931e84b9c260edc6b0e677c18e2fa0d845a07d2400b1bd109758d81ffbde1-merged.mount: Deactivated successfully.
Mar  1 05:12:28 np0005634532 podman[272932]: 2026-03-01 10:12:28.559225115 +0000 UTC m=+0.446082565 container remove a3a526c12d9fcdbe9359d638319433366037d1b778f2494ed455ae35be0a756f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_swartz, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Mar  1 05:12:28 np0005634532 systemd[1]: libpod-conmon-a3a526c12d9fcdbe9359d638319433366037d1b778f2494ed455ae35be0a756f.scope: Deactivated successfully.
Mar  1 05:12:28 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:12:28 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:12:28 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:12:28.912 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:12:28 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:12:28 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] init_server_pkgs :NFS STARTUP :EVENT :ID Mapper successfully initialized.
Mar  1 05:12:28 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:12:28 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Mar  1 05:12:29 np0005634532 podman[273076]: 2026-03-01 10:12:29.085539666 +0000 UTC m=+0.042761736 container create fc95e89605c7302fecb506720b897fbac07d8c0fe66f6a0734bae3c215771188 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_nightingale, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Mar  1 05:12:29 np0005634532 systemd[1]: Started libpod-conmon-fc95e89605c7302fecb506720b897fbac07d8c0fe66f6a0734bae3c215771188.scope.
Mar  1 05:12:29 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v900: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Mar  1 05:12:29 np0005634532 systemd[1]: Started libcrun container.
Mar  1 05:12:29 np0005634532 podman[273076]: 2026-03-01 10:12:29.158822564 +0000 UTC m=+0.116044664 container init fc95e89605c7302fecb506720b897fbac07d8c0fe66f6a0734bae3c215771188 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_nightingale, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Mar  1 05:12:29 np0005634532 podman[273076]: 2026-03-01 10:12:29.064695052 +0000 UTC m=+0.021917152 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Mar  1 05:12:29 np0005634532 podman[273076]: 2026-03-01 10:12:29.165385566 +0000 UTC m=+0.122607686 container start fc95e89605c7302fecb506720b897fbac07d8c0fe66f6a0734bae3c215771188 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_nightingale, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Mar  1 05:12:29 np0005634532 great_nightingale[273092]: 167 167
Mar  1 05:12:29 np0005634532 systemd[1]: libpod-fc95e89605c7302fecb506720b897fbac07d8c0fe66f6a0734bae3c215771188.scope: Deactivated successfully.
Mar  1 05:12:29 np0005634532 conmon[273092]: conmon fc95e89605c7302fecb5 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-fc95e89605c7302fecb506720b897fbac07d8c0fe66f6a0734bae3c215771188.scope/container/memory.events
Mar  1 05:12:29 np0005634532 podman[273076]: 2026-03-01 10:12:29.169247991 +0000 UTC m=+0.126470091 container attach fc95e89605c7302fecb506720b897fbac07d8c0fe66f6a0734bae3c215771188 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_nightingale, io.buildah.version=1.40.1, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325)
Mar  1 05:12:29 np0005634532 podman[273076]: 2026-03-01 10:12:29.170182444 +0000 UTC m=+0.127404524 container died fc95e89605c7302fecb506720b897fbac07d8c0fe66f6a0734bae3c215771188 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_nightingale, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Mar  1 05:12:29 np0005634532 systemd[1]: var-lib-containers-storage-overlay-a01984c95cc0ac57e72bc4eacef2fc4a2a2eca1d8f8ec95d5c3c688c7aaf2e41-merged.mount: Deactivated successfully.
Mar  1 05:12:29 np0005634532 podman[273076]: 2026-03-01 10:12:29.200195914 +0000 UTC m=+0.157417994 container remove fc95e89605c7302fecb506720b897fbac07d8c0fe66f6a0734bae3c215771188 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=great_nightingale, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Mar  1 05:12:29 np0005634532 systemd[1]: libpod-conmon-fc95e89605c7302fecb506720b897fbac07d8c0fe66f6a0734bae3c215771188.scope: Deactivated successfully.
Mar  1 05:12:29 np0005634532 podman[273116]: 2026-03-01 10:12:29.532280926 +0000 UTC m=+0.042974551 container create 413bd1874cbbe17726f9a40c218d2ff38ed1de426e750372e868ab3c37c5f1cf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_tesla, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Mar  1 05:12:29 np0005634532 systemd[1]: Started libpod-conmon-413bd1874cbbe17726f9a40c218d2ff38ed1de426e750372e868ab3c37c5f1cf.scope.
Mar  1 05:12:29 np0005634532 podman[273116]: 2026-03-01 10:12:29.514128278 +0000 UTC m=+0.024821673 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Mar  1 05:12:29 np0005634532 systemd[1]: Started libcrun container.
Mar  1 05:12:29 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/709b3a3ce1deefc357a4882c7fd380dc78782e82fb3e95d0686cf16c4d9ecefb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Mar  1 05:12:29 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/709b3a3ce1deefc357a4882c7fd380dc78782e82fb3e95d0686cf16c4d9ecefb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Mar  1 05:12:29 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/709b3a3ce1deefc357a4882c7fd380dc78782e82fb3e95d0686cf16c4d9ecefb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Mar  1 05:12:29 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/709b3a3ce1deefc357a4882c7fd380dc78782e82fb3e95d0686cf16c4d9ecefb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Mar  1 05:12:29 np0005634532 podman[273116]: 2026-03-01 10:12:29.636802274 +0000 UTC m=+0.147495709 container init 413bd1874cbbe17726f9a40c218d2ff38ed1de426e750372e868ab3c37c5f1cf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_tesla, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Mar  1 05:12:29 np0005634532 podman[273116]: 2026-03-01 10:12:29.642410732 +0000 UTC m=+0.153104107 container start 413bd1874cbbe17726f9a40c218d2ff38ed1de426e750372e868ab3c37c5f1cf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_tesla, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2)
Mar  1 05:12:29 np0005634532 podman[273116]: 2026-03-01 10:12:29.646509553 +0000 UTC m=+0.157202978 container attach 413bd1874cbbe17726f9a40c218d2ff38ed1de426e750372e868ab3c37c5f1cf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_tesla, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Mar  1 05:12:29 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:12:29 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 05:12:29 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:12:29.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 05:12:30 np0005634532 lvm[273208]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Mar  1 05:12:30 np0005634532 lvm[273208]: VG ceph_vg0 finished
Mar  1 05:12:30 np0005634532 suspicious_tesla[273132]: {}
Mar  1 05:12:30 np0005634532 systemd[1]: libpod-413bd1874cbbe17726f9a40c218d2ff38ed1de426e750372e868ab3c37c5f1cf.scope: Deactivated successfully.
Mar  1 05:12:30 np0005634532 systemd[1]: libpod-413bd1874cbbe17726f9a40c218d2ff38ed1de426e750372e868ab3c37c5f1cf.scope: Consumed 1.372s CPU time.
Mar  1 05:12:30 np0005634532 podman[273116]: 2026-03-01 10:12:30.76732742 +0000 UTC m=+1.278020795 container died 413bd1874cbbe17726f9a40c218d2ff38ed1de426e750372e868ab3c37c5f1cf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_tesla, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default)
Mar  1 05:12:30 np0005634532 systemd[1]: var-lib-containers-storage-overlay-709b3a3ce1deefc357a4882c7fd380dc78782e82fb3e95d0686cf16c4d9ecefb-merged.mount: Deactivated successfully.
Mar  1 05:12:30 np0005634532 podman[273116]: 2026-03-01 10:12:30.800548209 +0000 UTC m=+1.311241584 container remove 413bd1874cbbe17726f9a40c218d2ff38ed1de426e750372e868ab3c37c5f1cf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=suspicious_tesla, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Mar  1 05:12:30 np0005634532 systemd[1]: libpod-conmon-413bd1874cbbe17726f9a40c218d2ff38ed1de426e750372e868ab3c37c5f1cf.scope: Deactivated successfully.
Mar  1 05:12:30 np0005634532 nova_compute[257049]: 2026-03-01 10:12:30.866 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4997-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Mar  1 05:12:31 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:12:31 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 05:12:31 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:12:31.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 05:12:31 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Mar  1 05:12:31 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 05:12:31 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v901: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Mar  1 05:12:31 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Mar  1 05:12:31 np0005634532 ceph-mon[75825]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #57. Immutable memtables: 0.
Mar  1 05:12:31 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-10:12:31.754916) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Mar  1 05:12:31 np0005634532 ceph-mon[75825]: rocksdb: [db/flush_job.cc:856] [default] [JOB 29] Flushing memtable with next log file: 57
Mar  1 05:12:31 np0005634532 ceph-mon[75825]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772359951754951, "job": 29, "event": "flush_started", "num_memtables": 1, "num_entries": 2131, "num_deletes": 251, "total_data_size": 4083926, "memory_usage": 4139832, "flush_reason": "Manual Compaction"}
Mar  1 05:12:31 np0005634532 ceph-mon[75825]: rocksdb: [db/flush_job.cc:885] [default] [JOB 29] Level-0 flush table #58: started
Mar  1 05:12:31 np0005634532 ceph-mon[75825]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772359951771880, "cf_name": "default", "job": 29, "event": "table_file_creation", "file_number": 58, "file_size": 3960966, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 24729, "largest_seqno": 26859, "table_properties": {"data_size": 3951666, "index_size": 5794, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2437, "raw_key_size": 19660, "raw_average_key_size": 20, "raw_value_size": 3932906, "raw_average_value_size": 4046, "num_data_blocks": 256, "num_entries": 972, "num_filter_entries": 972, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772359743, "oldest_key_time": 1772359743, "file_creation_time": 1772359951, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d85a3bc5-3dc5-432f-9fab-fa926ce32d3d", "db_session_id": "FJWJGIYC2V5ZEQGX709M", "orig_file_number": 58, "seqno_to_time_mapping": "N/A"}}
Mar  1 05:12:31 np0005634532 ceph-mon[75825]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 29] Flush lasted 17033 microseconds, and 5861 cpu microseconds.
Mar  1 05:12:31 np0005634532 ceph-mon[75825]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Mar  1 05:12:31 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-10:12:31.771940) [db/flush_job.cc:967] [default] [JOB 29] Level-0 flush table #58: 3960966 bytes OK
Mar  1 05:12:31 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-10:12:31.771967) [db/memtable_list.cc:519] [default] Level-0 commit table #58 started
Mar  1 05:12:31 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-10:12:31.773482) [db/memtable_list.cc:722] [default] Level-0 commit table #58: memtable #1 done
Mar  1 05:12:31 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-10:12:31.773500) EVENT_LOG_v1 {"time_micros": 1772359951773494, "job": 29, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Mar  1 05:12:31 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-10:12:31.773517) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Mar  1 05:12:31 np0005634532 ceph-mon[75825]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 29] Try to delete WAL files size 4075286, prev total WAL file size 4111815, number of live WAL files 2.
Mar  1 05:12:31 np0005634532 ceph-mon[75825]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000054.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Mar  1 05:12:31 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-10:12:31.774107) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032303038' seq:72057594037927935, type:22 .. '7061786F730032323630' seq:0, type:0; will stop at (end)
Mar  1 05:12:31 np0005634532 ceph-mon[75825]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 30] Compacting 1@0 + 1@6 files to L6, score -1.00
Mar  1 05:12:31 np0005634532 ceph-mon[75825]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 29 Base level 0, inputs: [58(3868KB)], [56(12MB)]
Mar  1 05:12:31 np0005634532 ceph-mon[75825]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772359951774137, "job": 30, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [58], "files_L6": [56], "score": -1, "input_data_size": 16694695, "oldest_snapshot_seqno": -1}
Mar  1 05:12:31 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 05:12:31 np0005634532 ceph-mon[75825]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 30] Generated table #59: 5815 keys, 14573773 bytes, temperature: kUnknown
Mar  1 05:12:31 np0005634532 ceph-mon[75825]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772359951828780, "cf_name": "default", "job": 30, "event": "table_file_creation", "file_number": 59, "file_size": 14573773, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14534331, "index_size": 23780, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14597, "raw_key_size": 147877, "raw_average_key_size": 25, "raw_value_size": 14428833, "raw_average_value_size": 2481, "num_data_blocks": 971, "num_entries": 5815, "num_filter_entries": 5815, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772358052, "oldest_key_time": 0, "file_creation_time": 1772359951, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d85a3bc5-3dc5-432f-9fab-fa926ce32d3d", "db_session_id": "FJWJGIYC2V5ZEQGX709M", "orig_file_number": 59, "seqno_to_time_mapping": "N/A"}}
Mar  1 05:12:31 np0005634532 ceph-mon[75825]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Mar  1 05:12:31 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-10:12:31.828975) [db/compaction/compaction_job.cc:1663] [default] [JOB 30] Compacted 1@0 + 1@6 files to L6 => 14573773 bytes
Mar  1 05:12:31 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-10:12:31.830118) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 305.2 rd, 266.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.8, 12.1 +0.0 blob) out(13.9 +0.0 blob), read-write-amplify(7.9) write-amplify(3.7) OK, records in: 6331, records dropped: 516 output_compression: NoCompression
Mar  1 05:12:31 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-10:12:31.830134) EVENT_LOG_v1 {"time_micros": 1772359951830127, "job": 30, "event": "compaction_finished", "compaction_time_micros": 54700, "compaction_time_cpu_micros": 21702, "output_level": 6, "num_output_files": 1, "total_output_size": 14573773, "num_input_records": 6331, "num_output_records": 5815, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Mar  1 05:12:31 np0005634532 ceph-mon[75825]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000058.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Mar  1 05:12:31 np0005634532 ceph-mon[75825]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772359951830488, "job": 30, "event": "table_file_deletion", "file_number": 58}
Mar  1 05:12:31 np0005634532 ceph-mon[75825]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000056.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Mar  1 05:12:31 np0005634532 ceph-mon[75825]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772359951831624, "job": 30, "event": "table_file_deletion", "file_number": 56}
Mar  1 05:12:31 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-10:12:31.774043) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Mar  1 05:12:31 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-10:12:31.831848) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Mar  1 05:12:31 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-10:12:31.831854) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Mar  1 05:12:31 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-10:12:31.831856) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Mar  1 05:12:31 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-10:12:31.831858) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Mar  1 05:12:31 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-10:12:31.831860) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Mar  1 05:12:31 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:12:31 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:12:31 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:12:31.915 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:12:31 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:12:32 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 05:12:32 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 05:12:32 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Mar  1 05:12:32 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Mar  1 05:12:33 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:12:33 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000024s ======
Mar  1 05:12:33 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:12:33.089 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Mar  1 05:12:33 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v902: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Mar  1 05:12:33 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:12:33 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:12:33 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:12:33.917 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:12:34 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:12:34 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Mar  1 05:12:34 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:12:34 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Mar  1 05:12:34 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:12:34 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Mar  1 05:12:35 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:12:35 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 05:12:35 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:12:35.091 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 05:12:35 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v903: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Mar  1 05:12:35 np0005634532 nova_compute[257049]: 2026-03-01 10:12:35.868 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:12:35 np0005634532 nova_compute[257049]: 2026-03-01 10:12:35.870 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:12:35 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:12:35 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:12:35 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:12:35.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:12:36 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:12:37 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: ::ffff:192.168.122.100 - - [01/Mar/2026:10:12:37] "GET /metrics HTTP/1.1" 200 48477 "" "Prometheus/2.51.0"
Mar  1 05:12:37 np0005634532 ceph-mgr[76134]: [prometheus INFO cherrypy.access.140604152982112] ::ffff:192.168.122.100 - - [01/Mar/2026:10:12:37] "GET /metrics HTTP/1.1" 200 48477 "" "Prometheus/2.51.0"
Mar  1 05:12:37 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:12:37 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 05:12:37 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:12:37.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 05:12:37 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v904: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 74 op/s
Mar  1 05:12:37 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:12:37.248Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Mar  1 05:12:37 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:12:37 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:12:37 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:12:37.921 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:12:39 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:12:38 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Mar  1 05:12:39 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:12:38 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Mar  1 05:12:39 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:12:38 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Mar  1 05:12:39 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:12:39 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Mar  1 05:12:39 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:12:39 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 05:12:39 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:12:39.097 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 05:12:39 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v905: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 74 op/s
Mar  1 05:12:39 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:12:39 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:12:39 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:12:39.923 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:12:39 np0005634532 nova_compute[257049]: 2026-03-01 10:12:39.976 257053 DEBUG oslo_service.periodic_task [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Mar  1 05:12:39 np0005634532 nova_compute[257049]: 2026-03-01 10:12:39.977 257053 DEBUG nova.compute.manager [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Mar  1 05:12:40 np0005634532 nova_compute[257049]: 2026-03-01 10:12:40.870 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:12:41 np0005634532 ovn_controller[157082]: 2026-03-01T10:12:41Z|00055|memory_trim|INFO|Detected inactivity (last active 30005 ms ago): trimming memory
Mar  1 05:12:41 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:12:41 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 05:12:41 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:12:41.100 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 05:12:41 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v906: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 74 op/s
Mar  1 05:12:41 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:12:41 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:12:41 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:12:41.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:12:41 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:12:41 np0005634532 nova_compute[257049]: 2026-03-01 10:12:41.977 257053 DEBUG oslo_service.periodic_task [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Mar  1 05:12:42 np0005634532 podman[273288]: 2026-03-01 10:12:42.431161621 +0000 UTC m=+0.116941825 container health_status eba5e7c5bf1cb0694498c3b5d0f9b901dec36f2482ec40aef98fed1e15d9f18d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:bf1f44b6bfa655654f556ae3adca2582892e72550d916a5b0a8dbacb0e210bbe, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '08b9467e1b7e95537191fb2fa6825d176e65d7d6be048e9a0925db064969bbde-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:bf1f44b6bfa655654f556ae3adca2582892e72550d916a5b0a8dbacb0e210bbe', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260223)
Mar  1 05:12:42 np0005634532 nova_compute[257049]: 2026-03-01 10:12:42.977 257053 DEBUG oslo_service.periodic_task [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Mar  1 05:12:43 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:12:43 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:12:43 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:12:43.103 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:12:43 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v907: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 74 op/s
Mar  1 05:12:43 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:12:43 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000024s ======
Mar  1 05:12:43 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:12:43.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Mar  1 05:12:43 np0005634532 nova_compute[257049]: 2026-03-01 10:12:43.977 257053 DEBUG oslo_service.periodic_task [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Mar  1 05:12:43 np0005634532 nova_compute[257049]: 2026-03-01 10:12:43.977 257053 DEBUG oslo_service.periodic_task [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Mar  1 05:12:44 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:12:43 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Mar  1 05:12:44 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:12:43 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Mar  1 05:12:44 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:12:43 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Mar  1 05:12:44 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:12:44 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Mar  1 05:12:44 np0005634532 nova_compute[257049]: 2026-03-01 10:12:44.977 257053 DEBUG oslo_service.periodic_task [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Mar  1 05:12:44 np0005634532 nova_compute[257049]: 2026-03-01 10:12:44.998 257053 DEBUG oslo_concurrency.lockutils [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Mar  1 05:12:44 np0005634532 nova_compute[257049]: 2026-03-01 10:12:44.998 257053 DEBUG oslo_concurrency.lockutils [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Mar  1 05:12:44 np0005634532 nova_compute[257049]: 2026-03-01 10:12:44.999 257053 DEBUG oslo_concurrency.lockutils [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Mar  1 05:12:44 np0005634532 nova_compute[257049]: 2026-03-01 10:12:44.999 257053 DEBUG nova.compute.resource_tracker [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Mar  1 05:12:44 np0005634532 nova_compute[257049]: 2026-03-01 10:12:44.999 257053 DEBUG oslo_concurrency.processutils [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Mar  1 05:12:45 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:12:45 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:12:45 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:12:45.104 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:12:45 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v908: 353 pgs: 353 active+clean; 109 MiB data, 309 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.0 MiB/s wr, 120 op/s
Mar  1 05:12:45 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Mar  1 05:12:45 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4121541500' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Mar  1 05:12:45 np0005634532 nova_compute[257049]: 2026-03-01 10:12:45.444 257053 DEBUG oslo_concurrency.processutils [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Mar  1 05:12:45 np0005634532 nova_compute[257049]: 2026-03-01 10:12:45.580 257053 WARNING nova.virt.libvirt.driver [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Mar  1 05:12:45 np0005634532 nova_compute[257049]: 2026-03-01 10:12:45.582 257053 DEBUG nova.compute.resource_tracker [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4520MB free_disk=59.96738052368164GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Mar  1 05:12:45 np0005634532 nova_compute[257049]: 2026-03-01 10:12:45.582 257053 DEBUG oslo_concurrency.lockutils [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Mar  1 05:12:45 np0005634532 nova_compute[257049]: 2026-03-01 10:12:45.583 257053 DEBUG oslo_concurrency.lockutils [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Mar  1 05:12:45 np0005634532 nova_compute[257049]: 2026-03-01 10:12:45.642 257053 DEBUG nova.compute.resource_tracker [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Mar  1 05:12:45 np0005634532 nova_compute[257049]: 2026-03-01 10:12:45.642 257053 DEBUG nova.compute.resource_tracker [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Mar  1 05:12:45 np0005634532 nova_compute[257049]: 2026-03-01 10:12:45.656 257053 DEBUG oslo_concurrency.processutils [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Mar  1 05:12:45 np0005634532 nova_compute[257049]: 2026-03-01 10:12:45.873 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:12:45 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:12:45 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:12:45 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:12:45.928 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:12:46 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Mar  1 05:12:46 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2558181271' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Mar  1 05:12:46 np0005634532 nova_compute[257049]: 2026-03-01 10:12:46.070 257053 DEBUG oslo_concurrency.processutils [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.414s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Mar  1 05:12:46 np0005634532 nova_compute[257049]: 2026-03-01 10:12:46.074 257053 DEBUG nova.compute.provider_tree [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Inventory has not changed in ProviderTree for provider: 018d246d-1e01-4168-9128-598c5501111b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Mar  1 05:12:46 np0005634532 nova_compute[257049]: 2026-03-01 10:12:46.091 257053 DEBUG nova.scheduler.client.report [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Inventory has not changed for provider 018d246d-1e01-4168-9128-598c5501111b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Mar  1 05:12:46 np0005634532 nova_compute[257049]: 2026-03-01 10:12:46.094 257053 DEBUG nova.compute.resource_tracker [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Mar  1 05:12:46 np0005634532 nova_compute[257049]: 2026-03-01 10:12:46.094 257053 DEBUG oslo_concurrency.lockutils [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.512s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Mar  1 05:12:46 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:12:47 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: ::ffff:192.168.122.100 - - [01/Mar/2026:10:12:47] "GET /metrics HTTP/1.1" 200 48477 "" "Prometheus/2.51.0"
Mar  1 05:12:47 np0005634532 ceph-mgr[76134]: [prometheus INFO cherrypy.access.140604152982112] ::ffff:192.168.122.100 - - [01/Mar/2026:10:12:47] "GET /metrics HTTP/1.1" 200 48477 "" "Prometheus/2.51.0"
Mar  1 05:12:47 np0005634532 nova_compute[257049]: 2026-03-01 10:12:47.090 257053 DEBUG oslo_service.periodic_task [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Mar  1 05:12:47 np0005634532 nova_compute[257049]: 2026-03-01 10:12:47.091 257053 DEBUG oslo_service.periodic_task [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Mar  1 05:12:47 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:12:47 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:12:47 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:12:47.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:12:47 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v909: 353 pgs: 353 active+clean; 109 MiB data, 309 MiB used, 60 GiB / 60 GiB avail; 288 KiB/s rd, 2.0 MiB/s wr, 45 op/s
Mar  1 05:12:47 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:12:47.249Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Mar  1 05:12:47 np0005634532 podman[273362]: 2026-03-01 10:12:47.349650476 +0000 UTC m=+0.041138937 container health_status 1df4dec302d8b40bce93714986cfc892519e8a31436399020d0034ae17ac64d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a5baae9e21dffd42c66f546b0b62f95c6af9f409d05e01e7483de42d5dd2a382, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '08b9467e1b7e95537191fb2fa6825d176e65d7d6be048e9a0925db064969bbde-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a5baae9e21dffd42c66f546b0b62f95c6af9f409d05e01e7483de42d5dd2a382', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Mar  1 05:12:47 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Mar  1 05:12:47 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Mar  1 05:12:47 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Mar  1 05:12:47 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Mar  1 05:12:47 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Mar  1 05:12:47 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Mar  1 05:12:47 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Mar  1 05:12:47 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Mar  1 05:12:47 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:12:47 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:12:47 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:12:47.930 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:12:47 np0005634532 nova_compute[257049]: 2026-03-01 10:12:47.976 257053 DEBUG oslo_service.periodic_task [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Mar  1 05:12:47 np0005634532 nova_compute[257049]: 2026-03-01 10:12:47.977 257053 DEBUG nova.compute.manager [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Mar  1 05:12:47 np0005634532 nova_compute[257049]: 2026-03-01 10:12:47.977 257053 DEBUG nova.compute.manager [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Mar  1 05:12:48 np0005634532 nova_compute[257049]: 2026-03-01 10:12:48.146 257053 DEBUG nova.compute.manager [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Mar  1 05:12:49 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:12:48 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Mar  1 05:12:49 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:12:48 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Mar  1 05:12:49 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:12:48 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Mar  1 05:12:49 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:12:49 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Mar  1 05:12:49 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:12:49 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 05:12:49 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:12:49.109 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 05:12:49 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v910: 353 pgs: 353 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 374 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Mar  1 05:12:49 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:12:49 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:12:49 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:12:49.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:12:50 np0005634532 nova_compute[257049]: 2026-03-01 10:12:50.875 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Mar  1 05:12:51 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:12:51 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:12:51 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:12:51.113 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:12:51 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v911: 353 pgs: 353 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 373 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Mar  1 05:12:51 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:12:51 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:12:51 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:12:51.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:12:51 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:12:52 np0005634532 nova_compute[257049]: 2026-03-01 10:12:52.143 257053 DEBUG oslo_service.periodic_task [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Mar  1 05:12:53 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:12:53 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:12:53 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:12:53.115 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:12:53 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v912: 353 pgs: 353 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 373 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Mar  1 05:12:53 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:12:53 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:12:53 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:12:53.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:12:54 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:12:54 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Mar  1 05:12:54 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:12:54 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Mar  1 05:12:54 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:12:54 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Mar  1 05:12:54 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:12:54 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Mar  1 05:12:55 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:12:55 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 05:12:55 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:12:55.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 05:12:55 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v913: 353 pgs: 353 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 377 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Mar  1 05:12:55 np0005634532 nova_compute[257049]: 2026-03-01 10:12:55.876 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:12:55 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:12:55 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:12:55 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:12:55.938 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:12:56 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:12:57 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: ::ffff:192.168.122.100 - - [01/Mar/2026:10:12:57] "GET /metrics HTTP/1.1" 200 48482 "" "Prometheus/2.51.0"
Mar  1 05:12:57 np0005634532 ceph-mgr[76134]: [prometheus INFO cherrypy.access.140604152982112] ::ffff:192.168.122.100 - - [01/Mar/2026:10:12:57] "GET /metrics HTTP/1.1" 200 48482 "" "Prometheus/2.51.0"
Mar  1 05:12:57 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:12:57 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:12:57 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:12:57.121 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:12:57 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v914: 353 pgs: 353 active+clean; 121 MiB data, 310 MiB used, 60 GiB / 60 GiB avail; 89 KiB/s rd, 107 KiB/s wr, 20 op/s
Mar  1 05:12:57 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:12:57.250Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Mar  1 05:12:57 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:12:57.606 167541 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=10, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ba:77:84', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'd2:e0:96:ea:56:89'}, ipsec=False) old=SB_Global(nb_cfg=9) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Mar  1 05:12:57 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:12:57.607 167541 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Mar  1 05:12:57 np0005634532 nova_compute[257049]: 2026-03-01 10:12:57.606 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:12:57 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:12:57 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:12:57 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:12:57.940 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:12:58 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Mar  1 05:12:58 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2212859896' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Mar  1 05:12:58 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Mar  1 05:12:58 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2212859896' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Mar  1 05:12:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:12:58 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Mar  1 05:12:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:12:58 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Mar  1 05:12:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:12:58 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Mar  1 05:12:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:12:59 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Mar  1 05:12:59 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:12:59 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:12:59 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:12:59.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:12:59 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v915: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 109 KiB/s rd, 114 KiB/s wr, 48 op/s
Mar  1 05:12:59 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:12:59 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:12:59 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:12:59.942 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:13:00 np0005634532 nova_compute[257049]: 2026-03-01 10:13:00.914 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:13:01 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:13:01 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:13:01 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:13:01.127 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:13:01 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v916: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 19 KiB/s wr, 29 op/s
Mar  1 05:13:01 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:13:01 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 05:13:01 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:13:01.944 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 05:13:01 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:13:02 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Mar  1 05:13:02 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Mar  1 05:13:03 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:13:03 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:13:03 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:13:03.130 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:13:03 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v917: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 19 KiB/s wr, 29 op/s
Mar  1 05:13:03 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:13:03 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 05:13:03 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:13:03.946 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 05:13:04 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:13:03 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Mar  1 05:13:04 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:13:03 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Mar  1 05:13:04 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:13:03 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Mar  1 05:13:04 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:13:04 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Mar  1 05:13:05 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:13:05 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:13:05 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:13:05.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:13:05 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v918: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 19 KiB/s wr, 29 op/s
Mar  1 05:13:05 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:13:05.609 167541 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=90b7dc66-b984-4d8b-9541-ddde79c5f544, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '10'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Mar  1 05:13:05 np0005634532 nova_compute[257049]: 2026-03-01 10:13:05.916 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Mar  1 05:13:05 np0005634532 nova_compute[257049]: 2026-03-01 10:13:05.918 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Mar  1 05:13:05 np0005634532 nova_compute[257049]: 2026-03-01 10:13:05.918 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5003 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Mar  1 05:13:05 np0005634532 nova_compute[257049]: 2026-03-01 10:13:05.919 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Mar  1 05:13:05 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:13:05 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:13:05 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:13:05.949 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:13:05 np0005634532 nova_compute[257049]: 2026-03-01 10:13:05.962 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:13:05 np0005634532 nova_compute[257049]: 2026-03-01 10:13:05.963 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Mar  1 05:13:06 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:13:07 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: ::ffff:192.168.122.100 - - [01/Mar/2026:10:13:07] "GET /metrics HTTP/1.1" 200 48461 "" "Prometheus/2.51.0"
Mar  1 05:13:07 np0005634532 ceph-mgr[76134]: [prometheus INFO cherrypy.access.140604152982112] ::ffff:192.168.122.100 - - [01/Mar/2026:10:13:07] "GET /metrics HTTP/1.1" 200 48461 "" "Prometheus/2.51.0"
Mar  1 05:13:07 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:13:07 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:13:07 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:13:07.136 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:13:07 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v919: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 6.5 KiB/s wr, 28 op/s
Mar  1 05:13:07 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:13:07.251Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Mar  1 05:13:07 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:13:07.251Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Mar  1 05:13:07 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:13:07 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:13:07 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:13:07.951 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:13:09 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:13:08 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Mar  1 05:13:09 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:13:08 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Mar  1 05:13:09 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:13:08 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Mar  1 05:13:09 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:13:09 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Mar  1 05:13:09 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:13:09 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:13:09 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:13:09.139 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:13:09 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v920: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 6.5 KiB/s wr, 28 op/s
Mar  1 05:13:09 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:13:09 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:13:09 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:13:09.953 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:13:11 np0005634532 nova_compute[257049]: 2026-03-01 10:13:10.964 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4998-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Mar  1 05:13:11 np0005634532 nova_compute[257049]: 2026-03-01 10:13:10.966 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Mar  1 05:13:11 np0005634532 nova_compute[257049]: 2026-03-01 10:13:10.966 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: idle 5002 ms, sending inactivity probe run /usr/lib64/python3.9/site-packages/ovs/reconnect.py:117
Mar  1 05:13:11 np0005634532 nova_compute[257049]: 2026-03-01 10:13:10.966 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering IDLE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Mar  1 05:13:11 np0005634532 nova_compute[257049]: 2026-03-01 10:13:11.005 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:13:11 np0005634532 nova_compute[257049]: 2026-03-01 10:13:11.005 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Mar  1 05:13:11 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:13:11 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:13:11 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:13:11.141 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:13:11 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v921: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Mar  1 05:13:11 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:13:11 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000024s ======
Mar  1 05:13:11 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:13:11.955 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Mar  1 05:13:11 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:13:13 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:13:13 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:13:13 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:13:13.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:13:13 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v922: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Mar  1 05:13:13 np0005634532 podman[273440]: 2026-03-01 10:13:13.417817726 +0000 UTC m=+0.098467173 container health_status eba5e7c5bf1cb0694498c3b5d0f9b901dec36f2482ec40aef98fed1e15d9f18d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:bf1f44b6bfa655654f556ae3adca2582892e72550d916a5b0a8dbacb0e210bbe, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '08b9467e1b7e95537191fb2fa6825d176e65d7d6be048e9a0925db064969bbde-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:bf1f44b6bfa655654f556ae3adca2582892e72550d916a5b0a8dbacb0e210bbe', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.build-date=20260223)
Mar  1 05:13:13 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:13:13 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:13:13 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:13:13.957 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:13:14 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:13:13 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Mar  1 05:13:14 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:13:13 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Mar  1 05:13:14 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:13:13 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Mar  1 05:13:14 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:13:14 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Mar  1 05:13:15 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:13:15 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:13:15 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:13:15.147 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:13:15 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v923: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Mar  1 05:13:15 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:13:15 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 05:13:15 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:13:15.959 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 05:13:16 np0005634532 nova_compute[257049]: 2026-03-01 10:13:16.007 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 4999-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Mar  1 05:13:16 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:13:17 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: ::ffff:192.168.122.100 - - [01/Mar/2026:10:13:17] "GET /metrics HTTP/1.1" 200 48461 "" "Prometheus/2.51.0"
Mar  1 05:13:17 np0005634532 ceph-mgr[76134]: [prometheus INFO cherrypy.access.140604152982112] ::ffff:192.168.122.100 - - [01/Mar/2026:10:13:17] "GET /metrics HTTP/1.1" 200 48461 "" "Prometheus/2.51.0"
Mar  1 05:13:17 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:13:17 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000024s ======
Mar  1 05:13:17 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:13:17.149 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Mar  1 05:13:17 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v924: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Mar  1 05:13:17 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:13:17.252Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Mar  1 05:13:17 np0005634532 ceph-mgr[76134]: [balancer INFO root] Optimize plan auto_2026-03-01_10:13:17
Mar  1 05:13:17 np0005634532 ceph-mgr[76134]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Mar  1 05:13:17 np0005634532 ceph-mgr[76134]: [balancer INFO root] do_upmap
Mar  1 05:13:17 np0005634532 ceph-mgr[76134]: [balancer INFO root] pools ['.nfs', 'images', '.rgw.root', 'backups', 'cephfs.cephfs.meta', 'volumes', '.mgr', 'vms', 'cephfs.cephfs.data', 'default.rgw.meta', 'default.rgw.log', 'default.rgw.control']
Mar  1 05:13:17 np0005634532 ceph-mgr[76134]: [balancer INFO root] prepared 0/10 upmap changes
Mar  1 05:13:17 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Mar  1 05:13:17 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Mar  1 05:13:17 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Mar  1 05:13:17 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Mar  1 05:13:17 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Mar  1 05:13:17 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Mar  1 05:13:17 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Mar  1 05:13:17 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Mar  1 05:13:17 np0005634532 podman[273495]: 2026-03-01 10:13:17.827672537 +0000 UTC m=+0.073496667 container health_status 1df4dec302d8b40bce93714986cfc892519e8a31436399020d0034ae17ac64d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a5baae9e21dffd42c66f546b0b62f95c6af9f409d05e01e7483de42d5dd2a382, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '08b9467e1b7e95537191fb2fa6825d176e65d7d6be048e9a0925db064969bbde-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a5baae9e21dffd42c66f546b0b62f95c6af9f409d05e01e7483de42d5dd2a382', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_metadata_agent)
Mar  1 05:13:17 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:13:17 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:13:17 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:13:17.961 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:13:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] _maybe_adjust
Mar  1 05:13:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:13:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Mar  1 05:13:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:13:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Mar  1 05:13:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:13:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Mar  1 05:13:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:13:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Mar  1 05:13:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:13:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Mar  1 05:13:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:13:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Mar  1 05:13:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:13:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Mar  1 05:13:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:13:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Mar  1 05:13:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:13:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Mar  1 05:13:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:13:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Mar  1 05:13:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:13:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Mar  1 05:13:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:13:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Mar  1 05:13:19 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:13:18 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Mar  1 05:13:19 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:13:18 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Mar  1 05:13:19 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:13:18 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Mar  1 05:13:19 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:13:19 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Mar  1 05:13:19 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:13:19 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:13:19 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:13:19.153 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:13:19 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v925: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Mar  1 05:13:19 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Mar  1 05:13:19 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: vms, start_after=
Mar  1 05:13:19 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Mar  1 05:13:19 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: vms, start_after=
Mar  1 05:13:19 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: volumes, start_after=
Mar  1 05:13:19 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: volumes, start_after=
Mar  1 05:13:19 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: backups, start_after=
Mar  1 05:13:19 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: backups, start_after=
Mar  1 05:13:19 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: images, start_after=
Mar  1 05:13:19 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: images, start_after=
Mar  1 05:13:19 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:13:19 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000024s ======
Mar  1 05:13:19 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:13:19.963 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Mar  1 05:13:21 np0005634532 nova_compute[257049]: 2026-03-01 10:13:21.009 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:13:21 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v926: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Mar  1 05:13:21 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:13:21 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000024s ======
Mar  1 05:13:21 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:13:21.156 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Mar  1 05:13:21 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:13:21 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:13:21 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:13:21.965 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:13:21 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:13:22 np0005634532 systemd[1]: virtsecretd.service: Deactivated successfully.
Mar  1 05:13:22 np0005634532 nova_compute[257049]: 2026-03-01 10:13:22.886 257053 DEBUG oslo_concurrency.lockutils [None req-9c42ab53-1c13-44c8-ab78-d7f030473866 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Acquiring lock "baa5d1fc-2fe6-4353-9321-71ddf8760c24" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Mar  1 05:13:22 np0005634532 nova_compute[257049]: 2026-03-01 10:13:22.887 257053 DEBUG oslo_concurrency.lockutils [None req-9c42ab53-1c13-44c8-ab78-d7f030473866 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Lock "baa5d1fc-2fe6-4353-9321-71ddf8760c24" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Mar  1 05:13:22 np0005634532 nova_compute[257049]: 2026-03-01 10:13:22.904 257053 DEBUG nova.compute.manager [None req-9c42ab53-1c13-44c8-ab78-d7f030473866 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] [instance: baa5d1fc-2fe6-4353-9321-71ddf8760c24] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Mar  1 05:13:22 np0005634532 nova_compute[257049]: 2026-03-01 10:13:22.969 257053 DEBUG oslo_concurrency.lockutils [None req-9c42ab53-1c13-44c8-ab78-d7f030473866 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Mar  1 05:13:22 np0005634532 nova_compute[257049]: 2026-03-01 10:13:22.969 257053 DEBUG oslo_concurrency.lockutils [None req-9c42ab53-1c13-44c8-ab78-d7f030473866 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Mar  1 05:13:22 np0005634532 nova_compute[257049]: 2026-03-01 10:13:22.976 257053 DEBUG nova.virt.hardware [None req-9c42ab53-1c13-44c8-ab78-d7f030473866 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Mar  1 05:13:22 np0005634532 nova_compute[257049]: 2026-03-01 10:13:22.977 257053 INFO nova.compute.claims [None req-9c42ab53-1c13-44c8-ab78-d7f030473866 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] [instance: baa5d1fc-2fe6-4353-9321-71ddf8760c24] Claim successful on node compute-0.ctlplane.example.com#033[00m
Mar  1 05:13:23 np0005634532 nova_compute[257049]: 2026-03-01 10:13:23.069 257053 DEBUG oslo_concurrency.processutils [None req-9c42ab53-1c13-44c8-ab78-d7f030473866 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Mar  1 05:13:23 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v927: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Mar  1 05:13:23 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:13:23 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:13:23 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:13:23.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:13:23 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Mar  1 05:13:23 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2675681201' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Mar  1 05:13:23 np0005634532 nova_compute[257049]: 2026-03-01 10:13:23.564 257053 DEBUG oslo_concurrency.processutils [None req-9c42ab53-1c13-44c8-ab78-d7f030473866 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.495s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Mar  1 05:13:23 np0005634532 nova_compute[257049]: 2026-03-01 10:13:23.570 257053 DEBUG nova.compute.provider_tree [None req-9c42ab53-1c13-44c8-ab78-d7f030473866 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Inventory has not changed in ProviderTree for provider: 018d246d-1e01-4168-9128-598c5501111b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Mar  1 05:13:23 np0005634532 nova_compute[257049]: 2026-03-01 10:13:23.598 257053 DEBUG nova.scheduler.client.report [None req-9c42ab53-1c13-44c8-ab78-d7f030473866 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Inventory has not changed for provider 018d246d-1e01-4168-9128-598c5501111b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Mar  1 05:13:23 np0005634532 nova_compute[257049]: 2026-03-01 10:13:23.644 257053 DEBUG oslo_concurrency.lockutils [None req-9c42ab53-1c13-44c8-ab78-d7f030473866 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.675s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Mar  1 05:13:23 np0005634532 nova_compute[257049]: 2026-03-01 10:13:23.645 257053 DEBUG nova.compute.manager [None req-9c42ab53-1c13-44c8-ab78-d7f030473866 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] [instance: baa5d1fc-2fe6-4353-9321-71ddf8760c24] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Mar  1 05:13:23 np0005634532 nova_compute[257049]: 2026-03-01 10:13:23.750 257053 DEBUG nova.compute.manager [None req-9c42ab53-1c13-44c8-ab78-d7f030473866 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] [instance: baa5d1fc-2fe6-4353-9321-71ddf8760c24] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Mar  1 05:13:23 np0005634532 nova_compute[257049]: 2026-03-01 10:13:23.750 257053 DEBUG nova.network.neutron [None req-9c42ab53-1c13-44c8-ab78-d7f030473866 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] [instance: baa5d1fc-2fe6-4353-9321-71ddf8760c24] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Mar  1 05:13:23 np0005634532 nova_compute[257049]: 2026-03-01 10:13:23.770 257053 INFO nova.virt.libvirt.driver [None req-9c42ab53-1c13-44c8-ab78-d7f030473866 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] [instance: baa5d1fc-2fe6-4353-9321-71ddf8760c24] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Mar  1 05:13:23 np0005634532 nova_compute[257049]: 2026-03-01 10:13:23.790 257053 DEBUG nova.compute.manager [None req-9c42ab53-1c13-44c8-ab78-d7f030473866 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] [instance: baa5d1fc-2fe6-4353-9321-71ddf8760c24] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Mar  1 05:13:23 np0005634532 nova_compute[257049]: 2026-03-01 10:13:23.870 257053 DEBUG nova.compute.manager [None req-9c42ab53-1c13-44c8-ab78-d7f030473866 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] [instance: baa5d1fc-2fe6-4353-9321-71ddf8760c24] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Mar  1 05:13:23 np0005634532 nova_compute[257049]: 2026-03-01 10:13:23.872 257053 DEBUG nova.virt.libvirt.driver [None req-9c42ab53-1c13-44c8-ab78-d7f030473866 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] [instance: baa5d1fc-2fe6-4353-9321-71ddf8760c24] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Mar  1 05:13:23 np0005634532 nova_compute[257049]: 2026-03-01 10:13:23.873 257053 INFO nova.virt.libvirt.driver [None req-9c42ab53-1c13-44c8-ab78-d7f030473866 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] [instance: baa5d1fc-2fe6-4353-9321-71ddf8760c24] Creating image(s)#033[00m
Mar  1 05:13:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:13:23.886 167541 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Mar  1 05:13:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:13:23.887 167541 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Mar  1 05:13:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:13:23.887 167541 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Mar  1 05:13:23 np0005634532 nova_compute[257049]: 2026-03-01 10:13:23.913 257053 DEBUG nova.storage.rbd_utils [None req-9c42ab53-1c13-44c8-ab78-d7f030473866 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] rbd image baa5d1fc-2fe6-4353-9321-71ddf8760c24_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Mar  1 05:13:23 np0005634532 nova_compute[257049]: 2026-03-01 10:13:23.952 257053 DEBUG nova.storage.rbd_utils [None req-9c42ab53-1c13-44c8-ab78-d7f030473866 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] rbd image baa5d1fc-2fe6-4353-9321-71ddf8760c24_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Mar  1 05:13:23 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:13:23 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:13:23 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:13:23.967 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:13:23 np0005634532 nova_compute[257049]: 2026-03-01 10:13:23.990 257053 DEBUG nova.storage.rbd_utils [None req-9c42ab53-1c13-44c8-ab78-d7f030473866 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] rbd image baa5d1fc-2fe6-4353-9321-71ddf8760c24_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Mar  1 05:13:23 np0005634532 nova_compute[257049]: 2026-03-01 10:13:23.996 257053 DEBUG oslo_concurrency.processutils [None req-9c42ab53-1c13-44c8-ab78-d7f030473866 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/d41046c43044bf8997bc5f9ade85627ba841861d --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Mar  1 05:13:24 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:13:23 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Mar  1 05:13:24 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:13:23 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Mar  1 05:13:24 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:13:23 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Mar  1 05:13:24 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:13:24 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Mar  1 05:13:24 np0005634532 nova_compute[257049]: 2026-03-01 10:13:24.057 257053 DEBUG oslo_concurrency.processutils [None req-9c42ab53-1c13-44c8-ab78-d7f030473866 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/d41046c43044bf8997bc5f9ade85627ba841861d --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Mar  1 05:13:24 np0005634532 nova_compute[257049]: 2026-03-01 10:13:24.058 257053 DEBUG oslo_concurrency.lockutils [None req-9c42ab53-1c13-44c8-ab78-d7f030473866 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Acquiring lock "d41046c43044bf8997bc5f9ade85627ba841861d" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Mar  1 05:13:24 np0005634532 nova_compute[257049]: 2026-03-01 10:13:24.058 257053 DEBUG oslo_concurrency.lockutils [None req-9c42ab53-1c13-44c8-ab78-d7f030473866 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Lock "d41046c43044bf8997bc5f9ade85627ba841861d" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Mar  1 05:13:24 np0005634532 nova_compute[257049]: 2026-03-01 10:13:24.058 257053 DEBUG oslo_concurrency.lockutils [None req-9c42ab53-1c13-44c8-ab78-d7f030473866 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Lock "d41046c43044bf8997bc5f9ade85627ba841861d" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Mar  1 05:13:24 np0005634532 nova_compute[257049]: 2026-03-01 10:13:24.085 257053 DEBUG nova.storage.rbd_utils [None req-9c42ab53-1c13-44c8-ab78-d7f030473866 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] rbd image baa5d1fc-2fe6-4353-9321-71ddf8760c24_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Mar  1 05:13:24 np0005634532 nova_compute[257049]: 2026-03-01 10:13:24.089 257053 DEBUG oslo_concurrency.processutils [None req-9c42ab53-1c13-44c8-ab78-d7f030473866 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/d41046c43044bf8997bc5f9ade85627ba841861d baa5d1fc-2fe6-4353-9321-71ddf8760c24_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Mar  1 05:13:24 np0005634532 nova_compute[257049]: 2026-03-01 10:13:24.104 257053 DEBUG nova.policy [None req-9c42ab53-1c13-44c8-ab78-d7f030473866 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '054b4e3fa290475c906614f7e45d128f', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'aa1916e2334f470ea8eeda213ef84cc5', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Mar  1 05:13:24 np0005634532 nova_compute[257049]: 2026-03-01 10:13:24.362 257053 DEBUG oslo_concurrency.processutils [None req-9c42ab53-1c13-44c8-ab78-d7f030473866 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/d41046c43044bf8997bc5f9ade85627ba841861d baa5d1fc-2fe6-4353-9321-71ddf8760c24_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.273s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Mar  1 05:13:24 np0005634532 nova_compute[257049]: 2026-03-01 10:13:24.464 257053 DEBUG nova.storage.rbd_utils [None req-9c42ab53-1c13-44c8-ab78-d7f030473866 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] resizing rbd image baa5d1fc-2fe6-4353-9321-71ddf8760c24_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Mar  1 05:13:24 np0005634532 nova_compute[257049]: 2026-03-01 10:13:24.585 257053 DEBUG nova.objects.instance [None req-9c42ab53-1c13-44c8-ab78-d7f030473866 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Lazy-loading 'migration_context' on Instance uuid baa5d1fc-2fe6-4353-9321-71ddf8760c24 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Mar  1 05:13:24 np0005634532 nova_compute[257049]: 2026-03-01 10:13:24.601 257053 DEBUG nova.virt.libvirt.driver [None req-9c42ab53-1c13-44c8-ab78-d7f030473866 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] [instance: baa5d1fc-2fe6-4353-9321-71ddf8760c24] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Mar  1 05:13:24 np0005634532 nova_compute[257049]: 2026-03-01 10:13:24.602 257053 DEBUG nova.virt.libvirt.driver [None req-9c42ab53-1c13-44c8-ab78-d7f030473866 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] [instance: baa5d1fc-2fe6-4353-9321-71ddf8760c24] Ensure instance console log exists: /var/lib/nova/instances/baa5d1fc-2fe6-4353-9321-71ddf8760c24/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Mar  1 05:13:24 np0005634532 nova_compute[257049]: 2026-03-01 10:13:24.602 257053 DEBUG oslo_concurrency.lockutils [None req-9c42ab53-1c13-44c8-ab78-d7f030473866 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Mar  1 05:13:24 np0005634532 nova_compute[257049]: 2026-03-01 10:13:24.603 257053 DEBUG oslo_concurrency.lockutils [None req-9c42ab53-1c13-44c8-ab78-d7f030473866 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Mar  1 05:13:24 np0005634532 nova_compute[257049]: 2026-03-01 10:13:24.603 257053 DEBUG oslo_concurrency.lockutils [None req-9c42ab53-1c13-44c8-ab78-d7f030473866 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Mar  1 05:13:24 np0005634532 nova_compute[257049]: 2026-03-01 10:13:24.766 257053 DEBUG nova.network.neutron [None req-9c42ab53-1c13-44c8-ab78-d7f030473866 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] [instance: baa5d1fc-2fe6-4353-9321-71ddf8760c24] Successfully created port: 79c2bbef-b2db-45bd-91c7-0e64bcb15301 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Mar  1 05:13:25 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v928: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Mar  1 05:13:25 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:13:25 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 05:13:25 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:13:25.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 05:13:25 np0005634532 nova_compute[257049]: 2026-03-01 10:13:25.617 257053 DEBUG nova.network.neutron [None req-9c42ab53-1c13-44c8-ab78-d7f030473866 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] [instance: baa5d1fc-2fe6-4353-9321-71ddf8760c24] Successfully updated port: 79c2bbef-b2db-45bd-91c7-0e64bcb15301 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Mar  1 05:13:25 np0005634532 nova_compute[257049]: 2026-03-01 10:13:25.633 257053 DEBUG oslo_concurrency.lockutils [None req-9c42ab53-1c13-44c8-ab78-d7f030473866 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Acquiring lock "refresh_cache-baa5d1fc-2fe6-4353-9321-71ddf8760c24" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Mar  1 05:13:25 np0005634532 nova_compute[257049]: 2026-03-01 10:13:25.633 257053 DEBUG oslo_concurrency.lockutils [None req-9c42ab53-1c13-44c8-ab78-d7f030473866 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Acquired lock "refresh_cache-baa5d1fc-2fe6-4353-9321-71ddf8760c24" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Mar  1 05:13:25 np0005634532 nova_compute[257049]: 2026-03-01 10:13:25.633 257053 DEBUG nova.network.neutron [None req-9c42ab53-1c13-44c8-ab78-d7f030473866 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] [instance: baa5d1fc-2fe6-4353-9321-71ddf8760c24] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Mar  1 05:13:25 np0005634532 nova_compute[257049]: 2026-03-01 10:13:25.710 257053 DEBUG nova.compute.manager [req-940f5beb-af65-4986-a0ac-492ffd7626ba req-f8c3c214-c69c-4e66-914e-9e66bd2bddfa 4172bfcb2ac44ccc905f3929f41a6ec0 b5baddd982154c5d8ca5431b102a7ecd - - default default] [instance: baa5d1fc-2fe6-4353-9321-71ddf8760c24] Received event network-changed-79c2bbef-b2db-45bd-91c7-0e64bcb15301 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Mar  1 05:13:25 np0005634532 nova_compute[257049]: 2026-03-01 10:13:25.711 257053 DEBUG nova.compute.manager [req-940f5beb-af65-4986-a0ac-492ffd7626ba req-f8c3c214-c69c-4e66-914e-9e66bd2bddfa 4172bfcb2ac44ccc905f3929f41a6ec0 b5baddd982154c5d8ca5431b102a7ecd - - default default] [instance: baa5d1fc-2fe6-4353-9321-71ddf8760c24] Refreshing instance network info cache due to event network-changed-79c2bbef-b2db-45bd-91c7-0e64bcb15301. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Mar  1 05:13:25 np0005634532 nova_compute[257049]: 2026-03-01 10:13:25.711 257053 DEBUG oslo_concurrency.lockutils [req-940f5beb-af65-4986-a0ac-492ffd7626ba req-f8c3c214-c69c-4e66-914e-9e66bd2bddfa 4172bfcb2ac44ccc905f3929f41a6ec0 b5baddd982154c5d8ca5431b102a7ecd - - default default] Acquiring lock "refresh_cache-baa5d1fc-2fe6-4353-9321-71ddf8760c24" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Mar  1 05:13:25 np0005634532 nova_compute[257049]: 2026-03-01 10:13:25.770 257053 DEBUG nova.network.neutron [None req-9c42ab53-1c13-44c8-ab78-d7f030473866 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] [instance: baa5d1fc-2fe6-4353-9321-71ddf8760c24] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Mar  1 05:13:25 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:13:25 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 05:13:25 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:13:25.969 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 05:13:26 np0005634532 nova_compute[257049]: 2026-03-01 10:13:26.010 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:13:26 np0005634532 nova_compute[257049]: 2026-03-01 10:13:26.501 257053 DEBUG nova.network.neutron [None req-9c42ab53-1c13-44c8-ab78-d7f030473866 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] [instance: baa5d1fc-2fe6-4353-9321-71ddf8760c24] Updating instance_info_cache with network_info: [{"id": "79c2bbef-b2db-45bd-91c7-0e64bcb15301", "address": "fa:16:3e:cd:0b:b9", "network": {"id": "75774369-d1fe-46b7-99fa-32ee72215bc9", "bridge": "br-int", "label": "tempest-network-smoke--723889176", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aa1916e2334f470ea8eeda213ef84cc5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap79c2bbef-b2", "ovs_interfaceid": "79c2bbef-b2db-45bd-91c7-0e64bcb15301", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Mar  1 05:13:26 np0005634532 nova_compute[257049]: 2026-03-01 10:13:26.524 257053 DEBUG oslo_concurrency.lockutils [None req-9c42ab53-1c13-44c8-ab78-d7f030473866 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Releasing lock "refresh_cache-baa5d1fc-2fe6-4353-9321-71ddf8760c24" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Mar  1 05:13:26 np0005634532 nova_compute[257049]: 2026-03-01 10:13:26.524 257053 DEBUG nova.compute.manager [None req-9c42ab53-1c13-44c8-ab78-d7f030473866 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] [instance: baa5d1fc-2fe6-4353-9321-71ddf8760c24] Instance network_info: |[{"id": "79c2bbef-b2db-45bd-91c7-0e64bcb15301", "address": "fa:16:3e:cd:0b:b9", "network": {"id": "75774369-d1fe-46b7-99fa-32ee72215bc9", "bridge": "br-int", "label": "tempest-network-smoke--723889176", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aa1916e2334f470ea8eeda213ef84cc5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap79c2bbef-b2", "ovs_interfaceid": "79c2bbef-b2db-45bd-91c7-0e64bcb15301", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Mar  1 05:13:26 np0005634532 nova_compute[257049]: 2026-03-01 10:13:26.525 257053 DEBUG oslo_concurrency.lockutils [req-940f5beb-af65-4986-a0ac-492ffd7626ba req-f8c3c214-c69c-4e66-914e-9e66bd2bddfa 4172bfcb2ac44ccc905f3929f41a6ec0 b5baddd982154c5d8ca5431b102a7ecd - - default default] Acquired lock "refresh_cache-baa5d1fc-2fe6-4353-9321-71ddf8760c24" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Mar  1 05:13:26 np0005634532 nova_compute[257049]: 2026-03-01 10:13:26.525 257053 DEBUG nova.network.neutron [req-940f5beb-af65-4986-a0ac-492ffd7626ba req-f8c3c214-c69c-4e66-914e-9e66bd2bddfa 4172bfcb2ac44ccc905f3929f41a6ec0 b5baddd982154c5d8ca5431b102a7ecd - - default default] [instance: baa5d1fc-2fe6-4353-9321-71ddf8760c24] Refreshing network info cache for port 79c2bbef-b2db-45bd-91c7-0e64bcb15301 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Mar  1 05:13:26 np0005634532 nova_compute[257049]: 2026-03-01 10:13:26.528 257053 DEBUG nova.virt.libvirt.driver [None req-9c42ab53-1c13-44c8-ab78-d7f030473866 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] [instance: baa5d1fc-2fe6-4353-9321-71ddf8760c24] Start _get_guest_xml network_info=[{"id": "79c2bbef-b2db-45bd-91c7-0e64bcb15301", "address": "fa:16:3e:cd:0b:b9", "network": {"id": "75774369-d1fe-46b7-99fa-32ee72215bc9", "bridge": "br-int", "label": "tempest-network-smoke--723889176", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aa1916e2334f470ea8eeda213ef84cc5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap79c2bbef-b2", "ovs_interfaceid": "79c2bbef-b2db-45bd-91c7-0e64bcb15301", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-03-01T10:04:37Z,direct_url=<?>,disk_format='qcow2',id=07f64171-cfd1-4482-a545-07063cf7c3f2,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4d09211c005246538db05e74184b7e61',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-03-01T10:04:40Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'size': 0, 'boot_index': 0, 'guest_format': None, 'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'disk_bus': 'virtio', 'encryption_options': None, 'encrypted': False, 'encryption_format': None, 'image_id': '07f64171-cfd1-4482-a545-07063cf7c3f2'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Mar  1 05:13:26 np0005634532 nova_compute[257049]: 2026-03-01 10:13:26.532 257053 WARNING nova.virt.libvirt.driver [None req-9c42ab53-1c13-44c8-ab78-d7f030473866 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Mar  1 05:13:26 np0005634532 nova_compute[257049]: 2026-03-01 10:13:26.537 257053 DEBUG nova.virt.libvirt.host [None req-9c42ab53-1c13-44c8-ab78-d7f030473866 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Mar  1 05:13:26 np0005634532 nova_compute[257049]: 2026-03-01 10:13:26.538 257053 DEBUG nova.virt.libvirt.host [None req-9c42ab53-1c13-44c8-ab78-d7f030473866 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Mar  1 05:13:26 np0005634532 nova_compute[257049]: 2026-03-01 10:13:26.545 257053 DEBUG nova.virt.libvirt.host [None req-9c42ab53-1c13-44c8-ab78-d7f030473866 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Mar  1 05:13:26 np0005634532 nova_compute[257049]: 2026-03-01 10:13:26.546 257053 DEBUG nova.virt.libvirt.host [None req-9c42ab53-1c13-44c8-ab78-d7f030473866 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Mar  1 05:13:26 np0005634532 nova_compute[257049]: 2026-03-01 10:13:26.546 257053 DEBUG nova.virt.libvirt.driver [None req-9c42ab53-1c13-44c8-ab78-d7f030473866 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Mar  1 05:13:26 np0005634532 nova_compute[257049]: 2026-03-01 10:13:26.547 257053 DEBUG nova.virt.hardware [None req-9c42ab53-1c13-44c8-ab78-d7f030473866 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-03-01T10:04:35Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='47cd4c38-4c43-414c-bd62-23cc1dc66486',id=1,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-03-01T10:04:37Z,direct_url=<?>,disk_format='qcow2',id=07f64171-cfd1-4482-a545-07063cf7c3f2,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='4d09211c005246538db05e74184b7e61',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-03-01T10:04:40Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Mar  1 05:13:26 np0005634532 nova_compute[257049]: 2026-03-01 10:13:26.547 257053 DEBUG nova.virt.hardware [None req-9c42ab53-1c13-44c8-ab78-d7f030473866 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Mar  1 05:13:26 np0005634532 nova_compute[257049]: 2026-03-01 10:13:26.547 257053 DEBUG nova.virt.hardware [None req-9c42ab53-1c13-44c8-ab78-d7f030473866 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Mar  1 05:13:26 np0005634532 nova_compute[257049]: 2026-03-01 10:13:26.548 257053 DEBUG nova.virt.hardware [None req-9c42ab53-1c13-44c8-ab78-d7f030473866 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Mar  1 05:13:26 np0005634532 nova_compute[257049]: 2026-03-01 10:13:26.548 257053 DEBUG nova.virt.hardware [None req-9c42ab53-1c13-44c8-ab78-d7f030473866 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Mar  1 05:13:26 np0005634532 nova_compute[257049]: 2026-03-01 10:13:26.548 257053 DEBUG nova.virt.hardware [None req-9c42ab53-1c13-44c8-ab78-d7f030473866 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Mar  1 05:13:26 np0005634532 nova_compute[257049]: 2026-03-01 10:13:26.548 257053 DEBUG nova.virt.hardware [None req-9c42ab53-1c13-44c8-ab78-d7f030473866 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Mar  1 05:13:26 np0005634532 nova_compute[257049]: 2026-03-01 10:13:26.549 257053 DEBUG nova.virt.hardware [None req-9c42ab53-1c13-44c8-ab78-d7f030473866 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Mar  1 05:13:26 np0005634532 nova_compute[257049]: 2026-03-01 10:13:26.549 257053 DEBUG nova.virt.hardware [None req-9c42ab53-1c13-44c8-ab78-d7f030473866 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Mar  1 05:13:26 np0005634532 nova_compute[257049]: 2026-03-01 10:13:26.549 257053 DEBUG nova.virt.hardware [None req-9c42ab53-1c13-44c8-ab78-d7f030473866 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Mar  1 05:13:26 np0005634532 nova_compute[257049]: 2026-03-01 10:13:26.549 257053 DEBUG nova.virt.hardware [None req-9c42ab53-1c13-44c8-ab78-d7f030473866 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
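
[note] The nova.virt.hardware lines above are the complete CPU topology decision for the m1.nano flavor: no flavor or image constraints (all 0:0:0), default maxima of 65536, so the only candidate for a single vCPU is sockets=1, cores=1, threads=1. A minimal sketch of that enumeration step, assuming the simple "product equals vCPU count" rule the log reflects; this is an illustration, not Nova's actual implementation:

    # Enumerate every (sockets, cores, threads) triple whose product equals
    # the flavor's vCPU count, capped by the 65536 default limits seen above.
    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536, max_threads=65536):
        for sockets in range(1, min(vcpus, max_sockets) + 1):
            for cores in range(1, min(vcpus, max_cores) + 1):
                for threads in range(1, min(vcpus, max_threads) + 1):
                    if sockets * cores * threads == vcpus:
                        yield (sockets, cores, threads)

    # For vcpus=1 there is exactly one result, matching the log's
    # "Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)]".
    print(list(possible_topologies(1)))  # [(1, 1, 1)]
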
Mar  1 05:13:26 np0005634532 nova_compute[257049]: 2026-03-01 10:13:26.552 257053 DEBUG oslo_concurrency.processutils [None req-9c42ab53-1c13-44c8-ab78-d7f030473866 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Mar  1 05:13:26 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:13:26 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Mar  1 05:13:26 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4047415564' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Mar  1 05:13:27 np0005634532 nova_compute[257049]: 2026-03-01 10:13:27.013 257053 DEBUG oslo_concurrency.processutils [None req-9c42ab53-1c13-44c8-ab78-d7f030473866 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
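
[note] Before touching RBD, Nova shells out to "ceph mon dump --format=json" to discover the monitor endpoints (0.460s on this run, audited by ceph-mon above). A hedged sketch of issuing the same command and pulling the mon addresses back out; the "mons"/"public_addr" JSON keys are what recent Ceph releases emit, so verify them against your cluster's output:

    import json
    import subprocess

    # The exact command oslo_concurrency logs above.
    cmd = ["ceph", "mon", "dump", "--format=json",
           "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"]
    out = subprocess.run(cmd, capture_output=True, text=True, check=True).stdout

    monmap = json.loads(out)
    for mon in monmap.get("mons", []):  # key names assumed; check your Ceph version
        print(mon.get("name"), mon.get("public_addr"))
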
Mar  1 05:13:27 np0005634532 nova_compute[257049]: 2026-03-01 10:13:27.042 257053 DEBUG nova.storage.rbd_utils [None req-9c42ab53-1c13-44c8-ab78-d7f030473866 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] rbd image baa5d1fc-2fe6-4353-9321-71ddf8760c24_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Mar  1 05:13:27 np0005634532 nova_compute[257049]: 2026-03-01 10:13:27.045 257053 DEBUG oslo_concurrency.processutils [None req-9c42ab53-1c13-44c8-ab78-d7f030473866 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Mar  1 05:13:27 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: ::ffff:192.168.122.100 - - [01/Mar/2026:10:13:27] "GET /metrics HTTP/1.1" 200 48459 "" "Prometheus/2.51.0"
Mar  1 05:13:27 np0005634532 ceph-mgr[76134]: [prometheus INFO cherrypy.access.140604152982112] ::ffff:192.168.122.100 - - [01/Mar/2026:10:13:27] "GET /metrics HTTP/1.1" 200 48459 "" "Prometheus/2.51.0"
Mar  1 05:13:27 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v929: 353 pgs: 353 active+clean; 41 MiB data, 268 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Mar  1 05:13:27 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:13:27 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 05:13:27 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:13:27.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 05:13:27 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:13:27.253Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Mar  1 05:13:27 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:13:27.253Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Mar  1 05:13:27 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:13:27.253Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
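
[note] The three alertmanager lines above are the only hard failures in this stretch: the ceph-dashboard webhook receivers on compute-1 and compute-2 (port 8443) time out at TCP connect, so the notification is dropped after two attempts. A quick stdlib probe to reproduce the same connect path from this node; a hypothetical check, with hosts and port taken from the failing URLs:

    import socket

    for host in ("compute-1.ctlplane.example.com", "compute-2.ctlplane.example.com"):
        try:
            # The same TCP connect alertmanager attempts before its HTTP POST.
            socket.create_connection((host, 8443), timeout=5).close()
            print(host, "port 8443 reachable")
        except OSError as exc:
            print(host, "port 8443 unreachable:", exc)
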
Mar  1 05:13:27 np0005634532 nova_compute[257049]: 2026-03-01 10:13:27.399 257053 DEBUG nova.network.neutron [req-940f5beb-af65-4986-a0ac-492ffd7626ba req-f8c3c214-c69c-4e66-914e-9e66bd2bddfa 4172bfcb2ac44ccc905f3929f41a6ec0 b5baddd982154c5d8ca5431b102a7ecd - - default default] [instance: baa5d1fc-2fe6-4353-9321-71ddf8760c24] Updated VIF entry in instance network info cache for port 79c2bbef-b2db-45bd-91c7-0e64bcb15301. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Mar  1 05:13:27 np0005634532 nova_compute[257049]: 2026-03-01 10:13:27.400 257053 DEBUG nova.network.neutron [req-940f5beb-af65-4986-a0ac-492ffd7626ba req-f8c3c214-c69c-4e66-914e-9e66bd2bddfa 4172bfcb2ac44ccc905f3929f41a6ec0 b5baddd982154c5d8ca5431b102a7ecd - - default default] [instance: baa5d1fc-2fe6-4353-9321-71ddf8760c24] Updating instance_info_cache with network_info: [{"id": "79c2bbef-b2db-45bd-91c7-0e64bcb15301", "address": "fa:16:3e:cd:0b:b9", "network": {"id": "75774369-d1fe-46b7-99fa-32ee72215bc9", "bridge": "br-int", "label": "tempest-network-smoke--723889176", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aa1916e2334f470ea8eeda213ef84cc5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap79c2bbef-b2", "ovs_interfaceid": "79c2bbef-b2db-45bd-91c7-0e64bcb15301", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Mar  1 05:13:27 np0005634532 nova_compute[257049]: 2026-03-01 10:13:27.415 257053 DEBUG oslo_concurrency.lockutils [req-940f5beb-af65-4986-a0ac-492ffd7626ba req-f8c3c214-c69c-4e66-914e-9e66bd2bddfa 4172bfcb2ac44ccc905f3929f41a6ec0 b5baddd982154c5d8ca5431b102a7ecd - - default default] Releasing lock "refresh_cache-baa5d1fc-2fe6-4353-9321-71ddf8760c24" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Mar  1 05:13:27 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0)
Mar  1 05:13:27 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/321747114' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Mar  1 05:13:27 np0005634532 nova_compute[257049]: 2026-03-01 10:13:27.503 257053 DEBUG oslo_concurrency.processutils [None req-9c42ab53-1c13-44c8-ab78-d7f030473866 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Mar  1 05:13:27 np0005634532 nova_compute[257049]: 2026-03-01 10:13:27.504 257053 DEBUG nova.virt.libvirt.vif [None req-9c42ab53-1c13-44c8-ab78-d7f030473866 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-03-01T10:13:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-931656500',display_name='tempest-TestNetworkBasicOps-server-931656500',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-931656500',id=11,image_ref='07f64171-cfd1-4482-a545-07063cf7c3f2',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKcvsPe/bt7wQGCO3x2FGW9xBp8tDOsdxNbRfhpUGAYX67H9M5t4jXrMEzIEWqxq1Vp1kSYaQSgdvRX6E2zcqTGcl8mdrZndaFhbtzpxPcNDvgQoPPzNGgz+HuvTpqMgVw==',key_name='tempest-TestNetworkBasicOps-2130166709',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='aa1916e2334f470ea8eeda213ef84cc5',ramdisk_id='',reservation_id='r-42fyf790',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='07f64171-cfd1-4482-a545-07063cf7c3f2',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1700707940',owner_user_name='tempest-TestNetworkBasicOps-1700707940-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-03-01T10:13:23Z,user_data=None,user_id='054b4e3fa290475c906614f7e45d128f',uuid=baa5d1fc-2fe6-4353-9321-71ddf8760c24,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "79c2bbef-b2db-45bd-91c7-0e64bcb15301", "address": "fa:16:3e:cd:0b:b9", "network": {"id": "75774369-d1fe-46b7-99fa-32ee72215bc9", "bridge": "br-int", "label": "tempest-network-smoke--723889176", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aa1916e2334f470ea8eeda213ef84cc5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap79c2bbef-b2", "ovs_interfaceid": "79c2bbef-b2db-45bd-91c7-0e64bcb15301", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Mar  1 05:13:27 np0005634532 nova_compute[257049]: 2026-03-01 10:13:27.504 257053 DEBUG nova.network.os_vif_util [None req-9c42ab53-1c13-44c8-ab78-d7f030473866 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Converting VIF {"id": "79c2bbef-b2db-45bd-91c7-0e64bcb15301", "address": "fa:16:3e:cd:0b:b9", "network": {"id": "75774369-d1fe-46b7-99fa-32ee72215bc9", "bridge": "br-int", "label": "tempest-network-smoke--723889176", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aa1916e2334f470ea8eeda213ef84cc5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap79c2bbef-b2", "ovs_interfaceid": "79c2bbef-b2db-45bd-91c7-0e64bcb15301", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Mar  1 05:13:27 np0005634532 nova_compute[257049]: 2026-03-01 10:13:27.505 257053 DEBUG nova.network.os_vif_util [None req-9c42ab53-1c13-44c8-ab78-d7f030473866 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:cd:0b:b9,bridge_name='br-int',has_traffic_filtering=True,id=79c2bbef-b2db-45bd-91c7-0e64bcb15301,network=Network(75774369-d1fe-46b7-99fa-32ee72215bc9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap79c2bbef-b2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Mar  1 05:13:27 np0005634532 nova_compute[257049]: 2026-03-01 10:13:27.507 257053 DEBUG nova.objects.instance [None req-9c42ab53-1c13-44c8-ab78-d7f030473866 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Lazy-loading 'pci_devices' on Instance uuid baa5d1fc-2fe6-4353-9321-71ddf8760c24 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Mar  1 05:13:27 np0005634532 nova_compute[257049]: 2026-03-01 10:13:27.520 257053 DEBUG nova.virt.libvirt.driver [None req-9c42ab53-1c13-44c8-ab78-d7f030473866 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] [instance: baa5d1fc-2fe6-4353-9321-71ddf8760c24] End _get_guest_xml xml=<domain type="kvm">
Mar  1 05:13:27 np0005634532 nova_compute[257049]:  <uuid>baa5d1fc-2fe6-4353-9321-71ddf8760c24</uuid>
Mar  1 05:13:27 np0005634532 nova_compute[257049]:  <name>instance-0000000b</name>
Mar  1 05:13:27 np0005634532 nova_compute[257049]:  <memory>131072</memory>
Mar  1 05:13:27 np0005634532 nova_compute[257049]:  <vcpu>1</vcpu>
Mar  1 05:13:27 np0005634532 nova_compute[257049]:  <metadata>
Mar  1 05:13:27 np0005634532 nova_compute[257049]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Mar  1 05:13:27 np0005634532 nova_compute[257049]:      <nova:package version="27.5.2-0.20260220085704.5cfeecb.el9"/>
Mar  1 05:13:27 np0005634532 nova_compute[257049]:      <nova:name>tempest-TestNetworkBasicOps-server-931656500</nova:name>
Mar  1 05:13:27 np0005634532 nova_compute[257049]:      <nova:creationTime>2026-03-01 10:13:26</nova:creationTime>
Mar  1 05:13:27 np0005634532 nova_compute[257049]:      <nova:flavor name="m1.nano">
Mar  1 05:13:27 np0005634532 nova_compute[257049]:        <nova:memory>128</nova:memory>
Mar  1 05:13:27 np0005634532 nova_compute[257049]:        <nova:disk>1</nova:disk>
Mar  1 05:13:27 np0005634532 nova_compute[257049]:        <nova:swap>0</nova:swap>
Mar  1 05:13:27 np0005634532 nova_compute[257049]:        <nova:ephemeral>0</nova:ephemeral>
Mar  1 05:13:27 np0005634532 nova_compute[257049]:        <nova:vcpus>1</nova:vcpus>
Mar  1 05:13:27 np0005634532 nova_compute[257049]:      </nova:flavor>
Mar  1 05:13:27 np0005634532 nova_compute[257049]:      <nova:owner>
Mar  1 05:13:27 np0005634532 nova_compute[257049]:        <nova:user uuid="054b4e3fa290475c906614f7e45d128f">tempest-TestNetworkBasicOps-1700707940-project-member</nova:user>
Mar  1 05:13:27 np0005634532 nova_compute[257049]:        <nova:project uuid="aa1916e2334f470ea8eeda213ef84cc5">tempest-TestNetworkBasicOps-1700707940</nova:project>
Mar  1 05:13:27 np0005634532 nova_compute[257049]:      </nova:owner>
Mar  1 05:13:27 np0005634532 nova_compute[257049]:      <nova:root type="image" uuid="07f64171-cfd1-4482-a545-07063cf7c3f2"/>
Mar  1 05:13:27 np0005634532 nova_compute[257049]:      <nova:ports>
Mar  1 05:13:27 np0005634532 nova_compute[257049]:        <nova:port uuid="79c2bbef-b2db-45bd-91c7-0e64bcb15301">
Mar  1 05:13:27 np0005634532 nova_compute[257049]:          <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Mar  1 05:13:27 np0005634532 nova_compute[257049]:        </nova:port>
Mar  1 05:13:27 np0005634532 nova_compute[257049]:      </nova:ports>
Mar  1 05:13:27 np0005634532 nova_compute[257049]:    </nova:instance>
Mar  1 05:13:27 np0005634532 nova_compute[257049]:  </metadata>
Mar  1 05:13:27 np0005634532 nova_compute[257049]:  <sysinfo type="smbios">
Mar  1 05:13:27 np0005634532 nova_compute[257049]:    <system>
Mar  1 05:13:27 np0005634532 nova_compute[257049]:      <entry name="manufacturer">RDO</entry>
Mar  1 05:13:27 np0005634532 nova_compute[257049]:      <entry name="product">OpenStack Compute</entry>
Mar  1 05:13:27 np0005634532 nova_compute[257049]:      <entry name="version">27.5.2-0.20260220085704.5cfeecb.el9</entry>
Mar  1 05:13:27 np0005634532 nova_compute[257049]:      <entry name="serial">baa5d1fc-2fe6-4353-9321-71ddf8760c24</entry>
Mar  1 05:13:27 np0005634532 nova_compute[257049]:      <entry name="uuid">baa5d1fc-2fe6-4353-9321-71ddf8760c24</entry>
Mar  1 05:13:27 np0005634532 nova_compute[257049]:      <entry name="family">Virtual Machine</entry>
Mar  1 05:13:27 np0005634532 nova_compute[257049]:    </system>
Mar  1 05:13:27 np0005634532 nova_compute[257049]:  </sysinfo>
Mar  1 05:13:27 np0005634532 nova_compute[257049]:  <os>
Mar  1 05:13:27 np0005634532 nova_compute[257049]:    <type arch="x86_64" machine="q35">hvm</type>
Mar  1 05:13:27 np0005634532 nova_compute[257049]:    <boot dev="hd"/>
Mar  1 05:13:27 np0005634532 nova_compute[257049]:    <smbios mode="sysinfo"/>
Mar  1 05:13:27 np0005634532 nova_compute[257049]:  </os>
Mar  1 05:13:27 np0005634532 nova_compute[257049]:  <features>
Mar  1 05:13:27 np0005634532 nova_compute[257049]:    <acpi/>
Mar  1 05:13:27 np0005634532 nova_compute[257049]:    <apic/>
Mar  1 05:13:27 np0005634532 nova_compute[257049]:    <vmcoreinfo/>
Mar  1 05:13:27 np0005634532 nova_compute[257049]:  </features>
Mar  1 05:13:27 np0005634532 nova_compute[257049]:  <clock offset="utc">
Mar  1 05:13:27 np0005634532 nova_compute[257049]:    <timer name="pit" tickpolicy="delay"/>
Mar  1 05:13:27 np0005634532 nova_compute[257049]:    <timer name="rtc" tickpolicy="catchup"/>
Mar  1 05:13:27 np0005634532 nova_compute[257049]:    <timer name="hpet" present="no"/>
Mar  1 05:13:27 np0005634532 nova_compute[257049]:  </clock>
Mar  1 05:13:27 np0005634532 nova_compute[257049]:  <cpu mode="host-model" match="exact">
Mar  1 05:13:27 np0005634532 nova_compute[257049]:    <topology sockets="1" cores="1" threads="1"/>
Mar  1 05:13:27 np0005634532 nova_compute[257049]:  </cpu>
Mar  1 05:13:27 np0005634532 nova_compute[257049]:  <devices>
Mar  1 05:13:27 np0005634532 nova_compute[257049]:    <disk type="network" device="disk">
Mar  1 05:13:27 np0005634532 nova_compute[257049]:      <driver type="raw" cache="none"/>
Mar  1 05:13:27 np0005634532 nova_compute[257049]:      <source protocol="rbd" name="vms/baa5d1fc-2fe6-4353-9321-71ddf8760c24_disk">
Mar  1 05:13:27 np0005634532 nova_compute[257049]:        <host name="192.168.122.100" port="6789"/>
Mar  1 05:13:27 np0005634532 nova_compute[257049]:        <host name="192.168.122.102" port="6789"/>
Mar  1 05:13:27 np0005634532 nova_compute[257049]:        <host name="192.168.122.101" port="6789"/>
Mar  1 05:13:27 np0005634532 nova_compute[257049]:      </source>
Mar  1 05:13:27 np0005634532 nova_compute[257049]:      <auth username="openstack">
Mar  1 05:13:27 np0005634532 nova_compute[257049]:        <secret type="ceph" uuid="437b1e74-f995-5d64-af1d-257ce01d77ab"/>
Mar  1 05:13:27 np0005634532 nova_compute[257049]:      </auth>
Mar  1 05:13:27 np0005634532 nova_compute[257049]:      <target dev="vda" bus="virtio"/>
Mar  1 05:13:27 np0005634532 nova_compute[257049]:    </disk>
Mar  1 05:13:27 np0005634532 nova_compute[257049]:    <disk type="network" device="cdrom">
Mar  1 05:13:27 np0005634532 nova_compute[257049]:      <driver type="raw" cache="none"/>
Mar  1 05:13:27 np0005634532 nova_compute[257049]:      <source protocol="rbd" name="vms/baa5d1fc-2fe6-4353-9321-71ddf8760c24_disk.config">
Mar  1 05:13:27 np0005634532 nova_compute[257049]:        <host name="192.168.122.100" port="6789"/>
Mar  1 05:13:27 np0005634532 nova_compute[257049]:        <host name="192.168.122.102" port="6789"/>
Mar  1 05:13:27 np0005634532 nova_compute[257049]:        <host name="192.168.122.101" port="6789"/>
Mar  1 05:13:27 np0005634532 nova_compute[257049]:      </source>
Mar  1 05:13:27 np0005634532 nova_compute[257049]:      <auth username="openstack">
Mar  1 05:13:27 np0005634532 nova_compute[257049]:        <secret type="ceph" uuid="437b1e74-f995-5d64-af1d-257ce01d77ab"/>
Mar  1 05:13:27 np0005634532 nova_compute[257049]:      </auth>
Mar  1 05:13:27 np0005634532 nova_compute[257049]:      <target dev="sda" bus="sata"/>
Mar  1 05:13:27 np0005634532 nova_compute[257049]:    </disk>
Mar  1 05:13:27 np0005634532 nova_compute[257049]:    <interface type="ethernet">
Mar  1 05:13:27 np0005634532 nova_compute[257049]:      <mac address="fa:16:3e:cd:0b:b9"/>
Mar  1 05:13:27 np0005634532 nova_compute[257049]:      <model type="virtio"/>
Mar  1 05:13:27 np0005634532 nova_compute[257049]:      <driver name="vhost" rx_queue_size="512"/>
Mar  1 05:13:27 np0005634532 nova_compute[257049]:      <mtu size="1442"/>
Mar  1 05:13:27 np0005634532 nova_compute[257049]:      <target dev="tap79c2bbef-b2"/>
Mar  1 05:13:27 np0005634532 nova_compute[257049]:    </interface>
Mar  1 05:13:27 np0005634532 nova_compute[257049]:    <serial type="pty">
Mar  1 05:13:27 np0005634532 nova_compute[257049]:      <log file="/var/lib/nova/instances/baa5d1fc-2fe6-4353-9321-71ddf8760c24/console.log" append="off"/>
Mar  1 05:13:27 np0005634532 nova_compute[257049]:    </serial>
Mar  1 05:13:27 np0005634532 nova_compute[257049]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Mar  1 05:13:27 np0005634532 nova_compute[257049]:    <video>
Mar  1 05:13:27 np0005634532 nova_compute[257049]:      <model type="virtio"/>
Mar  1 05:13:27 np0005634532 nova_compute[257049]:    </video>
Mar  1 05:13:27 np0005634532 nova_compute[257049]:    <input type="tablet" bus="usb"/>
Mar  1 05:13:27 np0005634532 nova_compute[257049]:    <rng model="virtio">
Mar  1 05:13:27 np0005634532 nova_compute[257049]:      <backend model="random">/dev/urandom</backend>
Mar  1 05:13:27 np0005634532 nova_compute[257049]:    </rng>
Mar  1 05:13:27 np0005634532 nova_compute[257049]:    <controller type="pci" model="pcie-root"/>
Mar  1 05:13:27 np0005634532 nova_compute[257049]:    <controller type="pci" model="pcie-root-port"/>
Mar  1 05:13:27 np0005634532 nova_compute[257049]:    <controller type="pci" model="pcie-root-port"/>
Mar  1 05:13:27 np0005634532 nova_compute[257049]:    <controller type="pci" model="pcie-root-port"/>
Mar  1 05:13:27 np0005634532 nova_compute[257049]:    <controller type="pci" model="pcie-root-port"/>
Mar  1 05:13:27 np0005634532 nova_compute[257049]:    <controller type="pci" model="pcie-root-port"/>
Mar  1 05:13:27 np0005634532 nova_compute[257049]:    <controller type="pci" model="pcie-root-port"/>
Mar  1 05:13:27 np0005634532 nova_compute[257049]:    <controller type="pci" model="pcie-root-port"/>
Mar  1 05:13:27 np0005634532 nova_compute[257049]:    <controller type="pci" model="pcie-root-port"/>
Mar  1 05:13:27 np0005634532 nova_compute[257049]:    <controller type="pci" model="pcie-root-port"/>
Mar  1 05:13:27 np0005634532 nova_compute[257049]:    <controller type="pci" model="pcie-root-port"/>
Mar  1 05:13:27 np0005634532 nova_compute[257049]:    <controller type="pci" model="pcie-root-port"/>
Mar  1 05:13:27 np0005634532 nova_compute[257049]:    <controller type="pci" model="pcie-root-port"/>
Mar  1 05:13:27 np0005634532 nova_compute[257049]:    <controller type="pci" model="pcie-root-port"/>
Mar  1 05:13:27 np0005634532 nova_compute[257049]:    <controller type="pci" model="pcie-root-port"/>
Mar  1 05:13:27 np0005634532 nova_compute[257049]:    <controller type="pci" model="pcie-root-port"/>
Mar  1 05:13:27 np0005634532 nova_compute[257049]:    <controller type="pci" model="pcie-root-port"/>
Mar  1 05:13:27 np0005634532 nova_compute[257049]:    <controller type="pci" model="pcie-root-port"/>
Mar  1 05:13:27 np0005634532 nova_compute[257049]:    <controller type="pci" model="pcie-root-port"/>
Mar  1 05:13:27 np0005634532 nova_compute[257049]:    <controller type="pci" model="pcie-root-port"/>
Mar  1 05:13:27 np0005634532 nova_compute[257049]:    <controller type="pci" model="pcie-root-port"/>
Mar  1 05:13:27 np0005634532 nova_compute[257049]:    <controller type="pci" model="pcie-root-port"/>
Mar  1 05:13:27 np0005634532 nova_compute[257049]:    <controller type="pci" model="pcie-root-port"/>
Mar  1 05:13:27 np0005634532 nova_compute[257049]:    <controller type="pci" model="pcie-root-port"/>
Mar  1 05:13:27 np0005634532 nova_compute[257049]:    <controller type="pci" model="pcie-root-port"/>
Mar  1 05:13:27 np0005634532 nova_compute[257049]:    <controller type="usb" index="0"/>
Mar  1 05:13:27 np0005634532 nova_compute[257049]:    <memballoon model="virtio">
Mar  1 05:13:27 np0005634532 nova_compute[257049]:      <stats period="10"/>
Mar  1 05:13:27 np0005634532 nova_compute[257049]:    </memballoon>
Mar  1 05:13:27 np0005634532 nova_compute[257049]:  </devices>
Mar  1 05:13:27 np0005634532 nova_compute[257049]: </domain>
Mar  1 05:13:27 np0005634532 nova_compute[257049]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
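
[note] The domain XML dumped above is the complete definition Nova generated: q35 machine type, host-model CPU with the 1:1:1 topology chosen earlier, two network disks served by the same three Ceph monitors, an ethernet interface wired to tap79c2bbef-b2, and the virtio RNG requested by the flavor's hw_rng:allowed extra spec. When triaging such dumps offline, the interesting endpoints can be pulled back out with the standard library; a small sketch, assuming the XML has been saved to a hypothetical domain.xml file:

    import xml.etree.ElementTree as ET

    # domain.xml is assumed to hold the <domain> block dumped above.
    dom = ET.parse("domain.xml").getroot()

    for disk in dom.findall("./devices/disk"):
        src, tgt = disk.find("source"), disk.find("target")
        # Both disks here are type="network": protocol="rbd", name="vms/...".
        print(tgt.get("dev"), src.get("protocol"), src.get("name"))

    iface = dom.find("./devices/interface")
    print(iface.find("mac").get("address"), iface.find("target").get("dev"))

Run against the dump above, this prints vda and sda with their RBD image names, then fa:16:3e:cd:0b:b9 with tap79c2bbef-b2.
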
Mar  1 05:13:27 np0005634532 nova_compute[257049]: 2026-03-01 10:13:27.521 257053 DEBUG nova.compute.manager [None req-9c42ab53-1c13-44c8-ab78-d7f030473866 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] [instance: baa5d1fc-2fe6-4353-9321-71ddf8760c24] Preparing to wait for external event network-vif-plugged-79c2bbef-b2db-45bd-91c7-0e64bcb15301 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Mar  1 05:13:27 np0005634532 nova_compute[257049]: 2026-03-01 10:13:27.521 257053 DEBUG oslo_concurrency.lockutils [None req-9c42ab53-1c13-44c8-ab78-d7f030473866 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Acquiring lock "baa5d1fc-2fe6-4353-9321-71ddf8760c24-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Mar  1 05:13:27 np0005634532 nova_compute[257049]: 2026-03-01 10:13:27.521 257053 DEBUG oslo_concurrency.lockutils [None req-9c42ab53-1c13-44c8-ab78-d7f030473866 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Lock "baa5d1fc-2fe6-4353-9321-71ddf8760c24-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Mar  1 05:13:27 np0005634532 nova_compute[257049]: 2026-03-01 10:13:27.522 257053 DEBUG oslo_concurrency.lockutils [None req-9c42ab53-1c13-44c8-ab78-d7f030473866 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Lock "baa5d1fc-2fe6-4353-9321-71ddf8760c24-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Mar  1 05:13:27 np0005634532 nova_compute[257049]: 2026-03-01 10:13:27.522 257053 DEBUG nova.virt.libvirt.vif [None req-9c42ab53-1c13-44c8-ab78-d7f030473866 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-03-01T10:13:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-931656500',display_name='tempest-TestNetworkBasicOps-server-931656500',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-931656500',id=11,image_ref='07f64171-cfd1-4482-a545-07063cf7c3f2',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKcvsPe/bt7wQGCO3x2FGW9xBp8tDOsdxNbRfhpUGAYX67H9M5t4jXrMEzIEWqxq1Vp1kSYaQSgdvRX6E2zcqTGcl8mdrZndaFhbtzpxPcNDvgQoPPzNGgz+HuvTpqMgVw==',key_name='tempest-TestNetworkBasicOps-2130166709',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='aa1916e2334f470ea8eeda213ef84cc5',ramdisk_id='',reservation_id='r-42fyf790',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='07f64171-cfd1-4482-a545-07063cf7c3f2',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-1700707940',owner_user_name='tempest-TestNetworkBasicOps-1700707940-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-03-01T10:13:23Z,user_data=None,user_id='054b4e3fa290475c906614f7e45d128f',uuid=baa5d1fc-2fe6-4353-9321-71ddf8760c24,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "79c2bbef-b2db-45bd-91c7-0e64bcb15301", "address": "fa:16:3e:cd:0b:b9", "network": {"id": "75774369-d1fe-46b7-99fa-32ee72215bc9", "bridge": "br-int", "label": "tempest-network-smoke--723889176", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aa1916e2334f470ea8eeda213ef84cc5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap79c2bbef-b2", "ovs_interfaceid": "79c2bbef-b2db-45bd-91c7-0e64bcb15301", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Mar  1 05:13:27 np0005634532 nova_compute[257049]: 2026-03-01 10:13:27.523 257053 DEBUG nova.network.os_vif_util [None req-9c42ab53-1c13-44c8-ab78-d7f030473866 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Converting VIF {"id": "79c2bbef-b2db-45bd-91c7-0e64bcb15301", "address": "fa:16:3e:cd:0b:b9", "network": {"id": "75774369-d1fe-46b7-99fa-32ee72215bc9", "bridge": "br-int", "label": "tempest-network-smoke--723889176", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aa1916e2334f470ea8eeda213ef84cc5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap79c2bbef-b2", "ovs_interfaceid": "79c2bbef-b2db-45bd-91c7-0e64bcb15301", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Mar  1 05:13:27 np0005634532 nova_compute[257049]: 2026-03-01 10:13:27.523 257053 DEBUG nova.network.os_vif_util [None req-9c42ab53-1c13-44c8-ab78-d7f030473866 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:cd:0b:b9,bridge_name='br-int',has_traffic_filtering=True,id=79c2bbef-b2db-45bd-91c7-0e64bcb15301,network=Network(75774369-d1fe-46b7-99fa-32ee72215bc9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap79c2bbef-b2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Mar  1 05:13:27 np0005634532 nova_compute[257049]: 2026-03-01 10:13:27.523 257053 DEBUG os_vif [None req-9c42ab53-1c13-44c8-ab78-d7f030473866 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:cd:0b:b9,bridge_name='br-int',has_traffic_filtering=True,id=79c2bbef-b2db-45bd-91c7-0e64bcb15301,network=Network(75774369-d1fe-46b7-99fa-32ee72215bc9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap79c2bbef-b2') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Mar  1 05:13:27 np0005634532 nova_compute[257049]: 2026-03-01 10:13:27.524 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:13:27 np0005634532 nova_compute[257049]: 2026-03-01 10:13:27.524 257053 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Mar  1 05:13:27 np0005634532 nova_compute[257049]: 2026-03-01 10:13:27.525 257053 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Mar  1 05:13:27 np0005634532 nova_compute[257049]: 2026-03-01 10:13:27.527 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:13:27 np0005634532 nova_compute[257049]: 2026-03-01 10:13:27.527 257053 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap79c2bbef-b2, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Mar  1 05:13:27 np0005634532 nova_compute[257049]: 2026-03-01 10:13:27.528 257053 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap79c2bbef-b2, col_values=(('external_ids', {'iface-id': '79c2bbef-b2db-45bd-91c7-0e64bcb15301', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:cd:0b:b9', 'vm-uuid': 'baa5d1fc-2fe6-4353-9321-71ddf8760c24'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Mar  1 05:13:27 np0005634532 nova_compute[257049]: 2026-03-01 10:13:27.529 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:13:27 np0005634532 NetworkManager[49996]: <info>  [1772360007.5307] manager: (tap79c2bbef-b2): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/42)
Mar  1 05:13:27 np0005634532 nova_compute[257049]: 2026-03-01 10:13:27.532 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Mar  1 05:13:27 np0005634532 nova_compute[257049]: 2026-03-01 10:13:27.535 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:13:27 np0005634532 nova_compute[257049]: 2026-03-01 10:13:27.537 257053 INFO os_vif [None req-9c42ab53-1c13-44c8-ab78-d7f030473866 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:cd:0b:b9,bridge_name='br-int',has_traffic_filtering=True,id=79c2bbef-b2db-45bd-91c7-0e64bcb15301,network=Network(75774369-d1fe-46b7-99fa-32ee72215bc9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap79c2bbef-b2')#033[00m
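
[note] The plug sequence above is two idempotent OVSDB transactions: an AddBridgeCommand that is a no-op because br-int already exists (hence "Transaction caused no change"), then an AddPortCommand plus a DbSetCommand writing the external_ids keys OVN matches on (iface-id, attached-mac, vm-uuid). Roughly the same effect from the command line, sketched with values copied from the log; the ovs-vsctl flags are standard, but treat this as an illustration rather than what os-vif actually executes:

    import subprocess

    def vsctl(*args):
        # Thin wrapper over ovs-vsctl; mirrors the ovsdbapp transaction above.
        subprocess.run(["ovs-vsctl", *args], check=True)

    # AddBridgeCommand(name=br-int, may_exist=True, datapath_type=system)
    vsctl("--may-exist", "add-br", "br-int",
          "--", "set", "Bridge", "br-int", "datapath_type=system")
    # AddPortCommand + DbSetCommand(external_ids={...})
    vsctl("--may-exist", "add-port", "br-int", "tap79c2bbef-b2",
          "--", "set", "Interface", "tap79c2bbef-b2",
          "external_ids:iface-id=79c2bbef-b2db-45bd-91c7-0e64bcb15301",
          "external_ids:iface-status=active",
          "external_ids:attached-mac=fa:16:3e:cd:0b:b9",
          "external_ids:vm-uuid=baa5d1fc-2fe6-4353-9321-71ddf8760c24")
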
Mar  1 05:13:27 np0005634532 nova_compute[257049]: 2026-03-01 10:13:27.573 257053 DEBUG nova.virt.libvirt.driver [None req-9c42ab53-1c13-44c8-ab78-d7f030473866 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Mar  1 05:13:27 np0005634532 nova_compute[257049]: 2026-03-01 10:13:27.574 257053 DEBUG nova.virt.libvirt.driver [None req-9c42ab53-1c13-44c8-ab78-d7f030473866 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Mar  1 05:13:27 np0005634532 nova_compute[257049]: 2026-03-01 10:13:27.574 257053 DEBUG nova.virt.libvirt.driver [None req-9c42ab53-1c13-44c8-ab78-d7f030473866 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] No VIF found with MAC fa:16:3e:cd:0b:b9, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Mar  1 05:13:27 np0005634532 nova_compute[257049]: 2026-03-01 10:13:27.574 257053 INFO nova.virt.libvirt.driver [None req-9c42ab53-1c13-44c8-ab78-d7f030473866 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] [instance: baa5d1fc-2fe6-4353-9321-71ddf8760c24] Using config drive#033[00m
Mar  1 05:13:27 np0005634532 nova_compute[257049]: 2026-03-01 10:13:27.596 257053 DEBUG nova.storage.rbd_utils [None req-9c42ab53-1c13-44c8-ab78-d7f030473866 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] rbd image baa5d1fc-2fe6-4353-9321-71ddf8760c24_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Mar  1 05:13:27 np0005634532 nova_compute[257049]: 2026-03-01 10:13:27.863 257053 INFO nova.virt.libvirt.driver [None req-9c42ab53-1c13-44c8-ab78-d7f030473866 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] [instance: baa5d1fc-2fe6-4353-9321-71ddf8760c24] Creating config drive at /var/lib/nova/instances/baa5d1fc-2fe6-4353-9321-71ddf8760c24/disk.config#033[00m
Mar  1 05:13:27 np0005634532 nova_compute[257049]: 2026-03-01 10:13:27.870 257053 DEBUG oslo_concurrency.processutils [None req-9c42ab53-1c13-44c8-ab78-d7f030473866 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/baa5d1fc-2fe6-4353-9321-71ddf8760c24/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp95em_0uz execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Mar  1 05:13:27 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:13:27 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000024s ======
Mar  1 05:13:27 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:13:27.971 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Mar  1 05:13:27 np0005634532 nova_compute[257049]: 2026-03-01 10:13:27.993 257053 DEBUG oslo_concurrency.processutils [None req-9c42ab53-1c13-44c8-ab78-d7f030473866 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/baa5d1fc-2fe6-4353-9321-71ddf8760c24/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9 -quiet -J -r -V config-2 /tmp/tmp95em_0uz" returned: 0 in 0.124s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Mar  1 05:13:28 np0005634532 nova_compute[257049]: 2026-03-01 10:13:28.026 257053 DEBUG nova.storage.rbd_utils [None req-9c42ab53-1c13-44c8-ab78-d7f030473866 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] rbd image baa5d1fc-2fe6-4353-9321-71ddf8760c24_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Mar  1 05:13:28 np0005634532 nova_compute[257049]: 2026-03-01 10:13:28.031 257053 DEBUG oslo_concurrency.processutils [None req-9c42ab53-1c13-44c8-ab78-d7f030473866 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/baa5d1fc-2fe6-4353-9321-71ddf8760c24/disk.config baa5d1fc-2fe6-4353-9321-71ddf8760c24_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Mar  1 05:13:28 np0005634532 nova_compute[257049]: 2026-03-01 10:13:28.200 257053 DEBUG oslo_concurrency.processutils [None req-9c42ab53-1c13-44c8-ab78-d7f030473866 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/baa5d1fc-2fe6-4353-9321-71ddf8760c24/disk.config baa5d1fc-2fe6-4353-9321-71ddf8760c24_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.170s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Mar  1 05:13:28 np0005634532 nova_compute[257049]: 2026-03-01 10:13:28.201 257053 INFO nova.virt.libvirt.driver [None req-9c42ab53-1c13-44c8-ab78-d7f030473866 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] [instance: baa5d1fc-2fe6-4353-9321-71ddf8760c24] Deleting local config drive /var/lib/nova/instances/baa5d1fc-2fe6-4353-9321-71ddf8760c24/disk.config because it was imported into RBD.#033[00m
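
[note] The config-drive path in this Ceph-backed deployment is: build the ISO locally with mkisofs, rbd-import it into the vms pool as <uuid>_disk.config (the image rbd_utils reported missing three times above), then delete the local copy; the guest attaches it as the sata cdrom in the domain XML. The two commands, reconstructed verbatim from the log; note that /tmp/tmp95em_0uz was Nova's transient metadata tempdir and the -publisher string is a single argument:

    import subprocess

    inst = "baa5d1fc-2fe6-4353-9321-71ddf8760c24"
    iso = f"/var/lib/nova/instances/{inst}/disk.config"

    # Build the ISO9660 config drive; flags copied from the log line.
    subprocess.run(["/usr/bin/mkisofs", "-o", iso, "-ldots", "-allow-lowercase",
                    "-allow-multidot", "-l", "-publisher",
                    "OpenStack Compute 27.5.2-0.20260220085704.5cfeecb.el9",
                    "-quiet", "-J", "-r", "-V", "config-2",
                    "/tmp/tmp95em_0uz"], check=True)

    # Import it into the Ceph vms pool exactly as nova_compute did.
    subprocess.run(["rbd", "import", "--pool", "vms", iso,
                    f"{inst}_disk.config", "--image-format=2",
                    "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
                   check=True)
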
Mar  1 05:13:28 np0005634532 systemd[1]: Starting libvirt secret daemon...
Mar  1 05:13:28 np0005634532 systemd[1]: Started libvirt secret daemon.
Mar  1 05:13:28 np0005634532 kernel: tap79c2bbef-b2: entered promiscuous mode
Mar  1 05:13:28 np0005634532 ovn_controller[157082]: 2026-03-01T10:13:28Z|00056|binding|INFO|Claiming lport 79c2bbef-b2db-45bd-91c7-0e64bcb15301 for this chassis.
Mar  1 05:13:28 np0005634532 ovn_controller[157082]: 2026-03-01T10:13:28Z|00057|binding|INFO|79c2bbef-b2db-45bd-91c7-0e64bcb15301: Claiming fa:16:3e:cd:0b:b9 10.100.0.14
Mar  1 05:13:28 np0005634532 NetworkManager[49996]: <info>  [1772360008.3166] manager: (tap79c2bbef-b2): new Tun device (/org/freedesktop/NetworkManager/Devices/43)
Mar  1 05:13:28 np0005634532 nova_compute[257049]: 2026-03-01 10:13:28.314 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:13:28 np0005634532 nova_compute[257049]: 2026-03-01 10:13:28.327 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:13:28 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:13:28.334 167541 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:cd:0b:b9 10.100.0.14'], port_security=['fa:16:3e:cd:0b:b9 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'baa5d1fc-2fe6-4353-9321-71ddf8760c24', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-75774369-d1fe-46b7-99fa-32ee72215bc9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'aa1916e2334f470ea8eeda213ef84cc5', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'c2731b0f-5ad9-4740-93af-d158d94139f0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=34bd372b-ef6f-498f-877c-cdd463d14459, chassis=[<ovs.db.idl.Row object at 0x7f611def4670>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f611def4670>], logical_port=79c2bbef-b2db-45bd-91c7-0e64bcb15301) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Mar  1 05:13:28 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:13:28.335 167541 INFO neutron.agent.ovn.metadata.agent [-] Port 79c2bbef-b2db-45bd-91c7-0e64bcb15301 in datapath 75774369-d1fe-46b7-99fa-32ee72215bc9 bound to our chassis#033[00m
Mar  1 05:13:28 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:13:28.336 167541 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 75774369-d1fe-46b7-99fa-32ee72215bc9#033[00m
Mar  1 05:13:28 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:13:28.347 262878 DEBUG oslo.privsep.daemon [-] privsep: reply[07333663-6cd7-4e79-a7a1-450f1865b41b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Mar  1 05:13:28 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:13:28.348 167541 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap75774369-d1 in ovnmeta-75774369-d1fe-46b7-99fa-32ee72215bc9 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Mar  1 05:13:28 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:13:28.350 262878 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap75774369-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Mar  1 05:13:28 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:13:28.350 262878 DEBUG oslo.privsep.daemon [-] privsep: reply[f9a0eb6b-2d26-4702-9e32-2f151794d300]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Mar  1 05:13:28 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:13:28.352 262878 DEBUG oslo.privsep.daemon [-] privsep: reply[f3d5cdd9-10b1-4dd4-aa95-384f8d2b63f2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
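
[note] Alongside the VM start, the metadata agent is provisioning the ovnmeta namespace for this network: the privsep replies above are its pyroute2 calls creating the tap75774369-d0/-d1 VETH pair, with one end moved into the namespace. To eyeball the result by hand (a hypothetical check; the namespace name comes from the "Creating VETH" line above):

    import subprocess

    ns = "ovnmeta-75774369-d1fe-46b7-99fa-32ee72215bc9"
    # List the interfaces and addresses the agent set up inside the namespace.
    subprocess.run(["ip", "netns", "exec", ns, "ip", "-brief", "addr"], check=True)
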
Mar  1 05:13:28 np0005634532 systemd-udevd[273873]: Network interface NamePolicy= disabled on kernel command line.
Mar  1 05:13:28 np0005634532 systemd-machined[221390]: New machine qemu-4-instance-0000000b.
Mar  1 05:13:28 np0005634532 NetworkManager[49996]: <info>  [1772360008.3676] device (tap79c2bbef-b2): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Mar  1 05:13:28 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:13:28.365 167914 DEBUG oslo.privsep.daemon [-] privsep: reply[f13810c7-d19f-4c5f-9ea8-1f47284b15bc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Mar  1 05:13:28 np0005634532 NetworkManager[49996]: <info>  [1772360008.3689] device (tap79c2bbef-b2): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Mar  1 05:13:28 np0005634532 systemd[1]: Started Virtual Machine qemu-4-instance-0000000b.
Mar  1 05:13:28 np0005634532 ovn_controller[157082]: 2026-03-01T10:13:28Z|00058|binding|INFO|Setting lport 79c2bbef-b2db-45bd-91c7-0e64bcb15301 ovn-installed in OVS
Mar  1 05:13:28 np0005634532 ovn_controller[157082]: 2026-03-01T10:13:28Z|00059|binding|INFO|Setting lport 79c2bbef-b2db-45bd-91c7-0e64bcb15301 up in Southbound
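
[note] ovn-controller has now claimed the lport for this chassis, set ovn-installed on the OVS interface, and flipped the port up in the Southbound database; that up transition is what should eventually fire the network-vif-plugged event Nova registered for earlier. The binding can be confirmed from the Southbound side with a one-liner; a sketch, run wherever ovn-sbctl can reach the SB DB:

    import subprocess

    # Shows chassis and up state for the logical port claimed above.
    subprocess.run(["ovn-sbctl", "find", "Port_Binding",
                    "logical_port=79c2bbef-b2db-45bd-91c7-0e64bcb15301"],
                   check=True)
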
Mar  1 05:13:28 np0005634532 nova_compute[257049]: 2026-03-01 10:13:28.376 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:13:28 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:13:28.380 262878 DEBUG oslo.privsep.daemon [-] privsep: reply[7ff031e0-b31a-4a26-92d3-8b6b354e090a]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Mar  1 05:13:28 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:13:28.405 262940 DEBUG oslo.privsep.daemon [-] privsep: reply[34437137-79e2-48d2-88cb-1c18423734c6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Mar  1 05:13:28 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:13:28.409 262878 DEBUG oslo.privsep.daemon [-] privsep: reply[a6e58398-b4d7-49bb-acbb-4813b263a23d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Mar  1 05:13:28 np0005634532 NetworkManager[49996]: <info>  [1772360008.4107] manager: (tap75774369-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/44)
Mar  1 05:13:28 np0005634532 systemd-udevd[273876]: Network interface NamePolicy= disabled on kernel command line.
Mar  1 05:13:28 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:13:28.439 262940 DEBUG oslo.privsep.daemon [-] privsep: reply[b0d6d883-b1c5-4689-aa33-ef6b729d2463]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Mar  1 05:13:28 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:13:28.444 262940 DEBUG oslo.privsep.daemon [-] privsep: reply[bfc71315-0516-4e15-89c4-ffd76283119b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Mar  1 05:13:28 np0005634532 NetworkManager[49996]: <info>  [1772360008.4651] device (tap75774369-d0): carrier: link connected
Mar  1 05:13:28 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:13:28.469 262940 DEBUG oslo.privsep.daemon [-] privsep: reply[f5089d1c-c8ba-473f-ba16-3696777474a3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Mar  1 05:13:28 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:13:28.481 262878 DEBUG oslo.privsep.daemon [-] privsep: reply[4a8727bf-7aae-4e1c-b472-f047d0582d57]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap75774369-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:90:23:dd'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 22], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 436505, 'reachable_time': 18155, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 273905, 'error': None, 'target': 'ovnmeta-75774369-d1fe-46b7-99fa-32ee72215bc9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Mar  1 05:13:28 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:13:28.497 262878 DEBUG oslo.privsep.daemon [-] privsep: reply[4218e99f-690f-4599-be26-f502eebd2171]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe90:23dd'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 436505, 'tstamp': 436505}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 273906, 'error': None, 'target': 'ovnmeta-75774369-d1fe-46b7-99fa-32ee72215bc9', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Mar  1 05:13:28 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:13:28.509 262878 DEBUG oslo.privsep.daemon [-] privsep: reply[5e0e794c-ffcc-435f-acd9-6edd46444f5f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap75774369-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:90:23:dd'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 22], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 436505, 'reachable_time': 18155, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 273907, 'error': None, 'target': 'ovnmeta-75774369-d1fe-46b7-99fa-32ee72215bc9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
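The two oversized privsep replies above are raw netlink dumps taken inside the ovnmeta-75774369-... namespace: an RTM_NEWLINK record for the veth tap75774369-d1 (MAC fa:16:3e:90:23:dd, operstate UP) and an RTM_NEWADDR record for its link-local address. The structures are pyroute2 netlink messages, which is the library neutron wraps behind privsep here. A minimal sketch of reproducing the same dumps with pyroute2, assuming root and that the namespace still exists on this host:

    # Minimal sketch, assuming pyroute2 is installed and the
    # ovnmeta namespace from the log still exists; run as root.
    from pyroute2 import NetNS

    ns = NetNS('ovnmeta-75774369-d1fe-46b7-99fa-32ee72215bc9')
    try:
        idx = ns.link_lookup(ifname='tap75774369-d1')[0]
        for link in ns.get_links(idx):              # RTM_NEWLINK, as in the reply above
            print(link.get_attr('IFLA_IFNAME'),
                  link.get_attr('IFLA_ADDRESS'),    # fa:16:3e:90:23:dd
                  link.get_attr('IFLA_OPERSTATE'))  # UP
        for addr in ns.get_addr(index=idx):         # RTM_NEWADDR
            print(addr.get_attr('IFA_ADDRESS'))     # fe80::f816:3eff:fe90:23dd
    finally:
        ns.close()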
Mar  1 05:13:28 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:13:28.535 262878 DEBUG oslo.privsep.daemon [-] privsep: reply[ce0d9841-ddea-4572-aae4-42ff289bc116]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Mar  1 05:13:28 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:13:28.583 262878 DEBUG oslo.privsep.daemon [-] privsep: reply[9ecf9576-3668-4bb7-b36d-85d05cb6a181]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Mar  1 05:13:28 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:13:28.584 167541 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap75774369-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Mar  1 05:13:28 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:13:28.584 167541 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Mar  1 05:13:28 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:13:28.585 167541 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap75774369-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Mar  1 05:13:28 np0005634532 nova_compute[257049]: 2026-03-01 10:13:28.586 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:13:28 np0005634532 kernel: tap75774369-d0: entered promiscuous mode
Mar  1 05:13:28 np0005634532 NetworkManager[49996]: <info>  [1772360008.5883] manager: (tap75774369-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/45)
Mar  1 05:13:28 np0005634532 nova_compute[257049]: 2026-03-01 10:13:28.590 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:13:28 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:13:28.591 167541 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap75774369-d0, col_values=(('external_ids', {'iface-id': '959c52a4-ced6-4b50-a3b6-13250d5b46cc'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
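The three ovsdbapp transactions above (DelPortCommand on br-ex with if_exists, AddPortCommand on br-int with may_exist, DbSetCommand writing external_ids) re-home the metadata tap port onto br-int and tag it with the Neutron port UUID so ovn-controller can claim it. A rough ovs-vsctl equivalent, sketched from Python; illustrative only, since the agent drives the OVSDB IDL directly rather than shelling out:

    import subprocess

    PORT = 'tap75774369-d0'
    IFACE_ID = '959c52a4-ced6-4b50-a3b6-13250d5b46cc'  # from the DbSetCommand above

    for cmd in (
        ['ovs-vsctl', '--if-exists', 'del-port', 'br-ex', PORT],   # DelPortCommand(if_exists=True)
        ['ovs-vsctl', '--may-exist', 'add-port', 'br-int', PORT],  # AddPortCommand(may_exist=True)
        ['ovs-vsctl', 'set', 'Interface', PORT,
         'external_ids:iface-id=%s' % IFACE_ID],                   # DbSetCommand on external_ids
    ):
        subprocess.run(cmd, check=True)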
Mar  1 05:13:28 np0005634532 nova_compute[257049]: 2026-03-01 10:13:28.592 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:13:28 np0005634532 ovn_controller[157082]: 2026-03-01T10:13:28Z|00060|binding|INFO|Releasing lport 959c52a4-ced6-4b50-a3b6-13250d5b46cc from this chassis (sb_readonly=0)
Mar  1 05:13:28 np0005634532 nova_compute[257049]: 2026-03-01 10:13:28.593 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:13:28 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:13:28.595 167541 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/75774369-d1fe-46b7-99fa-32ee72215bc9.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/75774369-d1fe-46b7-99fa-32ee72215bc9.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Mar  1 05:13:28 np0005634532 nova_compute[257049]: 2026-03-01 10:13:28.599 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:13:28 np0005634532 nova_compute[257049]: 2026-03-01 10:13:28.604 257053 DEBUG nova.compute.manager [req-1fb49f8e-581e-492a-9b88-b8278010f90d req-67ff7460-b8e9-4e76-9adf-9b0d240c22c4 4172bfcb2ac44ccc905f3929f41a6ec0 b5baddd982154c5d8ca5431b102a7ecd - - default default] [instance: baa5d1fc-2fe6-4353-9321-71ddf8760c24] Received event network-vif-plugged-79c2bbef-b2db-45bd-91c7-0e64bcb15301 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Mar  1 05:13:28 np0005634532 nova_compute[257049]: 2026-03-01 10:13:28.604 257053 DEBUG oslo_concurrency.lockutils [req-1fb49f8e-581e-492a-9b88-b8278010f90d req-67ff7460-b8e9-4e76-9adf-9b0d240c22c4 4172bfcb2ac44ccc905f3929f41a6ec0 b5baddd982154c5d8ca5431b102a7ecd - - default default] Acquiring lock "baa5d1fc-2fe6-4353-9321-71ddf8760c24-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Mar  1 05:13:28 np0005634532 nova_compute[257049]: 2026-03-01 10:13:28.605 257053 DEBUG oslo_concurrency.lockutils [req-1fb49f8e-581e-492a-9b88-b8278010f90d req-67ff7460-b8e9-4e76-9adf-9b0d240c22c4 4172bfcb2ac44ccc905f3929f41a6ec0 b5baddd982154c5d8ca5431b102a7ecd - - default default] Lock "baa5d1fc-2fe6-4353-9321-71ddf8760c24-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Mar  1 05:13:28 np0005634532 nova_compute[257049]: 2026-03-01 10:13:28.605 257053 DEBUG oslo_concurrency.lockutils [req-1fb49f8e-581e-492a-9b88-b8278010f90d req-67ff7460-b8e9-4e76-9adf-9b0d240c22c4 4172bfcb2ac44ccc905f3929f41a6ec0 b5baddd982154c5d8ca5431b102a7ecd - - default default] Lock "baa5d1fc-2fe6-4353-9321-71ddf8760c24-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Mar  1 05:13:28 np0005634532 nova_compute[257049]: 2026-03-01 10:13:28.605 257053 DEBUG nova.compute.manager [req-1fb49f8e-581e-492a-9b88-b8278010f90d req-67ff7460-b8e9-4e76-9adf-9b0d240c22c4 4172bfcb2ac44ccc905f3929f41a6ec0 b5baddd982154c5d8ca5431b102a7ecd - - default default] [instance: baa5d1fc-2fe6-4353-9321-71ddf8760c24] Processing event network-vif-plugged-79c2bbef-b2db-45bd-91c7-0e64bcb15301 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
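The Acquiring/acquired/released triple above is oslo.concurrency's lock logging around nova's per-instance event queue. A minimal sketch of the same pattern, assuming only that oslo.concurrency is installed; the function body is a placeholder, not nova's implementation:

    from oslo_concurrency import lockutils

    @lockutils.synchronized('baa5d1fc-2fe6-4353-9321-71ddf8760c24-events')
    def _pop_event():
        # Placeholder: nova pops the matching network-vif-plugged event
        # from the per-instance queue while holding this lock.
        pass

    _pop_event()  # with debug logging on, emits acquire/release lines like those above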
Mar  1 05:13:28 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:13:28.606 262878 DEBUG oslo.privsep.daemon [-] privsep: reply[50b29a0e-9564-4285-9022-4947a6d24274]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Mar  1 05:13:28 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:13:28.607 167541 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Mar  1 05:13:28 np0005634532 ovn_metadata_agent[167536]: global
Mar  1 05:13:28 np0005634532 ovn_metadata_agent[167536]:    log         /dev/log local0 debug
Mar  1 05:13:28 np0005634532 ovn_metadata_agent[167536]:    log-tag     haproxy-metadata-proxy-75774369-d1fe-46b7-99fa-32ee72215bc9
Mar  1 05:13:28 np0005634532 ovn_metadata_agent[167536]:    user        root
Mar  1 05:13:28 np0005634532 ovn_metadata_agent[167536]:    group       root
Mar  1 05:13:28 np0005634532 ovn_metadata_agent[167536]:    maxconn     1024
Mar  1 05:13:28 np0005634532 ovn_metadata_agent[167536]:    pidfile     /var/lib/neutron/external/pids/75774369-d1fe-46b7-99fa-32ee72215bc9.pid.haproxy
Mar  1 05:13:28 np0005634532 ovn_metadata_agent[167536]:    daemon
Mar  1 05:13:28 np0005634532 ovn_metadata_agent[167536]: 
Mar  1 05:13:28 np0005634532 ovn_metadata_agent[167536]: defaults
Mar  1 05:13:28 np0005634532 ovn_metadata_agent[167536]:    log global
Mar  1 05:13:28 np0005634532 ovn_metadata_agent[167536]:    mode http
Mar  1 05:13:28 np0005634532 ovn_metadata_agent[167536]:    option httplog
Mar  1 05:13:28 np0005634532 ovn_metadata_agent[167536]:    option dontlognull
Mar  1 05:13:28 np0005634532 ovn_metadata_agent[167536]:    option http-server-close
Mar  1 05:13:28 np0005634532 ovn_metadata_agent[167536]:    option forwardfor
Mar  1 05:13:28 np0005634532 ovn_metadata_agent[167536]:    retries                 3
Mar  1 05:13:28 np0005634532 ovn_metadata_agent[167536]:    timeout http-request    30s
Mar  1 05:13:28 np0005634532 ovn_metadata_agent[167536]:    timeout connect         30s
Mar  1 05:13:28 np0005634532 ovn_metadata_agent[167536]:    timeout client          32s
Mar  1 05:13:28 np0005634532 ovn_metadata_agent[167536]:    timeout server          32s
Mar  1 05:13:28 np0005634532 ovn_metadata_agent[167536]:    timeout http-keep-alive 30s
Mar  1 05:13:28 np0005634532 ovn_metadata_agent[167536]: 
Mar  1 05:13:28 np0005634532 ovn_metadata_agent[167536]: 
Mar  1 05:13:28 np0005634532 ovn_metadata_agent[167536]: listen listener
Mar  1 05:13:28 np0005634532 ovn_metadata_agent[167536]:    bind 169.254.169.254:80
Mar  1 05:13:28 np0005634532 ovn_metadata_agent[167536]:    server metadata /var/lib/neutron/metadata_proxy
Mar  1 05:13:28 np0005634532 ovn_metadata_agent[167536]:    http-request add-header X-OVN-Network-ID 75774369-d1fe-46b7-99fa-32ee72215bc9
Mar  1 05:13:28 np0005634532 ovn_metadata_agent[167536]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
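The rendered config binds 169.254.169.254:80 inside the ovnmeta namespace and proxies everything to the /var/lib/neutron/metadata_proxy UNIX socket, stamping each request with X-OVN-Network-ID so the metadata agent can resolve the network. A sketch of talking to that socket directly, the way haproxy would forward a request; hypothetical client code, and note the real agent also relies on the X-Forwarded-For header that `option forwardfor` inserts:

    import http.client
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """http.client over an AF_UNIX socket instead of TCP."""
        def __init__(self, path):
            super().__init__('localhost')
            self._path = path

        def connect(self):
            sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            sock.connect(self._path)
            self.sock = sock

    conn = UnixHTTPConnection('/var/lib/neutron/metadata_proxy')
    conn.request('GET', '/openstack/latest/meta_data.json', headers={
        'X-OVN-Network-ID': '75774369-d1fe-46b7-99fa-32ee72215bc9',
        'X-Forwarded-For': '10.100.0.14',  # instance IP, as haproxy would add it
    })
    print(conn.getresponse().status)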
Mar  1 05:13:28 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:13:28.607 167541 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-75774369-d1fe-46b7-99fa-32ee72215bc9', 'env', 'PROCESS_TAG=haproxy-75774369-d1fe-46b7-99fa-32ee72215bc9', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/75774369-d1fe-46b7-99fa-32ee72215bc9.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
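That command line is how the agent actually starts the proxy: rootwrap escalates privileges, ip netns exec enters the ovnmeta namespace, and haproxy daemonizes against the config just rendered. Reproduced as a subprocess call with the argv verbatim from the log line above; the PROCESS_TAG variable tags the process so the agent can identify it again later:

    import subprocess

    subprocess.run([
        'sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf',
        'ip', 'netns', 'exec', 'ovnmeta-75774369-d1fe-46b7-99fa-32ee72215bc9',
        'env', 'PROCESS_TAG=haproxy-75774369-d1fe-46b7-99fa-32ee72215bc9',
        'haproxy', '-f',
        '/var/lib/neutron/ovn-metadata-proxy/75774369-d1fe-46b7-99fa-32ee72215bc9.conf',
    ], check=True)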
Mar  1 05:13:28 np0005634532 nova_compute[257049]: 2026-03-01 10:13:28.842 257053 DEBUG nova.compute.manager [None req-9c42ab53-1c13-44c8-ab78-d7f030473866 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] [instance: baa5d1fc-2fe6-4353-9321-71ddf8760c24] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Mar  1 05:13:28 np0005634532 nova_compute[257049]: 2026-03-01 10:13:28.843 257053 DEBUG nova.virt.driver [None req-dccfbb97-79ac-418c-ab3a-fd3d119141c0 - - - - - -] Emitting event <LifecycleEvent: 1772360008.8423588, baa5d1fc-2fe6-4353-9321-71ddf8760c24 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Mar  1 05:13:28 np0005634532 nova_compute[257049]: 2026-03-01 10:13:28.844 257053 INFO nova.compute.manager [None req-dccfbb97-79ac-418c-ab3a-fd3d119141c0 - - - - - -] [instance: baa5d1fc-2fe6-4353-9321-71ddf8760c24] VM Started (Lifecycle Event)#033[00m
Mar  1 05:13:28 np0005634532 nova_compute[257049]: 2026-03-01 10:13:28.848 257053 DEBUG nova.virt.libvirt.driver [None req-9c42ab53-1c13-44c8-ab78-d7f030473866 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] [instance: baa5d1fc-2fe6-4353-9321-71ddf8760c24] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Mar  1 05:13:28 np0005634532 nova_compute[257049]: 2026-03-01 10:13:28.850 257053 INFO nova.virt.libvirt.driver [-] [instance: baa5d1fc-2fe6-4353-9321-71ddf8760c24] Instance spawned successfully.#033[00m
Mar  1 05:13:28 np0005634532 nova_compute[257049]: 2026-03-01 10:13:28.851 257053 DEBUG nova.virt.libvirt.driver [None req-9c42ab53-1c13-44c8-ab78-d7f030473866 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] [instance: baa5d1fc-2fe6-4353-9321-71ddf8760c24] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Mar  1 05:13:28 np0005634532 nova_compute[257049]: 2026-03-01 10:13:28.868 257053 DEBUG nova.compute.manager [None req-dccfbb97-79ac-418c-ab3a-fd3d119141c0 - - - - - -] [instance: baa5d1fc-2fe6-4353-9321-71ddf8760c24] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Mar  1 05:13:28 np0005634532 nova_compute[257049]: 2026-03-01 10:13:28.876 257053 DEBUG nova.compute.manager [None req-dccfbb97-79ac-418c-ab3a-fd3d119141c0 - - - - - -] [instance: baa5d1fc-2fe6-4353-9321-71ddf8760c24] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Mar  1 05:13:28 np0005634532 nova_compute[257049]: 2026-03-01 10:13:28.880 257053 DEBUG nova.virt.libvirt.driver [None req-9c42ab53-1c13-44c8-ab78-d7f030473866 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] [instance: baa5d1fc-2fe6-4353-9321-71ddf8760c24] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Mar  1 05:13:28 np0005634532 nova_compute[257049]: 2026-03-01 10:13:28.880 257053 DEBUG nova.virt.libvirt.driver [None req-9c42ab53-1c13-44c8-ab78-d7f030473866 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] [instance: baa5d1fc-2fe6-4353-9321-71ddf8760c24] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Mar  1 05:13:28 np0005634532 nova_compute[257049]: 2026-03-01 10:13:28.881 257053 DEBUG nova.virt.libvirt.driver [None req-9c42ab53-1c13-44c8-ab78-d7f030473866 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] [instance: baa5d1fc-2fe6-4353-9321-71ddf8760c24] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Mar  1 05:13:28 np0005634532 nova_compute[257049]: 2026-03-01 10:13:28.882 257053 DEBUG nova.virt.libvirt.driver [None req-9c42ab53-1c13-44c8-ab78-d7f030473866 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] [instance: baa5d1fc-2fe6-4353-9321-71ddf8760c24] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Mar  1 05:13:28 np0005634532 nova_compute[257049]: 2026-03-01 10:13:28.882 257053 DEBUG nova.virt.libvirt.driver [None req-9c42ab53-1c13-44c8-ab78-d7f030473866 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] [instance: baa5d1fc-2fe6-4353-9321-71ddf8760c24] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Mar  1 05:13:28 np0005634532 nova_compute[257049]: 2026-03-01 10:13:28.883 257053 DEBUG nova.virt.libvirt.driver [None req-9c42ab53-1c13-44c8-ab78-d7f030473866 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] [instance: baa5d1fc-2fe6-4353-9321-71ddf8760c24] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
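The six "Found default for ..." lines record which virtual-hardware buses nova picked for this guest; the driver registers them against the instance's image properties so the chosen buses stay stable even if driver defaults change later. Collected as plain data, values exactly as logged (the dict itself is illustrative, not nova's internal representation):

    # Defaults registered for instance baa5d1fc-..., per the log lines above.
    registered_defaults = {
        'hw_cdrom_bus':     'sata',
        'hw_disk_bus':      'virtio',
        'hw_input_bus':     'usb',
        'hw_pointer_model': 'usbtablet',
        'hw_video_model':   'virtio',
        'hw_vif_model':     'virtio',
    }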
Mar  1 05:13:28 np0005634532 nova_compute[257049]: 2026-03-01 10:13:28.931 257053 INFO nova.compute.manager [None req-dccfbb97-79ac-418c-ab3a-fd3d119141c0 - - - - - -] [instance: baa5d1fc-2fe6-4353-9321-71ddf8760c24] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Mar  1 05:13:28 np0005634532 nova_compute[257049]: 2026-03-01 10:13:28.932 257053 DEBUG nova.virt.driver [None req-dccfbb97-79ac-418c-ab3a-fd3d119141c0 - - - - - -] Emitting event <LifecycleEvent: 1772360008.8424397, baa5d1fc-2fe6-4353-9321-71ddf8760c24 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Mar  1 05:13:28 np0005634532 nova_compute[257049]: 2026-03-01 10:13:28.932 257053 INFO nova.compute.manager [None req-dccfbb97-79ac-418c-ab3a-fd3d119141c0 - - - - - -] [instance: baa5d1fc-2fe6-4353-9321-71ddf8760c24] VM Paused (Lifecycle Event)#033[00m
Mar  1 05:13:28 np0005634532 podman[273981]: 2026-03-01 10:13:28.949910657 +0000 UTC m=+0.038403460 container create 61cadc6a537dd7de2243a15164e1e699117472c49ee1754920dd56ce912cbe1e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a5baae9e21dffd42c66f546b0b62f95c6af9f409d05e01e7483de42d5dd2a382, name=neutron-haproxy-ovnmeta-75774369-d1fe-46b7-99fa-32ee72215bc9, tcib_managed=true, org.label-schema.build-date=20260223, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.43.0)
Mar  1 05:13:28 np0005634532 nova_compute[257049]: 2026-03-01 10:13:28.971 257053 DEBUG nova.compute.manager [None req-dccfbb97-79ac-418c-ab3a-fd3d119141c0 - - - - - -] [instance: baa5d1fc-2fe6-4353-9321-71ddf8760c24] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Mar  1 05:13:28 np0005634532 nova_compute[257049]: 2026-03-01 10:13:28.974 257053 DEBUG nova.virt.driver [None req-dccfbb97-79ac-418c-ab3a-fd3d119141c0 - - - - - -] Emitting event <LifecycleEvent: 1772360008.847304, baa5d1fc-2fe6-4353-9321-71ddf8760c24 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Mar  1 05:13:28 np0005634532 nova_compute[257049]: 2026-03-01 10:13:28.974 257053 INFO nova.compute.manager [None req-dccfbb97-79ac-418c-ab3a-fd3d119141c0 - - - - - -] [instance: baa5d1fc-2fe6-4353-9321-71ddf8760c24] VM Resumed (Lifecycle Event)#033[00m
Mar  1 05:13:28 np0005634532 nova_compute[257049]: 2026-03-01 10:13:28.993 257053 INFO nova.compute.manager [None req-9c42ab53-1c13-44c8-ab78-d7f030473866 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] [instance: baa5d1fc-2fe6-4353-9321-71ddf8760c24] Took 5.12 seconds to spawn the instance on the hypervisor.#033[00m
Mar  1 05:13:28 np0005634532 nova_compute[257049]: 2026-03-01 10:13:28.994 257053 DEBUG nova.compute.manager [None req-9c42ab53-1c13-44c8-ab78-d7f030473866 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] [instance: baa5d1fc-2fe6-4353-9321-71ddf8760c24] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Mar  1 05:13:28 np0005634532 systemd[1]: Started libpod-conmon-61cadc6a537dd7de2243a15164e1e699117472c49ee1754920dd56ce912cbe1e.scope.
Mar  1 05:13:29 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:13:28 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Mar  1 05:13:29 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:13:28 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Mar  1 05:13:29 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:13:28 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Mar  1 05:13:29 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:13:29 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Mar  1 05:13:29 np0005634532 nova_compute[257049]: 2026-03-01 10:13:29.006 257053 DEBUG nova.compute.manager [None req-dccfbb97-79ac-418c-ab3a-fd3d119141c0 - - - - - -] [instance: baa5d1fc-2fe6-4353-9321-71ddf8760c24] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Mar  1 05:13:29 np0005634532 nova_compute[257049]: 2026-03-01 10:13:29.011 257053 DEBUG nova.compute.manager [None req-dccfbb97-79ac-418c-ab3a-fd3d119141c0 - - - - - -] [instance: baa5d1fc-2fe6-4353-9321-71ddf8760c24] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Mar  1 05:13:29 np0005634532 podman[273981]: 2026-03-01 10:13:28.929211555 +0000 UTC m=+0.017704368 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a5baae9e21dffd42c66f546b0b62f95c6af9f409d05e01e7483de42d5dd2a382
Mar  1 05:13:29 np0005634532 systemd[1]: Started libcrun container.
Mar  1 05:13:29 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0fb06eaabb20857ae41682a3f52d1ab143452130cca866560bbfdacc398e4f1/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Mar  1 05:13:29 np0005634532 nova_compute[257049]: 2026-03-01 10:13:29.041 257053 INFO nova.compute.manager [None req-dccfbb97-79ac-418c-ab3a-fd3d119141c0 - - - - - -] [instance: baa5d1fc-2fe6-4353-9321-71ddf8760c24] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Mar  1 05:13:29 np0005634532 podman[273981]: 2026-03-01 10:13:29.0435332 +0000 UTC m=+0.132026013 container init 61cadc6a537dd7de2243a15164e1e699117472c49ee1754920dd56ce912cbe1e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a5baae9e21dffd42c66f546b0b62f95c6af9f409d05e01e7483de42d5dd2a382, name=neutron-haproxy-ovnmeta-75774369-d1fe-46b7-99fa-32ee72215bc9, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Mar  1 05:13:29 np0005634532 podman[273981]: 2026-03-01 10:13:29.047316073 +0000 UTC m=+0.135808876 container start 61cadc6a537dd7de2243a15164e1e699117472c49ee1754920dd56ce912cbe1e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a5baae9e21dffd42c66f546b0b62f95c6af9f409d05e01e7483de42d5dd2a382, name=neutron-haproxy-ovnmeta-75774369-d1fe-46b7-99fa-32ee72215bc9, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Mar  1 05:13:29 np0005634532 neutron-haproxy-ovnmeta-75774369-d1fe-46b7-99fa-32ee72215bc9[273996]: [NOTICE]   (274000) : New worker (274002) forked
Mar  1 05:13:29 np0005634532 neutron-haproxy-ovnmeta-75774369-d1fe-46b7-99fa-32ee72215bc9[273996]: [NOTICE]   (274000) : Loading success.
Mar  1 05:13:29 np0005634532 nova_compute[257049]: 2026-03-01 10:13:29.069 257053 INFO nova.compute.manager [None req-9c42ab53-1c13-44c8-ab78-d7f030473866 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] [instance: baa5d1fc-2fe6-4353-9321-71ddf8760c24] Took 6.12 seconds to build instance.#033[00m
Mar  1 05:13:29 np0005634532 nova_compute[257049]: 2026-03-01 10:13:29.086 257053 DEBUG oslo_concurrency.lockutils [None req-9c42ab53-1c13-44c8-ab78-d7f030473866 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Lock "baa5d1fc-2fe6-4353-9321-71ddf8760c24" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 6.199s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Mar  1 05:13:29 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v930: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 1.8 MiB/s wr, 37 op/s
Mar  1 05:13:29 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:13:29 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:13:29 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:13:29.166 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:13:29 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:13:29 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:13:29 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:13:29.973 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
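The recurring anonymous "HEAD / HTTP/1.0" pairs from 192.168.122.100 and .102, all answered 200 in well under a millisecond, look like external load-balancer health probes against radosgw's beast frontend. An equivalent probe in Python; the host and port here are assumptions, since the log never shows which endpoint beast is bound to:

    import http.client

    # Hypothetical endpoint: the log shows the probing clients
    # (192.168.122.100/.102) but not radosgw's listen address or port.
    conn = http.client.HTTPConnection('192.168.122.102', 8080, timeout=2)
    conn.request('HEAD', '/')
    print(conn.getresponse().status)  # 200, as logged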
Mar  1 05:13:30 np0005634532 nova_compute[257049]: 2026-03-01 10:13:30.701 257053 DEBUG nova.compute.manager [req-2b3253ee-d6a3-42dd-9df8-dfab9351b046 req-99dbfbf0-9f4a-4e12-8e72-52e818370495 4172bfcb2ac44ccc905f3929f41a6ec0 b5baddd982154c5d8ca5431b102a7ecd - - default default] [instance: baa5d1fc-2fe6-4353-9321-71ddf8760c24] Received event network-vif-plugged-79c2bbef-b2db-45bd-91c7-0e64bcb15301 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Mar  1 05:13:30 np0005634532 nova_compute[257049]: 2026-03-01 10:13:30.702 257053 DEBUG oslo_concurrency.lockutils [req-2b3253ee-d6a3-42dd-9df8-dfab9351b046 req-99dbfbf0-9f4a-4e12-8e72-52e818370495 4172bfcb2ac44ccc905f3929f41a6ec0 b5baddd982154c5d8ca5431b102a7ecd - - default default] Acquiring lock "baa5d1fc-2fe6-4353-9321-71ddf8760c24-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Mar  1 05:13:30 np0005634532 nova_compute[257049]: 2026-03-01 10:13:30.702 257053 DEBUG oslo_concurrency.lockutils [req-2b3253ee-d6a3-42dd-9df8-dfab9351b046 req-99dbfbf0-9f4a-4e12-8e72-52e818370495 4172bfcb2ac44ccc905f3929f41a6ec0 b5baddd982154c5d8ca5431b102a7ecd - - default default] Lock "baa5d1fc-2fe6-4353-9321-71ddf8760c24-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Mar  1 05:13:30 np0005634532 nova_compute[257049]: 2026-03-01 10:13:30.702 257053 DEBUG oslo_concurrency.lockutils [req-2b3253ee-d6a3-42dd-9df8-dfab9351b046 req-99dbfbf0-9f4a-4e12-8e72-52e818370495 4172bfcb2ac44ccc905f3929f41a6ec0 b5baddd982154c5d8ca5431b102a7ecd - - default default] Lock "baa5d1fc-2fe6-4353-9321-71ddf8760c24-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Mar  1 05:13:30 np0005634532 nova_compute[257049]: 2026-03-01 10:13:30.702 257053 DEBUG nova.compute.manager [req-2b3253ee-d6a3-42dd-9df8-dfab9351b046 req-99dbfbf0-9f4a-4e12-8e72-52e818370495 4172bfcb2ac44ccc905f3929f41a6ec0 b5baddd982154c5d8ca5431b102a7ecd - - default default] [instance: baa5d1fc-2fe6-4353-9321-71ddf8760c24] No waiting events found dispatching network-vif-plugged-79c2bbef-b2db-45bd-91c7-0e64bcb15301 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Mar  1 05:13:30 np0005634532 nova_compute[257049]: 2026-03-01 10:13:30.703 257053 WARNING nova.compute.manager [req-2b3253ee-d6a3-42dd-9df8-dfab9351b046 req-99dbfbf0-9f4a-4e12-8e72-52e818370495 4172bfcb2ac44ccc905f3929f41a6ec0 b5baddd982154c5d8ca5431b102a7ecd - - default default] [instance: baa5d1fc-2fe6-4353-9321-71ddf8760c24] Received unexpected event network-vif-plugged-79c2bbef-b2db-45bd-91c7-0e64bcb15301 for instance with vm_state active and task_state None.#033[00m
Mar  1 05:13:31 np0005634532 nova_compute[257049]: 2026-03-01 10:13:31.011 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:13:31 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v931: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 1.8 MiB/s wr, 37 op/s
Mar  1 05:13:31 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:13:31 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:13:31 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:13:31.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:13:31 np0005634532 ovn_controller[157082]: 2026-03-01T10:13:31Z|00061|binding|INFO|Releasing lport 959c52a4-ced6-4b50-a3b6-13250d5b46cc from this chassis (sb_readonly=0)
Mar  1 05:13:31 np0005634532 NetworkManager[49996]: <info>  [1772360011.6939] manager: (patch-provnet-010dfd9c-68bc-4d3c-a559-2f91b6433031-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/46)
Mar  1 05:13:31 np0005634532 NetworkManager[49996]: <info>  [1772360011.6956] manager: (patch-br-int-to-provnet-010dfd9c-68bc-4d3c-a559-2f91b6433031): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/47)
Mar  1 05:13:31 np0005634532 nova_compute[257049]: 2026-03-01 10:13:31.705 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:13:31 np0005634532 ovn_controller[157082]: 2026-03-01T10:13:31Z|00062|binding|INFO|Releasing lport 959c52a4-ced6-4b50-a3b6-13250d5b46cc from this chassis (sb_readonly=0)
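ovn-controller has now logged several releases of lport 959c52a4-... while the tap port is moved between bridges; each release is this chassis dropping its southbound Port_Binding claim. A sketch of inspecting that binding with ovn-sbctl (standard OVN tooling; driving it from subprocess is just for illustration):

    import subprocess

    out = subprocess.run(
        ['ovn-sbctl', 'find', 'Port_Binding',
         'logical_port=959c52a4-ced6-4b50-a3b6-13250d5b46cc'],
        check=True, capture_output=True, text=True).stdout
    print(out)  # shows which chassis, if any, currently claims the lport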
Mar  1 05:13:31 np0005634532 nova_compute[257049]: 2026-03-01 10:13:31.710 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:13:31 np0005634532 nova_compute[257049]: 2026-03-01 10:13:31.719 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:13:31 np0005634532 nova_compute[257049]: 2026-03-01 10:13:31.971 257053 DEBUG nova.compute.manager [req-22bc6acb-eb86-47d1-ace0-63fc06c974ad req-c3afcfb1-d11c-4dea-a68f-fae7832c6162 4172bfcb2ac44ccc905f3929f41a6ec0 b5baddd982154c5d8ca5431b102a7ecd - - default default] [instance: baa5d1fc-2fe6-4353-9321-71ddf8760c24] Received event network-changed-79c2bbef-b2db-45bd-91c7-0e64bcb15301 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Mar  1 05:13:31 np0005634532 nova_compute[257049]: 2026-03-01 10:13:31.971 257053 DEBUG nova.compute.manager [req-22bc6acb-eb86-47d1-ace0-63fc06c974ad req-c3afcfb1-d11c-4dea-a68f-fae7832c6162 4172bfcb2ac44ccc905f3929f41a6ec0 b5baddd982154c5d8ca5431b102a7ecd - - default default] [instance: baa5d1fc-2fe6-4353-9321-71ddf8760c24] Refreshing instance network info cache due to event network-changed-79c2bbef-b2db-45bd-91c7-0e64bcb15301. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Mar  1 05:13:31 np0005634532 nova_compute[257049]: 2026-03-01 10:13:31.971 257053 DEBUG oslo_concurrency.lockutils [req-22bc6acb-eb86-47d1-ace0-63fc06c974ad req-c3afcfb1-d11c-4dea-a68f-fae7832c6162 4172bfcb2ac44ccc905f3929f41a6ec0 b5baddd982154c5d8ca5431b102a7ecd - - default default] Acquiring lock "refresh_cache-baa5d1fc-2fe6-4353-9321-71ddf8760c24" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Mar  1 05:13:31 np0005634532 nova_compute[257049]: 2026-03-01 10:13:31.971 257053 DEBUG oslo_concurrency.lockutils [req-22bc6acb-eb86-47d1-ace0-63fc06c974ad req-c3afcfb1-d11c-4dea-a68f-fae7832c6162 4172bfcb2ac44ccc905f3929f41a6ec0 b5baddd982154c5d8ca5431b102a7ecd - - default default] Acquired lock "refresh_cache-baa5d1fc-2fe6-4353-9321-71ddf8760c24" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Mar  1 05:13:31 np0005634532 nova_compute[257049]: 2026-03-01 10:13:31.971 257053 DEBUG nova.network.neutron [req-22bc6acb-eb86-47d1-ace0-63fc06c974ad req-c3afcfb1-d11c-4dea-a68f-fae7832c6162 4172bfcb2ac44ccc905f3929f41a6ec0 b5baddd982154c5d8ca5431b102a7ecd - - default default] [instance: baa5d1fc-2fe6-4353-9321-71ddf8760c24] Refreshing network info cache for port 79c2bbef-b2db-45bd-91c7-0e64bcb15301 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Mar  1 05:13:31 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:13:31 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:13:31 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:13:31.975 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:13:31 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:13:32 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Mar  1 05:13:32 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 05:13:32 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Mar  1 05:13:32 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 05:13:32 np0005634532 nova_compute[257049]: 2026-03-01 10:13:32.529 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:13:32 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Mar  1 05:13:32 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Mar  1 05:13:33 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v932: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 1.8 MiB/s wr, 37 op/s
Mar  1 05:13:33 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:13:33 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000024s ======
Mar  1 05:13:33 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:13:33.172 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Mar  1 05:13:33 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 05:13:33 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 05:13:33 np0005634532 nova_compute[257049]: 2026-03-01 10:13:33.476 257053 DEBUG nova.network.neutron [req-22bc6acb-eb86-47d1-ace0-63fc06c974ad req-c3afcfb1-d11c-4dea-a68f-fae7832c6162 4172bfcb2ac44ccc905f3929f41a6ec0 b5baddd982154c5d8ca5431b102a7ecd - - default default] [instance: baa5d1fc-2fe6-4353-9321-71ddf8760c24] Updated VIF entry in instance network info cache for port 79c2bbef-b2db-45bd-91c7-0e64bcb15301. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Mar  1 05:13:33 np0005634532 nova_compute[257049]: 2026-03-01 10:13:33.476 257053 DEBUG nova.network.neutron [req-22bc6acb-eb86-47d1-ace0-63fc06c974ad req-c3afcfb1-d11c-4dea-a68f-fae7832c6162 4172bfcb2ac44ccc905f3929f41a6ec0 b5baddd982154c5d8ca5431b102a7ecd - - default default] [instance: baa5d1fc-2fe6-4353-9321-71ddf8760c24] Updating instance_info_cache with network_info: [{"id": "79c2bbef-b2db-45bd-91c7-0e64bcb15301", "address": "fa:16:3e:cd:0b:b9", "network": {"id": "75774369-d1fe-46b7-99fa-32ee72215bc9", "bridge": "br-int", "label": "tempest-network-smoke--723889176", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.178", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aa1916e2334f470ea8eeda213ef84cc5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap79c2bbef-b2", "ovs_interfaceid": "79c2bbef-b2db-45bd-91c7-0e64bcb15301", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Mar  1 05:13:33 np0005634532 nova_compute[257049]: 2026-03-01 10:13:33.496 257053 DEBUG oslo_concurrency.lockutils [req-22bc6acb-eb86-47d1-ace0-63fc06c974ad req-c3afcfb1-d11c-4dea-a68f-fae7832c6162 4172bfcb2ac44ccc905f3929f41a6ec0 b5baddd982154c5d8ca5431b102a7ecd - - default default] Releasing lock "refresh_cache-baa5d1fc-2fe6-4353-9321-71ddf8760c24" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
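The network_info blob cached above carries the full Neutron view of port 79c2bbef-...: fixed IP 10.100.0.14 on 10.100.0.0/28 with floating IP 192.168.122.178 attached, bound by the ovn driver on br-int with MTU 1442. A small sketch of walking that structure, with the literal trimmed down to the addressing fields shown in the log:

    network_info = [{
        'id': '79c2bbef-b2db-45bd-91c7-0e64bcb15301',
        'network': {'subnets': [{'ips': [{
            'address': '10.100.0.14',
            'floating_ips': [{'address': '192.168.122.178'}],
        }]}]},
    }]  # trimmed from the instance_info_cache update logged above

    for vif in network_info:
        for subnet in vif['network']['subnets']:
            for ip in subnet['ips']:
                floats = [f['address'] for f in ip.get('floating_ips', [])]
                print(vif['id'], ip['address'], floats)
    # -> 79c2bbef-... 10.100.0.14 ['192.168.122.178']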
Mar  1 05:13:33 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:13:33 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:13:33 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:13:33.978 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:13:34 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:13:33 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Mar  1 05:13:34 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:13:33 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Mar  1 05:13:34 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:13:33 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Mar  1 05:13:34 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:13:34 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Mar  1 05:13:34 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Mar  1 05:13:34 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 05:13:34 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Mar  1 05:13:34 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 05:13:34 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Mar  1 05:13:34 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 05:13:34 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Mar  1 05:13:34 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 05:13:34 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Mar  1 05:13:34 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Mar  1 05:13:34 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Mar  1 05:13:34 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Mar  1 05:13:34 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Mar  1 05:13:34 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 05:13:34 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Mar  1 05:13:34 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 05:13:34 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Mar  1 05:13:34 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Mar  1 05:13:34 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Mar  1 05:13:34 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Mar  1 05:13:34 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Mar  1 05:13:34 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
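The run of mon_command dispatches above is the cephadm mgr module checkpointing its state (per-host device inventories, the OSD removal queue, the nfs.cephfs service spec) into the mon config-key store. Those keys can be read back with the stock ceph CLI; a sketch, assuming admin credentials on this node:

    import json
    import subprocess

    def config_key_get(key):
        # `ceph config-key get` prints the stored value for keys
        # like the ones dispatched above.
        return subprocess.run(['ceph', 'config-key', 'get', key],
                              check=True, capture_output=True,
                              text=True).stdout

    spec = config_key_get('mgr/cephadm/spec.nfs.cephfs')
    print(json.loads(spec))  # cephadm stores JSON blobs under these keys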
Mar  1 05:13:35 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v933: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 103 op/s
Mar  1 05:13:35 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:13:35 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:13:35 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:13:35.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:13:35 np0005634532 podman[274260]: 2026-03-01 10:13:35.254162714 +0000 UTC m=+0.045537146 container create 3626b0c3e6f70f96c4a34a91a900481bfc43e61d367836ea807a5e0b4fe201a9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_moser, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Mar  1 05:13:35 np0005634532 systemd[1]: Started libpod-conmon-3626b0c3e6f70f96c4a34a91a900481bfc43e61d367836ea807a5e0b4fe201a9.scope.
Mar  1 05:13:35 np0005634532 systemd[1]: Started libcrun container.
Mar  1 05:13:35 np0005634532 podman[274260]: 2026-03-01 10:13:35.238952528 +0000 UTC m=+0.030326970 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Mar  1 05:13:35 np0005634532 podman[274260]: 2026-03-01 10:13:35.349992831 +0000 UTC m=+0.141367343 container init 3626b0c3e6f70f96c4a34a91a900481bfc43e61d367836ea807a5e0b4fe201a9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_moser, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, ceph=True)
Mar  1 05:13:35 np0005634532 podman[274260]: 2026-03-01 10:13:35.356791769 +0000 UTC m=+0.148166201 container start 3626b0c3e6f70f96c4a34a91a900481bfc43e61d367836ea807a5e0b4fe201a9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_moser, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Mar  1 05:13:35 np0005634532 naughty_moser[274276]: 167 167
Mar  1 05:13:35 np0005634532 systemd[1]: libpod-3626b0c3e6f70f96c4a34a91a900481bfc43e61d367836ea807a5e0b4fe201a9.scope: Deactivated successfully.
Mar  1 05:13:35 np0005634532 podman[274260]: 2026-03-01 10:13:35.364627433 +0000 UTC m=+0.156001935 container attach 3626b0c3e6f70f96c4a34a91a900481bfc43e61d367836ea807a5e0b4fe201a9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_moser, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Mar  1 05:13:35 np0005634532 podman[274260]: 2026-03-01 10:13:35.3653128 +0000 UTC m=+0.156687212 container died 3626b0c3e6f70f96c4a34a91a900481bfc43e61d367836ea807a5e0b4fe201a9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_moser, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True)
Mar  1 05:13:35 np0005634532 systemd[1]: var-lib-containers-storage-overlay-cb9b77c75829c7c9fb6ae243ee8f6510366a535e84336b86408fc3fa40d75de9-merged.mount: Deactivated successfully.
Mar  1 05:13:35 np0005634532 podman[274260]: 2026-03-01 10:13:35.421028896 +0000 UTC m=+0.212403318 container remove 3626b0c3e6f70f96c4a34a91a900481bfc43e61d367836ea807a5e0b4fe201a9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=naughty_moser, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Mar  1 05:13:35 np0005634532 systemd[1]: libpod-conmon-3626b0c3e6f70f96c4a34a91a900481bfc43e61d367836ea807a5e0b4fe201a9.scope: Deactivated successfully.
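The naughty_moser lifecycle above (create, init, start, attach, died, remove, all within a fifth of a second) is the journal signature of a one-shot `podman run --rm`; its only output was "167 167", which matches the ceph user's uid/gid in these images. A sketch of the same pattern; the stat probe is a guess inferred from that output, not taken from the log:

    import subprocess

    IMAGE = ('quay.io/ceph/ceph@sha256:'
             '7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec')

    # One-shot probe printing the owner uid/gid of /var/lib/ceph inside
    # the image ("167 167" above). The exact command is an assumption;
    # the run/--rm lifecycle is what the podman journal lines show.
    out = subprocess.run(
        ['podman', 'run', '--rm', IMAGE, 'stat', '-c', '%u %g', '/var/lib/ceph'],
        check=True, capture_output=True, text=True).stdout
    print(out.strip())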
Mar  1 05:13:35 np0005634532 podman[274299]: 2026-03-01 10:13:35.596567203 +0000 UTC m=+0.036221306 container create a19767f990154ac137b70638abf1016132da8673ef65f85e45e262fe083a4b47 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_gauss, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Mar  1 05:13:35 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 05:13:35 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 05:13:35 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 05:13:35 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 05:13:35 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Mar  1 05:13:35 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 05:13:35 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 05:13:35 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Mar  1 05:13:35 np0005634532 systemd[1]: Started libpod-conmon-a19767f990154ac137b70638abf1016132da8673ef65f85e45e262fe083a4b47.scope.
Mar  1 05:13:35 np0005634532 systemd[1]: Started libcrun container.
Mar  1 05:13:35 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a973e57a94ffd12439412ab50f4305a6f985c65aa8c7983a8b9adeae0b479b23/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Mar  1 05:13:35 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a973e57a94ffd12439412ab50f4305a6f985c65aa8c7983a8b9adeae0b479b23/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Mar  1 05:13:35 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a973e57a94ffd12439412ab50f4305a6f985c65aa8c7983a8b9adeae0b479b23/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Mar  1 05:13:35 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a973e57a94ffd12439412ab50f4305a6f985c65aa8c7983a8b9adeae0b479b23/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Mar  1 05:13:35 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a973e57a94ffd12439412ab50f4305a6f985c65aa8c7983a8b9adeae0b479b23/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Mar  1 05:13:35 np0005634532 podman[274299]: 2026-03-01 10:13:35.582841294 +0000 UTC m=+0.022495417 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Mar  1 05:13:35 np0005634532 podman[274299]: 2026-03-01 10:13:35.705415702 +0000 UTC m=+0.145069805 container init a19767f990154ac137b70638abf1016132da8673ef65f85e45e262fe083a4b47 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_gauss, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Mar  1 05:13:35 np0005634532 podman[274299]: 2026-03-01 10:13:35.712341213 +0000 UTC m=+0.151995316 container start a19767f990154ac137b70638abf1016132da8673ef65f85e45e262fe083a4b47 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_gauss, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Mar  1 05:13:35 np0005634532 podman[274299]: 2026-03-01 10:13:35.717401968 +0000 UTC m=+0.157056091 container attach a19767f990154ac137b70638abf1016132da8673ef65f85e45e262fe083a4b47 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_gauss, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Mar  1 05:13:35 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:13:35 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:13:35 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:13:35.980 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:13:36 np0005634532 nova_compute[257049]: 2026-03-01 10:13:36.013 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:13:36 np0005634532 goofy_gauss[274315]: --> passed data devices: 0 physical, 1 LVM
Mar  1 05:13:36 np0005634532 goofy_gauss[274315]: --> All data devices are unavailable
Mar  1 05:13:36 np0005634532 systemd[1]: libpod-a19767f990154ac137b70638abf1016132da8673ef65f85e45e262fe083a4b47.scope: Deactivated successfully.
Mar  1 05:13:36 np0005634532 podman[274299]: 2026-03-01 10:13:36.07492767 +0000 UTC m=+0.514581773 container died a19767f990154ac137b70638abf1016132da8673ef65f85e45e262fe083a4b47 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_gauss, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Mar  1 05:13:36 np0005634532 systemd[1]: var-lib-containers-storage-overlay-a973e57a94ffd12439412ab50f4305a6f985c65aa8c7983a8b9adeae0b479b23-merged.mount: Deactivated successfully.
Mar  1 05:13:36 np0005634532 podman[274299]: 2026-03-01 10:13:36.114760974 +0000 UTC m=+0.554415077 container remove a19767f990154ac137b70638abf1016132da8673ef65f85e45e262fe083a4b47 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_gauss, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Mar  1 05:13:36 np0005634532 systemd[1]: libpod-conmon-a19767f990154ac137b70638abf1016132da8673ef65f85e45e262fe083a4b47.scope: Deactivated successfully.
Mar  1 05:13:36 np0005634532 podman[274437]: 2026-03-01 10:13:36.595457339 +0000 UTC m=+0.035651171 container create fe673f84da33569df8716dab8725d1bf34e29a35e8d67bc833b2937973bc89e7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_yalow, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Mar  1 05:13:36 np0005634532 systemd[1]: Started libpod-conmon-fe673f84da33569df8716dab8725d1bf34e29a35e8d67bc833b2937973bc89e7.scope.
Mar  1 05:13:36 np0005634532 systemd[1]: Started libcrun container.
Mar  1 05:13:36 np0005634532 podman[274437]: 2026-03-01 10:13:36.660404484 +0000 UTC m=+0.100598336 container init fe673f84da33569df8716dab8725d1bf34e29a35e8d67bc833b2937973bc89e7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_yalow, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Mar  1 05:13:36 np0005634532 podman[274437]: 2026-03-01 10:13:36.669320934 +0000 UTC m=+0.109514806 container start fe673f84da33569df8716dab8725d1bf34e29a35e8d67bc833b2937973bc89e7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_yalow, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Mar  1 05:13:36 np0005634532 admiring_yalow[274454]: 167 167
Mar  1 05:13:36 np0005634532 systemd[1]: libpod-fe673f84da33569df8716dab8725d1bf34e29a35e8d67bc833b2937973bc89e7.scope: Deactivated successfully.
Mar  1 05:13:36 np0005634532 podman[274437]: 2026-03-01 10:13:36.672973934 +0000 UTC m=+0.113167786 container attach fe673f84da33569df8716dab8725d1bf34e29a35e8d67bc833b2937973bc89e7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_yalow, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True)
Mar  1 05:13:36 np0005634532 conmon[274454]: conmon fe673f84da33569df871 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-fe673f84da33569df8716dab8725d1bf34e29a35e8d67bc833b2937973bc89e7.scope/container/memory.events
Mar  1 05:13:36 np0005634532 podman[274437]: 2026-03-01 10:13:36.577978957 +0000 UTC m=+0.018172809 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Mar  1 05:13:36 np0005634532 podman[274437]: 2026-03-01 10:13:36.674133993 +0000 UTC m=+0.114327865 container died fe673f84da33569df8716dab8725d1bf34e29a35e8d67bc833b2937973bc89e7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_yalow, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Mar  1 05:13:36 np0005634532 systemd[1]: var-lib-containers-storage-overlay-49416b125a67014e9d12ee5fbdc283ca975dc808da0ec79434540db77de02090-merged.mount: Deactivated successfully.
Mar  1 05:13:36 np0005634532 podman[274437]: 2026-03-01 10:13:36.724157099 +0000 UTC m=+0.164350971 container remove fe673f84da33569df8716dab8725d1bf34e29a35e8d67bc833b2937973bc89e7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=admiring_yalow, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325)
Mar  1 05:13:36 np0005634532 systemd[1]: libpod-conmon-fe673f84da33569df8716dab8725d1bf34e29a35e8d67bc833b2937973bc89e7.scope: Deactivated successfully.
Mar  1 05:13:36 np0005634532 podman[274477]: 2026-03-01 10:13:36.896982657 +0000 UTC m=+0.046359075 container create d9d3f697bf40277854a0ed2b56ad1d08818123053cac9876dbb8e3b9cc5c1a28 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_varahamihira, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Mar  1 05:13:36 np0005634532 systemd[1]: Started libpod-conmon-d9d3f697bf40277854a0ed2b56ad1d08818123053cac9876dbb8e3b9cc5c1a28.scope.
Mar  1 05:13:36 np0005634532 systemd[1]: Started libcrun container.
Mar  1 05:13:36 np0005634532 podman[274477]: 2026-03-01 10:13:36.875740452 +0000 UTC m=+0.025116900 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Mar  1 05:13:36 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c4f211f1ea9104f1954dd06dfbb465f9deaa457a27875d12961d272b8f8d7aa3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Mar  1 05:13:36 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c4f211f1ea9104f1954dd06dfbb465f9deaa457a27875d12961d272b8f8d7aa3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Mar  1 05:13:36 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c4f211f1ea9104f1954dd06dfbb465f9deaa457a27875d12961d272b8f8d7aa3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Mar  1 05:13:36 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c4f211f1ea9104f1954dd06dfbb465f9deaa457a27875d12961d272b8f8d7aa3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Mar  1 05:13:37 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:13:37 np0005634532 podman[274477]: 2026-03-01 10:13:37.01732686 +0000 UTC m=+0.166703308 container init d9d3f697bf40277854a0ed2b56ad1d08818123053cac9876dbb8e3b9cc5c1a28 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_varahamihira, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Mar  1 05:13:37 np0005634532 podman[274477]: 2026-03-01 10:13:37.024458556 +0000 UTC m=+0.173834994 container start d9d3f697bf40277854a0ed2b56ad1d08818123053cac9876dbb8e3b9cc5c1a28 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_varahamihira, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Mar  1 05:13:37 np0005634532 podman[274477]: 2026-03-01 10:13:37.028856975 +0000 UTC m=+0.178233403 container attach d9d3f697bf40277854a0ed2b56ad1d08818123053cac9876dbb8e3b9cc5c1a28 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_varahamihira, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Mar  1 05:13:37 np0005634532 ceph-mgr[76134]: [prometheus INFO cherrypy.access.140604152982112] ::ffff:192.168.122.100 - - [01/Mar/2026:10:13:37] "GET /metrics HTTP/1.1" 200 48478 "" "Prometheus/2.51.0"
Mar  1 05:13:37 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: ::ffff:192.168.122.100 - - [01/Mar/2026:10:13:37] "GET /metrics HTTP/1.1" 200 48478 "" "Prometheus/2.51.0"
Mar  1 05:13:37 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v934: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 102 op/s
Mar  1 05:13:37 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:13:37 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:13:37 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:13:37.178 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:13:37 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:13:37.254Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Mar  1 05:13:37 np0005634532 sweet_varahamihira[274493]: {
Mar  1 05:13:37 np0005634532 sweet_varahamihira[274493]:    "0": [
Mar  1 05:13:37 np0005634532 sweet_varahamihira[274493]:        {
Mar  1 05:13:37 np0005634532 sweet_varahamihira[274493]:            "devices": [
Mar  1 05:13:37 np0005634532 sweet_varahamihira[274493]:                "/dev/loop3"
Mar  1 05:13:37 np0005634532 sweet_varahamihira[274493]:            ],
Mar  1 05:13:37 np0005634532 sweet_varahamihira[274493]:            "lv_name": "ceph_lv0",
Mar  1 05:13:37 np0005634532 sweet_varahamihira[274493]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Mar  1 05:13:37 np0005634532 sweet_varahamihira[274493]:            "lv_size": "21470642176",
Mar  1 05:13:37 np0005634532 sweet_varahamihira[274493]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=0RN1y3-WDwf-vmRn-3Uec-9cdU-WGcX-8Z6LEG,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=437b1e74-f995-5d64-af1d-257ce01d77ab,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=e5da778e-73b7-4ea1-8a91-750fe3f6aa68,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Mar  1 05:13:37 np0005634532 sweet_varahamihira[274493]:            "lv_uuid": "0RN1y3-WDwf-vmRn-3Uec-9cdU-WGcX-8Z6LEG",
Mar  1 05:13:37 np0005634532 sweet_varahamihira[274493]:            "name": "ceph_lv0",
Mar  1 05:13:37 np0005634532 sweet_varahamihira[274493]:            "path": "/dev/ceph_vg0/ceph_lv0",
Mar  1 05:13:37 np0005634532 sweet_varahamihira[274493]:            "tags": {
Mar  1 05:13:37 np0005634532 sweet_varahamihira[274493]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Mar  1 05:13:37 np0005634532 sweet_varahamihira[274493]:                "ceph.block_uuid": "0RN1y3-WDwf-vmRn-3Uec-9cdU-WGcX-8Z6LEG",
Mar  1 05:13:37 np0005634532 sweet_varahamihira[274493]:                "ceph.cephx_lockbox_secret": "",
Mar  1 05:13:37 np0005634532 sweet_varahamihira[274493]:                "ceph.cluster_fsid": "437b1e74-f995-5d64-af1d-257ce01d77ab",
Mar  1 05:13:37 np0005634532 sweet_varahamihira[274493]:                "ceph.cluster_name": "ceph",
Mar  1 05:13:37 np0005634532 sweet_varahamihira[274493]:                "ceph.crush_device_class": "",
Mar  1 05:13:37 np0005634532 sweet_varahamihira[274493]:                "ceph.encrypted": "0",
Mar  1 05:13:37 np0005634532 sweet_varahamihira[274493]:                "ceph.osd_fsid": "e5da778e-73b7-4ea1-8a91-750fe3f6aa68",
Mar  1 05:13:37 np0005634532 sweet_varahamihira[274493]:                "ceph.osd_id": "0",
Mar  1 05:13:37 np0005634532 sweet_varahamihira[274493]:                "ceph.osdspec_affinity": "default_drive_group",
Mar  1 05:13:37 np0005634532 sweet_varahamihira[274493]:                "ceph.type": "block",
Mar  1 05:13:37 np0005634532 sweet_varahamihira[274493]:                "ceph.vdo": "0",
Mar  1 05:13:37 np0005634532 sweet_varahamihira[274493]:                "ceph.with_tpm": "0"
Mar  1 05:13:37 np0005634532 sweet_varahamihira[274493]:            },
Mar  1 05:13:37 np0005634532 sweet_varahamihira[274493]:            "type": "block",
Mar  1 05:13:37 np0005634532 sweet_varahamihira[274493]:            "vg_name": "ceph_vg0"
Mar  1 05:13:37 np0005634532 sweet_varahamihira[274493]:        }
Mar  1 05:13:37 np0005634532 sweet_varahamihira[274493]:    ]
Mar  1 05:13:37 np0005634532 sweet_varahamihira[274493]: }
Mar  1 05:13:37 np0005634532 systemd[1]: libpod-d9d3f697bf40277854a0ed2b56ad1d08818123053cac9876dbb8e3b9cc5c1a28.scope: Deactivated successfully.
Mar  1 05:13:37 np0005634532 podman[274477]: 2026-03-01 10:13:37.33791066 +0000 UTC m=+0.487287088 container died d9d3f697bf40277854a0ed2b56ad1d08818123053cac9876dbb8e3b9cc5c1a28 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_varahamihira, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Mar  1 05:13:37 np0005634532 systemd[1]: var-lib-containers-storage-overlay-c4f211f1ea9104f1954dd06dfbb465f9deaa457a27875d12961d272b8f8d7aa3-merged.mount: Deactivated successfully.
Mar  1 05:13:37 np0005634532 podman[274477]: 2026-03-01 10:13:37.381422225 +0000 UTC m=+0.530798643 container remove d9d3f697bf40277854a0ed2b56ad1d08818123053cac9876dbb8e3b9cc5c1a28 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_varahamihira, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Mar  1 05:13:37 np0005634532 systemd[1]: libpod-conmon-d9d3f697bf40277854a0ed2b56ad1d08818123053cac9876dbb8e3b9cc5c1a28.scope: Deactivated successfully.
Mar  1 05:13:37 np0005634532 nova_compute[257049]: 2026-03-01 10:13:37.533 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:13:37 np0005634532 podman[274632]: 2026-03-01 10:13:37.975037929 +0000 UTC m=+0.035877697 container create c24bd314125908f9851232af0c30b80d9c3e5059f80298ff74b61f8d3b8f6496 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_diffie, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Mar  1 05:13:37 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:13:37 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:13:37 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:13:37.982 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:13:38 np0005634532 systemd[1]: Started libpod-conmon-c24bd314125908f9851232af0c30b80d9c3e5059f80298ff74b61f8d3b8f6496.scope.
Mar  1 05:13:38 np0005634532 systemd[1]: Started libcrun container.
Mar  1 05:13:38 np0005634532 podman[274632]: 2026-03-01 10:13:38.046555586 +0000 UTC m=+0.107395374 container init c24bd314125908f9851232af0c30b80d9c3e5059f80298ff74b61f8d3b8f6496 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_diffie, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Mar  1 05:13:38 np0005634532 podman[274632]: 2026-03-01 10:13:38.051753324 +0000 UTC m=+0.112593092 container start c24bd314125908f9851232af0c30b80d9c3e5059f80298ff74b61f8d3b8f6496 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_diffie, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325)
Mar  1 05:13:38 np0005634532 podman[274632]: 2026-03-01 10:13:37.960029888 +0000 UTC m=+0.020869676 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Mar  1 05:13:38 np0005634532 podman[274632]: 2026-03-01 10:13:38.054478262 +0000 UTC m=+0.115318050 container attach c24bd314125908f9851232af0c30b80d9c3e5059f80298ff74b61f8d3b8f6496 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_diffie, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Mar  1 05:13:38 np0005634532 reverent_diffie[274649]: 167 167
Mar  1 05:13:38 np0005634532 systemd[1]: libpod-c24bd314125908f9851232af0c30b80d9c3e5059f80298ff74b61f8d3b8f6496.scope: Deactivated successfully.
Mar  1 05:13:38 np0005634532 conmon[274649]: conmon c24bd314125908f98512 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c24bd314125908f9851232af0c30b80d9c3e5059f80298ff74b61f8d3b8f6496.scope/container/memory.events
Mar  1 05:13:38 np0005634532 podman[274632]: 2026-03-01 10:13:38.056534422 +0000 UTC m=+0.117374180 container died c24bd314125908f9851232af0c30b80d9c3e5059f80298ff74b61f8d3b8f6496 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_diffie, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Mar  1 05:13:38 np0005634532 systemd[1]: var-lib-containers-storage-overlay-fc333d26807bab92349a96ce30b8c6511069a4876a48fdf2d076b6496ccbe3b5-merged.mount: Deactivated successfully.
Mar  1 05:13:38 np0005634532 podman[274632]: 2026-03-01 10:13:38.08883369 +0000 UTC m=+0.149673458 container remove c24bd314125908f9851232af0c30b80d9c3e5059f80298ff74b61f8d3b8f6496 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=reverent_diffie, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Mar  1 05:13:38 np0005634532 systemd[1]: libpod-conmon-c24bd314125908f9851232af0c30b80d9c3e5059f80298ff74b61f8d3b8f6496.scope: Deactivated successfully.
Mar  1 05:13:38 np0005634532 podman[274673]: 2026-03-01 10:13:38.222208265 +0000 UTC m=+0.042220314 container create 5b3697a468c7512205465e48066ede0f4a5b694392a9bd00be67cb8322b43bc3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_panini, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Mar  1 05:13:38 np0005634532 systemd[1]: Started libpod-conmon-5b3697a468c7512205465e48066ede0f4a5b694392a9bd00be67cb8322b43bc3.scope.
Mar  1 05:13:38 np0005634532 systemd[1]: Started libcrun container.
Mar  1 05:13:38 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/512767842cd0d09c06aa44a3a7397839ed89f6d1c847db28bf7211edd792e660/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Mar  1 05:13:38 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/512767842cd0d09c06aa44a3a7397839ed89f6d1c847db28bf7211edd792e660/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Mar  1 05:13:38 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/512767842cd0d09c06aa44a3a7397839ed89f6d1c847db28bf7211edd792e660/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Mar  1 05:13:38 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/512767842cd0d09c06aa44a3a7397839ed89f6d1c847db28bf7211edd792e660/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Mar  1 05:13:38 np0005634532 podman[274673]: 2026-03-01 10:13:38.205529283 +0000 UTC m=+0.025541362 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Mar  1 05:13:38 np0005634532 podman[274673]: 2026-03-01 10:13:38.299413972 +0000 UTC m=+0.119426041 container init 5b3697a468c7512205465e48066ede0f4a5b694392a9bd00be67cb8322b43bc3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_panini, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True)
Mar  1 05:13:38 np0005634532 podman[274673]: 2026-03-01 10:13:38.305015441 +0000 UTC m=+0.125027490 container start 5b3697a468c7512205465e48066ede0f4a5b694392a9bd00be67cb8322b43bc3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_panini, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Mar  1 05:13:38 np0005634532 podman[274673]: 2026-03-01 10:13:38.308069546 +0000 UTC m=+0.128081625 container attach 5b3697a468c7512205465e48066ede0f4a5b694392a9bd00be67cb8322b43bc3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_panini, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325)
Mar  1 05:13:38 np0005634532 lvm[274765]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Mar  1 05:13:38 np0005634532 lvm[274765]: VG ceph_vg0 finished
Mar  1 05:13:38 np0005634532 elegant_panini[274689]: {}
Mar  1 05:13:38 np0005634532 systemd[1]: libpod-5b3697a468c7512205465e48066ede0f4a5b694392a9bd00be67cb8322b43bc3.scope: Deactivated successfully.
Mar  1 05:13:38 np0005634532 podman[274768]: 2026-03-01 10:13:38.94496571 +0000 UTC m=+0.024160328 container died 5b3697a468c7512205465e48066ede0f4a5b694392a9bd00be67cb8322b43bc3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_panini, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid)
Mar  1 05:13:38 np0005634532 systemd[1]: var-lib-containers-storage-overlay-512767842cd0d09c06aa44a3a7397839ed89f6d1c847db28bf7211edd792e660-merged.mount: Deactivated successfully.
Mar  1 05:13:38 np0005634532 podman[274768]: 2026-03-01 10:13:38.978911189 +0000 UTC m=+0.058105807 container remove 5b3697a468c7512205465e48066ede0f4a5b694392a9bd00be67cb8322b43bc3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=elegant_panini, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Mar  1 05:13:38 np0005634532 systemd[1]: libpod-conmon-5b3697a468c7512205465e48066ede0f4a5b694392a9bd00be67cb8322b43bc3.scope: Deactivated successfully.
Mar  1 05:13:39 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:13:38 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Mar  1 05:13:39 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:13:39 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Mar  1 05:13:39 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:13:39 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Mar  1 05:13:39 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:13:39 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Mar  1 05:13:39 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Mar  1 05:13:39 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 05:13:39 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Mar  1 05:13:39 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 05:13:39 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v935: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 274 op/s
Mar  1 05:13:39 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:13:39 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:13:39 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:13:39.181 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:13:39 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 05:13:39 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 05:13:39 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:13:39 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:13:39 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:13:39.984 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:13:40 np0005634532 ovn_controller[157082]: 2026-03-01T10:13:40Z|00008|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:cd:0b:b9 10.100.0.14
Mar  1 05:13:40 np0005634532 ovn_controller[157082]: 2026-03-01T10:13:40Z|00009|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:cd:0b:b9 10.100.0.14
Mar  1 05:13:41 np0005634532 nova_compute[257049]: 2026-03-01 10:13:41.027 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:13:41 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v936: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 0 B/s wr, 237 op/s
Mar  1 05:13:41 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:13:41 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:13:41 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:13:41.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:13:41 np0005634532 nova_compute[257049]: 2026-03-01 10:13:41.976 257053 DEBUG oslo_service.periodic_task [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Mar  1 05:13:41 np0005634532 nova_compute[257049]: 2026-03-01 10:13:41.977 257053 DEBUG oslo_service.periodic_task [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Mar  1 05:13:41 np0005634532 nova_compute[257049]: 2026-03-01 10:13:41.977 257053 DEBUG nova.compute.manager [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Mar  1 05:13:41 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:13:41 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:13:41 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:13:41.986 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:13:42 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:13:42 np0005634532 nova_compute[257049]: 2026-03-01 10:13:42.536 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:13:43 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v937: 353 pgs: 353 active+clean; 88 MiB data, 285 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 0 B/s wr, 237 op/s
Mar  1 05:13:43 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:13:43 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:13:43 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:13:43.187 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:13:43 np0005634532 nova_compute[257049]: 2026-03-01 10:13:43.977 257053 DEBUG oslo_service.periodic_task [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Mar  1 05:13:43 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:13:43 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:13:43 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:13:43.988 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:13:44 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:13:43 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Mar  1 05:13:44 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:13:43 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Mar  1 05:13:44 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:13:43 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Mar  1 05:13:44 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:13:44 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Mar  1 05:13:44 np0005634532 podman[274817]: 2026-03-01 10:13:44.471568449 +0000 UTC m=+0.148346456 container health_status eba5e7c5bf1cb0694498c3b5d0f9b901dec36f2482ec40aef98fed1e15d9f18d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:bf1f44b6bfa655654f556ae3adca2582892e72550d916a5b0a8dbacb0e210bbe, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '08b9467e1b7e95537191fb2fa6825d176e65d7d6be048e9a0925db064969bbde-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:bf1f44b6bfa655654f556ae3adca2582892e72550d916a5b0a8dbacb0e210bbe', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image)
Mar  1 05:13:44 np0005634532 nova_compute[257049]: 2026-03-01 10:13:44.977 257053 DEBUG oslo_service.periodic_task [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Mar  1 05:13:45 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v938: 353 pgs: 353 active+clean; 167 MiB data, 331 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.9 MiB/s wr, 321 op/s
Mar  1 05:13:45 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:13:45 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:13:45 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:13:45.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:13:45 np0005634532 nova_compute[257049]: 2026-03-01 10:13:45.977 257053 DEBUG oslo_service.periodic_task [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Mar  1 05:13:45 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:13:45 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:13:45 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:13:45.990 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:13:46 np0005634532 nova_compute[257049]: 2026-03-01 10:13:46.029 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:13:46 np0005634532 nova_compute[257049]: 2026-03-01 10:13:46.977 257053 DEBUG oslo_service.periodic_task [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Mar  1 05:13:46 np0005634532 nova_compute[257049]: 2026-03-01 10:13:46.977 257053 DEBUG oslo_service.periodic_task [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Mar  1 05:13:47 np0005634532 nova_compute[257049]: 2026-03-01 10:13:47.002 257053 DEBUG oslo_concurrency.lockutils [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Mar  1 05:13:47 np0005634532 nova_compute[257049]: 2026-03-01 10:13:47.003 257053 DEBUG oslo_concurrency.lockutils [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Mar  1 05:13:47 np0005634532 nova_compute[257049]: 2026-03-01 10:13:47.003 257053 DEBUG oslo_concurrency.lockutils [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Mar  1 05:13:47 np0005634532 nova_compute[257049]: 2026-03-01 10:13:47.004 257053 DEBUG nova.compute.resource_tracker [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Mar  1 05:13:47 np0005634532 nova_compute[257049]: 2026-03-01 10:13:47.005 257053 DEBUG oslo_concurrency.processutils [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Mar  1 05:13:47 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:13:47 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: ::ffff:192.168.122.100 - - [01/Mar/2026:10:13:47] "GET /metrics HTTP/1.1" 200 48478 "" "Prometheus/2.51.0"
Mar  1 05:13:47 np0005634532 ceph-mgr[76134]: [prometheus INFO cherrypy.access.140604152982112] ::ffff:192.168.122.100 - - [01/Mar/2026:10:13:47] "GET /metrics HTTP/1.1" 200 48478 "" "Prometheus/2.51.0"
Mar  1 05:13:47 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v939: 353 pgs: 353 active+clean; 167 MiB data, 331 MiB used, 60 GiB / 60 GiB avail; 326 KiB/s rd, 3.9 MiB/s wr, 256 op/s
Mar  1 05:13:47 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:13:47 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:13:47 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:13:47.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:13:47 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:13:47.257Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Mar  1 05:13:47 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Mar  1 05:13:47 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1635039133' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Mar  1 05:13:47 np0005634532 nova_compute[257049]: 2026-03-01 10:13:47.483 257053 DEBUG oslo_concurrency.processutils [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Mar  1 05:13:47 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Mar  1 05:13:47 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Mar  1 05:13:47 np0005634532 nova_compute[257049]: 2026-03-01 10:13:47.539 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:13:47 np0005634532 nova_compute[257049]: 2026-03-01 10:13:47.550 257053 DEBUG nova.virt.libvirt.driver [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] skipping disk for instance-0000000b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Mar  1 05:13:47 np0005634532 nova_compute[257049]: 2026-03-01 10:13:47.550 257053 DEBUG nova.virt.libvirt.driver [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] skipping disk for instance-0000000b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Mar  1 05:13:47 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Mar  1 05:13:47 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Mar  1 05:13:47 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Mar  1 05:13:47 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Mar  1 05:13:47 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Mar  1 05:13:47 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Mar  1 05:13:47 np0005634532 nova_compute[257049]: 2026-03-01 10:13:47.749 257053 WARNING nova.virt.libvirt.driver [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Mar  1 05:13:47 np0005634532 nova_compute[257049]: 2026-03-01 10:13:47.750 257053 DEBUG nova.compute.resource_tracker [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4378MB free_disk=59.92213821411133GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Mar  1 05:13:47 np0005634532 nova_compute[257049]: 2026-03-01 10:13:47.750 257053 DEBUG oslo_concurrency.lockutils [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Mar  1 05:13:47 np0005634532 nova_compute[257049]: 2026-03-01 10:13:47.750 257053 DEBUG oslo_concurrency.lockutils [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Mar  1 05:13:47 np0005634532 nova_compute[257049]: 2026-03-01 10:13:47.812 257053 DEBUG nova.compute.resource_tracker [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Instance baa5d1fc-2fe6-4353-9321-71ddf8760c24 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Mar  1 05:13:47 np0005634532 nova_compute[257049]: 2026-03-01 10:13:47.812 257053 DEBUG nova.compute.resource_tracker [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Mar  1 05:13:47 np0005634532 nova_compute[257049]: 2026-03-01 10:13:47.812 257053 DEBUG nova.compute.resource_tracker [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Mar  1 05:13:47 np0005634532 nova_compute[257049]: 2026-03-01 10:13:47.839 257053 DEBUG oslo_concurrency.processutils [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Mar  1 05:13:47 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:13:47 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:13:47 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:13:47.992 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:13:48 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Mar  1 05:13:48 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2788083747' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Mar  1 05:13:48 np0005634532 nova_compute[257049]: 2026-03-01 10:13:48.318 257053 DEBUG oslo_concurrency.processutils [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.479s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Mar  1 05:13:48 np0005634532 nova_compute[257049]: 2026-03-01 10:13:48.324 257053 DEBUG nova.compute.provider_tree [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Inventory has not changed in ProviderTree for provider: 018d246d-1e01-4168-9128-598c5501111b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Mar  1 05:13:48 np0005634532 nova_compute[257049]: 2026-03-01 10:13:48.355 257053 DEBUG nova.scheduler.client.report [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Inventory has not changed for provider 018d246d-1e01-4168-9128-598c5501111b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Mar  1 05:13:48 np0005634532 podman[274891]: 2026-03-01 10:13:48.358899649 +0000 UTC m=+0.052006515 container health_status 1df4dec302d8b40bce93714986cfc892519e8a31436399020d0034ae17ac64d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a5baae9e21dffd42c66f546b0b62f95c6af9f409d05e01e7483de42d5dd2a382, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '08b9467e1b7e95537191fb2fa6825d176e65d7d6be048e9a0925db064969bbde-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a5baae9e21dffd42c66f546b0b62f95c6af9f409d05e01e7483de42d5dd2a382', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20260223, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Mar  1 05:13:48 np0005634532 nova_compute[257049]: 2026-03-01 10:13:48.382 257053 DEBUG nova.compute.resource_tracker [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Mar  1 05:13:48 np0005634532 nova_compute[257049]: 2026-03-01 10:13:48.383 257053 DEBUG oslo_concurrency.lockutils [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.633s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Mar  1 05:13:49 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:13:48 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Mar  1 05:13:49 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:13:48 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Mar  1 05:13:49 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:13:48 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Mar  1 05:13:49 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:13:49 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Mar  1 05:13:49 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v940: 353 pgs: 353 active+clean; 167 MiB data, 331 MiB used, 60 GiB / 60 GiB avail; 415 KiB/s rd, 3.9 MiB/s wr, 270 op/s
Mar  1 05:13:49 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:13:49 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000024s ======
Mar  1 05:13:49 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:13:49.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Mar  1 05:13:49 np0005634532 nova_compute[257049]: 2026-03-01 10:13:49.379 257053 DEBUG oslo_service.periodic_task [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Mar  1 05:13:49 np0005634532 nova_compute[257049]: 2026-03-01 10:13:49.379 257053 DEBUG oslo_service.periodic_task [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Mar  1 05:13:49 np0005634532 nova_compute[257049]: 2026-03-01 10:13:49.379 257053 DEBUG nova.compute.manager [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Mar  1 05:13:49 np0005634532 nova_compute[257049]: 2026-03-01 10:13:49.380 257053 DEBUG nova.compute.manager [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Mar  1 05:13:49 np0005634532 nova_compute[257049]: 2026-03-01 10:13:49.555 257053 DEBUG oslo_concurrency.lockutils [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Acquiring lock "refresh_cache-baa5d1fc-2fe6-4353-9321-71ddf8760c24" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Mar  1 05:13:49 np0005634532 nova_compute[257049]: 2026-03-01 10:13:49.555 257053 DEBUG oslo_concurrency.lockutils [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Acquired lock "refresh_cache-baa5d1fc-2fe6-4353-9321-71ddf8760c24" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Mar  1 05:13:49 np0005634532 nova_compute[257049]: 2026-03-01 10:13:49.556 257053 DEBUG nova.network.neutron [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] [instance: baa5d1fc-2fe6-4353-9321-71ddf8760c24] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Mar  1 05:13:49 np0005634532 nova_compute[257049]: 2026-03-01 10:13:49.556 257053 DEBUG nova.objects.instance [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Lazy-loading 'info_cache' on Instance uuid baa5d1fc-2fe6-4353-9321-71ddf8760c24 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Mar  1 05:13:49 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:13:49 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:13:49 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:13:49.994 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:13:51 np0005634532 nova_compute[257049]: 2026-03-01 10:13:51.032 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:13:51 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v941: 353 pgs: 353 active+clean; 167 MiB data, 331 MiB used, 60 GiB / 60 GiB avail; 309 KiB/s rd, 3.9 MiB/s wr, 98 op/s
Mar  1 05:13:51 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:13:51 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:13:51 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:13:51.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:13:51 np0005634532 nova_compute[257049]: 2026-03-01 10:13:51.225 257053 DEBUG nova.network.neutron [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] [instance: baa5d1fc-2fe6-4353-9321-71ddf8760c24] Updating instance_info_cache with network_info: [{"id": "79c2bbef-b2db-45bd-91c7-0e64bcb15301", "address": "fa:16:3e:cd:0b:b9", "network": {"id": "75774369-d1fe-46b7-99fa-32ee72215bc9", "bridge": "br-int", "label": "tempest-network-smoke--723889176", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.178", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aa1916e2334f470ea8eeda213ef84cc5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap79c2bbef-b2", "ovs_interfaceid": "79c2bbef-b2db-45bd-91c7-0e64bcb15301", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Mar  1 05:13:51 np0005634532 nova_compute[257049]: 2026-03-01 10:13:51.242 257053 DEBUG oslo_concurrency.lockutils [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Releasing lock "refresh_cache-baa5d1fc-2fe6-4353-9321-71ddf8760c24" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Mar  1 05:13:51 np0005634532 nova_compute[257049]: 2026-03-01 10:13:51.243 257053 DEBUG nova.compute.manager [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] [instance: baa5d1fc-2fe6-4353-9321-71ddf8760c24] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Mar  1 05:13:51 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:13:51 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:13:51 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:13:51.996 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:13:52 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:13:52 np0005634532 nova_compute[257049]: 2026-03-01 10:13:52.542 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:13:53 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v942: 353 pgs: 353 active+clean; 167 MiB data, 331 MiB used, 60 GiB / 60 GiB avail; 309 KiB/s rd, 3.9 MiB/s wr, 98 op/s
Mar  1 05:13:53 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:13:53 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:13:53 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:13:53.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:13:53 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:13:53 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:13:53 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:13:53.998 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:13:54 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:13:53 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Mar  1 05:13:54 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:13:53 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Mar  1 05:13:54 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:13:53 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Mar  1 05:13:54 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:13:54 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Mar  1 05:13:55 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v943: 353 pgs: 353 active+clean; 167 MiB data, 331 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 3.9 MiB/s wr, 159 op/s
Mar  1 05:13:55 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:13:55 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:13:55 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:13:55.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:13:56 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:13:56 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 05:13:56 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:13:56.000 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 05:13:56 np0005634532 nova_compute[257049]: 2026-03-01 10:13:56.082 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:13:57 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:13:57 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: ::ffff:192.168.122.100 - - [01/Mar/2026:10:13:57] "GET /metrics HTTP/1.1" 200 48479 "" "Prometheus/2.51.0"
Mar  1 05:13:57 np0005634532 ceph-mgr[76134]: [prometheus INFO cherrypy.access.140604152982112] ::ffff:192.168.122.100 - - [01/Mar/2026:10:13:57] "GET /metrics HTTP/1.1" 200 48479 "" "Prometheus/2.51.0"
Mar  1 05:13:57 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v944: 353 pgs: 353 active+clean; 167 MiB data, 331 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 26 KiB/s wr, 74 op/s
Mar  1 05:13:57 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:13:57 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:13:57 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:13:57.207 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:13:57 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:13:57.258Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Mar  1 05:13:57 np0005634532 nova_compute[257049]: 2026-03-01 10:13:57.545 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:13:58 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:13:58 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:13:58 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:13:58.002 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:13:58 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Mar  1 05:13:58 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2756822821' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Mar  1 05:13:58 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Mar  1 05:13:58 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2756822821' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Mar  1 05:13:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:13:58 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Mar  1 05:13:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:13:58 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Mar  1 05:13:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:13:58 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Mar  1 05:13:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:13:59 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Mar  1 05:13:59 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v945: 353 pgs: 353 active+clean; 188 MiB data, 373 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.1 MiB/s wr, 104 op/s
Mar  1 05:13:59 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:13:59 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:13:59 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:13:59.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:14:00 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:14:00 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:14:00 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:14:00.004 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:14:01 np0005634532 nova_compute[257049]: 2026-03-01 10:14:01.084 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:14:01 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v946: 353 pgs: 353 active+clean; 188 MiB data, 373 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.0 MiB/s wr, 90 op/s
Mar  1 05:14:01 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:14:01 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:14:01 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:14:01.213 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:14:02 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:14:02 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:14:02 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:14:02.006 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:14:02 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:14:02 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Mar  1 05:14:02 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Mar  1 05:14:02 np0005634532 nova_compute[257049]: 2026-03-01 10:14:02.548 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:14:03 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v947: 353 pgs: 353 active+clean; 188 MiB data, 373 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.0 MiB/s wr, 90 op/s
Mar  1 05:14:03 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:14:03 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:14:03 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:14:03.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:14:04 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:14:03 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Mar  1 05:14:04 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:14:03 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Mar  1 05:14:04 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:14:03 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Mar  1 05:14:04 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:14:04 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Mar  1 05:14:04 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:14:04 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 05:14:04 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:14:04.008 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 05:14:05 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v948: 353 pgs: 353 active+clean; 200 MiB data, 374 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.1 MiB/s wr, 123 op/s
Mar  1 05:14:05 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:14:05 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 05:14:05 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:14:05.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 05:14:06 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:14:06 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:14:06 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:14:06.010 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:14:06 np0005634532 nova_compute[257049]: 2026-03-01 10:14:06.086 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:14:07 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:14:07 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: ::ffff:192.168.122.100 - - [01/Mar/2026:10:14:07] "GET /metrics HTTP/1.1" 200 48473 "" "Prometheus/2.51.0"
Mar  1 05:14:07 np0005634532 ceph-mgr[76134]: [prometheus INFO cherrypy.access.140604152982112] ::ffff:192.168.122.100 - - [01/Mar/2026:10:14:07] "GET /metrics HTTP/1.1" 200 48473 "" "Prometheus/2.51.0"
Mar  1 05:14:07 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v949: 353 pgs: 353 active+clean; 200 MiB data, 374 MiB used, 60 GiB / 60 GiB avail; 306 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Mar  1 05:14:07 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:14:07 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:14:07 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:14:07.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:14:07 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:14:07.259Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Mar  1 05:14:07 np0005634532 nova_compute[257049]: 2026-03-01 10:14:07.445 257053 INFO nova.compute.manager [None req-d6f7c4ed-16f7-42f5-9f07-867d75054947 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] [instance: baa5d1fc-2fe6-4353-9321-71ddf8760c24] Get console output
Mar  1 05:14:07 np0005634532 nova_compute[257049]: 2026-03-01 10:14:07.453 257053 INFO oslo.privsep.daemon [None req-d6f7c4ed-16f7-42f5-9f07-867d75054947 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'nova.privsep.sys_admin_pctxt', '--privsep_sock_path', '/tmp/tmpsxxzi2x2/privsep.sock']
Mar  1 05:14:07 np0005634532 nova_compute[257049]: 2026-03-01 10:14:07.549 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:14:08 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:14:08 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:14:08 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:14:08.012 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:14:08 np0005634532 nova_compute[257049]: 2026-03-01 10:14:08.093 257053 INFO oslo.privsep.daemon [None req-d6f7c4ed-16f7-42f5-9f07-867d75054947 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Spawned new privsep daemon via rootwrap
Mar  1 05:14:08 np0005634532 nova_compute[257049]: 2026-03-01 10:14:07.977 274960 INFO oslo.privsep.daemon [-] privsep daemon starting
Mar  1 05:14:08 np0005634532 nova_compute[257049]: 2026-03-01 10:14:07.980 274960 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Mar  1 05:14:08 np0005634532 nova_compute[257049]: 2026-03-01 10:14:07.982 274960 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/none
Mar  1 05:14:08 np0005634532 nova_compute[257049]: 2026-03-01 10:14:07.982 274960 INFO oslo.privsep.daemon [-] privsep daemon running as pid 274960
Mar  1 05:14:08 np0005634532 nova_compute[257049]: 2026-03-01 10:14:08.191 274960 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Mar  1 05:14:09 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:14:08 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Mar  1 05:14:09 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:14:08 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Mar  1 05:14:09 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:14:08 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Mar  1 05:14:09 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:14:09 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Mar  1 05:14:09 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v950: 353 pgs: 353 active+clean; 200 MiB data, 374 MiB used, 60 GiB / 60 GiB avail; 311 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Mar  1 05:14:09 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:14:09 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:14:09 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:14:09.225 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:14:09 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:14:09.323 167541 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=11, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ba:77:84', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'd2:e0:96:ea:56:89'}, ipsec=False) old=SB_Global(nb_cfg=10) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Mar  1 05:14:09 np0005634532 nova_compute[257049]: 2026-03-01 10:14:09.324 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:14:09 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:14:09.325 167541 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Mar  1 05:14:09 np0005634532 nova_compute[257049]: 2026-03-01 10:14:09.505 257053 DEBUG nova.compute.manager [req-7c7b2b6c-03fa-4e69-beeb-767cd929299d req-9d2109b1-4baa-4986-ae62-4439c43ee3d5 4172bfcb2ac44ccc905f3929f41a6ec0 b5baddd982154c5d8ca5431b102a7ecd - - default default] [instance: baa5d1fc-2fe6-4353-9321-71ddf8760c24] Received event network-changed-79c2bbef-b2db-45bd-91c7-0e64bcb15301 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Mar  1 05:14:09 np0005634532 nova_compute[257049]: 2026-03-01 10:14:09.506 257053 DEBUG nova.compute.manager [req-7c7b2b6c-03fa-4e69-beeb-767cd929299d req-9d2109b1-4baa-4986-ae62-4439c43ee3d5 4172bfcb2ac44ccc905f3929f41a6ec0 b5baddd982154c5d8ca5431b102a7ecd - - default default] [instance: baa5d1fc-2fe6-4353-9321-71ddf8760c24] Refreshing instance network info cache due to event network-changed-79c2bbef-b2db-45bd-91c7-0e64bcb15301. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Mar  1 05:14:09 np0005634532 nova_compute[257049]: 2026-03-01 10:14:09.506 257053 DEBUG oslo_concurrency.lockutils [req-7c7b2b6c-03fa-4e69-beeb-767cd929299d req-9d2109b1-4baa-4986-ae62-4439c43ee3d5 4172bfcb2ac44ccc905f3929f41a6ec0 b5baddd982154c5d8ca5431b102a7ecd - - default default] Acquiring lock "refresh_cache-baa5d1fc-2fe6-4353-9321-71ddf8760c24" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Mar  1 05:14:09 np0005634532 nova_compute[257049]: 2026-03-01 10:14:09.507 257053 DEBUG oslo_concurrency.lockutils [req-7c7b2b6c-03fa-4e69-beeb-767cd929299d req-9d2109b1-4baa-4986-ae62-4439c43ee3d5 4172bfcb2ac44ccc905f3929f41a6ec0 b5baddd982154c5d8ca5431b102a7ecd - - default default] Acquired lock "refresh_cache-baa5d1fc-2fe6-4353-9321-71ddf8760c24" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Mar  1 05:14:09 np0005634532 nova_compute[257049]: 2026-03-01 10:14:09.507 257053 DEBUG nova.network.neutron [req-7c7b2b6c-03fa-4e69-beeb-767cd929299d req-9d2109b1-4baa-4986-ae62-4439c43ee3d5 4172bfcb2ac44ccc905f3929f41a6ec0 b5baddd982154c5d8ca5431b102a7ecd - - default default] [instance: baa5d1fc-2fe6-4353-9321-71ddf8760c24] Refreshing network info cache for port 79c2bbef-b2db-45bd-91c7-0e64bcb15301 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Mar  1 05:14:09 np0005634532 nova_compute[257049]: 2026-03-01 10:14:09.537 257053 DEBUG nova.compute.manager [req-05ae7688-564c-4d2f-8601-8d236d30f537 req-ab6a50b8-6186-4864-a828-b029f88c7ad6 4172bfcb2ac44ccc905f3929f41a6ec0 b5baddd982154c5d8ca5431b102a7ecd - - default default] [instance: baa5d1fc-2fe6-4353-9321-71ddf8760c24] Received event network-vif-unplugged-79c2bbef-b2db-45bd-91c7-0e64bcb15301 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Mar  1 05:14:09 np0005634532 nova_compute[257049]: 2026-03-01 10:14:09.537 257053 DEBUG oslo_concurrency.lockutils [req-05ae7688-564c-4d2f-8601-8d236d30f537 req-ab6a50b8-6186-4864-a828-b029f88c7ad6 4172bfcb2ac44ccc905f3929f41a6ec0 b5baddd982154c5d8ca5431b102a7ecd - - default default] Acquiring lock "baa5d1fc-2fe6-4353-9321-71ddf8760c24-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Mar  1 05:14:09 np0005634532 nova_compute[257049]: 2026-03-01 10:14:09.538 257053 DEBUG oslo_concurrency.lockutils [req-05ae7688-564c-4d2f-8601-8d236d30f537 req-ab6a50b8-6186-4864-a828-b029f88c7ad6 4172bfcb2ac44ccc905f3929f41a6ec0 b5baddd982154c5d8ca5431b102a7ecd - - default default] Lock "baa5d1fc-2fe6-4353-9321-71ddf8760c24-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Mar  1 05:14:09 np0005634532 nova_compute[257049]: 2026-03-01 10:14:09.539 257053 DEBUG oslo_concurrency.lockutils [req-05ae7688-564c-4d2f-8601-8d236d30f537 req-ab6a50b8-6186-4864-a828-b029f88c7ad6 4172bfcb2ac44ccc905f3929f41a6ec0 b5baddd982154c5d8ca5431b102a7ecd - - default default] Lock "baa5d1fc-2fe6-4353-9321-71ddf8760c24-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Mar  1 05:14:09 np0005634532 nova_compute[257049]: 2026-03-01 10:14:09.539 257053 DEBUG nova.compute.manager [req-05ae7688-564c-4d2f-8601-8d236d30f537 req-ab6a50b8-6186-4864-a828-b029f88c7ad6 4172bfcb2ac44ccc905f3929f41a6ec0 b5baddd982154c5d8ca5431b102a7ecd - - default default] [instance: baa5d1fc-2fe6-4353-9321-71ddf8760c24] No waiting events found dispatching network-vif-unplugged-79c2bbef-b2db-45bd-91c7-0e64bcb15301 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Mar  1 05:14:09 np0005634532 nova_compute[257049]: 2026-03-01 10:14:09.539 257053 WARNING nova.compute.manager [req-05ae7688-564c-4d2f-8601-8d236d30f537 req-ab6a50b8-6186-4864-a828-b029f88c7ad6 4172bfcb2ac44ccc905f3929f41a6ec0 b5baddd982154c5d8ca5431b102a7ecd - - default default] [instance: baa5d1fc-2fe6-4353-9321-71ddf8760c24] Received unexpected event network-vif-unplugged-79c2bbef-b2db-45bd-91c7-0e64bcb15301 for instance with vm_state active and task_state None.
Mar  1 05:14:10 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:14:10 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:14:10 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:14:10.014 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:14:10 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:14:10.327 167541 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=90b7dc66-b984-4d8b-9541-ddde79c5f544, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '11'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Mar  1 05:14:10 np0005634532 nova_compute[257049]: 2026-03-01 10:14:10.506 257053 INFO nova.compute.manager [None req-f2072de5-e6c9-40e9-ab12-178e26da7a0a 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] [instance: baa5d1fc-2fe6-4353-9321-71ddf8760c24] Get console output
Mar  1 05:14:10 np0005634532 nova_compute[257049]: 2026-03-01 10:14:10.513 274960 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes
Mar  1 05:14:11 np0005634532 nova_compute[257049]: 2026-03-01 10:14:11.088 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:14:11 np0005634532 nova_compute[257049]: 2026-03-01 10:14:11.131 257053 DEBUG nova.network.neutron [req-7c7b2b6c-03fa-4e69-beeb-767cd929299d req-9d2109b1-4baa-4986-ae62-4439c43ee3d5 4172bfcb2ac44ccc905f3929f41a6ec0 b5baddd982154c5d8ca5431b102a7ecd - - default default] [instance: baa5d1fc-2fe6-4353-9321-71ddf8760c24] Updated VIF entry in instance network info cache for port 79c2bbef-b2db-45bd-91c7-0e64bcb15301. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Mar  1 05:14:11 np0005634532 nova_compute[257049]: 2026-03-01 10:14:11.132 257053 DEBUG nova.network.neutron [req-7c7b2b6c-03fa-4e69-beeb-767cd929299d req-9d2109b1-4baa-4986-ae62-4439c43ee3d5 4172bfcb2ac44ccc905f3929f41a6ec0 b5baddd982154c5d8ca5431b102a7ecd - - default default] [instance: baa5d1fc-2fe6-4353-9321-71ddf8760c24] Updating instance_info_cache with network_info: [{"id": "79c2bbef-b2db-45bd-91c7-0e64bcb15301", "address": "fa:16:3e:cd:0b:b9", "network": {"id": "75774369-d1fe-46b7-99fa-32ee72215bc9", "bridge": "br-int", "label": "tempest-network-smoke--723889176", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.178", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aa1916e2334f470ea8eeda213ef84cc5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap79c2bbef-b2", "ovs_interfaceid": "79c2bbef-b2db-45bd-91c7-0e64bcb15301", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Mar  1 05:14:11 np0005634532 nova_compute[257049]: 2026-03-01 10:14:11.150 257053 DEBUG oslo_concurrency.lockutils [req-7c7b2b6c-03fa-4e69-beeb-767cd929299d req-9d2109b1-4baa-4986-ae62-4439c43ee3d5 4172bfcb2ac44ccc905f3929f41a6ec0 b5baddd982154c5d8ca5431b102a7ecd - - default default] Releasing lock "refresh_cache-baa5d1fc-2fe6-4353-9321-71ddf8760c24" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Mar  1 05:14:11 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v951: 353 pgs: 353 active+clean; 200 MiB data, 374 MiB used, 60 GiB / 60 GiB avail; 223 KiB/s rd, 106 KiB/s wr, 34 op/s
Mar  1 05:14:11 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:14:11 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:14:11 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:14:11.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:14:11 np0005634532 nova_compute[257049]: 2026-03-01 10:14:11.786 257053 DEBUG nova.compute.manager [req-e6396687-2009-4a7b-b263-fc50a933cd1a req-19d70879-8aa9-4e95-870d-5181ffcc6b37 4172bfcb2ac44ccc905f3929f41a6ec0 b5baddd982154c5d8ca5431b102a7ecd - - default default] [instance: baa5d1fc-2fe6-4353-9321-71ddf8760c24] Received event network-vif-plugged-79c2bbef-b2db-45bd-91c7-0e64bcb15301 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Mar  1 05:14:11 np0005634532 nova_compute[257049]: 2026-03-01 10:14:11.786 257053 DEBUG oslo_concurrency.lockutils [req-e6396687-2009-4a7b-b263-fc50a933cd1a req-19d70879-8aa9-4e95-870d-5181ffcc6b37 4172bfcb2ac44ccc905f3929f41a6ec0 b5baddd982154c5d8ca5431b102a7ecd - - default default] Acquiring lock "baa5d1fc-2fe6-4353-9321-71ddf8760c24-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Mar  1 05:14:11 np0005634532 nova_compute[257049]: 2026-03-01 10:14:11.787 257053 DEBUG oslo_concurrency.lockutils [req-e6396687-2009-4a7b-b263-fc50a933cd1a req-19d70879-8aa9-4e95-870d-5181ffcc6b37 4172bfcb2ac44ccc905f3929f41a6ec0 b5baddd982154c5d8ca5431b102a7ecd - - default default] Lock "baa5d1fc-2fe6-4353-9321-71ddf8760c24-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Mar  1 05:14:11 np0005634532 nova_compute[257049]: 2026-03-01 10:14:11.787 257053 DEBUG oslo_concurrency.lockutils [req-e6396687-2009-4a7b-b263-fc50a933cd1a req-19d70879-8aa9-4e95-870d-5181ffcc6b37 4172bfcb2ac44ccc905f3929f41a6ec0 b5baddd982154c5d8ca5431b102a7ecd - - default default] Lock "baa5d1fc-2fe6-4353-9321-71ddf8760c24-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Mar  1 05:14:11 np0005634532 nova_compute[257049]: 2026-03-01 10:14:11.787 257053 DEBUG nova.compute.manager [req-e6396687-2009-4a7b-b263-fc50a933cd1a req-19d70879-8aa9-4e95-870d-5181ffcc6b37 4172bfcb2ac44ccc905f3929f41a6ec0 b5baddd982154c5d8ca5431b102a7ecd - - default default] [instance: baa5d1fc-2fe6-4353-9321-71ddf8760c24] No waiting events found dispatching network-vif-plugged-79c2bbef-b2db-45bd-91c7-0e64bcb15301 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Mar  1 05:14:11 np0005634532 nova_compute[257049]: 2026-03-01 10:14:11.787 257053 WARNING nova.compute.manager [req-e6396687-2009-4a7b-b263-fc50a933cd1a req-19d70879-8aa9-4e95-870d-5181ffcc6b37 4172bfcb2ac44ccc905f3929f41a6ec0 b5baddd982154c5d8ca5431b102a7ecd - - default default] [instance: baa5d1fc-2fe6-4353-9321-71ddf8760c24] Received unexpected event network-vif-plugged-79c2bbef-b2db-45bd-91c7-0e64bcb15301 for instance with vm_state active and task_state None.#033[00m
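
The acquire/release pair around _pop_event above is oslo.concurrency's named-lock pattern: the compute manager serializes on "<instance-uuid>-events", pops any waiter registered for the event, and, finding none (nothing called prepare_for_instance_event first), logs the "Received unexpected event" warning. A simplified sketch of that pattern, not Nova's actual code:

```python
from oslo_concurrency import lockutils

_waiters = {}  # {instance_uuid: {event_name: waiter}}

def pop_instance_event(instance_uuid, event_name):
    # Same lock name as in the log: "<uuid>-events".
    @lockutils.synchronized(f"{instance_uuid}-events")
    def _pop_event():
        return _waiters.get(instance_uuid, {}).pop(event_name, None)

    waiter = _pop_event()
    if waiter is None:
        # No waiter was registered ahead of time, so the external
        # event arrives "unexpected" -- the WARNING seen above.
        print(f"Received unexpected event {event_name}")
    return waiter
```
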
Mar  1 05:14:12 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:14:12 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:14:12 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000024s ======
Mar  1 05:14:12 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:14:12.017 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Mar  1 05:14:12 np0005634532 nova_compute[257049]: 2026-03-01 10:14:12.408 257053 DEBUG nova.compute.manager [req-76bdb87c-533c-425b-856f-dbffe168f84e req-cbc41884-aeb8-41f4-9c17-6ce42949f4bf 4172bfcb2ac44ccc905f3929f41a6ec0 b5baddd982154c5d8ca5431b102a7ecd - - default default] [instance: baa5d1fc-2fe6-4353-9321-71ddf8760c24] Received event network-changed-79c2bbef-b2db-45bd-91c7-0e64bcb15301 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Mar  1 05:14:12 np0005634532 nova_compute[257049]: 2026-03-01 10:14:12.408 257053 DEBUG nova.compute.manager [req-76bdb87c-533c-425b-856f-dbffe168f84e req-cbc41884-aeb8-41f4-9c17-6ce42949f4bf 4172bfcb2ac44ccc905f3929f41a6ec0 b5baddd982154c5d8ca5431b102a7ecd - - default default] [instance: baa5d1fc-2fe6-4353-9321-71ddf8760c24] Refreshing instance network info cache due to event network-changed-79c2bbef-b2db-45bd-91c7-0e64bcb15301. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Mar  1 05:14:12 np0005634532 nova_compute[257049]: 2026-03-01 10:14:12.409 257053 DEBUG oslo_concurrency.lockutils [req-76bdb87c-533c-425b-856f-dbffe168f84e req-cbc41884-aeb8-41f4-9c17-6ce42949f4bf 4172bfcb2ac44ccc905f3929f41a6ec0 b5baddd982154c5d8ca5431b102a7ecd - - default default] Acquiring lock "refresh_cache-baa5d1fc-2fe6-4353-9321-71ddf8760c24" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Mar  1 05:14:12 np0005634532 nova_compute[257049]: 2026-03-01 10:14:12.409 257053 DEBUG oslo_concurrency.lockutils [req-76bdb87c-533c-425b-856f-dbffe168f84e req-cbc41884-aeb8-41f4-9c17-6ce42949f4bf 4172bfcb2ac44ccc905f3929f41a6ec0 b5baddd982154c5d8ca5431b102a7ecd - - default default] Acquired lock "refresh_cache-baa5d1fc-2fe6-4353-9321-71ddf8760c24" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Mar  1 05:14:12 np0005634532 nova_compute[257049]: 2026-03-01 10:14:12.409 257053 DEBUG nova.network.neutron [req-76bdb87c-533c-425b-856f-dbffe168f84e req-cbc41884-aeb8-41f4-9c17-6ce42949f4bf 4172bfcb2ac44ccc905f3929f41a6ec0 b5baddd982154c5d8ca5431b102a7ecd - - default default] [instance: baa5d1fc-2fe6-4353-9321-71ddf8760c24] Refreshing network info cache for port 79c2bbef-b2db-45bd-91c7-0e64bcb15301 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Mar  1 05:14:12 np0005634532 nova_compute[257049]: 2026-03-01 10:14:12.552 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:14:12 np0005634532 nova_compute[257049]: 2026-03-01 10:14:12.567 257053 INFO nova.compute.manager [None req-5fee7ea9-68d8-4cfc-91ca-7753688c6ef0 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] [instance: baa5d1fc-2fe6-4353-9321-71ddf8760c24] Get console output#033[00m
Mar  1 05:14:12 np0005634532 nova_compute[257049]: 2026-03-01 10:14:12.572 274960 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Mar  1 05:14:13 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v952: 353 pgs: 353 active+clean; 200 MiB data, 374 MiB used, 60 GiB / 60 GiB avail; 223 KiB/s rd, 106 KiB/s wr, 34 op/s
Mar  1 05:14:13 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:14:13 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:14:13 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:14:13.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:14:13 np0005634532 nova_compute[257049]: 2026-03-01 10:14:13.839 257053 DEBUG nova.network.neutron [req-76bdb87c-533c-425b-856f-dbffe168f84e req-cbc41884-aeb8-41f4-9c17-6ce42949f4bf 4172bfcb2ac44ccc905f3929f41a6ec0 b5baddd982154c5d8ca5431b102a7ecd - - default default] [instance: baa5d1fc-2fe6-4353-9321-71ddf8760c24] Updated VIF entry in instance network info cache for port 79c2bbef-b2db-45bd-91c7-0e64bcb15301. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Mar  1 05:14:13 np0005634532 nova_compute[257049]: 2026-03-01 10:14:13.840 257053 DEBUG nova.network.neutron [req-76bdb87c-533c-425b-856f-dbffe168f84e req-cbc41884-aeb8-41f4-9c17-6ce42949f4bf 4172bfcb2ac44ccc905f3929f41a6ec0 b5baddd982154c5d8ca5431b102a7ecd - - default default] [instance: baa5d1fc-2fe6-4353-9321-71ddf8760c24] Updating instance_info_cache with network_info: [{"id": "79c2bbef-b2db-45bd-91c7-0e64bcb15301", "address": "fa:16:3e:cd:0b:b9", "network": {"id": "75774369-d1fe-46b7-99fa-32ee72215bc9", "bridge": "br-int", "label": "tempest-network-smoke--723889176", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.178", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aa1916e2334f470ea8eeda213ef84cc5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap79c2bbef-b2", "ovs_interfaceid": "79c2bbef-b2db-45bd-91c7-0e64bcb15301", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Mar  1 05:14:13 np0005634532 nova_compute[257049]: 2026-03-01 10:14:13.853 257053 DEBUG oslo_concurrency.lockutils [req-76bdb87c-533c-425b-856f-dbffe168f84e req-cbc41884-aeb8-41f4-9c17-6ce42949f4bf 4172bfcb2ac44ccc905f3929f41a6ec0 b5baddd982154c5d8ca5431b102a7ecd - - default default] Releasing lock "refresh_cache-baa5d1fc-2fe6-4353-9321-71ddf8760c24" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Mar  1 05:14:13 np0005634532 nova_compute[257049]: 2026-03-01 10:14:13.860 257053 DEBUG nova.compute.manager [req-eb1b8447-68d3-42d8-8cc0-eadeba63f9d5 req-6228938f-1792-403b-84ff-6ccd94405f25 4172bfcb2ac44ccc905f3929f41a6ec0 b5baddd982154c5d8ca5431b102a7ecd - - default default] [instance: baa5d1fc-2fe6-4353-9321-71ddf8760c24] Received event network-vif-plugged-79c2bbef-b2db-45bd-91c7-0e64bcb15301 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Mar  1 05:14:13 np0005634532 nova_compute[257049]: 2026-03-01 10:14:13.861 257053 DEBUG oslo_concurrency.lockutils [req-eb1b8447-68d3-42d8-8cc0-eadeba63f9d5 req-6228938f-1792-403b-84ff-6ccd94405f25 4172bfcb2ac44ccc905f3929f41a6ec0 b5baddd982154c5d8ca5431b102a7ecd - - default default] Acquiring lock "baa5d1fc-2fe6-4353-9321-71ddf8760c24-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Mar  1 05:14:13 np0005634532 nova_compute[257049]: 2026-03-01 10:14:13.861 257053 DEBUG oslo_concurrency.lockutils [req-eb1b8447-68d3-42d8-8cc0-eadeba63f9d5 req-6228938f-1792-403b-84ff-6ccd94405f25 4172bfcb2ac44ccc905f3929f41a6ec0 b5baddd982154c5d8ca5431b102a7ecd - - default default] Lock "baa5d1fc-2fe6-4353-9321-71ddf8760c24-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Mar  1 05:14:13 np0005634532 nova_compute[257049]: 2026-03-01 10:14:13.861 257053 DEBUG oslo_concurrency.lockutils [req-eb1b8447-68d3-42d8-8cc0-eadeba63f9d5 req-6228938f-1792-403b-84ff-6ccd94405f25 4172bfcb2ac44ccc905f3929f41a6ec0 b5baddd982154c5d8ca5431b102a7ecd - - default default] Lock "baa5d1fc-2fe6-4353-9321-71ddf8760c24-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Mar  1 05:14:13 np0005634532 nova_compute[257049]: 2026-03-01 10:14:13.861 257053 DEBUG nova.compute.manager [req-eb1b8447-68d3-42d8-8cc0-eadeba63f9d5 req-6228938f-1792-403b-84ff-6ccd94405f25 4172bfcb2ac44ccc905f3929f41a6ec0 b5baddd982154c5d8ca5431b102a7ecd - - default default] [instance: baa5d1fc-2fe6-4353-9321-71ddf8760c24] No waiting events found dispatching network-vif-plugged-79c2bbef-b2db-45bd-91c7-0e64bcb15301 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Mar  1 05:14:13 np0005634532 nova_compute[257049]: 2026-03-01 10:14:13.861 257053 WARNING nova.compute.manager [req-eb1b8447-68d3-42d8-8cc0-eadeba63f9d5 req-6228938f-1792-403b-84ff-6ccd94405f25 4172bfcb2ac44ccc905f3929f41a6ec0 b5baddd982154c5d8ca5431b102a7ecd - - default default] [instance: baa5d1fc-2fe6-4353-9321-71ddf8760c24] Received unexpected event network-vif-plugged-79c2bbef-b2db-45bd-91c7-0e64bcb15301 for instance with vm_state active and task_state None.#033[00m
Mar  1 05:14:13 np0005634532 nova_compute[257049]: 2026-03-01 10:14:13.862 257053 DEBUG nova.compute.manager [req-eb1b8447-68d3-42d8-8cc0-eadeba63f9d5 req-6228938f-1792-403b-84ff-6ccd94405f25 4172bfcb2ac44ccc905f3929f41a6ec0 b5baddd982154c5d8ca5431b102a7ecd - - default default] [instance: baa5d1fc-2fe6-4353-9321-71ddf8760c24] Received event network-vif-plugged-79c2bbef-b2db-45bd-91c7-0e64bcb15301 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Mar  1 05:14:13 np0005634532 nova_compute[257049]: 2026-03-01 10:14:13.862 257053 DEBUG oslo_concurrency.lockutils [req-eb1b8447-68d3-42d8-8cc0-eadeba63f9d5 req-6228938f-1792-403b-84ff-6ccd94405f25 4172bfcb2ac44ccc905f3929f41a6ec0 b5baddd982154c5d8ca5431b102a7ecd - - default default] Acquiring lock "baa5d1fc-2fe6-4353-9321-71ddf8760c24-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Mar  1 05:14:13 np0005634532 nova_compute[257049]: 2026-03-01 10:14:13.862 257053 DEBUG oslo_concurrency.lockutils [req-eb1b8447-68d3-42d8-8cc0-eadeba63f9d5 req-6228938f-1792-403b-84ff-6ccd94405f25 4172bfcb2ac44ccc905f3929f41a6ec0 b5baddd982154c5d8ca5431b102a7ecd - - default default] Lock "baa5d1fc-2fe6-4353-9321-71ddf8760c24-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Mar  1 05:14:13 np0005634532 nova_compute[257049]: 2026-03-01 10:14:13.862 257053 DEBUG oslo_concurrency.lockutils [req-eb1b8447-68d3-42d8-8cc0-eadeba63f9d5 req-6228938f-1792-403b-84ff-6ccd94405f25 4172bfcb2ac44ccc905f3929f41a6ec0 b5baddd982154c5d8ca5431b102a7ecd - - default default] Lock "baa5d1fc-2fe6-4353-9321-71ddf8760c24-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Mar  1 05:14:13 np0005634532 nova_compute[257049]: 2026-03-01 10:14:13.862 257053 DEBUG nova.compute.manager [req-eb1b8447-68d3-42d8-8cc0-eadeba63f9d5 req-6228938f-1792-403b-84ff-6ccd94405f25 4172bfcb2ac44ccc905f3929f41a6ec0 b5baddd982154c5d8ca5431b102a7ecd - - default default] [instance: baa5d1fc-2fe6-4353-9321-71ddf8760c24] No waiting events found dispatching network-vif-plugged-79c2bbef-b2db-45bd-91c7-0e64bcb15301 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Mar  1 05:14:13 np0005634532 nova_compute[257049]: 2026-03-01 10:14:13.863 257053 WARNING nova.compute.manager [req-eb1b8447-68d3-42d8-8cc0-eadeba63f9d5 req-6228938f-1792-403b-84ff-6ccd94405f25 4172bfcb2ac44ccc905f3929f41a6ec0 b5baddd982154c5d8ca5431b102a7ecd - - default default] [instance: baa5d1fc-2fe6-4353-9321-71ddf8760c24] Received unexpected event network-vif-plugged-79c2bbef-b2db-45bd-91c7-0e64bcb15301 for instance with vm_state active and task_state None.#033[00m
Mar  1 05:14:14 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:14:13 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Mar  1 05:14:14 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:14:13 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Mar  1 05:14:14 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:14:13 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Mar  1 05:14:14 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:14:14 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Mar  1 05:14:14 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:14:14 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:14:14 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:14:14.018 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:14:15 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v953: 353 pgs: 353 active+clean; 200 MiB data, 374 MiB used, 60 GiB / 60 GiB avail; 223 KiB/s rd, 116 KiB/s wr, 35 op/s
Mar  1 05:14:15 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:14:15 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:14:15 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:14:15.235 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:14:15 np0005634532 podman[274969]: 2026-03-01 10:14:15.380054412 +0000 UTC m=+0.062672629 container health_status eba5e7c5bf1cb0694498c3b5d0f9b901dec36f2482ec40aef98fed1e15d9f18d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:bf1f44b6bfa655654f556ae3adca2582892e72550d916a5b0a8dbacb0e210bbe, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20260223, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '08b9467e1b7e95537191fb2fa6825d176e65d7d6be048e9a0925db064969bbde-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:bf1f44b6bfa655654f556ae3adca2582892e72550d916a5b0a8dbacb0e210bbe', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team)
Mar  1 05:14:16 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:14:16 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:14:16 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:14:16.020 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:14:16 np0005634532 nova_compute[257049]: 2026-03-01 10:14:16.090 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:14:17 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:14:17 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: ::ffff:192.168.122.100 - - [01/Mar/2026:10:14:17] "GET /metrics HTTP/1.1" 200 48473 "" "Prometheus/2.51.0"
Mar  1 05:14:17 np0005634532 ceph-mgr[76134]: [prometheus INFO cherrypy.access.140604152982112] ::ffff:192.168.122.100 - - [01/Mar/2026:10:14:17] "GET /metrics HTTP/1.1" 200 48473 "" "Prometheus/2.51.0"
Mar  1 05:14:17 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v954: 353 pgs: 353 active+clean; 200 MiB data, 374 MiB used, 60 GiB / 60 GiB avail; 5.9 KiB/s rd, 21 KiB/s wr, 2 op/s
Mar  1 05:14:17 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:14:17 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:14:17 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:14:17.237 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:14:17 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:14:17.260Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Mar  1 05:14:17 np0005634532 ceph-mgr[76134]: [balancer INFO root] Optimize plan auto_2026-03-01_10:14:17
Mar  1 05:14:17 np0005634532 ceph-mgr[76134]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Mar  1 05:14:17 np0005634532 ceph-mgr[76134]: [balancer INFO root] do_upmap
Mar  1 05:14:17 np0005634532 ceph-mgr[76134]: [balancer INFO root] pools ['.mgr', 'vms', 'backups', '.nfs', 'default.rgw.meta', 'default.rgw.log', 'images', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', '.rgw.root', 'default.rgw.control', 'volumes']
Mar  1 05:14:17 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Mar  1 05:14:17 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Mar  1 05:14:17 np0005634532 ceph-mgr[76134]: [balancer INFO root] prepared 0/10 upmap changes
Mar  1 05:14:17 np0005634532 nova_compute[257049]: 2026-03-01 10:14:17.555 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:14:17 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Mar  1 05:14:17 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Mar  1 05:14:17 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Mar  1 05:14:17 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Mar  1 05:14:17 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Mar  1 05:14:17 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Mar  1 05:14:18 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:14:18 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:14:18 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:14:18.022 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:14:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] _maybe_adjust
Mar  1 05:14:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:14:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Mar  1 05:14:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:14:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0015194363726639317 of space, bias 1.0, pg target 0.4558309117991795 quantized to 32 (current 32)
Mar  1 05:14:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:14:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Mar  1 05:14:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:14:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Mar  1 05:14:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:14:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Mar  1 05:14:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:14:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Mar  1 05:14:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:14:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Mar  1 05:14:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:14:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Mar  1 05:14:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:14:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Mar  1 05:14:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:14:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Mar  1 05:14:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:14:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Mar  1 05:14:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:14:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
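
The pg target printed for each pool above is capacity ratio × bias × (target PGs per OSD × number of OSDs). Assuming three 20 GiB OSDs and the default mon_target_pg_per_osd of 100 (neither is stated in the log, but the resulting multiplier of 300 reproduces every logged value exactly):

```python
def pg_target(capacity_ratio, bias, n_osds=3, target_pg_per_osd=100):
    # Raw (pre-quantization) target as printed by pg_autoscaler.
    return capacity_ratio * bias * n_osds * target_pg_per_osd

print(pg_target(7.185749983720779e-06, 1.0))  # 0.0021557249951162337 -> '.mgr'
print(pg_target(0.0015194363726639317, 1.0))  # 0.4558309117991795   -> 'vms'
print(pg_target(5.087256625643029e-07, 4.0))  # 0.0006104707950771635 -> 'cephfs.cephfs.meta'
```

The "quantized to" step then rounds to a power of two and only moves pg_num when the target differs enough from the current value, which is why most pools stay at 32 while cephfs.cephfs.meta is stepped down toward 16.
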
Mar  1 05:14:18 np0005634532 nova_compute[257049]: 2026-03-01 10:14:18.325 257053 DEBUG nova.compute.manager [req-8a5255ae-c522-4e79-a04d-0a55008b9e92 req-2a5ccbf7-a7ad-41b0-91a0-8bbdf4a96fc6 4172bfcb2ac44ccc905f3929f41a6ec0 b5baddd982154c5d8ca5431b102a7ecd - - default default] [instance: baa5d1fc-2fe6-4353-9321-71ddf8760c24] Received event network-changed-79c2bbef-b2db-45bd-91c7-0e64bcb15301 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Mar  1 05:14:18 np0005634532 nova_compute[257049]: 2026-03-01 10:14:18.325 257053 DEBUG nova.compute.manager [req-8a5255ae-c522-4e79-a04d-0a55008b9e92 req-2a5ccbf7-a7ad-41b0-91a0-8bbdf4a96fc6 4172bfcb2ac44ccc905f3929f41a6ec0 b5baddd982154c5d8ca5431b102a7ecd - - default default] [instance: baa5d1fc-2fe6-4353-9321-71ddf8760c24] Refreshing instance network info cache due to event network-changed-79c2bbef-b2db-45bd-91c7-0e64bcb15301. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Mar  1 05:14:18 np0005634532 nova_compute[257049]: 2026-03-01 10:14:18.325 257053 DEBUG oslo_concurrency.lockutils [req-8a5255ae-c522-4e79-a04d-0a55008b9e92 req-2a5ccbf7-a7ad-41b0-91a0-8bbdf4a96fc6 4172bfcb2ac44ccc905f3929f41a6ec0 b5baddd982154c5d8ca5431b102a7ecd - - default default] Acquiring lock "refresh_cache-baa5d1fc-2fe6-4353-9321-71ddf8760c24" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Mar  1 05:14:18 np0005634532 nova_compute[257049]: 2026-03-01 10:14:18.326 257053 DEBUG oslo_concurrency.lockutils [req-8a5255ae-c522-4e79-a04d-0a55008b9e92 req-2a5ccbf7-a7ad-41b0-91a0-8bbdf4a96fc6 4172bfcb2ac44ccc905f3929f41a6ec0 b5baddd982154c5d8ca5431b102a7ecd - - default default] Acquired lock "refresh_cache-baa5d1fc-2fe6-4353-9321-71ddf8760c24" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Mar  1 05:14:18 np0005634532 nova_compute[257049]: 2026-03-01 10:14:18.326 257053 DEBUG nova.network.neutron [req-8a5255ae-c522-4e79-a04d-0a55008b9e92 req-2a5ccbf7-a7ad-41b0-91a0-8bbdf4a96fc6 4172bfcb2ac44ccc905f3929f41a6ec0 b5baddd982154c5d8ca5431b102a7ecd - - default default] [instance: baa5d1fc-2fe6-4353-9321-71ddf8760c24] Refreshing network info cache for port 79c2bbef-b2db-45bd-91c7-0e64bcb15301 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Mar  1 05:14:18 np0005634532 nova_compute[257049]: 2026-03-01 10:14:18.423 257053 DEBUG oslo_concurrency.lockutils [None req-43ed4c11-c1e6-4c2a-8359-52193acbc9c8 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Acquiring lock "baa5d1fc-2fe6-4353-9321-71ddf8760c24" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Mar  1 05:14:18 np0005634532 nova_compute[257049]: 2026-03-01 10:14:18.424 257053 DEBUG oslo_concurrency.lockutils [None req-43ed4c11-c1e6-4c2a-8359-52193acbc9c8 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Lock "baa5d1fc-2fe6-4353-9321-71ddf8760c24" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Mar  1 05:14:18 np0005634532 nova_compute[257049]: 2026-03-01 10:14:18.424 257053 DEBUG oslo_concurrency.lockutils [None req-43ed4c11-c1e6-4c2a-8359-52193acbc9c8 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Acquiring lock "baa5d1fc-2fe6-4353-9321-71ddf8760c24-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Mar  1 05:14:18 np0005634532 nova_compute[257049]: 2026-03-01 10:14:18.425 257053 DEBUG oslo_concurrency.lockutils [None req-43ed4c11-c1e6-4c2a-8359-52193acbc9c8 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Lock "baa5d1fc-2fe6-4353-9321-71ddf8760c24-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Mar  1 05:14:18 np0005634532 nova_compute[257049]: 2026-03-01 10:14:18.425 257053 DEBUG oslo_concurrency.lockutils [None req-43ed4c11-c1e6-4c2a-8359-52193acbc9c8 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Lock "baa5d1fc-2fe6-4353-9321-71ddf8760c24-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Mar  1 05:14:18 np0005634532 nova_compute[257049]: 2026-03-01 10:14:18.427 257053 INFO nova.compute.manager [None req-43ed4c11-c1e6-4c2a-8359-52193acbc9c8 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] [instance: baa5d1fc-2fe6-4353-9321-71ddf8760c24] Terminating instance#033[00m
Mar  1 05:14:18 np0005634532 nova_compute[257049]: 2026-03-01 10:14:18.429 257053 DEBUG nova.compute.manager [None req-43ed4c11-c1e6-4c2a-8359-52193acbc9c8 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] [instance: baa5d1fc-2fe6-4353-9321-71ddf8760c24] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Mar  1 05:14:18 np0005634532 kernel: tap79c2bbef-b2 (unregistering): left promiscuous mode
Mar  1 05:14:18 np0005634532 NetworkManager[49996]: <info>  [1772360058.4797] device (tap79c2bbef-b2): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Mar  1 05:14:18 np0005634532 ovn_controller[157082]: 2026-03-01T10:14:18Z|00063|binding|INFO|Releasing lport 79c2bbef-b2db-45bd-91c7-0e64bcb15301 from this chassis (sb_readonly=0)
Mar  1 05:14:18 np0005634532 nova_compute[257049]: 2026-03-01 10:14:18.486 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:14:18 np0005634532 ovn_controller[157082]: 2026-03-01T10:14:18Z|00064|binding|INFO|Setting lport 79c2bbef-b2db-45bd-91c7-0e64bcb15301 down in Southbound
Mar  1 05:14:18 np0005634532 ovn_controller[157082]: 2026-03-01T10:14:18Z|00065|binding|INFO|Removing iface tap79c2bbef-b2 ovn-installed in OVS
Mar  1 05:14:18 np0005634532 nova_compute[257049]: 2026-03-01 10:14:18.498 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:14:18 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:14:18.500 167541 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:cd:0b:b9 10.100.0.14'], port_security=['fa:16:3e:cd:0b:b9 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'baa5d1fc-2fe6-4353-9321-71ddf8760c24', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-75774369-d1fe-46b7-99fa-32ee72215bc9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'aa1916e2334f470ea8eeda213ef84cc5', 'neutron:revision_number': '8', 'neutron:security_group_ids': 'c2731b0f-5ad9-4740-93af-d158d94139f0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=34bd372b-ef6f-498f-877c-cdd463d14459, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f611def4670>], logical_port=79c2bbef-b2db-45bd-91c7-0e64bcb15301) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f611def4670>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Mar  1 05:14:18 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:14:18.502 167541 INFO neutron.agent.ovn.metadata.agent [-] Port 79c2bbef-b2db-45bd-91c7-0e64bcb15301 in datapath 75774369-d1fe-46b7-99fa-32ee72215bc9 unbound from our chassis#033[00m
Mar  1 05:14:18 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:14:18.503 167541 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 75774369-d1fe-46b7-99fa-32ee72215bc9, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Mar  1 05:14:18 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:14:18.504 262878 DEBUG oslo.privsep.daemon [-] privsep: reply[27aba5ea-e5c2-48c6-9deb-3be06e13d909]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Mar  1 05:14:18 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:14:18.505 167541 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-75774369-d1fe-46b7-99fa-32ee72215bc9 namespace which is not needed anymore#033[00m
Mar  1 05:14:18 np0005634532 systemd[1]: machine-qemu\x2d4\x2dinstance\x2d0000000b.scope: Deactivated successfully.
Mar  1 05:14:18 np0005634532 systemd[1]: machine-qemu\x2d4\x2dinstance\x2d0000000b.scope: Consumed 13.013s CPU time.
Mar  1 05:14:18 np0005634532 systemd-machined[221390]: Machine qemu-4-instance-0000000b terminated.
Mar  1 05:14:18 np0005634532 podman[275026]: 2026-03-01 10:14:18.563035663 +0000 UTC m=+0.054902297 container health_status 1df4dec302d8b40bce93714986cfc892519e8a31436399020d0034ae17ac64d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a5baae9e21dffd42c66f546b0b62f95c6af9f409d05e01e7483de42d5dd2a382, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '08b9467e1b7e95537191fb2fa6825d176e65d7d6be048e9a0925db064969bbde-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a5baae9e21dffd42c66f546b0b62f95c6af9f409d05e01e7483de42d5dd2a382', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Mar  1 05:14:18 np0005634532 neutron-haproxy-ovnmeta-75774369-d1fe-46b7-99fa-32ee72215bc9[273996]: [NOTICE]   (274000) : haproxy version is 2.8.14-c23fe91
Mar  1 05:14:18 np0005634532 neutron-haproxy-ovnmeta-75774369-d1fe-46b7-99fa-32ee72215bc9[273996]: [NOTICE]   (274000) : path to executable is /usr/sbin/haproxy
Mar  1 05:14:18 np0005634532 neutron-haproxy-ovnmeta-75774369-d1fe-46b7-99fa-32ee72215bc9[273996]: [WARNING]  (274000) : Exiting Master process...
Mar  1 05:14:18 np0005634532 neutron-haproxy-ovnmeta-75774369-d1fe-46b7-99fa-32ee72215bc9[273996]: [WARNING]  (274000) : Exiting Master process...
Mar  1 05:14:18 np0005634532 neutron-haproxy-ovnmeta-75774369-d1fe-46b7-99fa-32ee72215bc9[273996]: [ALERT]    (274000) : Current worker (274002) exited with code 143 (Terminated)
Mar  1 05:14:18 np0005634532 neutron-haproxy-ovnmeta-75774369-d1fe-46b7-99fa-32ee72215bc9[273996]: [WARNING]  (274000) : All workers exited. Exiting... (0)
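
The haproxy worker's exit code 143 above is the usual 128 + signal number convention, i.e. SIGTERM (15) delivered as the metadata-proxy container is stopped:

```python
import signal

exit_code = 143
print(signal.Signals(exit_code - 128).name)  # SIGTERM
```
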
Mar  1 05:14:18 np0005634532 systemd[1]: libpod-61cadc6a537dd7de2243a15164e1e699117472c49ee1754920dd56ce912cbe1e.scope: Deactivated successfully.
Mar  1 05:14:18 np0005634532 podman[275068]: 2026-03-01 10:14:18.620960924 +0000 UTC m=+0.044564682 container died 61cadc6a537dd7de2243a15164e1e699117472c49ee1754920dd56ce912cbe1e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a5baae9e21dffd42c66f546b0b62f95c6af9f409d05e01e7483de42d5dd2a382, name=neutron-haproxy-ovnmeta-75774369-d1fe-46b7-99fa-32ee72215bc9, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Mar  1 05:14:18 np0005634532 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-61cadc6a537dd7de2243a15164e1e699117472c49ee1754920dd56ce912cbe1e-userdata-shm.mount: Deactivated successfully.
Mar  1 05:14:18 np0005634532 systemd[1]: var-lib-containers-storage-overlay-f0fb06eaabb20857ae41682a3f52d1ab143452130cca866560bbfdacc398e4f1-merged.mount: Deactivated successfully.
Mar  1 05:14:18 np0005634532 kernel: tap79c2bbef-b2: entered promiscuous mode
Mar  1 05:14:18 np0005634532 nova_compute[257049]: 2026-03-01 10:14:18.646 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:14:18 np0005634532 ovn_controller[157082]: 2026-03-01T10:14:18Z|00066|binding|INFO|Claiming lport 79c2bbef-b2db-45bd-91c7-0e64bcb15301 for this chassis.
Mar  1 05:14:18 np0005634532 ovn_controller[157082]: 2026-03-01T10:14:18Z|00067|binding|INFO|79c2bbef-b2db-45bd-91c7-0e64bcb15301: Claiming fa:16:3e:cd:0b:b9 10.100.0.14
Mar  1 05:14:18 np0005634532 kernel: tap79c2bbef-b2 (unregistering): left promiscuous mode
Mar  1 05:14:18 np0005634532 podman[275068]: 2026-03-01 10:14:18.658247995 +0000 UTC m=+0.081851763 container cleanup 61cadc6a537dd7de2243a15164e1e699117472c49ee1754920dd56ce912cbe1e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a5baae9e21dffd42c66f546b0b62f95c6af9f409d05e01e7483de42d5dd2a382, name=neutron-haproxy-ovnmeta-75774369-d1fe-46b7-99fa-32ee72215bc9, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Mar  1 05:14:18 np0005634532 systemd[1]: libpod-conmon-61cadc6a537dd7de2243a15164e1e699117472c49ee1754920dd56ce912cbe1e.scope: Deactivated successfully.
Mar  1 05:14:18 np0005634532 ovn_controller[157082]: 2026-03-01T10:14:18Z|00068|binding|INFO|Setting lport 79c2bbef-b2db-45bd-91c7-0e64bcb15301 ovn-installed in OVS
Mar  1 05:14:18 np0005634532 ovn_controller[157082]: 2026-03-01T10:14:18Z|00069|if_status|INFO|Not setting lport 79c2bbef-b2db-45bd-91c7-0e64bcb15301 down as sb is readonly
Mar  1 05:14:18 np0005634532 nova_compute[257049]: 2026-03-01 10:14:18.668 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:14:18 np0005634532 nova_compute[257049]: 2026-03-01 10:14:18.672 257053 INFO nova.virt.libvirt.driver [-] [instance: baa5d1fc-2fe6-4353-9321-71ddf8760c24] Instance destroyed successfully.#033[00m
Mar  1 05:14:18 np0005634532 nova_compute[257049]: 2026-03-01 10:14:18.673 257053 DEBUG nova.objects.instance [None req-43ed4c11-c1e6-4c2a-8359-52193acbc9c8 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Lazy-loading 'resources' on Instance uuid baa5d1fc-2fe6-4353-9321-71ddf8760c24 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Mar  1 05:14:18 np0005634532 podman[275104]: 2026-03-01 10:14:18.721474157 +0000 UTC m=+0.045602078 container remove 61cadc6a537dd7de2243a15164e1e699117472c49ee1754920dd56ce912cbe1e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a5baae9e21dffd42c66f546b0b62f95c6af9f409d05e01e7483de42d5dd2a382, name=neutron-haproxy-ovnmeta-75774369-d1fe-46b7-99fa-32ee72215bc9, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.license=GPLv2)
Mar  1 05:14:18 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:14:18.725 262878 DEBUG oslo.privsep.daemon [-] privsep: reply[686341a5-b8f2-4346-9bfc-31655986c117]: (4, ('Sun Mar  1 10:14:18 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-75774369-d1fe-46b7-99fa-32ee72215bc9 (61cadc6a537dd7de2243a15164e1e699117472c49ee1754920dd56ce912cbe1e)\n61cadc6a537dd7de2243a15164e1e699117472c49ee1754920dd56ce912cbe1e\nSun Mar  1 10:14:18 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-75774369-d1fe-46b7-99fa-32ee72215bc9 (61cadc6a537dd7de2243a15164e1e699117472c49ee1754920dd56ce912cbe1e)\n61cadc6a537dd7de2243a15164e1e699117472c49ee1754920dd56ce912cbe1e\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Mar  1 05:14:18 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:14:18.727 262878 DEBUG oslo.privsep.daemon [-] privsep: reply[b7ebe89d-3fd0-47e2-8b5f-e3149acc7237]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Mar  1 05:14:18 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:14:18.728 167541 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap75774369-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Mar  1 05:14:18 np0005634532 nova_compute[257049]: 2026-03-01 10:14:18.729 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:14:18 np0005634532 kernel: tap75774369-d0: left promiscuous mode
Mar  1 05:14:18 np0005634532 nova_compute[257049]: 2026-03-01 10:14:18.736 257053 DEBUG nova.virt.libvirt.vif [None req-43ed4c11-c1e6-4c2a-8359-52193acbc9c8 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-03-01T10:13:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-931656500',display_name='tempest-TestNetworkBasicOps-server-931656500',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-931656500',id=11,image_ref='07f64171-cfd1-4482-a545-07063cf7c3f2',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKcvsPe/bt7wQGCO3x2FGW9xBp8tDOsdxNbRfhpUGAYX67H9M5t4jXrMEzIEWqxq1Vp1kSYaQSgdvRX6E2zcqTGcl8mdrZndaFhbtzpxPcNDvgQoPPzNGgz+HuvTpqMgVw==',key_name='tempest-TestNetworkBasicOps-2130166709',keypairs=<?>,launch_index=0,launched_at=2026-03-01T10:13:28Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='aa1916e2334f470ea8eeda213ef84cc5',ramdisk_id='',reservation_id='r-42fyf790',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='07f64171-cfd1-4482-a545-07063cf7c3f2',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-1700707940',owner_user_name='tempest-TestNetworkBasicOps-1700707940-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-03-01T10:13:29Z,user_data=None,user_id='054b4e3fa290475c906614f7e45d128f',uuid=baa5d1fc-2fe6-4353-9321-71ddf8760c24,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "79c2bbef-b2db-45bd-91c7-0e64bcb15301", "address": "fa:16:3e:cd:0b:b9", "network": {"id": "75774369-d1fe-46b7-99fa-32ee72215bc9", "bridge": "br-int", "label": "tempest-network-smoke--723889176", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.178", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aa1916e2334f470ea8eeda213ef84cc5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap79c2bbef-b2", "ovs_interfaceid": "79c2bbef-b2db-45bd-91c7-0e64bcb15301", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Mar  1 05:14:18 np0005634532 nova_compute[257049]: 2026-03-01 10:14:18.737 257053 DEBUG nova.network.os_vif_util [None req-43ed4c11-c1e6-4c2a-8359-52193acbc9c8 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Converting VIF {"id": "79c2bbef-b2db-45bd-91c7-0e64bcb15301", "address": "fa:16:3e:cd:0b:b9", "network": {"id": "75774369-d1fe-46b7-99fa-32ee72215bc9", "bridge": "br-int", "label": "tempest-network-smoke--723889176", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.178", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aa1916e2334f470ea8eeda213ef84cc5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap79c2bbef-b2", "ovs_interfaceid": "79c2bbef-b2db-45bd-91c7-0e64bcb15301", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Mar  1 05:14:18 np0005634532 nova_compute[257049]: 2026-03-01 10:14:18.738 257053 DEBUG nova.network.os_vif_util [None req-43ed4c11-c1e6-4c2a-8359-52193acbc9c8 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:cd:0b:b9,bridge_name='br-int',has_traffic_filtering=True,id=79c2bbef-b2db-45bd-91c7-0e64bcb15301,network=Network(75774369-d1fe-46b7-99fa-32ee72215bc9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap79c2bbef-b2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Mar  1 05:14:18 np0005634532 nova_compute[257049]: 2026-03-01 10:14:18.738 257053 DEBUG os_vif [None req-43ed4c11-c1e6-4c2a-8359-52193acbc9c8 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:cd:0b:b9,bridge_name='br-int',has_traffic_filtering=True,id=79c2bbef-b2db-45bd-91c7-0e64bcb15301,network=Network(75774369-d1fe-46b7-99fa-32ee72215bc9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap79c2bbef-b2') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Mar  1 05:14:18 np0005634532 nova_compute[257049]: 2026-03-01 10:14:18.740 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:14:18 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:14:18.740 262878 DEBUG oslo.privsep.daemon [-] privsep: reply[e4078294-cf52-4a0f-ab46-4d48f820bec5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Mar  1 05:14:18 np0005634532 nova_compute[257049]: 2026-03-01 10:14:18.740 257053 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap79c2bbef-b2, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Mar  1 05:14:18 np0005634532 nova_compute[257049]: 2026-03-01 10:14:18.741 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:14:18 np0005634532 nova_compute[257049]: 2026-03-01 10:14:18.742 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:14:18 np0005634532 nova_compute[257049]: 2026-03-01 10:14:18.746 257053 INFO os_vif [None req-43ed4c11-c1e6-4c2a-8359-52193acbc9c8 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:cd:0b:b9,bridge_name='br-int',has_traffic_filtering=True,id=79c2bbef-b2db-45bd-91c7-0e64bcb15301,network=Network(75774369-d1fe-46b7-99fa-32ee72215bc9),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap79c2bbef-b2')#033[00m
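
The DelPortCommand transactions logged at 10:14:18.728 and 10:14:18.740 are ovsdbapp commands run against the local OVS database during the VIF unplug. A standalone sketch of the same call; the socket path and timeout here are assumptions, not taken from the log:

```python
from ovsdbapp.backend.ovs_idl import connection
from ovsdbapp.schema.open_vswitch import impl_idl

# Connect to the local Open vSwitch database (path assumed).
idl = connection.OvsdbIdl.from_server(
    'unix:/run/openvswitch/db.sock', 'Open_vSwitch')
api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

# Equivalent of the logged DelPortCommand(port=tap79c2bbef-b2,
# bridge=br-int, if_exists=True).
api.del_port('tap79c2bbef-b2', bridge='br-int',
             if_exists=True).execute(check_error=True)
```
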
Mar  1 05:14:18 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:14:18.754 262878 DEBUG oslo.privsep.daemon [-] privsep: reply[416026bd-0ec7-41d2-8f57-3ba4d178724d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Mar  1 05:14:18 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:14:18.756 262878 DEBUG oslo.privsep.daemon [-] privsep: reply[229d72cd-8506-436e-8910-a1658fb79508]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Mar  1 05:14:18 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:14:18.766 262878 DEBUG oslo.privsep.daemon [-] privsep: reply[75667e97-cbb9-4d74-9938-b64c38bc7bd1]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 436499, 'reachable_time': 34496, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 
'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 275136, 'error': None, 'target': 'ovnmeta-75774369-d1fe-46b7-99fa-32ee72215bc9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Mar  1 05:14:18 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:14:18.768 167914 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-75774369-d1fe-46b7-99fa-32ee72215bc9 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Mar  1 05:14:18 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:14:18.768 167914 DEBUG oslo.privsep.daemon [-] privsep: reply[163d89f6-5e25-4ad4-b3fa-f85fb207d841]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Mar  1 05:14:18 np0005634532 systemd[1]: run-netns-ovnmeta\x2d75774369\x2dd1fe\x2d46b7\x2d99fa\x2d32ee72215bc9.mount: Deactivated successfully.
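Note: remove_netns above is neutron's privsep-wrapped namespace deletion; pyroute2, which neutron's ip_lib builds on, exposes the same call directly. A sketch, assuming root privileges (namespace name copied from the log):

    # Delete a network namespace as neutron's privileged ip_lib does.
    import errno
    from pyroute2 import netns

    NS = 'ovnmeta-75774369-d1fe-46b7-99fa-32ee72215bc9'

    try:
        netns.remove(NS)              # unlinks /var/run/netns/<name>
    except OSError as e:
        if e.errno != errno.ENOENT:   # already gone is fine
            raise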
Mar  1 05:14:18 np0005634532 ovn_controller[157082]: 2026-03-01T10:14:18Z|00070|binding|INFO|Releasing lport 79c2bbef-b2db-45bd-91c7-0e64bcb15301 from this chassis (sb_readonly=0)
Mar  1 05:14:18 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:14:18.847 167541 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:cd:0b:b9 10.100.0.14'], port_security=['fa:16:3e:cd:0b:b9 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'baa5d1fc-2fe6-4353-9321-71ddf8760c24', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-75774369-d1fe-46b7-99fa-32ee72215bc9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'aa1916e2334f470ea8eeda213ef84cc5', 'neutron:revision_number': '8', 'neutron:security_group_ids': 'c2731b0f-5ad9-4740-93af-d158d94139f0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=34bd372b-ef6f-498f-877c-cdd463d14459, chassis=[<ovs.db.idl.Row object at 0x7f611def4670>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f611def4670>], logical_port=79c2bbef-b2db-45bd-91c7-0e64bcb15301) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Mar  1 05:14:18 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:14:18.848 167541 INFO neutron.agent.ovn.metadata.agent [-] Port 79c2bbef-b2db-45bd-91c7-0e64bcb15301 in datapath 75774369-d1fe-46b7-99fa-32ee72215bc9 bound to our chassis#033[00m
Mar  1 05:14:18 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:14:18.850 167541 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 75774369-d1fe-46b7-99fa-32ee72215bc9#033[00m
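Note: the "Matched UPDATE: PortBindingUpdatedEvent" lines show ovsdbapp's event machinery testing each Port_Binding row change against registered watchers; once the chassis column points at this host, the agent provisions metadata for the datapath. A rough sketch of such a watcher follows; the class body is illustrative, neutron's real one lives in neutron.agent.ovn.metadata.agent:

    # An ovsdbapp row-event watcher like the one matching above.
    from ovsdbapp.backend.ovs_idl import event as row_event

    class PortBindingUpdatedEvent(row_event.RowEvent):
        def __init__(self):
            # Only 'update' events on Port_Binding are of interest,
            # matching events=('update',), table='Port_Binding' above.
            super().__init__((self.ROW_UPDATE,), 'Port_Binding', None)

        def run(self, event, row, old):
            # Invoked for matching rows; react to chassis changes here.
            print('lport %s chassis now %s' % (row.logical_port, row.chassis))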
Mar  1 05:14:18 np0005634532 nova_compute[257049]: 2026-03-01 10:14:18.853 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:14:18 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:14:18.859 262878 DEBUG oslo.privsep.daemon [-] privsep: reply[35637b66-8e8c-4579-8047-d21b9551841b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Mar  1 05:14:18 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:14:18.860 167541 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap75774369-d1 in ovnmeta-75774369-d1fe-46b7-99fa-32ee72215bc9 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Mar  1 05:14:18 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:14:18.864 262878 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap75774369-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Mar  1 05:14:18 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:14:18.865 262878 DEBUG oslo.privsep.daemon [-] privsep: reply[c97ef3c1-7d04-46e2-8f55-daaa35da9dbe]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Mar  1 05:14:18 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:14:18.868 262878 DEBUG oslo.privsep.daemon [-] privsep: reply[359d3785-d2a2-4fb4-8a96-923aa1afb1ec]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Mar  1 05:14:18 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:14:18.879 167541 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:cd:0b:b9 10.100.0.14'], port_security=['fa:16:3e:cd:0b:b9 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'baa5d1fc-2fe6-4353-9321-71ddf8760c24', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-75774369-d1fe-46b7-99fa-32ee72215bc9', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'aa1916e2334f470ea8eeda213ef84cc5', 'neutron:revision_number': '8', 'neutron:security_group_ids': 'c2731b0f-5ad9-4740-93af-d158d94139f0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=34bd372b-ef6f-498f-877c-cdd463d14459, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f611def4670>], logical_port=79c2bbef-b2db-45bd-91c7-0e64bcb15301) old=Port_Binding(chassis=[<ovs.db.idl.Row object at 0x7f611def4670>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Mar  1 05:14:18 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:14:18.878 167914 DEBUG oslo.privsep.daemon [-] privsep: reply[21e8efa0-2df9-4f96-9259-94acbc3541b0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Mar  1 05:14:18 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:14:18.900 262878 DEBUG oslo.privsep.daemon [-] privsep: reply[89f64127-7fcc-425d-a75f-7421c078420a]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Mar  1 05:14:18 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:14:18.923 262940 DEBUG oslo.privsep.daemon [-] privsep: reply[988070dc-d9b8-4fa7-aaea-403426b18b0a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Mar  1 05:14:18 np0005634532 systemd-udevd[275037]: Network interface NamePolicy= disabled on kernel command line.
Mar  1 05:14:18 np0005634532 NetworkManager[49996]: <info>  [1772360058.9293] manager: (tap75774369-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/48)
Mar  1 05:14:18 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:14:18.929 262878 DEBUG oslo.privsep.daemon [-] privsep: reply[1a1329be-e548-441a-8faa-c09c7d0ffa89]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Mar  1 05:14:18 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:14:18.952 262940 DEBUG oslo.privsep.daemon [-] privsep: reply[7452b790-5855-4b91-bf23-0d2f3be3acbb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Mar  1 05:14:18 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:14:18.956 262940 DEBUG oslo.privsep.daemon [-] privsep: reply[983daa66-9ae9-4a7d-90c0-575ce988aa28]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Mar  1 05:14:18 np0005634532 NetworkManager[49996]: <info>  [1772360058.9724] device (tap75774369-d0): carrier: link connected
Mar  1 05:14:18 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:14:18.976 262940 DEBUG oslo.privsep.daemon [-] privsep: reply[926b1d6f-343d-4909-9663-59c76848e1d3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Mar  1 05:14:18 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:14:18.992 262878 DEBUG oslo.privsep.daemon [-] privsep: reply[f2db1ce7-26ae-4046-9baf-869e00a8ddc0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap75774369-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:90:23:dd'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 24], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 441556, 'reachable_time': 44859, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 275166, 'error': None, 'target': 'ovnmeta-75774369-d1fe-46b7-99fa-32ee72215bc9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Mar  1 05:14:19 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:14:18 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Mar  1 05:14:19 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:14:18 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Mar  1 05:14:19 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:14:18 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Mar  1 05:14:19 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:14:19 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Mar  1 05:14:19 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:14:19.005 262878 DEBUG oslo.privsep.daemon [-] privsep: reply[579695d0-baad-4212-8a46-ea070e4c7077]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe90:23dd'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 441556, 'tstamp': 441556}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 275167, 'error': None, 'target': 'ovnmeta-75774369-d1fe-46b7-99fa-32ee72215bc9', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Mar  1 05:14:19 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:14:19.020 262878 DEBUG oslo.privsep.daemon [-] privsep: reply[2a1061a1-e291-443a-8f11-e3352bda41fd]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap75774369-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:90:23:dd'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 24], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 441556, 'reachable_time': 44859, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 275168, 'error': None, 'target': 'ovnmeta-75774369-d1fe-46b7-99fa-32ee72215bc9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Mar  1 05:14:19 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:14:19.041 262878 DEBUG oslo.privsep.daemon [-] privsep: reply[6942787d-23aa-4f3f-9d3a-f45e16babac9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Mar  1 05:14:19 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:14:19.081 262878 DEBUG oslo.privsep.daemon [-] privsep: reply[2c06ac67-a303-40cc-b432-0d93e33011ba]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Mar  1 05:14:19 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:14:19.082 167541 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap75774369-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Mar  1 05:14:19 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:14:19.082 167541 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Mar  1 05:14:19 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:14:19.083 167541 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap75774369-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Mar  1 05:14:19 np0005634532 nova_compute[257049]: 2026-03-01 10:14:19.084 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:14:19 np0005634532 kernel: tap75774369-d0: entered promiscuous mode
Mar  1 05:14:19 np0005634532 NetworkManager[49996]: <info>  [1772360059.0863] manager: (tap75774369-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/49)
Mar  1 05:14:19 np0005634532 nova_compute[257049]: 2026-03-01 10:14:19.087 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:14:19 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:14:19.088 167541 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap75774369-d0, col_values=(('external_ids', {'iface-id': '959c52a4-ced6-4b50-a3b6-13250d5b46cc'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Mar  1 05:14:19 np0005634532 ovn_controller[157082]: 2026-03-01T10:14:19Z|00071|binding|INFO|Releasing lport 959c52a4-ced6-4b50-a3b6-13250d5b46cc from this chassis (sb_readonly=0)
Mar  1 05:14:19 np0005634532 nova_compute[257049]: 2026-03-01 10:14:19.089 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:14:19 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:14:19.090 167541 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/75774369-d1fe-46b7-99fa-32ee72215bc9.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/75774369-d1fe-46b7-99fa-32ee72215bc9.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
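Note: the "Unable to access ... .pid.haproxy" debug is the normal first-provision path: the agent probes for an existing proxy pid before spawning one, and a missing file just means no proxy is running yet. Roughly:

    # Probe a pidfile like the agent's get_value_from_file: a missing
    # file means "nothing spawned yet", not an error. Path from the log.
    def read_pid(path):
        try:
            with open(path) as f:
                content = f.read().strip()
            return int(content) if content else None
        except FileNotFoundError:
            return None

    pid = read_pid('/var/lib/neutron/external/pids/'
                   '75774369-d1fe-46b7-99fa-32ee72215bc9.pid.haproxy')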
Mar  1 05:14:19 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:14:19.092 262878 DEBUG oslo.privsep.daemon [-] privsep: reply[cf12e885-8392-4d14-8dd0-b2eeea82cc8f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Mar  1 05:14:19 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:14:19.092 167541 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Mar  1 05:14:19 np0005634532 ovn_metadata_agent[167536]: global
Mar  1 05:14:19 np0005634532 ovn_metadata_agent[167536]:    log         /dev/log local0 debug
Mar  1 05:14:19 np0005634532 ovn_metadata_agent[167536]:    log-tag     haproxy-metadata-proxy-75774369-d1fe-46b7-99fa-32ee72215bc9
Mar  1 05:14:19 np0005634532 ovn_metadata_agent[167536]:    user        root
Mar  1 05:14:19 np0005634532 ovn_metadata_agent[167536]:    group       root
Mar  1 05:14:19 np0005634532 ovn_metadata_agent[167536]:    maxconn     1024
Mar  1 05:14:19 np0005634532 ovn_metadata_agent[167536]:    pidfile     /var/lib/neutron/external/pids/75774369-d1fe-46b7-99fa-32ee72215bc9.pid.haproxy
Mar  1 05:14:19 np0005634532 ovn_metadata_agent[167536]:    daemon
Mar  1 05:14:19 np0005634532 ovn_metadata_agent[167536]: 
Mar  1 05:14:19 np0005634532 ovn_metadata_agent[167536]: defaults
Mar  1 05:14:19 np0005634532 ovn_metadata_agent[167536]:    log global
Mar  1 05:14:19 np0005634532 ovn_metadata_agent[167536]:    mode http
Mar  1 05:14:19 np0005634532 ovn_metadata_agent[167536]:    option httplog
Mar  1 05:14:19 np0005634532 ovn_metadata_agent[167536]:    option dontlognull
Mar  1 05:14:19 np0005634532 ovn_metadata_agent[167536]:    option http-server-close
Mar  1 05:14:19 np0005634532 ovn_metadata_agent[167536]:    option forwardfor
Mar  1 05:14:19 np0005634532 ovn_metadata_agent[167536]:    retries                 3
Mar  1 05:14:19 np0005634532 ovn_metadata_agent[167536]:    timeout http-request    30s
Mar  1 05:14:19 np0005634532 ovn_metadata_agent[167536]:    timeout connect         30s
Mar  1 05:14:19 np0005634532 ovn_metadata_agent[167536]:    timeout client          32s
Mar  1 05:14:19 np0005634532 ovn_metadata_agent[167536]:    timeout server          32s
Mar  1 05:14:19 np0005634532 ovn_metadata_agent[167536]:    timeout http-keep-alive 30s
Mar  1 05:14:19 np0005634532 ovn_metadata_agent[167536]: 
Mar  1 05:14:19 np0005634532 ovn_metadata_agent[167536]: 
Mar  1 05:14:19 np0005634532 ovn_metadata_agent[167536]: listen listener
Mar  1 05:14:19 np0005634532 ovn_metadata_agent[167536]:    bind 169.254.169.254:80
Mar  1 05:14:19 np0005634532 ovn_metadata_agent[167536]:    server metadata /var/lib/neutron/metadata_proxy
Mar  1 05:14:19 np0005634532 ovn_metadata_agent[167536]:    http-request add-header X-OVN-Network-ID 75774369-d1fe-46b7-99fa-32ee72215bc9
Mar  1 05:14:19 np0005634532 ovn_metadata_agent[167536]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
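Note: the rendered config binds the link-local metadata address inside the namespace and forwards every request to the agent's unix socket at /var/lib/neutron/metadata_proxy, tagging it with X-OVN-Network-ID so the agent can identify the network (option forwardfor supplies the instance's source IP). A raw probe of such a socket might look like the sketch below; the request path and header values are illustrative assumptions:

    # Talk to the metadata agent socket directly, adding the same header
    # haproxy injects above. Illustrative only; the agent also relies on
    # the X-Forwarded-For header that 'option forwardfor' normally adds.
    import socket

    SOCK = '/var/lib/neutron/metadata_proxy'
    REQ = (b'GET /latest/meta-data/instance-id HTTP/1.1\r\n'
           b'Host: 169.254.169.254\r\n'
           b'X-OVN-Network-ID: 75774369-d1fe-46b7-99fa-32ee72215bc9\r\n'
           b'X-Forwarded-For: 10.100.0.14\r\n'
           b'Connection: close\r\n\r\n')

    s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    s.connect(SOCK)
    s.sendall(REQ)
    print(s.recv(65536).decode(errors='replace'))
    s.close()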
Mar  1 05:14:19 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:14:19.093 167541 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-75774369-d1fe-46b7-99fa-32ee72215bc9', 'env', 'PROCESS_TAG=haproxy-75774369-d1fe-46b7-99fa-32ee72215bc9', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/75774369-d1fe-46b7-99fa-32ee72215bc9.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Mar  1 05:14:19 np0005634532 nova_compute[257049]: 2026-03-01 10:14:19.094 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:14:19 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v955: 353 pgs: 353 active+clean; 121 MiB data, 332 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 22 KiB/s wr, 34 op/s
Mar  1 05:14:19 np0005634532 nova_compute[257049]: 2026-03-01 10:14:19.183 257053 INFO nova.virt.libvirt.driver [None req-43ed4c11-c1e6-4c2a-8359-52193acbc9c8 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] [instance: baa5d1fc-2fe6-4353-9321-71ddf8760c24] Deleting instance files /var/lib/nova/instances/baa5d1fc-2fe6-4353-9321-71ddf8760c24_del#033[00m
Mar  1 05:14:19 np0005634532 nova_compute[257049]: 2026-03-01 10:14:19.184 257053 INFO nova.virt.libvirt.driver [None req-43ed4c11-c1e6-4c2a-8359-52193acbc9c8 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] [instance: baa5d1fc-2fe6-4353-9321-71ddf8760c24] Deletion of /var/lib/nova/instances/baa5d1fc-2fe6-4353-9321-71ddf8760c24_del complete#033[00m
Mar  1 05:14:19 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:14:19 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:14:19 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:14:19.240 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:14:19 np0005634532 nova_compute[257049]: 2026-03-01 10:14:19.243 257053 INFO nova.compute.manager [None req-43ed4c11-c1e6-4c2a-8359-52193acbc9c8 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] [instance: baa5d1fc-2fe6-4353-9321-71ddf8760c24] Took 0.81 seconds to destroy the instance on the hypervisor.#033[00m
Mar  1 05:14:19 np0005634532 nova_compute[257049]: 2026-03-01 10:14:19.243 257053 DEBUG oslo.service.loopingcall [None req-43ed4c11-c1e6-4c2a-8359-52193acbc9c8 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Mar  1 05:14:19 np0005634532 nova_compute[257049]: 2026-03-01 10:14:19.243 257053 DEBUG nova.compute.manager [-] [instance: baa5d1fc-2fe6-4353-9321-71ddf8760c24] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Mar  1 05:14:19 np0005634532 nova_compute[257049]: 2026-03-01 10:14:19.244 257053 DEBUG nova.network.neutron [-] [instance: baa5d1fc-2fe6-4353-9321-71ddf8760c24] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
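Note: deallocate_for_instance() ultimately deletes the neutron ports owned by the instance (preserve_on_delete=False on the VIF above means nova owns this one). The equivalent effect through openstacksdk, as a sketch; the cloud name and auth source are assumptions:

    # Not nova's code: delete an instance's neutron ports via openstacksdk.
    import openstack

    conn = openstack.connect(cloud='envvars')   # auth from OS_* variables
    SERVER = 'baa5d1fc-2fe6-4353-9321-71ddf8760c24'

    for port in conn.network.ports(device_id=SERVER):
        conn.network.delete_port(port, ignore_missing=True)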
Mar  1 05:14:19 np0005634532 nova_compute[257049]: 2026-03-01 10:14:19.384 257053 DEBUG nova.compute.manager [req-cd177119-fcde-4cfc-afca-acc4619a1b08 req-0244b0bb-bc1c-4b0f-90e7-a898cd522ea4 4172bfcb2ac44ccc905f3929f41a6ec0 b5baddd982154c5d8ca5431b102a7ecd - - default default] [instance: baa5d1fc-2fe6-4353-9321-71ddf8760c24] Received event network-vif-unplugged-79c2bbef-b2db-45bd-91c7-0e64bcb15301 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Mar  1 05:14:19 np0005634532 nova_compute[257049]: 2026-03-01 10:14:19.385 257053 DEBUG oslo_concurrency.lockutils [req-cd177119-fcde-4cfc-afca-acc4619a1b08 req-0244b0bb-bc1c-4b0f-90e7-a898cd522ea4 4172bfcb2ac44ccc905f3929f41a6ec0 b5baddd982154c5d8ca5431b102a7ecd - - default default] Acquiring lock "baa5d1fc-2fe6-4353-9321-71ddf8760c24-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Mar  1 05:14:19 np0005634532 nova_compute[257049]: 2026-03-01 10:14:19.385 257053 DEBUG oslo_concurrency.lockutils [req-cd177119-fcde-4cfc-afca-acc4619a1b08 req-0244b0bb-bc1c-4b0f-90e7-a898cd522ea4 4172bfcb2ac44ccc905f3929f41a6ec0 b5baddd982154c5d8ca5431b102a7ecd - - default default] Lock "baa5d1fc-2fe6-4353-9321-71ddf8760c24-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Mar  1 05:14:19 np0005634532 nova_compute[257049]: 2026-03-01 10:14:19.385 257053 DEBUG oslo_concurrency.lockutils [req-cd177119-fcde-4cfc-afca-acc4619a1b08 req-0244b0bb-bc1c-4b0f-90e7-a898cd522ea4 4172bfcb2ac44ccc905f3929f41a6ec0 b5baddd982154c5d8ca5431b102a7ecd - - default default] Lock "baa5d1fc-2fe6-4353-9321-71ddf8760c24-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Mar  1 05:14:19 np0005634532 nova_compute[257049]: 2026-03-01 10:14:19.386 257053 DEBUG nova.compute.manager [req-cd177119-fcde-4cfc-afca-acc4619a1b08 req-0244b0bb-bc1c-4b0f-90e7-a898cd522ea4 4172bfcb2ac44ccc905f3929f41a6ec0 b5baddd982154c5d8ca5431b102a7ecd - - default default] [instance: baa5d1fc-2fe6-4353-9321-71ddf8760c24] No waiting events found dispatching network-vif-unplugged-79c2bbef-b2db-45bd-91c7-0e64bcb15301 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Mar  1 05:14:19 np0005634532 nova_compute[257049]: 2026-03-01 10:14:19.386 257053 DEBUG nova.compute.manager [req-cd177119-fcde-4cfc-afca-acc4619a1b08 req-0244b0bb-bc1c-4b0f-90e7-a898cd522ea4 4172bfcb2ac44ccc905f3929f41a6ec0 b5baddd982154c5d8ca5431b102a7ecd - - default default] [instance: baa5d1fc-2fe6-4353-9321-71ddf8760c24] Received event network-vif-unplugged-79c2bbef-b2db-45bd-91c7-0e64bcb15301 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
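Note: the acquire/release pair at 10:14:19.385 is nova serializing external events per instance with an oslo.concurrency lock named "<uuid>-events"; "No waiting events" means nothing was blocked waiting for this unplug. The primitive itself, sketched (the critical section is illustrative):

    # The per-instance event lock pattern seen above.
    from oslo_concurrency import lockutils

    instance_uuid = 'baa5d1fc-2fe6-4353-9321-71ddf8760c24'

    with lockutils.lock('%s-events' % instance_uuid):
        # pop or record pending external events for this instance
        pass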
Mar  1 05:14:19 np0005634532 podman[275198]: 2026-03-01 10:14:19.400978283 +0000 UTC m=+0.041105836 container create 32dc2088beed6272c46b19698bee7809aa8a0897edce28b59d2c3db73c6a630f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a5baae9e21dffd42c66f546b0b62f95c6af9f409d05e01e7483de42d5dd2a382, name=neutron-haproxy-ovnmeta-75774369-d1fe-46b7-99fa-32ee72215bc9, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, tcib_managed=true, org.label-schema.license=GPLv2)
Mar  1 05:14:19 np0005634532 systemd[1]: Started libpod-conmon-32dc2088beed6272c46b19698bee7809aa8a0897edce28b59d2c3db73c6a630f.scope.
Mar  1 05:14:19 np0005634532 systemd[1]: Started libcrun container.
Mar  1 05:14:19 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20b465f61a4f78cc9d17dd84aa7f8eeafa9b6de943e0cff2737ee4870725bd1d/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Mar  1 05:14:19 np0005634532 podman[275198]: 2026-03-01 10:14:19.470439859 +0000 UTC m=+0.110567412 container init 32dc2088beed6272c46b19698bee7809aa8a0897edce28b59d2c3db73c6a630f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a5baae9e21dffd42c66f546b0b62f95c6af9f409d05e01e7483de42d5dd2a382, name=neutron-haproxy-ovnmeta-75774369-d1fe-46b7-99fa-32ee72215bc9, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.build-date=20260223, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Mar  1 05:14:19 np0005634532 podman[275198]: 2026-03-01 10:14:19.476574571 +0000 UTC m=+0.116702124 container start 32dc2088beed6272c46b19698bee7809aa8a0897edce28b59d2c3db73c6a630f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a5baae9e21dffd42c66f546b0b62f95c6af9f409d05e01e7483de42d5dd2a382, name=neutron-haproxy-ovnmeta-75774369-d1fe-46b7-99fa-32ee72215bc9, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, org.label-schema.build-date=20260223)
Mar  1 05:14:19 np0005634532 podman[275198]: 2026-03-01 10:14:19.382404194 +0000 UTC m=+0.022531767 image pull 2eca8c653984dc6e576f18f42e399ad6cc5a719b2d43d3fafd50f21f399639f3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a5baae9e21dffd42c66f546b0b62f95c6af9f409d05e01e7483de42d5dd2a382
Mar  1 05:14:19 np0005634532 neutron-haproxy-ovnmeta-75774369-d1fe-46b7-99fa-32ee72215bc9[275214]: [NOTICE]   (275218) : New worker (275220) forked
Mar  1 05:14:19 np0005634532 neutron-haproxy-ovnmeta-75774369-d1fe-46b7-99fa-32ee72215bc9[275214]: [NOTICE]   (275218) : Loading success.
Mar  1 05:14:19 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:14:19.525 167541 INFO neutron.agent.ovn.metadata.agent [-] Port 79c2bbef-b2db-45bd-91c7-0e64bcb15301 in datapath 75774369-d1fe-46b7-99fa-32ee72215bc9 unbound from our chassis#033[00m
Mar  1 05:14:19 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:14:19.526 167541 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 75774369-d1fe-46b7-99fa-32ee72215bc9, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Mar  1 05:14:19 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:14:19.527 262878 DEBUG oslo.privsep.daemon [-] privsep: reply[79cd986a-9445-43f4-95ec-05044b3a20bc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Mar  1 05:14:19 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:14:19.528 167541 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-75774369-d1fe-46b7-99fa-32ee72215bc9 namespace which is not needed anymore#033[00m
Mar  1 05:14:19 np0005634532 neutron-haproxy-ovnmeta-75774369-d1fe-46b7-99fa-32ee72215bc9[275214]: [NOTICE]   (275218) : haproxy version is 2.8.14-c23fe91
Mar  1 05:14:19 np0005634532 neutron-haproxy-ovnmeta-75774369-d1fe-46b7-99fa-32ee72215bc9[275214]: [NOTICE]   (275218) : path to executable is /usr/sbin/haproxy
Mar  1 05:14:19 np0005634532 neutron-haproxy-ovnmeta-75774369-d1fe-46b7-99fa-32ee72215bc9[275214]: [WARNING]  (275218) : Exiting Master process...
Mar  1 05:14:19 np0005634532 neutron-haproxy-ovnmeta-75774369-d1fe-46b7-99fa-32ee72215bc9[275214]: [ALERT]    (275218) : Current worker (275220) exited with code 143 (Terminated)
Mar  1 05:14:19 np0005634532 neutron-haproxy-ovnmeta-75774369-d1fe-46b7-99fa-32ee72215bc9[275214]: [WARNING]  (275218) : All workers exited. Exiting... (0)
Mar  1 05:14:19 np0005634532 systemd[1]: libpod-32dc2088beed6272c46b19698bee7809aa8a0897edce28b59d2c3db73c6a630f.scope: Deactivated successfully.
Mar  1 05:14:19 np0005634532 podman[275246]: 2026-03-01 10:14:19.627223282 +0000 UTC m=+0.037191749 container died 32dc2088beed6272c46b19698bee7809aa8a0897edce28b59d2c3db73c6a630f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a5baae9e21dffd42c66f546b0b62f95c6af9f409d05e01e7483de42d5dd2a382, name=neutron-haproxy-ovnmeta-75774369-d1fe-46b7-99fa-32ee72215bc9, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0)
Mar  1 05:14:19 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Mar  1 05:14:19 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: vms, start_after=
Mar  1 05:14:19 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Mar  1 05:14:19 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: vms, start_after=
Mar  1 05:14:19 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: volumes, start_after=
Mar  1 05:14:19 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: backups, start_after=
Mar  1 05:14:19 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: volumes, start_after=
Mar  1 05:14:19 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: images, start_after=
Mar  1 05:14:19 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: backups, start_after=
Mar  1 05:14:19 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: images, start_after=
Mar  1 05:14:19 np0005634532 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-32dc2088beed6272c46b19698bee7809aa8a0897edce28b59d2c3db73c6a630f-userdata-shm.mount: Deactivated successfully.
Mar  1 05:14:19 np0005634532 systemd[1]: var-lib-containers-storage-overlay-20b465f61a4f78cc9d17dd84aa7f8eeafa9b6de943e0cff2737ee4870725bd1d-merged.mount: Deactivated successfully.
Mar  1 05:14:19 np0005634532 podman[275246]: 2026-03-01 10:14:19.659709725 +0000 UTC m=+0.069678192 container cleanup 32dc2088beed6272c46b19698bee7809aa8a0897edce28b59d2c3db73c6a630f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a5baae9e21dffd42c66f546b0b62f95c6af9f409d05e01e7483de42d5dd2a382, name=neutron-haproxy-ovnmeta-75774369-d1fe-46b7-99fa-32ee72215bc9, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0)
Mar  1 05:14:19 np0005634532 systemd[1]: libpod-conmon-32dc2088beed6272c46b19698bee7809aa8a0897edce28b59d2c3db73c6a630f.scope: Deactivated successfully.
Mar  1 05:14:19 np0005634532 podman[275279]: 2026-03-01 10:14:19.708566142 +0000 UTC m=+0.034722149 container remove 32dc2088beed6272c46b19698bee7809aa8a0897edce28b59d2c3db73c6a630f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a5baae9e21dffd42c66f546b0b62f95c6af9f409d05e01e7483de42d5dd2a382, name=neutron-haproxy-ovnmeta-75774369-d1fe-46b7-99fa-32ee72215bc9, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.schema-version=1.0)
Mar  1 05:14:19 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:14:19.713 262878 DEBUG oslo.privsep.daemon [-] privsep: reply[c1915af6-0c70-41b2-a502-877125204c08]: (4, ('Sun Mar  1 10:14:19 AM UTC 2026 Stopping container neutron-haproxy-ovnmeta-75774369-d1fe-46b7-99fa-32ee72215bc9 (32dc2088beed6272c46b19698bee7809aa8a0897edce28b59d2c3db73c6a630f)\n32dc2088beed6272c46b19698bee7809aa8a0897edce28b59d2c3db73c6a630f\nSun Mar  1 10:14:19 AM UTC 2026 Deleting container neutron-haproxy-ovnmeta-75774369-d1fe-46b7-99fa-32ee72215bc9 (32dc2088beed6272c46b19698bee7809aa8a0897edce28b59d2c3db73c6a630f)\n32dc2088beed6272c46b19698bee7809aa8a0897edce28b59d2c3db73c6a630f\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
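Note: the privsep reply above captures a wrapper script's stdout as it stops and then deletes the short-lived haproxy container; the same teardown with the standard podman CLI, driven from Python, is roughly:

    # Stop and remove the metadata-proxy container, as the script output
    # above suggests. Standard podman subcommands only.
    import subprocess

    NAME = 'neutron-haproxy-ovnmeta-75774369-d1fe-46b7-99fa-32ee72215bc9'

    subprocess.run(['podman', 'stop', NAME], check=True)
    subprocess.run(['podman', 'rm', NAME], check=True)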
Mar  1 05:14:19 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:14:19.714 262878 DEBUG oslo.privsep.daemon [-] privsep: reply[7f3ee540-ec6e-47b7-b1ec-18b67f632aba]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Mar  1 05:14:19 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:14:19.715 167541 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap75774369-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Mar  1 05:14:19 np0005634532 nova_compute[257049]: 2026-03-01 10:14:19.754 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:14:19 np0005634532 kernel: tap75774369-d0: left promiscuous mode
Mar  1 05:14:19 np0005634532 nova_compute[257049]: 2026-03-01 10:14:19.755 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:14:19 np0005634532 nova_compute[257049]: 2026-03-01 10:14:19.760 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:14:19 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:14:19.760 262878 DEBUG oslo.privsep.daemon [-] privsep: reply[581d2a0a-a0cd-43fb-8708-ccd6fc62b454]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Mar  1 05:14:19 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:14:19.781 262878 DEBUG oslo.privsep.daemon [-] privsep: reply[0a55506d-2bfc-415e-bc12-25b813a3fffe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Mar  1 05:14:19 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:14:19.783 262878 DEBUG oslo.privsep.daemon [-] privsep: reply[42fcc759-ab3d-40d1-a80c-727c74b2e5aa]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Mar  1 05:14:19 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:14:19.796 262878 DEBUG oslo.privsep.daemon [-] privsep: reply[5cc40bad-fda4-4a6c-95ef-65cc9a4c93e9]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 441551, 'reachable_time': 20623, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 
'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 275294, 'error': None, 'target': 'ovnmeta-75774369-d1fe-46b7-99fa-32ee72215bc9', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Mar  1 05:14:19 np0005634532 systemd[1]: run-netns-ovnmeta\x2d75774369\x2dd1fe\x2d46b7\x2d99fa\x2d32ee72215bc9.mount: Deactivated successfully.
Mar  1 05:14:19 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:14:19.798 167914 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-75774369-d1fe-46b7-99fa-32ee72215bc9 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Mar  1 05:14:19 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:14:19.799 167914 DEBUG oslo.privsep.daemon [-] privsep: reply[bda51c8b-47bf-44f9-9fdf-b64d57994ac3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Mar  1 05:14:20 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:14:20 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:14:20 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:14:20.024 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:14:20 np0005634532 nova_compute[257049]: 2026-03-01 10:14:20.388 257053 DEBUG nova.network.neutron [-] [instance: baa5d1fc-2fe6-4353-9321-71ddf8760c24] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Mar  1 05:14:20 np0005634532 nova_compute[257049]: 2026-03-01 10:14:20.424 257053 INFO nova.compute.manager [-] [instance: baa5d1fc-2fe6-4353-9321-71ddf8760c24] Took 1.18 seconds to deallocate network for instance.
Mar  1 05:14:20 np0005634532 nova_compute[257049]: 2026-03-01 10:14:20.454 257053 DEBUG nova.compute.manager [req-8285b3e8-5d6f-48ba-9fc2-f0d469033d8e req-2570216e-8942-455b-a453-ef6b99962d8f 4172bfcb2ac44ccc905f3929f41a6ec0 b5baddd982154c5d8ca5431b102a7ecd - - default default] [instance: baa5d1fc-2fe6-4353-9321-71ddf8760c24] Received event network-vif-deleted-79c2bbef-b2db-45bd-91c7-0e64bcb15301 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Mar  1 05:14:20 np0005634532 nova_compute[257049]: 2026-03-01 10:14:20.474 257053 DEBUG nova.network.neutron [req-8a5255ae-c522-4e79-a04d-0a55008b9e92 req-2a5ccbf7-a7ad-41b0-91a0-8bbdf4a96fc6 4172bfcb2ac44ccc905f3929f41a6ec0 b5baddd982154c5d8ca5431b102a7ecd - - default default] [instance: baa5d1fc-2fe6-4353-9321-71ddf8760c24] Updated VIF entry in instance network info cache for port 79c2bbef-b2db-45bd-91c7-0e64bcb15301. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Mar  1 05:14:20 np0005634532 nova_compute[257049]: 2026-03-01 10:14:20.474 257053 DEBUG nova.network.neutron [req-8a5255ae-c522-4e79-a04d-0a55008b9e92 req-2a5ccbf7-a7ad-41b0-91a0-8bbdf4a96fc6 4172bfcb2ac44ccc905f3929f41a6ec0 b5baddd982154c5d8ca5431b102a7ecd - - default default] [instance: baa5d1fc-2fe6-4353-9321-71ddf8760c24] Updating instance_info_cache with network_info: [{"id": "79c2bbef-b2db-45bd-91c7-0e64bcb15301", "address": "fa:16:3e:cd:0b:b9", "network": {"id": "75774369-d1fe-46b7-99fa-32ee72215bc9", "bridge": "br-int", "label": "tempest-network-smoke--723889176", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aa1916e2334f470ea8eeda213ef84cc5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap79c2bbef-b2", "ovs_interfaceid": "79c2bbef-b2db-45bd-91c7-0e64bcb15301", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
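The cached port shows "mtu": 1442 because the tenant network is tunneled ("tunneled": true) and bound by OVN, which encapsulates traffic in Geneve; on IPv4 with OVN's Geneve options that costs 58 bytes of the 1500-byte underlay MTU:

    # outer Ethernet + IPv4 + UDP + Geneve base header + OVN Geneve options
    overhead = 14 + 20 + 8 + 8 + 8   # = 58 bytes
    print(1500 - overhead)           # -> 1442, the MTU in the cache entry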
Mar  1 05:14:20 np0005634532 nova_compute[257049]: 2026-03-01 10:14:20.477 257053 DEBUG oslo_concurrency.lockutils [None req-43ed4c11-c1e6-4c2a-8359-52193acbc9c8 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Mar  1 05:14:20 np0005634532 nova_compute[257049]: 2026-03-01 10:14:20.477 257053 DEBUG oslo_concurrency.lockutils [None req-43ed4c11-c1e6-4c2a-8359-52193acbc9c8 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Mar  1 05:14:20 np0005634532 nova_compute[257049]: 2026-03-01 10:14:20.488 257053 DEBUG oslo_concurrency.lockutils [req-8a5255ae-c522-4e79-a04d-0a55008b9e92 req-2a5ccbf7-a7ad-41b0-91a0-8bbdf4a96fc6 4172bfcb2ac44ccc905f3929f41a6ec0 b5baddd982154c5d8ca5431b102a7ecd - - default default] Releasing lock "refresh_cache-baa5d1fc-2fe6-4353-9321-71ddf8760c24" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Mar  1 05:14:20 np0005634532 nova_compute[257049]: 2026-03-01 10:14:20.521 257053 DEBUG oslo_concurrency.processutils [None req-43ed4c11-c1e6-4c2a-8359-52193acbc9c8 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Mar  1 05:14:20 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Mar  1 05:14:20 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1582810772' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Mar  1 05:14:20 np0005634532 nova_compute[257049]: 2026-03-01 10:14:20.972 257053 DEBUG oslo_concurrency.processutils [None req-43ed4c11-c1e6-4c2a-8359-52193acbc9c8 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
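The resource tracker measures pool capacity by shelling out to the ceph CLI through oslo.concurrency, exactly as the CMD lines above record; a minimal equivalent of that call:

    from oslo_concurrency import processutils

    # Runs `ceph df` as client.openstack and returns (stdout, stderr);
    # a nonzero exit code raises ProcessExecutionError instead.
    out, err = processutils.execute(
        'ceph', 'df', '--format=json',
        '--id', 'openstack', '--conf', '/etc/ceph/ceph.conf')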
Mar  1 05:14:20 np0005634532 nova_compute[257049]: 2026-03-01 10:14:20.976 257053 DEBUG nova.compute.provider_tree [None req-43ed4c11-c1e6-4c2a-8359-52193acbc9c8 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Inventory has not changed in ProviderTree for provider: 018d246d-1e01-4168-9128-598c5501111b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Mar  1 05:14:20 np0005634532 nova_compute[257049]: 2026-03-01 10:14:20.993 257053 DEBUG nova.scheduler.client.report [None req-43ed4c11-c1e6-4c2a-8359-52193acbc9c8 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Inventory has not changed for provider 018d246d-1e01-4168-9128-598c5501111b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
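Placement derives schedulable capacity from that inventory as (total - reserved) * allocation_ratio per resource class; worked out for the values logged above:

    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        print(rc, (inv['total'] - inv['reserved']) * inv['allocation_ratio'])
    # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2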
Mar  1 05:14:21 np0005634532 nova_compute[257049]: 2026-03-01 10:14:21.024 257053 DEBUG oslo_concurrency.lockutils [None req-43ed4c11-c1e6-4c2a-8359-52193acbc9c8 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.547s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Mar  1 05:14:21 np0005634532 nova_compute[257049]: 2026-03-01 10:14:21.054 257053 INFO nova.scheduler.client.report [None req-43ed4c11-c1e6-4c2a-8359-52193acbc9c8 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Deleted allocations for instance baa5d1fc-2fe6-4353-9321-71ddf8760c24
Mar  1 05:14:21 np0005634532 nova_compute[257049]: 2026-03-01 10:14:21.092 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:14:21 np0005634532 nova_compute[257049]: 2026-03-01 10:14:21.119 257053 DEBUG oslo_concurrency.lockutils [None req-43ed4c11-c1e6-4c2a-8359-52193acbc9c8 054b4e3fa290475c906614f7e45d128f aa1916e2334f470ea8eeda213ef84cc5 - - default default] Lock "baa5d1fc-2fe6-4353-9321-71ddf8760c24" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.694s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Mar  1 05:14:21 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v956: 353 pgs: 353 active+clean; 121 MiB data, 332 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 10 KiB/s wr, 33 op/s
Mar  1 05:14:21 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:14:21 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:14:21 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:14:21.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:14:21 np0005634532 nova_compute[257049]: 2026-03-01 10:14:21.480 257053 DEBUG nova.compute.manager [req-f85b132b-cf0b-4de9-a84e-a7b9b529dffb req-72d481d8-bc5a-4a12-8679-b58000629fad 4172bfcb2ac44ccc905f3929f41a6ec0 b5baddd982154c5d8ca5431b102a7ecd - - default default] [instance: baa5d1fc-2fe6-4353-9321-71ddf8760c24] Received event network-vif-plugged-79c2bbef-b2db-45bd-91c7-0e64bcb15301 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Mar  1 05:14:21 np0005634532 nova_compute[257049]: 2026-03-01 10:14:21.481 257053 DEBUG oslo_concurrency.lockutils [req-f85b132b-cf0b-4de9-a84e-a7b9b529dffb req-72d481d8-bc5a-4a12-8679-b58000629fad 4172bfcb2ac44ccc905f3929f41a6ec0 b5baddd982154c5d8ca5431b102a7ecd - - default default] Acquiring lock "baa5d1fc-2fe6-4353-9321-71ddf8760c24-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Mar  1 05:14:21 np0005634532 nova_compute[257049]: 2026-03-01 10:14:21.481 257053 DEBUG oslo_concurrency.lockutils [req-f85b132b-cf0b-4de9-a84e-a7b9b529dffb req-72d481d8-bc5a-4a12-8679-b58000629fad 4172bfcb2ac44ccc905f3929f41a6ec0 b5baddd982154c5d8ca5431b102a7ecd - - default default] Lock "baa5d1fc-2fe6-4353-9321-71ddf8760c24-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Mar  1 05:14:21 np0005634532 nova_compute[257049]: 2026-03-01 10:14:21.481 257053 DEBUG oslo_concurrency.lockutils [req-f85b132b-cf0b-4de9-a84e-a7b9b529dffb req-72d481d8-bc5a-4a12-8679-b58000629fad 4172bfcb2ac44ccc905f3929f41a6ec0 b5baddd982154c5d8ca5431b102a7ecd - - default default] Lock "baa5d1fc-2fe6-4353-9321-71ddf8760c24-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Mar  1 05:14:21 np0005634532 nova_compute[257049]: 2026-03-01 10:14:21.481 257053 DEBUG nova.compute.manager [req-f85b132b-cf0b-4de9-a84e-a7b9b529dffb req-72d481d8-bc5a-4a12-8679-b58000629fad 4172bfcb2ac44ccc905f3929f41a6ec0 b5baddd982154c5d8ca5431b102a7ecd - - default default] [instance: baa5d1fc-2fe6-4353-9321-71ddf8760c24] No waiting events found dispatching network-vif-plugged-79c2bbef-b2db-45bd-91c7-0e64bcb15301 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Mar  1 05:14:21 np0005634532 nova_compute[257049]: 2026-03-01 10:14:21.482 257053 WARNING nova.compute.manager [req-f85b132b-cf0b-4de9-a84e-a7b9b529dffb req-72d481d8-bc5a-4a12-8679-b58000629fad 4172bfcb2ac44ccc905f3929f41a6ec0 b5baddd982154c5d8ca5431b102a7ecd - - default default] [instance: baa5d1fc-2fe6-4353-9321-71ddf8760c24] Received unexpected event network-vif-plugged-79c2bbef-b2db-45bd-91c7-0e64bcb15301 for instance with vm_state deleted and task_state None.
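The WARNING is a benign ordering race: neutron delivered network-vif-plugged after the guest was already torn down, so no waiter was registered and vm_state is deleted. The pop_instance_event pattern named in the log is, in simplified form, a keyed map of waiters (an illustrative threading analogue, not nova's actual classes):

    import threading

    class InstanceEvents:
        def __init__(self):
            self._waiters = {}   # (instance_uuid, event_name) -> threading.Event

        def prepare(self, key):
            # Registered before an operation that expects the event.
            self._waiters[key] = threading.Event()

        def pop(self, key):
            # None reproduces the "No waiting events found" path above,
            # after which the manager logs the "unexpected event" warning.
            return self._waiters.pop(key, None)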
Mar  1 05:14:22 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:14:22 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:14:22 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:14:22 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:14:22.026 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:14:23 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v957: 353 pgs: 353 active+clean; 121 MiB data, 332 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 10 KiB/s wr, 33 op/s
Mar  1 05:14:23 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:14:23 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:14:23 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:14:23.246 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:14:23 np0005634532 nova_compute[257049]: 2026-03-01 10:14:23.743 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:14:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:14:23.887 167541 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Mar  1 05:14:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:14:23.888 167541 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Mar  1 05:14:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:14:23.888 167541 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
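The Acquiring/acquired/released triple is the standard oslo.concurrency lock trace; the code it brackets is simply a decorated method, roughly:

    from oslo_concurrency import lockutils

    @lockutils.synchronized('_check_child_processes')
    def _check_child_processes():
        # Body elided; every call emits the three lock lines seen above,
        # with the waited/held durations measured by the decorator.
        pass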
Mar  1 05:14:24 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:14:24 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Mar  1 05:14:24 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:14:24 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Mar  1 05:14:24 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:14:24 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Mar  1 05:14:24 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:14:24 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Mar  1 05:14:24 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:14:24 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:14:24 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:14:24.028 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:14:24 np0005634532 nova_compute[257049]: 2026-03-01 10:14:24.671 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:14:24 np0005634532 nova_compute[257049]: 2026-03-01 10:14:24.701 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:14:25 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v958: 353 pgs: 353 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 38 KiB/s rd, 12 KiB/s wr, 57 op/s
Mar  1 05:14:25 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:14:25 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:14:25 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:14:25.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:14:26 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:14:26 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:14:26 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:14:26.031 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:14:26 np0005634532 nova_compute[257049]: 2026-03-01 10:14:26.145 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:14:27 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:14:27 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: ::ffff:192.168.122.100 - - [01/Mar/2026:10:14:27] "GET /metrics HTTP/1.1" 200 48477 "" "Prometheus/2.51.0"
Mar  1 05:14:27 np0005634532 ceph-mgr[76134]: [prometheus INFO cherrypy.access.140604152982112] ::ffff:192.168.122.100 - - [01/Mar/2026:10:14:27] "GET /metrics HTTP/1.1" 200 48477 "" "Prometheus/2.51.0"
Mar  1 05:14:27 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v959: 353 pgs: 353 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 38 KiB/s rd, 2.3 KiB/s wr, 55 op/s
Mar  1 05:14:27 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:14:27 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:14:27 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:14:27.252 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:14:27 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:14:27.261Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Mar  1 05:14:27 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:14:27.261Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Mar  1 05:14:27 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:14:27.261Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
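Both ceph-dashboard webhook targets (compute-1 and compute-2 on port 8443) are timing out, so alertmanager cancels the retries and drops the notification. A quick TCP-level probe of the same endpoints reproduces the symptom:

    import socket

    for host in ('192.168.122.101', '192.168.122.102'):
        s = socket.socket()
        s.settimeout(3)
        try:
            s.connect((host, 8443))
            print(host, 'reachable')
        except OSError as exc:
            print(host, 'unreachable:', exc)   # matches the i/o timeout above
        finally:
            s.close()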
Mar  1 05:14:28 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:14:28 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:14:28 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:14:28.032 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:14:28 np0005634532 nova_compute[257049]: 2026-03-01 10:14:28.744 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:14:29 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:14:28 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Mar  1 05:14:29 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:14:28 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Mar  1 05:14:29 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:14:28 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Mar  1 05:14:29 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:14:29 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Mar  1 05:14:29 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v960: 353 pgs: 353 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 38 KiB/s rd, 2.3 KiB/s wr, 56 op/s
Mar  1 05:14:29 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:14:29 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000024s ======
Mar  1 05:14:29 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:14:29.255 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Mar  1 05:14:30 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:14:30 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 05:14:30 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:14:30.034 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 05:14:31 np0005634532 nova_compute[257049]: 2026-03-01 10:14:31.147 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:14:31 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v961: 353 pgs: 353 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 16 KiB/s rd, 1.2 KiB/s wr, 24 op/s
Mar  1 05:14:31 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:14:31 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:14:31 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:14:31.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:14:32 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:14:32 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:14:32 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000024s ======
Mar  1 05:14:32 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:14:32.036 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Mar  1 05:14:32 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Mar  1 05:14:32 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
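The mgr polls the OSD blocklist periodically; the dispatched mon_command above is equivalent to running the CLI yourself:

    import subprocess

    # Same command the audit line records, issued as an admin client
    # instead of by mgr.compute-0.ebwufc.
    result = subprocess.run(
        ['ceph', 'osd', 'blocklist', 'ls', '--format', 'json'],
        capture_output=True, text=True, check=True)
    print(result.stdout)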
Mar  1 05:14:33 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v962: 353 pgs: 353 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 16 KiB/s rd, 1.2 KiB/s wr, 24 op/s
Mar  1 05:14:33 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:14:33 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:14:33 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:14:33.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:14:33 np0005634532 nova_compute[257049]: 2026-03-01 10:14:33.672 257053 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1772360058.6681256, baa5d1fc-2fe6-4353-9321-71ddf8760c24 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Mar  1 05:14:33 np0005634532 nova_compute[257049]: 2026-03-01 10:14:33.672 257053 INFO nova.compute.manager [-] [instance: baa5d1fc-2fe6-4353-9321-71ddf8760c24] VM Stopped (Lifecycle Event)
Mar  1 05:14:33 np0005634532 nova_compute[257049]: 2026-03-01 10:14:33.695 257053 DEBUG nova.compute.manager [None req-131d8286-7501-4530-9253-77b89e04b82e - - - - - -] [instance: baa5d1fc-2fe6-4353-9321-71ddf8760c24] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Mar  1 05:14:33 np0005634532 nova_compute[257049]: 2026-03-01 10:14:33.747 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:14:34 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:14:33 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Mar  1 05:14:34 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:14:33 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Mar  1 05:14:34 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:14:33 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Mar  1 05:14:34 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:14:34 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Mar  1 05:14:34 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:14:34 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:14:34 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:14:34.038 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:14:35 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v963: 353 pgs: 353 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 16 KiB/s rd, 1.2 KiB/s wr, 24 op/s
Mar  1 05:14:35 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:14:35 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:14:35 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:14:35.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:14:36 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:14:36 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000024s ======
Mar  1 05:14:36 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:14:36.040 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Mar  1 05:14:36 np0005634532 nova_compute[257049]: 2026-03-01 10:14:36.149 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:14:37 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:14:37 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: ::ffff:192.168.122.100 - - [01/Mar/2026:10:14:37] "GET /metrics HTTP/1.1" 200 48456 "" "Prometheus/2.51.0"
Mar  1 05:14:37 np0005634532 ceph-mgr[76134]: [prometheus INFO cherrypy.access.140604152982112] ::ffff:192.168.122.100 - - [01/Mar/2026:10:14:37] "GET /metrics HTTP/1.1" 200 48456 "" "Prometheus/2.51.0"
Mar  1 05:14:37 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v964: 353 pgs: 353 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Mar  1 05:14:37 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:14:37.261Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Mar  1 05:14:37 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:14:37 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:14:37 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:14:37.267 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:14:38 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:14:38 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 05:14:38 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:14:38.042 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 05:14:38 np0005634532 nova_compute[257049]: 2026-03-01 10:14:38.749 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:14:39 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:14:38 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Mar  1 05:14:39 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:14:38 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Mar  1 05:14:39 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:14:38 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Mar  1 05:14:39 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:14:39 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Mar  1 05:14:39 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v965: 353 pgs: 353 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Mar  1 05:14:39 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:14:39 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 05:14:39 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:14:39.269 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 05:14:39 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Mar  1 05:14:39 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Mar  1 05:14:39 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 05:14:39 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Mar  1 05:14:39 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 05:14:39 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Mar  1 05:14:39 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 05:14:39 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 05:14:40 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:14:40 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 05:14:40 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:14:40.044 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 05:14:40 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 05:14:40 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 05:14:40 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 05:14:40 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 05:14:40 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Mar  1 05:14:40 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Mar  1 05:14:40 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Mar  1 05:14:40 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Mar  1 05:14:40 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Mar  1 05:14:40 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 05:14:40 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Mar  1 05:14:40 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 05:14:40 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Mar  1 05:14:40 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Mar  1 05:14:40 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Mar  1 05:14:40 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Mar  1 05:14:40 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Mar  1 05:14:40 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
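This burst of mon_commands is the cephadm mgr module refreshing per-host state: it rewrites config-key device caches, regenerates a minimal ceph.conf, and re-fetches keyrings before acting on hosts. CLI equivalents of the two read-side commands (assuming an admin keyring on the node):

    import subprocess

    # Emits the minimal [global] section cephadm distributes to hosts
    # as /etc/ceph/ceph.conf.
    subprocess.run(['ceph', 'config', 'generate-minimal-conf'], check=True)
    # Fetches the keyring cephadm copies onto managed hosts.
    subprocess.run(['ceph', 'auth', 'get', 'client.admin'], check=True)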
Mar  1 05:14:40 np0005634532 podman[275539]: 2026-03-01 10:14:40.994156794 +0000 UTC m=+0.057115002 container create 19459e0952f5943c2de0900fb635c5bba4a06219f75301fcf36e6985b8d10092 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_tharp, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Mar  1 05:14:41 np0005634532 systemd[1]: Started libpod-conmon-19459e0952f5943c2de0900fb635c5bba4a06219f75301fcf36e6985b8d10092.scope.
Mar  1 05:14:41 np0005634532 systemd[1]: Started libcrun container.
Mar  1 05:14:41 np0005634532 podman[275539]: 2026-03-01 10:14:41.05187783 +0000 UTC m=+0.114836028 container init 19459e0952f5943c2de0900fb635c5bba4a06219f75301fcf36e6985b8d10092 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_tharp, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Mar  1 05:14:41 np0005634532 podman[275539]: 2026-03-01 10:14:41.056269248 +0000 UTC m=+0.119227436 container start 19459e0952f5943c2de0900fb635c5bba4a06219f75301fcf36e6985b8d10092 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_tharp, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Mar  1 05:14:41 np0005634532 competent_tharp[275556]: 167 167
Mar  1 05:14:41 np0005634532 podman[275539]: 2026-03-01 10:14:41.05958618 +0000 UTC m=+0.122544488 container attach 19459e0952f5943c2de0900fb635c5bba4a06219f75301fcf36e6985b8d10092 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_tharp, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Mar  1 05:14:41 np0005634532 systemd[1]: libpod-19459e0952f5943c2de0900fb635c5bba4a06219f75301fcf36e6985b8d10092.scope: Deactivated successfully.
Mar  1 05:14:41 np0005634532 podman[275539]: 2026-03-01 10:14:41.060278237 +0000 UTC m=+0.123236425 container died 19459e0952f5943c2de0900fb635c5bba4a06219f75301fcf36e6985b8d10092 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_tharp, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Mar  1 05:14:41 np0005634532 podman[275539]: 2026-03-01 10:14:40.969225588 +0000 UTC m=+0.032183866 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Mar  1 05:14:41 np0005634532 systemd[1]: var-lib-containers-storage-overlay-9b0dc5cc216acc09b9628392fc9365d71b2ecbef3ba762565589b90efe40527d-merged.mount: Deactivated successfully.
Mar  1 05:14:41 np0005634532 podman[275539]: 2026-03-01 10:14:41.095916258 +0000 UTC m=+0.158874476 container remove 19459e0952f5943c2de0900fb635c5bba4a06219f75301fcf36e6985b8d10092 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_tharp, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True)
Mar  1 05:14:41 np0005634532 systemd[1]: libpod-conmon-19459e0952f5943c2de0900fb635c5bba4a06219f75301fcf36e6985b8d10092.scope: Deactivated successfully.
Mar  1 05:14:41 np0005634532 nova_compute[257049]: 2026-03-01 10:14:41.151 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:14:41 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v966: 353 pgs: 353 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Mar  1 05:14:41 np0005634532 podman[275578]: 2026-03-01 10:14:41.214935798 +0000 UTC m=+0.034590396 container create 72295d6794f257faf47a4976c502b02f1418d2e3506fac1caa28783465217ea7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_hodgkin, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Mar  1 05:14:41 np0005634532 systemd[1]: Started libpod-conmon-72295d6794f257faf47a4976c502b02f1418d2e3506fac1caa28783465217ea7.scope.
Mar  1 05:14:41 np0005634532 systemd[1]: Started libcrun container.
Mar  1 05:14:41 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25dbca04eaa3c65a22573f1c11cb46c6510c534bc3e4a988ca56fbda19367116/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Mar  1 05:14:41 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25dbca04eaa3c65a22573f1c11cb46c6510c534bc3e4a988ca56fbda19367116/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Mar  1 05:14:41 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25dbca04eaa3c65a22573f1c11cb46c6510c534bc3e4a988ca56fbda19367116/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Mar  1 05:14:41 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25dbca04eaa3c65a22573f1c11cb46c6510c534bc3e4a988ca56fbda19367116/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Mar  1 05:14:41 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25dbca04eaa3c65a22573f1c11cb46c6510c534bc3e4a988ca56fbda19367116/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Mar  1 05:14:41 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:14:41 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000024s ======
Mar  1 05:14:41 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:14:41.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Mar  1 05:14:41 np0005634532 podman[275578]: 2026-03-01 10:14:41.281213445 +0000 UTC m=+0.100868063 container init 72295d6794f257faf47a4976c502b02f1418d2e3506fac1caa28783465217ea7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_hodgkin, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid)
Mar  1 05:14:41 np0005634532 podman[275578]: 2026-03-01 10:14:41.28949342 +0000 UTC m=+0.109148038 container start 72295d6794f257faf47a4976c502b02f1418d2e3506fac1caa28783465217ea7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_hodgkin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Mar  1 05:14:41 np0005634532 podman[275578]: 2026-03-01 10:14:41.292926074 +0000 UTC m=+0.112580722 container attach 72295d6794f257faf47a4976c502b02f1418d2e3506fac1caa28783465217ea7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_hodgkin, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Mar  1 05:14:41 np0005634532 podman[275578]: 2026-03-01 10:14:41.200721977 +0000 UTC m=+0.020376595 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Mar  1 05:14:41 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Mar  1 05:14:41 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 05:14:41 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 05:14:41 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Mar  1 05:14:41 np0005634532 sad_hodgkin[275595]: --> passed data devices: 0 physical, 1 LVM
Mar  1 05:14:41 np0005634532 sad_hodgkin[275595]: --> All data devices are unavailable
Mar  1 05:14:41 np0005634532 systemd[1]: libpod-72295d6794f257faf47a4976c502b02f1418d2e3506fac1caa28783465217ea7.scope: Deactivated successfully.
Mar  1 05:14:41 np0005634532 podman[275578]: 2026-03-01 10:14:41.563532118 +0000 UTC m=+0.383186726 container died 72295d6794f257faf47a4976c502b02f1418d2e3506fac1caa28783465217ea7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_hodgkin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True)
Mar  1 05:14:41 np0005634532 systemd[1]: var-lib-containers-storage-overlay-25dbca04eaa3c65a22573f1c11cb46c6510c534bc3e4a988ca56fbda19367116-merged.mount: Deactivated successfully.
Mar  1 05:14:41 np0005634532 podman[275578]: 2026-03-01 10:14:41.602482771 +0000 UTC m=+0.422137369 container remove 72295d6794f257faf47a4976c502b02f1418d2e3506fac1caa28783465217ea7 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sad_hodgkin, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Mar  1 05:14:41 np0005634532 systemd[1]: libpod-conmon-72295d6794f257faf47a4976c502b02f1418d2e3506fac1caa28783465217ea7.scope: Deactivated successfully.
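The short-lived competent_tharp and sad_hodgkin containers are cephadm probing the host's disks with ceph-volume inside a disposable container; "passed data devices: 0 physical, 1 LVM" and "All data devices are unavailable" mean no claimable device was found, so no OSD gets created. A hedged approximation of such a probe (cephadm's exact subcommand and flags are not in this log; this is an illustrative equivalent, not its real command line):

    import subprocess

    IMAGE = ('quay.io/ceph/ceph@sha256:'
             '7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec')

    # Disposable container using the same image digest as the
    # create/start/died/remove events logged above.
    subprocess.run(
        ['podman', 'run', '--rm', '--privileged', IMAGE,
         'ceph-volume', 'inventory', '--format', 'json'],
        check=False)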
Mar  1 05:14:41 np0005634532 nova_compute[257049]: 2026-03-01 10:14:41.976 257053 DEBUG oslo_service.periodic_task [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Mar  1 05:14:41 np0005634532 nova_compute[257049]: 2026-03-01 10:14:41.977 257053 DEBUG nova.compute.manager [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Mar  1 05:14:42 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:14:42 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:14:42 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:14:42 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:14:42.046 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:14:42 np0005634532 podman[275717]: 2026-03-01 10:14:42.095675964 +0000 UTC m=+0.031920509 container create 773aa158d27686fd8a4454fb8a418a1a93e63caeb12d2ddb4156f091aea4a397 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_black, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid)
Mar  1 05:14:42 np0005634532 systemd[1]: Started libpod-conmon-773aa158d27686fd8a4454fb8a418a1a93e63caeb12d2ddb4156f091aea4a397.scope.
Mar  1 05:14:42 np0005634532 systemd[1]: Started libcrun container.
Mar  1 05:14:42 np0005634532 podman[275717]: 2026-03-01 10:14:42.161239964 +0000 UTC m=+0.097484519 container init 773aa158d27686fd8a4454fb8a418a1a93e63caeb12d2ddb4156f091aea4a397 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_black, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Mar  1 05:14:42 np0005634532 podman[275717]: 2026-03-01 10:14:42.166071474 +0000 UTC m=+0.102316019 container start 773aa158d27686fd8a4454fb8a418a1a93e63caeb12d2ddb4156f091aea4a397 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_black, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Mar  1 05:14:42 np0005634532 podman[275717]: 2026-03-01 10:14:42.169103818 +0000 UTC m=+0.105348363 container attach 773aa158d27686fd8a4454fb8a418a1a93e63caeb12d2ddb4156f091aea4a397 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_black, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid)
Mar  1 05:14:42 np0005634532 systemd[1]: libpod-773aa158d27686fd8a4454fb8a418a1a93e63caeb12d2ddb4156f091aea4a397.scope: Deactivated successfully.
Mar  1 05:14:42 np0005634532 affectionate_black[275733]: 167 167
Mar  1 05:14:42 np0005634532 conmon[275733]: conmon 773aa158d27686fd8a44 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-773aa158d27686fd8a4454fb8a418a1a93e63caeb12d2ddb4156f091aea4a397.scope/container/memory.events
Mar  1 05:14:42 np0005634532 podman[275717]: 2026-03-01 10:14:42.17199177 +0000 UTC m=+0.108236335 container died 773aa158d27686fd8a4454fb8a418a1a93e63caeb12d2ddb4156f091aea4a397 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_black, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Mar  1 05:14:42 np0005634532 podman[275717]: 2026-03-01 10:14:42.082452848 +0000 UTC m=+0.018697413 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Mar  1 05:14:42 np0005634532 systemd[1]: var-lib-containers-storage-overlay-dac1b16ef045b5337742fc0be3e595a40295842318b786ef7a1f63cd077c1100-merged.mount: Deactivated successfully.
Mar  1 05:14:42 np0005634532 podman[275717]: 2026-03-01 10:14:42.217396491 +0000 UTC m=+0.153641076 container remove 773aa158d27686fd8a4454fb8a418a1a93e63caeb12d2ddb4156f091aea4a397 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_black, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Mar  1 05:14:42 np0005634532 systemd[1]: libpod-conmon-773aa158d27686fd8a4454fb8a418a1a93e63caeb12d2ddb4156f091aea4a397.scope: Deactivated successfully.
Mar  1 05:14:42 np0005634532 podman[275758]: 2026-03-01 10:14:42.353123014 +0000 UTC m=+0.033286893 container create 26659609a1bf9159d97607d2bac87672430b177c9ee7c87b242cb3c4bc25c82a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_perlman, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Mar  1 05:14:42 np0005634532 systemd[1]: Started libpod-conmon-26659609a1bf9159d97607d2bac87672430b177c9ee7c87b242cb3c4bc25c82a.scope.
Mar  1 05:14:42 np0005634532 systemd[1]: Started libcrun container.
Mar  1 05:14:42 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00b84acdfbabfe85ed7eadd3e90d743fd9e05bb51979db07fc0c1ee689657677/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Mar  1 05:14:42 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00b84acdfbabfe85ed7eadd3e90d743fd9e05bb51979db07fc0c1ee689657677/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Mar  1 05:14:42 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00b84acdfbabfe85ed7eadd3e90d743fd9e05bb51979db07fc0c1ee689657677/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Mar  1 05:14:42 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00b84acdfbabfe85ed7eadd3e90d743fd9e05bb51979db07fc0c1ee689657677/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Mar  1 05:14:42 np0005634532 podman[275758]: 2026-03-01 10:14:42.433256374 +0000 UTC m=+0.113420243 container init 26659609a1bf9159d97607d2bac87672430b177c9ee7c87b242cb3c4bc25c82a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_perlman, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Mar  1 05:14:42 np0005634532 podman[275758]: 2026-03-01 10:14:42.336567485 +0000 UTC m=+0.016731374 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Mar  1 05:14:42 np0005634532 podman[275758]: 2026-03-01 10:14:42.442157944 +0000 UTC m=+0.122321813 container start 26659609a1bf9159d97607d2bac87672430b177c9ee7c87b242cb3c4bc25c82a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_perlman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Mar  1 05:14:42 np0005634532 podman[275758]: 2026-03-01 10:14:42.445328092 +0000 UTC m=+0.125491961 container attach 26659609a1bf9159d97607d2bac87672430b177c9ee7c87b242cb3c4bc25c82a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_perlman, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Mar  1 05:14:42 np0005634532 condescending_perlman[275773]: {
Mar  1 05:14:42 np0005634532 condescending_perlman[275773]:    "0": [
Mar  1 05:14:42 np0005634532 condescending_perlman[275773]:        {
Mar  1 05:14:42 np0005634532 condescending_perlman[275773]:            "devices": [
Mar  1 05:14:42 np0005634532 condescending_perlman[275773]:                "/dev/loop3"
Mar  1 05:14:42 np0005634532 condescending_perlman[275773]:            ],
Mar  1 05:14:42 np0005634532 condescending_perlman[275773]:            "lv_name": "ceph_lv0",
Mar  1 05:14:42 np0005634532 condescending_perlman[275773]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Mar  1 05:14:42 np0005634532 condescending_perlman[275773]:            "lv_size": "21470642176",
Mar  1 05:14:42 np0005634532 condescending_perlman[275773]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=0RN1y3-WDwf-vmRn-3Uec-9cdU-WGcX-8Z6LEG,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=437b1e74-f995-5d64-af1d-257ce01d77ab,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=e5da778e-73b7-4ea1-8a91-750fe3f6aa68,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Mar  1 05:14:42 np0005634532 condescending_perlman[275773]:            "lv_uuid": "0RN1y3-WDwf-vmRn-3Uec-9cdU-WGcX-8Z6LEG",
Mar  1 05:14:42 np0005634532 condescending_perlman[275773]:            "name": "ceph_lv0",
Mar  1 05:14:42 np0005634532 condescending_perlman[275773]:            "path": "/dev/ceph_vg0/ceph_lv0",
Mar  1 05:14:42 np0005634532 condescending_perlman[275773]:            "tags": {
Mar  1 05:14:42 np0005634532 condescending_perlman[275773]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Mar  1 05:14:42 np0005634532 condescending_perlman[275773]:                "ceph.block_uuid": "0RN1y3-WDwf-vmRn-3Uec-9cdU-WGcX-8Z6LEG",
Mar  1 05:14:42 np0005634532 condescending_perlman[275773]:                "ceph.cephx_lockbox_secret": "",
Mar  1 05:14:42 np0005634532 condescending_perlman[275773]:                "ceph.cluster_fsid": "437b1e74-f995-5d64-af1d-257ce01d77ab",
Mar  1 05:14:42 np0005634532 condescending_perlman[275773]:                "ceph.cluster_name": "ceph",
Mar  1 05:14:42 np0005634532 condescending_perlman[275773]:                "ceph.crush_device_class": "",
Mar  1 05:14:42 np0005634532 condescending_perlman[275773]:                "ceph.encrypted": "0",
Mar  1 05:14:42 np0005634532 condescending_perlman[275773]:                "ceph.osd_fsid": "e5da778e-73b7-4ea1-8a91-750fe3f6aa68",
Mar  1 05:14:42 np0005634532 condescending_perlman[275773]:                "ceph.osd_id": "0",
Mar  1 05:14:42 np0005634532 condescending_perlman[275773]:                "ceph.osdspec_affinity": "default_drive_group",
Mar  1 05:14:42 np0005634532 condescending_perlman[275773]:                "ceph.type": "block",
Mar  1 05:14:42 np0005634532 condescending_perlman[275773]:                "ceph.vdo": "0",
Mar  1 05:14:42 np0005634532 condescending_perlman[275773]:                "ceph.with_tpm": "0"
Mar  1 05:14:42 np0005634532 condescending_perlman[275773]:            },
Mar  1 05:14:42 np0005634532 condescending_perlman[275773]:            "type": "block",
Mar  1 05:14:42 np0005634532 condescending_perlman[275773]:            "vg_name": "ceph_vg0"
Mar  1 05:14:42 np0005634532 condescending_perlman[275773]:        }
Mar  1 05:14:42 np0005634532 condescending_perlman[275773]:    ]
Mar  1 05:14:42 np0005634532 condescending_perlman[275773]: }
Mar  1 05:14:42 np0005634532 systemd[1]: libpod-26659609a1bf9159d97607d2bac87672430b177c9ee7c87b242cb3c4bc25c82a.scope: Deactivated successfully.
Mar  1 05:14:42 np0005634532 podman[275758]: 2026-03-01 10:14:42.737117001 +0000 UTC m=+0.417280900 container died 26659609a1bf9159d97607d2bac87672430b177c9ee7c87b242cb3c4bc25c82a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_perlman, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Mar  1 05:14:42 np0005634532 systemd[1]: var-lib-containers-storage-overlay-00b84acdfbabfe85ed7eadd3e90d743fd9e05bb51979db07fc0c1ee689657677-merged.mount: Deactivated successfully.
Mar  1 05:14:42 np0005634532 podman[275758]: 2026-03-01 10:14:42.780080272 +0000 UTC m=+0.460244171 container remove 26659609a1bf9159d97607d2bac87672430b177c9ee7c87b242cb3c4bc25c82a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_perlman, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Mar  1 05:14:42 np0005634532 systemd[1]: libpod-conmon-26659609a1bf9159d97607d2bac87672430b177c9ee7c87b242cb3c4bc25c82a.scope: Deactivated successfully.
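
The JSON document printed by the condescending_perlman container above has the shape of "ceph-volume lvm list --format json" output, and the surrounding create/start/attach/died/remove sequence is consistent with cephadm probing local devices through short-lived containers. A minimal sketch, assuming only that output shape, of reducing such a payload to an {osd_id: lv_path} map (the helper name is hypothetical, not part of any Ceph API):

    import json

    def osd_lv_paths(raw: str) -> dict:
        # Top-level keys are OSD ids ("0", ...), each mapping to a list of LVs.
        data = json.loads(raw)
        return {
            osd_id: lv["lv_path"]
            for osd_id, lvs in data.items()
            for lv in lvs
            if lv.get("type") == "block"  # keep block LVs, as tagged above
        }

    # For the payload logged above this yields {"0": "/dev/ceph_vg0/ceph_lv0"}.
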
Mar  1 05:14:43 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v967: 353 pgs: 353 active+clean; 41 MiB data, 289 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Mar  1 05:14:43 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:14:43 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:14:43 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:14:43.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:14:43 np0005634532 podman[275886]: 2026-03-01 10:14:43.397619277 +0000 UTC m=+0.040850190 container create 3e3640cb7e92b6b3a8390463dd2c9800d5127a073931a072ed58169096d694e3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_lumiere, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS)
Mar  1 05:14:43 np0005634532 systemd[1]: Started libpod-conmon-3e3640cb7e92b6b3a8390463dd2c9800d5127a073931a072ed58169096d694e3.scope.
Mar  1 05:14:43 np0005634532 systemd[1]: Started libcrun container.
Mar  1 05:14:43 np0005634532 podman[275886]: 2026-03-01 10:14:43.474281261 +0000 UTC m=+0.117512194 container init 3e3640cb7e92b6b3a8390463dd2c9800d5127a073931a072ed58169096d694e3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_lumiere, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Mar  1 05:14:43 np0005634532 podman[275886]: 2026-03-01 10:14:43.380810452 +0000 UTC m=+0.024041415 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Mar  1 05:14:43 np0005634532 podman[275886]: 2026-03-01 10:14:43.4786794 +0000 UTC m=+0.121910333 container start 3e3640cb7e92b6b3a8390463dd2c9800d5127a073931a072ed58169096d694e3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_lumiere, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Mar  1 05:14:43 np0005634532 podman[275886]: 2026-03-01 10:14:43.482041633 +0000 UTC m=+0.125272556 container attach 3e3640cb7e92b6b3a8390463dd2c9800d5127a073931a072ed58169096d694e3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_lumiere, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid)
Mar  1 05:14:43 np0005634532 affectionate_lumiere[275903]: 167 167
Mar  1 05:14:43 np0005634532 systemd[1]: libpod-3e3640cb7e92b6b3a8390463dd2c9800d5127a073931a072ed58169096d694e3.scope: Deactivated successfully.
Mar  1 05:14:43 np0005634532 podman[275886]: 2026-03-01 10:14:43.485586441 +0000 UTC m=+0.128817404 container died 3e3640cb7e92b6b3a8390463dd2c9800d5127a073931a072ed58169096d694e3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_lumiere, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Mar  1 05:14:43 np0005634532 systemd[1]: var-lib-containers-storage-overlay-3a5e23e5e4dfce7017d596ebd2fc97cfc5b90bc2456d77167726685234c3766a-merged.mount: Deactivated successfully.
Mar  1 05:14:43 np0005634532 podman[275886]: 2026-03-01 10:14:43.521958079 +0000 UTC m=+0.165188992 container remove 3e3640cb7e92b6b3a8390463dd2c9800d5127a073931a072ed58169096d694e3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_lumiere, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Mar  1 05:14:43 np0005634532 systemd[1]: libpod-conmon-3e3640cb7e92b6b3a8390463dd2c9800d5127a073931a072ed58169096d694e3.scope: Deactivated successfully.
Mar  1 05:14:43 np0005634532 podman[275928]: 2026-03-01 10:14:43.646073195 +0000 UTC m=+0.036699927 container create 153d8ee2d7f82d7c4546ffce1246005165a4a7db6e13cf87b929d8adb307514b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_gould, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Mar  1 05:14:43 np0005634532 systemd[1]: Started libpod-conmon-153d8ee2d7f82d7c4546ffce1246005165a4a7db6e13cf87b929d8adb307514b.scope.
Mar  1 05:14:43 np0005634532 systemd[1]: Started libcrun container.
Mar  1 05:14:43 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b12e289dae72c2ce4c88da02588c0e3ccda0b9bdaa91e0799a8bf214e6492a1c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Mar  1 05:14:43 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b12e289dae72c2ce4c88da02588c0e3ccda0b9bdaa91e0799a8bf214e6492a1c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Mar  1 05:14:43 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b12e289dae72c2ce4c88da02588c0e3ccda0b9bdaa91e0799a8bf214e6492a1c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Mar  1 05:14:43 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b12e289dae72c2ce4c88da02588c0e3ccda0b9bdaa91e0799a8bf214e6492a1c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Mar  1 05:14:43 np0005634532 podman[275928]: 2026-03-01 10:14:43.722977655 +0000 UTC m=+0.113604417 container init 153d8ee2d7f82d7c4546ffce1246005165a4a7db6e13cf87b929d8adb307514b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_gould, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Mar  1 05:14:43 np0005634532 podman[275928]: 2026-03-01 10:14:43.628089121 +0000 UTC m=+0.018715863 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Mar  1 05:14:43 np0005634532 podman[275928]: 2026-03-01 10:14:43.732423328 +0000 UTC m=+0.123050060 container start 153d8ee2d7f82d7c4546ffce1246005165a4a7db6e13cf87b929d8adb307514b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_gould, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Mar  1 05:14:43 np0005634532 podman[275928]: 2026-03-01 10:14:43.735461673 +0000 UTC m=+0.126088435 container attach 153d8ee2d7f82d7c4546ffce1246005165a4a7db6e13cf87b929d8adb307514b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_gould, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Mar  1 05:14:43 np0005634532 nova_compute[257049]: 2026-03-01 10:14:43.751 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:14:43 np0005634532 nova_compute[257049]: 2026-03-01 10:14:43.977 257053 DEBUG oslo_service.periodic_task [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Mar  1 05:14:43 np0005634532 nova_compute[257049]: 2026-03-01 10:14:43.978 257053 DEBUG oslo_service.periodic_task [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Mar  1 05:14:44 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:14:43 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Mar  1 05:14:44 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:14:43 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Mar  1 05:14:44 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:14:43 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Mar  1 05:14:44 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:14:44 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Mar  1 05:14:44 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:14:44 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:14:44 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:14:44.048 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:14:44 np0005634532 lvm[276020]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Mar  1 05:14:44 np0005634532 lvm[276020]: VG ceph_vg0 finished
Mar  1 05:14:44 np0005634532 lvm[276022]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Mar  1 05:14:44 np0005634532 lvm[276022]: VG ceph_vg0 finished
Mar  1 05:14:44 np0005634532 condescending_gould[275944]: {}
Mar  1 05:14:44 np0005634532 systemd[1]: libpod-153d8ee2d7f82d7c4546ffce1246005165a4a7db6e13cf87b929d8adb307514b.scope: Deactivated successfully.
Mar  1 05:14:44 np0005634532 podman[275928]: 2026-03-01 10:14:44.401179849 +0000 UTC m=+0.791806601 container died 153d8ee2d7f82d7c4546ffce1246005165a4a7db6e13cf87b929d8adb307514b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_gould, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Mar  1 05:14:44 np0005634532 systemd[1]: var-lib-containers-storage-overlay-b12e289dae72c2ce4c88da02588c0e3ccda0b9bdaa91e0799a8bf214e6492a1c-merged.mount: Deactivated successfully.
Mar  1 05:14:44 np0005634532 podman[275928]: 2026-03-01 10:14:44.438791958 +0000 UTC m=+0.829418680 container remove 153d8ee2d7f82d7c4546ffce1246005165a4a7db6e13cf87b929d8adb307514b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=condescending_gould, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Mar  1 05:14:44 np0005634532 systemd[1]: libpod-conmon-153d8ee2d7f82d7c4546ffce1246005165a4a7db6e13cf87b929d8adb307514b.scope: Deactivated successfully.
Mar  1 05:14:44 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Mar  1 05:14:44 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 05:14:44 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Mar  1 05:14:44 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 05:14:44 np0005634532 nova_compute[257049]: 2026-03-01 10:14:44.976 257053 DEBUG oslo_service.periodic_task [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Mar  1 05:14:45 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v968: 353 pgs: 353 active+clean; 88 MiB data, 324 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Mar  1 05:14:45 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:14:45 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:14:45 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:14:45.277 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:14:45 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 05:14:45 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 05:14:45 np0005634532 nova_compute[257049]: 2026-03-01 10:14:45.978 257053 DEBUG oslo_service.periodic_task [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Mar  1 05:14:46 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:14:46 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:14:46 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:14:46.050 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:14:46 np0005634532 nova_compute[257049]: 2026-03-01 10:14:46.154 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:14:46 np0005634532 podman[276065]: 2026-03-01 10:14:46.433259039 +0000 UTC m=+0.118798326 container health_status eba5e7c5bf1cb0694498c3b5d0f9b901dec36f2482ec40aef98fed1e15d9f18d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:bf1f44b6bfa655654f556ae3adca2582892e72550d916a5b0a8dbacb0e210bbe, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '08b9467e1b7e95537191fb2fa6825d176e65d7d6be048e9a0925db064969bbde-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:bf1f44b6bfa655654f556ae3adca2582892e72550d916a5b0a8dbacb0e210bbe', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, io.buildah.version=1.43.0, managed_by=edpm_ansible)
Mar  1 05:14:46 np0005634532 nova_compute[257049]: 2026-03-01 10:14:46.977 257053 DEBUG oslo_service.periodic_task [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Mar  1 05:14:47 np0005634532 nova_compute[257049]: 2026-03-01 10:14:47.012 257053 DEBUG oslo_concurrency.lockutils [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Mar  1 05:14:47 np0005634532 nova_compute[257049]: 2026-03-01 10:14:47.012 257053 DEBUG oslo_concurrency.lockutils [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Mar  1 05:14:47 np0005634532 nova_compute[257049]: 2026-03-01 10:14:47.013 257053 DEBUG oslo_concurrency.lockutils [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Mar  1 05:14:47 np0005634532 nova_compute[257049]: 2026-03-01 10:14:47.013 257053 DEBUG nova.compute.resource_tracker [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Mar  1 05:14:47 np0005634532 nova_compute[257049]: 2026-03-01 10:14:47.013 257053 DEBUG oslo_concurrency.processutils [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Mar  1 05:14:47 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:14:47 np0005634532 ceph-mon[75825]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #60. Immutable memtables: 0.
Mar  1 05:14:47 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-10:14:47.035664) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Mar  1 05:14:47 np0005634532 ceph-mon[75825]: rocksdb: [db/flush_job.cc:856] [default] [JOB 31] Flushing memtable with next log file: 60
Mar  1 05:14:47 np0005634532 ceph-mon[75825]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772360087035690, "job": 31, "event": "flush_started", "num_memtables": 1, "num_entries": 1542, "num_deletes": 259, "total_data_size": 2947558, "memory_usage": 2988456, "flush_reason": "Manual Compaction"}
Mar  1 05:14:47 np0005634532 ceph-mon[75825]: rocksdb: [db/flush_job.cc:885] [default] [JOB 31] Level-0 flush table #61: started
Mar  1 05:14:47 np0005634532 ceph-mon[75825]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772360087047590, "cf_name": "default", "job": 31, "event": "table_file_creation", "file_number": 61, "file_size": 2833929, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 26860, "largest_seqno": 28401, "table_properties": {"data_size": 2826873, "index_size": 4063, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1925, "raw_key_size": 14772, "raw_average_key_size": 19, "raw_value_size": 2812652, "raw_average_value_size": 3715, "num_data_blocks": 179, "num_entries": 757, "num_filter_entries": 757, "num_deletions": 259, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772359951, "oldest_key_time": 1772359951, "file_creation_time": 1772360087, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d85a3bc5-3dc5-432f-9fab-fa926ce32d3d", "db_session_id": "FJWJGIYC2V5ZEQGX709M", "orig_file_number": 61, "seqno_to_time_mapping": "N/A"}}
Mar  1 05:14:47 np0005634532 ceph-mon[75825]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 31] Flush lasted 11971 microseconds, and 4543 cpu microseconds.
Mar  1 05:14:47 np0005634532 ceph-mon[75825]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Mar  1 05:14:47 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-10:14:47.047632) [db/flush_job.cc:967] [default] [JOB 31] Level-0 flush table #61: 2833929 bytes OK
Mar  1 05:14:47 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-10:14:47.047649) [db/memtable_list.cc:519] [default] Level-0 commit table #61 started
Mar  1 05:14:47 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-10:14:47.048975) [db/memtable_list.cc:722] [default] Level-0 commit table #61: memtable #1 done
Mar  1 05:14:47 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-10:14:47.048990) EVENT_LOG_v1 {"time_micros": 1772360087048985, "job": 31, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Mar  1 05:14:47 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-10:14:47.049018) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Mar  1 05:14:47 np0005634532 ceph-mon[75825]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 31] Try to delete WAL files size 2940957, prev total WAL file size 2940957, number of live WAL files 2.
Mar  1 05:14:47 np0005634532 ceph-mon[75825]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000057.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Mar  1 05:14:47 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-10:14:47.049572) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D00353034' seq:72057594037927935, type:22 .. '6C6F676D00373539' seq:0, type:0; will stop at (end)
Mar  1 05:14:47 np0005634532 ceph-mon[75825]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 32] Compacting 1@0 + 1@6 files to L6, score -1.00
Mar  1 05:14:47 np0005634532 ceph-mon[75825]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 31 Base level 0, inputs: [61(2767KB)], [59(13MB)]
Mar  1 05:14:47 np0005634532 ceph-mon[75825]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772360087049643, "job": 32, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [61], "files_L6": [59], "score": -1, "input_data_size": 17407702, "oldest_snapshot_seqno": -1}
Mar  1 05:14:47 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: ::ffff:192.168.122.100 - - [01/Mar/2026:10:14:47] "GET /metrics HTTP/1.1" 200 48456 "" "Prometheus/2.51.0"
Mar  1 05:14:47 np0005634532 ceph-mgr[76134]: [prometheus INFO cherrypy.access.140604152982112] ::ffff:192.168.122.100 - - [01/Mar/2026:10:14:47] "GET /metrics HTTP/1.1" 200 48456 "" "Prometheus/2.51.0"
Mar  1 05:14:47 np0005634532 ceph-mon[75825]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 32] Generated table #62: 6036 keys, 17255437 bytes, temperature: kUnknown
Mar  1 05:14:47 np0005634532 ceph-mon[75825]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772360087124302, "cf_name": "default", "job": 32, "event": "table_file_creation", "file_number": 62, "file_size": 17255437, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 17211937, "index_size": 27326, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15109, "raw_key_size": 153620, "raw_average_key_size": 25, "raw_value_size": 17100157, "raw_average_value_size": 2833, "num_data_blocks": 1120, "num_entries": 6036, "num_filter_entries": 6036, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772358052, "oldest_key_time": 0, "file_creation_time": 1772360087, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d85a3bc5-3dc5-432f-9fab-fa926ce32d3d", "db_session_id": "FJWJGIYC2V5ZEQGX709M", "orig_file_number": 62, "seqno_to_time_mapping": "N/A"}}
Mar  1 05:14:47 np0005634532 ceph-mon[75825]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Mar  1 05:14:47 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-10:14:47.124742) [db/compaction/compaction_job.cc:1663] [default] [JOB 32] Compacted 1@0 + 1@6 files to L6 => 17255437 bytes
Mar  1 05:14:47 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-10:14:47.126557) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 232.4 rd, 230.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.7, 13.9 +0.0 blob) out(16.5 +0.0 blob), read-write-amplify(12.2) write-amplify(6.1) OK, records in: 6572, records dropped: 536 output_compression: NoCompression
Mar  1 05:14:47 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-10:14:47.126587) EVENT_LOG_v1 {"time_micros": 1772360087126573, "job": 32, "event": "compaction_finished", "compaction_time_micros": 74918, "compaction_time_cpu_micros": 43026, "output_level": 6, "num_output_files": 1, "total_output_size": 17255437, "num_input_records": 6572, "num_output_records": 6036, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Mar  1 05:14:47 np0005634532 ceph-mon[75825]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000061.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Mar  1 05:14:47 np0005634532 ceph-mon[75825]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772360087127577, "job": 32, "event": "table_file_deletion", "file_number": 61}
Mar  1 05:14:47 np0005634532 ceph-mon[75825]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000059.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Mar  1 05:14:47 np0005634532 ceph-mon[75825]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772360087129785, "job": 32, "event": "table_file_deletion", "file_number": 59}
Mar  1 05:14:47 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-10:14:47.049483) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Mar  1 05:14:47 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-10:14:47.129940) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Mar  1 05:14:47 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-10:14:47.129944) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Mar  1 05:14:47 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-10:14:47.129946) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Mar  1 05:14:47 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-10:14:47.129948) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Mar  1 05:14:47 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-10:14:47.129950) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Mar  1 05:14:47 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v969: 353 pgs: 353 active+clean; 88 MiB data, 324 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Mar  1 05:14:47 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:14:47.263Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Mar  1 05:14:47 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:14:47 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000024s ======
Mar  1 05:14:47 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:14:47.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Mar  1 05:14:47 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Mar  1 05:14:47 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3612850764' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Mar  1 05:14:47 np0005634532 nova_compute[257049]: 2026-03-01 10:14:47.504 257053 DEBUG oslo_concurrency.processutils [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.490s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
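
The capacity probe above pairs nova_compute shelling out to "ceph df" with the mon's matching audit entry for the df mon_command. A hedged reproduction of the same probe; the "stats" / "total_avail_bytes" field names assume the usual "ceph df --format=json" layout and do not appear in this log:

    import json
    import subprocess

    # Same command line nova_compute logged above:
    raw = subprocess.check_output(
        ["ceph", "df", "--format=json",
         "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"])
    stats = json.loads(raw)["stats"]           # assumed field name
    free_gib = stats["total_avail_bytes"] / 1024**3
    print(f"free: {free_gib:.3f} GiB")         # cf. free_disk=59.967... below
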
Mar  1 05:14:47 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Mar  1 05:14:47 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Mar  1 05:14:47 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Mar  1 05:14:47 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Mar  1 05:14:47 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Mar  1 05:14:47 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Mar  1 05:14:47 np0005634532 nova_compute[257049]: 2026-03-01 10:14:47.669 257053 WARNING nova.virt.libvirt.driver [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Mar  1 05:14:47 np0005634532 nova_compute[257049]: 2026-03-01 10:14:47.670 257053 DEBUG nova.compute.resource_tracker [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4517MB free_disk=59.967525482177734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Mar  1 05:14:47 np0005634532 nova_compute[257049]: 2026-03-01 10:14:47.670 257053 DEBUG oslo_concurrency.lockutils [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Mar  1 05:14:47 np0005634532 nova_compute[257049]: 2026-03-01 10:14:47.670 257053 DEBUG oslo_concurrency.lockutils [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Mar  1 05:14:47 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Mar  1 05:14:47 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Mar  1 05:14:47 np0005634532 nova_compute[257049]: 2026-03-01 10:14:47.742 257053 DEBUG nova.compute.resource_tracker [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Mar  1 05:14:47 np0005634532 nova_compute[257049]: 2026-03-01 10:14:47.743 257053 DEBUG nova.compute.resource_tracker [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Mar  1 05:14:47 np0005634532 nova_compute[257049]: 2026-03-01 10:14:47.763 257053 DEBUG oslo_concurrency.processutils [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Mar  1 05:14:48 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:14:48 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000024s ======
Mar  1 05:14:48 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:14:48.052 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Mar  1 05:14:48 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Mar  1 05:14:48 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2349630020' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Mar  1 05:14:48 np0005634532 nova_compute[257049]: 2026-03-01 10:14:48.199 257053 DEBUG oslo_concurrency.processutils [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Mar  1 05:14:48 np0005634532 nova_compute[257049]: 2026-03-01 10:14:48.206 257053 DEBUG nova.compute.provider_tree [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Inventory has not changed in ProviderTree for provider: 018d246d-1e01-4168-9128-598c5501111b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Mar  1 05:14:48 np0005634532 nova_compute[257049]: 2026-03-01 10:14:48.225 257053 DEBUG nova.scheduler.client.report [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Inventory has not changed for provider 018d246d-1e01-4168-9128-598c5501111b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Mar  1 05:14:48 np0005634532 nova_compute[257049]: 2026-03-01 10:14:48.258 257053 DEBUG nova.compute.resource_tracker [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Mar  1 05:14:48 np0005634532 nova_compute[257049]: 2026-03-01 10:14:48.258 257053 DEBUG oslo_concurrency.lockutils [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.588s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Mar  1 05:14:48 np0005634532 nova_compute[257049]: 2026-03-01 10:14:48.754 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:14:49 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:14:48 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Mar  1 05:14:49 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:14:48 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Mar  1 05:14:49 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:14:48 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Mar  1 05:14:49 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:14:49 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Mar  1 05:14:49 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v970: 353 pgs: 353 active+clean; 88 MiB data, 324 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Mar  1 05:14:49 np0005634532 nova_compute[257049]: 2026-03-01 10:14:49.255 257053 DEBUG oslo_service.periodic_task [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Mar  1 05:14:49 np0005634532 nova_compute[257049]: 2026-03-01 10:14:49.255 257053 DEBUG oslo_service.periodic_task [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Mar  1 05:14:49 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:14:49 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:14:49 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:14:49.283 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:14:49 np0005634532 podman[276138]: 2026-03-01 10:14:49.365060265 +0000 UTC m=+0.049810121 container health_status 1df4dec302d8b40bce93714986cfc892519e8a31436399020d0034ae17ac64d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a5baae9e21dffd42c66f546b0b62f95c6af9f409d05e01e7483de42d5dd2a382, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '08b9467e1b7e95537191fb2fa6825d176e65d7d6be048e9a0925db064969bbde-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a5baae9e21dffd42c66f546b0b62f95c6af9f409d05e01e7483de42d5dd2a382', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Mar  1 05:14:49 np0005634532 nova_compute[257049]: 2026-03-01 10:14:49.976 257053 DEBUG oslo_service.periodic_task [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Mar  1 05:14:49 np0005634532 nova_compute[257049]: 2026-03-01 10:14:49.977 257053 DEBUG nova.compute.manager [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Mar  1 05:14:49 np0005634532 nova_compute[257049]: 2026-03-01 10:14:49.977 257053 DEBUG nova.compute.manager [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Mar  1 05:14:49 np0005634532 nova_compute[257049]: 2026-03-01 10:14:49.989 257053 DEBUG nova.compute.manager [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Mar  1 05:14:50 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:14:50 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:14:50 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:14:50.054 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:14:51 np0005634532 nova_compute[257049]: 2026-03-01 10:14:51.156 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:14:51 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v971: 353 pgs: 353 active+clean; 88 MiB data, 324 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Mar  1 05:14:51 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:14:51 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 05:14:51 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:14:51.286 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 05:14:52 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:14:52 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:14:52 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:14:52 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:14:52.056 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:14:53 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v972: 353 pgs: 353 active+clean; 88 MiB data, 324 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Mar  1 05:14:53 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:14:53 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:14:53 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:14:53.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:14:53 np0005634532 nova_compute[257049]: 2026-03-01 10:14:53.757 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:14:53 np0005634532 nova_compute[257049]: 2026-03-01 10:14:53.985 257053 DEBUG oslo_service.periodic_task [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Mar  1 05:14:54 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:14:53 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Mar  1 05:14:54 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:14:53 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Mar  1 05:14:54 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:14:53 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Mar  1 05:14:54 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:14:54 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Mar  1 05:14:54 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:14:54 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:14:54 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:14:54.058 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:14:55 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v973: 353 pgs: 353 active+clean; 88 MiB data, 324 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Mar  1 05:14:55 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:14:55 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:14:55 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:14:55.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:14:56 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:14:56 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:14:56 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:14:56.060 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:14:56 np0005634532 nova_compute[257049]: 2026-03-01 10:14:56.193 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:14:57 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:14:57 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: ::ffff:192.168.122.100 - - [01/Mar/2026:10:14:57] "GET /metrics HTTP/1.1" 200 48480 "" "Prometheus/2.51.0"
Mar  1 05:14:57 np0005634532 ceph-mgr[76134]: [prometheus INFO cherrypy.access.140604152982112] ::ffff:192.168.122.100 - - [01/Mar/2026:10:14:57] "GET /metrics HTTP/1.1" 200 48480 "" "Prometheus/2.51.0"
Mar  1 05:14:57 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v974: 353 pgs: 353 active+clean; 88 MiB data, 324 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Mar  1 05:14:57 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:14:57.264Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Mar  1 05:14:57 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:14:57 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:14:57 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:14:57.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:14:58 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:14:58 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:14:58 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:14:58.062 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:14:58 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Mar  1 05:14:58 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/139172831' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Mar  1 05:14:58 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Mar  1 05:14:58 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/139172831' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Mar  1 05:14:58 np0005634532 nova_compute[257049]: 2026-03-01 10:14:58.759 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:14:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:14:58 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Mar  1 05:14:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:14:58 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Mar  1 05:14:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:14:58 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Mar  1 05:14:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:14:59 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Mar  1 05:14:59 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v975: 353 pgs: 353 active+clean; 88 MiB data, 324 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Mar  1 05:14:59 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:14:59 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:14:59 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:14:59.299 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:15:00 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:15:00 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:15:00 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:15:00.064 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:15:01 np0005634532 nova_compute[257049]: 2026-03-01 10:15:01.195 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:15:01 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v976: 353 pgs: 353 active+clean; 88 MiB data, 324 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Mar  1 05:15:01 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:15:01 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.003000074s ======
Mar  1 05:15:01 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:15:01.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.003000074s
Mar  1 05:15:02 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:15:02 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:15:02 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:15:02 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:15:02.066 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:15:02 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Mar  1 05:15:02 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Mar  1 05:15:03 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v977: 353 pgs: 353 active+clean; 88 MiB data, 324 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 74 op/s
Mar  1 05:15:03 np0005634532 ovn_controller[157082]: 2026-03-01T10:15:03Z|00072|memory_trim|INFO|Detected inactivity (last active 30006 ms ago): trimming memory
Mar  1 05:15:03 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:15:03 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:15:03 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:15:03.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:15:03 np0005634532 nova_compute[257049]: 2026-03-01 10:15:03.760 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:15:04 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:15:03 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Mar  1 05:15:04 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:15:03 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Mar  1 05:15:04 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:15:03 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Mar  1 05:15:04 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:15:04 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Mar  1 05:15:04 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:15:04 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:15:04 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:15:04.068 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:15:05 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v978: 353 pgs: 353 active+clean; 109 MiB data, 348 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.0 MiB/s wr, 114 op/s
Mar  1 05:15:05 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:15:05 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:15:05 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:15:05.310 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:15:06 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:15:06 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:15:06 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:15:06.070 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:15:06 np0005634532 nova_compute[257049]: 2026-03-01 10:15:06.198 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:15:07 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:15:07 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: ::ffff:192.168.122.100 - - [01/Mar/2026:10:15:07] "GET /metrics HTTP/1.1" 200 48475 "" "Prometheus/2.51.0"
Mar  1 05:15:07 np0005634532 ceph-mgr[76134]: [prometheus INFO cherrypy.access.140604152982112] ::ffff:192.168.122.100 - - [01/Mar/2026:10:15:07] "GET /metrics HTTP/1.1" 200 48475 "" "Prometheus/2.51.0"
Mar  1 05:15:07 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v979: 353 pgs: 353 active+clean; 109 MiB data, 348 MiB used, 60 GiB / 60 GiB avail; 153 KiB/s rd, 2.0 MiB/s wr, 40 op/s
Mar  1 05:15:07 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:15:07.265Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Mar  1 05:15:07 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:15:07 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:15:07 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:15:07.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:15:08 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:15:08 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000024s ======
Mar  1 05:15:08 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:15:08.072 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Mar  1 05:15:08 np0005634532 nova_compute[257049]: 2026-03-01 10:15:08.762 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:15:09 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:15:08 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Mar  1 05:15:09 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:15:08 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Mar  1 05:15:09 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:15:08 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Mar  1 05:15:09 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:15:09 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Mar  1 05:15:09 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v980: 353 pgs: 353 active+clean; 121 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 306 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Mar  1 05:15:09 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:15:09 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:15:09 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:15:09.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:15:10 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:15:10 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 05:15:10 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:15:10.074 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 05:15:11 np0005634532 nova_compute[257049]: 2026-03-01 10:15:11.199 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:15:11 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v981: 353 pgs: 353 active+clean; 121 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 306 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Mar  1 05:15:11 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:15:11 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:15:11 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:15:11.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:15:12 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:15:12 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:15:12 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000024s ======
Mar  1 05:15:12 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:15:12.076 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Mar  1 05:15:13 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v982: 353 pgs: 353 active+clean; 121 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 306 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Mar  1 05:15:13 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:15:13 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 05:15:13 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:15:13.322 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 05:15:13 np0005634532 nova_compute[257049]: 2026-03-01 10:15:13.763 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:15:14 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:15:13 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Mar  1 05:15:14 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:15:13 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Mar  1 05:15:14 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:15:13 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Mar  1 05:15:14 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:15:14 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Mar  1 05:15:14 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:15:14 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:15:14 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:15:14.078 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:15:15 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v983: 353 pgs: 353 active+clean; 121 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 306 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Mar  1 05:15:15 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:15:15.208 167541 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=12, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ba:77:84', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'd2:e0:96:ea:56:89'}, ipsec=False) old=SB_Global(nb_cfg=11) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Mar  1 05:15:15 np0005634532 nova_compute[257049]: 2026-03-01 10:15:15.209 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:15:15 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:15:15.209 167541 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Mar  1 05:15:15 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:15:15 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:15:15 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:15:15.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:15:16 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:15:16 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000024s ======
Mar  1 05:15:16 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:15:16.080 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Mar  1 05:15:16 np0005634532 nova_compute[257049]: 2026-03-01 10:15:16.199 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:15:17 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:15:17 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: ::ffff:192.168.122.100 - - [01/Mar/2026:10:15:17] "GET /metrics HTTP/1.1" 200 48475 "" "Prometheus/2.51.0"
Mar  1 05:15:17 np0005634532 ceph-mgr[76134]: [prometheus INFO cherrypy.access.140604152982112] ::ffff:192.168.122.100 - - [01/Mar/2026:10:15:17] "GET /metrics HTTP/1.1" 200 48475 "" "Prometheus/2.51.0"
Mar  1 05:15:17 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v984: 353 pgs: 353 active+clean; 121 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 154 KiB/s rd, 107 KiB/s wr, 22 op/s
Mar  1 05:15:17 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:15:17.266Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Mar  1 05:15:17 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:15:17.266Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Mar  1 05:15:17 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:15:17.266Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Mar  1 05:15:17 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:15:17 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 05:15:17 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:15:17.327 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 05:15:17 np0005634532 podman[276214]: 2026-03-01 10:15:17.410324188 +0000 UTC m=+0.094376903 container health_status eba5e7c5bf1cb0694498c3b5d0f9b901dec36f2482ec40aef98fed1e15d9f18d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:bf1f44b6bfa655654f556ae3adca2582892e72550d916a5b0a8dbacb0e210bbe, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.build-date=20260223, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '08b9467e1b7e95537191fb2fa6825d176e65d7d6be048e9a0925db064969bbde-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:bf1f44b6bfa655654f556ae3adca2582892e72550d916a5b0a8dbacb0e210bbe', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Mar  1 05:15:17 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Mar  1 05:15:17 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Mar  1 05:15:17 np0005634532 ceph-mgr[76134]: [balancer INFO root] Optimize plan auto_2026-03-01_10:15:17
Mar  1 05:15:17 np0005634532 ceph-mgr[76134]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Mar  1 05:15:17 np0005634532 ceph-mgr[76134]: [balancer INFO root] do_upmap
Mar  1 05:15:17 np0005634532 ceph-mgr[76134]: [balancer INFO root] pools ['.rgw.root', 'vms', '.mgr', 'volumes', 'cephfs.cephfs.data', 'default.rgw.meta', 'default.rgw.log', 'cephfs.cephfs.meta', 'images', 'default.rgw.control', 'backups', '.nfs']
Mar  1 05:15:17 np0005634532 ceph-mgr[76134]: [balancer INFO root] prepared 0/10 upmap changes
Mar  1 05:15:17 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Mar  1 05:15:17 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Mar  1 05:15:17 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Mar  1 05:15:17 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Mar  1 05:15:17 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Mar  1 05:15:17 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Mar  1 05:15:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] _maybe_adjust
Mar  1 05:15:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:15:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Mar  1 05:15:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:15:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007589550978381194 of space, bias 1.0, pg target 0.22768652935143582 quantized to 32 (current 32)
Mar  1 05:15:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:15:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Mar  1 05:15:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:15:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Mar  1 05:15:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:15:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Mar  1 05:15:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:15:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Mar  1 05:15:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:15:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Mar  1 05:15:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:15:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Mar  1 05:15:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:15:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Mar  1 05:15:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:15:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Mar  1 05:15:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:15:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Mar  1 05:15:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:15:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Mar  1 05:15:18 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:15:18 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 05:15:18 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:15:18.082 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 05:15:18 np0005634532 nova_compute[257049]: 2026-03-01 10:15:18.764 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:15:19 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:15:18 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Mar  1 05:15:19 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:15:18 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Mar  1 05:15:19 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:15:18 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Mar  1 05:15:19 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:15:19 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Mar  1 05:15:19 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v985: 353 pgs: 353 active+clean; 41 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 173 KiB/s rd, 110 KiB/s wr, 50 op/s
Mar  1 05:15:19 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:15:19 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:15:19 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:15:19.330 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:15:19 np0005634532 ceph-mgr[76134]: [devicehealth INFO root] Check health
Mar  1 05:15:19 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Mar  1 05:15:19 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: vms, start_after=
Mar  1 05:15:19 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Mar  1 05:15:19 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: vms, start_after=
Mar  1 05:15:19 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: volumes, start_after=
Mar  1 05:15:19 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: volumes, start_after=
Mar  1 05:15:19 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: backups, start_after=
Mar  1 05:15:19 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: backups, start_after=
Mar  1 05:15:19 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: images, start_after=
Mar  1 05:15:19 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: images, start_after=
Mar  1 05:15:20 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:15:20 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 05:15:20 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:15:20.084 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 05:15:20 np0005634532 podman[276269]: 2026-03-01 10:15:20.375649573 +0000 UTC m=+0.065012207 container health_status 1df4dec302d8b40bce93714986cfc892519e8a31436399020d0034ae17ac64d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a5baae9e21dffd42c66f546b0b62f95c6af9f409d05e01e7483de42d5dd2a382, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '08b9467e1b7e95537191fb2fa6825d176e65d7d6be048e9a0925db064969bbde-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a5baae9e21dffd42c66f546b0b62f95c6af9f409d05e01e7483de42d5dd2a382', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Mar  1 05:15:21 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v986: 353 pgs: 353 active+clean; 41 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 15 KiB/s wr, 28 op/s
Mar  1 05:15:21 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:15:21.211 167541 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=90b7dc66-b984-4d8b-9541-ddde79c5f544, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '12'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Mar  1 05:15:21 np0005634532 nova_compute[257049]: 2026-03-01 10:15:21.242 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:15:21 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:15:21 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:15:21 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:15:21.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:15:22 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:15:22 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:15:22 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:15:22 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:15:22.086 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:15:23 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v987: 353 pgs: 353 active+clean; 41 MiB data, 349 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 15 KiB/s wr, 28 op/s
Mar  1 05:15:23 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:15:23 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 05:15:23 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:15:23.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 05:15:23 np0005634532 nova_compute[257049]: 2026-03-01 10:15:23.765 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:15:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:15:23.888 167541 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Mar  1 05:15:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:15:23.889 167541 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Mar  1 05:15:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:15:23.889 167541 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Mar  1 05:15:24 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:15:23 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Mar  1 05:15:24 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:15:23 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Mar  1 05:15:24 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:15:23 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Mar  1 05:15:24 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:15:24 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Mar  1 05:15:24 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:15:24 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:15:24 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:15:24.088 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:15:25 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v988: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 15 KiB/s wr, 29 op/s
Mar  1 05:15:25 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:15:25 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:15:25 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:15:25.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:15:26 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:15:26 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 05:15:26 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:15:26.090 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 05:15:26 np0005634532 nova_compute[257049]: 2026-03-01 10:15:26.244 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:15:27 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:15:27 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: ::ffff:192.168.122.100 - - [01/Mar/2026:10:15:27] "GET /metrics HTTP/1.1" 200 48477 "" "Prometheus/2.51.0"
Mar  1 05:15:27 np0005634532 ceph-mgr[76134]: [prometheus INFO cherrypy.access.140604152982112] ::ffff:192.168.122.100 - - [01/Mar/2026:10:15:27] "GET /metrics HTTP/1.1" 200 48477 "" "Prometheus/2.51.0"
Mar  1 05:15:27 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v989: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 3.2 KiB/s wr, 28 op/s
Mar  1 05:15:27 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:15:27.267Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Mar  1 05:15:27 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:15:27 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:15:27 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:15:27.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:15:28 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:15:28 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 05:15:28 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:15:28.092 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 05:15:28 np0005634532 systemd[1]: virtsecretd.service: Deactivated successfully.
Mar  1 05:15:28 np0005634532 nova_compute[257049]: 2026-03-01 10:15:28.768 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:15:29 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:15:28 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Mar  1 05:15:29 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:15:28 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Mar  1 05:15:29 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:15:28 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Mar  1 05:15:29 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:15:29 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Mar  1 05:15:29 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v990: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 3.2 KiB/s wr, 28 op/s
Mar  1 05:15:29 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:15:29 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:15:29 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:15:29.347 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:15:30 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:15:30 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:15:30 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:15:30.094 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:15:31 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v991: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 0 B/s wr, 0 op/s
Mar  1 05:15:31 np0005634532 nova_compute[257049]: 2026-03-01 10:15:31.246 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:15:31 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:15:31 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:15:31 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:15:31.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:15:32 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:15:32 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:15:32 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:15:32 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:15:32.096 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:15:32 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Mar  1 05:15:32 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Mar  1 05:15:33 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v992: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 597 B/s rd, 0 B/s wr, 0 op/s
Mar  1 05:15:33 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:15:33 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:15:33 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:15:33.353 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:15:33 np0005634532 nova_compute[257049]: 2026-03-01 10:15:33.770 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:15:34 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:15:34 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Mar  1 05:15:34 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:15:34 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Mar  1 05:15:34 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:15:34 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Mar  1 05:15:34 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:15:34 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Mar  1 05:15:34 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:15:34 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:15:34 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:15:34.098 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:15:35 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v993: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 852 B/s rd, 0 B/s wr, 1 op/s
Mar  1 05:15:35 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:15:35 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:15:35 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:15:35.356 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:15:36 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:15:36 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:15:36 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:15:36.101 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:15:36 np0005634532 nova_compute[257049]: 2026-03-01 10:15:36.249 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:15:37 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:15:37 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: ::ffff:192.168.122.100 - - [01/Mar/2026:10:15:37] "GET /metrics HTTP/1.1" 200 48458 "" "Prometheus/2.51.0"
Mar  1 05:15:37 np0005634532 ceph-mgr[76134]: [prometheus INFO cherrypy.access.140604152982112] ::ffff:192.168.122.100 - - [01/Mar/2026:10:15:37] "GET /metrics HTTP/1.1" 200 48458 "" "Prometheus/2.51.0"
Mar  1 05:15:37 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v994: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Mar  1 05:15:37 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:15:37.269Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Mar  1 05:15:37 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:15:37 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 05:15:37 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:15:37.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 05:15:38 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:15:38 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 05:15:38 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:15:38.102 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 05:15:38 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[106394]: logger=cleanup t=2026-03-01T10:15:38.590481469Z level=info msg="Completed cleanup jobs" duration=31.133108ms
Mar  1 05:15:38 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[106394]: logger=plugins.update.checker t=2026-03-01T10:15:38.669056721Z level=info msg="Update check succeeded" duration=52.977929ms
Mar  1 05:15:38 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[106394]: logger=grafana.update.checker t=2026-03-01T10:15:38.680556365Z level=info msg="Update check succeeded" duration=51.491552ms
Mar  1 05:15:38 np0005634532 nova_compute[257049]: 2026-03-01 10:15:38.772 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:15:39 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:15:38 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Mar  1 05:15:39 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:15:38 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Mar  1 05:15:39 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:15:38 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Mar  1 05:15:39 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:15:39 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Mar  1 05:15:39 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v995: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Mar  1 05:15:39 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:15:39 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000024s ======
Mar  1 05:15:39 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:15:39.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Mar  1 05:15:40 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:15:40 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:15:40 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:15:40.104 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:15:41 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v996: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Mar  1 05:15:41 np0005634532 nova_compute[257049]: 2026-03-01 10:15:41.250 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:15:41 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:15:41 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:15:41 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:15:41.365 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:15:42 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:15:42 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:15:42 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 05:15:42 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:15:42.106 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 05:15:43 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v997: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Mar  1 05:15:43 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:15:43 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:15:43 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:15:43.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:15:43 np0005634532 nova_compute[257049]: 2026-03-01 10:15:43.774 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:15:43 np0005634532 nova_compute[257049]: 2026-03-01 10:15:43.976 257053 DEBUG oslo_service.periodic_task [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Mar  1 05:15:43 np0005634532 nova_compute[257049]: 2026-03-01 10:15:43.977 257053 DEBUG oslo_service.periodic_task [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Mar  1 05:15:43 np0005634532 nova_compute[257049]: 2026-03-01 10:15:43.977 257053 DEBUG nova.compute.manager [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Mar  1 05:15:44 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:15:43 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Mar  1 05:15:44 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:15:43 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Mar  1 05:15:44 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:15:43 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Mar  1 05:15:44 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:15:44 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Mar  1 05:15:44 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:15:44 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:15:44 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:15:44.108 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:15:44 np0005634532 nova_compute[257049]: 2026-03-01 10:15:44.976 257053 DEBUG oslo_service.periodic_task [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Mar  1 05:15:44 np0005634532 nova_compute[257049]: 2026-03-01 10:15:44.977 257053 DEBUG oslo_service.periodic_task [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Mar  1 05:15:44 np0005634532 nova_compute[257049]: 2026-03-01 10:15:44.977 257053 DEBUG oslo_service.periodic_task [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Mar  1 05:15:45 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v998: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Mar  1 05:15:45 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:15:45 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:15:45 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:15:45.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:15:45 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Mar  1 05:15:45 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 05:15:45 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Mar  1 05:15:45 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 05:15:45 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Mar  1 05:15:45 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 05:15:45 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Mar  1 05:15:45 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 05:15:45 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Mar  1 05:15:45 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 05:15:45 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Mar  1 05:15:45 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 05:15:45 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Mar  1 05:15:45 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Mar  1 05:15:45 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Mar  1 05:15:45 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Mar  1 05:15:45 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Mar  1 05:15:45 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 05:15:45 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Mar  1 05:15:45 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 05:15:45 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Mar  1 05:15:45 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Mar  1 05:15:45 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Mar  1 05:15:45 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Mar  1 05:15:45 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Mar  1 05:15:45 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Mar  1 05:15:45 np0005634532 nova_compute[257049]: 2026-03-01 10:15:45.988 257053 DEBUG oslo_service.periodic_task [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Mar  1 05:15:45 np0005634532 nova_compute[257049]: 2026-03-01 10:15:45.988 257053 DEBUG nova.compute.manager [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Mar  1 05:15:46 np0005634532 nova_compute[257049]: 2026-03-01 10:15:46.010 257053 DEBUG nova.compute.manager [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Mar  1 05:15:46 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:15:46 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 05:15:46 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:15:46.110 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 05:15:46 np0005634532 nova_compute[257049]: 2026-03-01 10:15:46.251 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:15:46 np0005634532 podman[276582]: 2026-03-01 10:15:46.439982942 +0000 UTC m=+0.037914588 container create 3f0c23d61125766a5476256c1eb83bdc6d5735fba044ef88d60753fe0839eadb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_easley, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Mar  1 05:15:46 np0005634532 systemd[1]: Started libpod-conmon-3f0c23d61125766a5476256c1eb83bdc6d5735fba044ef88d60753fe0839eadb.scope.
Mar  1 05:15:46 np0005634532 systemd[1]: Started libcrun container.
Mar  1 05:15:46 np0005634532 podman[276582]: 2026-03-01 10:15:46.515952458 +0000 UTC m=+0.113884124 container init 3f0c23d61125766a5476256c1eb83bdc6d5735fba044ef88d60753fe0839eadb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_easley, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Mar  1 05:15:46 np0005634532 podman[276582]: 2026-03-01 10:15:46.423081614 +0000 UTC m=+0.021013280 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Mar  1 05:15:46 np0005634532 podman[276582]: 2026-03-01 10:15:46.522612363 +0000 UTC m=+0.120544009 container start 3f0c23d61125766a5476256c1eb83bdc6d5735fba044ef88d60753fe0839eadb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_easley, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Mar  1 05:15:46 np0005634532 podman[276582]: 2026-03-01 10:15:46.526704304 +0000 UTC m=+0.124635970 container attach 3f0c23d61125766a5476256c1eb83bdc6d5735fba044ef88d60753fe0839eadb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_easley, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Mar  1 05:15:46 np0005634532 sleepy_easley[276599]: 167 167
Mar  1 05:15:46 np0005634532 systemd[1]: libpod-3f0c23d61125766a5476256c1eb83bdc6d5735fba044ef88d60753fe0839eadb.scope: Deactivated successfully.
Mar  1 05:15:46 np0005634532 podman[276582]: 2026-03-01 10:15:46.528899698 +0000 UTC m=+0.126831354 container died 3f0c23d61125766a5476256c1eb83bdc6d5735fba044ef88d60753fe0839eadb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_easley, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Mar  1 05:15:46 np0005634532 systemd[1]: var-lib-containers-storage-overlay-88b37ceae72cce919b8059701e6c81f6c90b76c51a5cca84d19354cf87e0e40b-merged.mount: Deactivated successfully.
Mar  1 05:15:46 np0005634532 podman[276582]: 2026-03-01 10:15:46.569161253 +0000 UTC m=+0.167092899 container remove 3f0c23d61125766a5476256c1eb83bdc6d5735fba044ef88d60753fe0839eadb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sleepy_easley, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1)
Mar  1 05:15:46 np0005634532 systemd[1]: libpod-conmon-3f0c23d61125766a5476256c1eb83bdc6d5735fba044ef88d60753fe0839eadb.scope: Deactivated successfully.
Mar  1 05:15:46 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 05:15:46 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 05:15:46 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 05:15:46 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 05:15:46 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 05:15:46 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 05:15:46 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Mar  1 05:15:46 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 05:15:46 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 05:15:46 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Mar  1 05:15:46 np0005634532 podman[276622]: 2026-03-01 10:15:46.727434573 +0000 UTC m=+0.048961931 container create aea3a7dc45e642bedafffdfd82623011ca4f5041de91a59fafb68232437188bb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_morse, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Mar  1 05:15:46 np0005634532 systemd[1]: Started libpod-conmon-aea3a7dc45e642bedafffdfd82623011ca4f5041de91a59fafb68232437188bb.scope.
Mar  1 05:15:46 np0005634532 podman[276622]: 2026-03-01 10:15:46.703199214 +0000 UTC m=+0.024726572 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Mar  1 05:15:46 np0005634532 systemd[1]: Started libcrun container.
Mar  1 05:15:46 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa243f54204b980a061105005f809498d59a5fad6a869682edefc15a03e3f2b3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Mar  1 05:15:46 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa243f54204b980a061105005f809498d59a5fad6a869682edefc15a03e3f2b3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Mar  1 05:15:46 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa243f54204b980a061105005f809498d59a5fad6a869682edefc15a03e3f2b3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Mar  1 05:15:46 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa243f54204b980a061105005f809498d59a5fad6a869682edefc15a03e3f2b3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Mar  1 05:15:46 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa243f54204b980a061105005f809498d59a5fad6a869682edefc15a03e3f2b3/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Mar  1 05:15:46 np0005634532 podman[276622]: 2026-03-01 10:15:46.815499948 +0000 UTC m=+0.137027346 container init aea3a7dc45e642bedafffdfd82623011ca4f5041de91a59fafb68232437188bb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_morse, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Mar  1 05:15:46 np0005634532 podman[276622]: 2026-03-01 10:15:46.823804603 +0000 UTC m=+0.145331961 container start aea3a7dc45e642bedafffdfd82623011ca4f5041de91a59fafb68232437188bb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_morse, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Mar  1 05:15:46 np0005634532 podman[276622]: 2026-03-01 10:15:46.827379962 +0000 UTC m=+0.148907310 container attach aea3a7dc45e642bedafffdfd82623011ca4f5041de91a59fafb68232437188bb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_morse, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Mar  1 05:15:47 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:15:47 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: ::ffff:192.168.122.100 - - [01/Mar/2026:10:15:47] "GET /metrics HTTP/1.1" 200 48458 "" "Prometheus/2.51.0"
Mar  1 05:15:47 np0005634532 ceph-mgr[76134]: [prometheus INFO cherrypy.access.140604152982112] ::ffff:192.168.122.100 - - [01/Mar/2026:10:15:47] "GET /metrics HTTP/1.1" 200 48458 "" "Prometheus/2.51.0"
Mar  1 05:15:47 np0005634532 competent_morse[276638]: --> passed data devices: 0 physical, 1 LVM
Mar  1 05:15:47 np0005634532 competent_morse[276638]: --> All data devices are unavailable
Mar  1 05:15:47 np0005634532 systemd[1]: libpod-aea3a7dc45e642bedafffdfd82623011ca4f5041de91a59fafb68232437188bb.scope: Deactivated successfully.
Mar  1 05:15:47 np0005634532 podman[276622]: 2026-03-01 10:15:47.138656411 +0000 UTC m=+0.460183809 container died aea3a7dc45e642bedafffdfd82623011ca4f5041de91a59fafb68232437188bb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_morse, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Mar  1 05:15:47 np0005634532 systemd[1]: var-lib-containers-storage-overlay-aa243f54204b980a061105005f809498d59a5fad6a869682edefc15a03e3f2b3-merged.mount: Deactivated successfully.
Mar  1 05:15:47 np0005634532 podman[276622]: 2026-03-01 10:15:47.184195956 +0000 UTC m=+0.505723354 container remove aea3a7dc45e642bedafffdfd82623011ca4f5041de91a59fafb68232437188bb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=competent_morse, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Mar  1 05:15:47 np0005634532 systemd[1]: libpod-conmon-aea3a7dc45e642bedafffdfd82623011ca4f5041de91a59fafb68232437188bb.scope: Deactivated successfully.
Mar  1 05:15:47 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v999: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Mar  1 05:15:47 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:15:47.271Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Mar  1 05:15:47 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:15:47 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:15:47 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:15:47.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:15:47 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Mar  1 05:15:47 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Mar  1 05:15:47 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Mar  1 05:15:47 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Mar  1 05:15:47 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Mar  1 05:15:47 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Mar  1 05:15:47 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Mar  1 05:15:47 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Mar  1 05:15:47 np0005634532 podman[276759]: 2026-03-01 10:15:47.709402841 +0000 UTC m=+0.043223319 container create e0982390840da3d89697045da6a961c41f71e221ae7f36789e0264dfbf89e7e4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_banzai, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default)
Mar  1 05:15:47 np0005634532 systemd[1]: Started libpod-conmon-e0982390840da3d89697045da6a961c41f71e221ae7f36789e0264dfbf89e7e4.scope.
Mar  1 05:15:47 np0005634532 systemd[1]: Started libcrun container.
Mar  1 05:15:47 np0005634532 podman[276759]: 2026-03-01 10:15:47.770578502 +0000 UTC m=+0.104399000 container init e0982390840da3d89697045da6a961c41f71e221ae7f36789e0264dfbf89e7e4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_banzai, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Mar  1 05:15:47 np0005634532 podman[276759]: 2026-03-01 10:15:47.776330874 +0000 UTC m=+0.110151352 container start e0982390840da3d89697045da6a961c41f71e221ae7f36789e0264dfbf89e7e4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_banzai, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Mar  1 05:15:47 np0005634532 podman[276759]: 2026-03-01 10:15:47.779457082 +0000 UTC m=+0.113277560 container attach e0982390840da3d89697045da6a961c41f71e221ae7f36789e0264dfbf89e7e4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_banzai, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Mar  1 05:15:47 np0005634532 priceless_banzai[276777]: 167 167
Mar  1 05:15:47 np0005634532 systemd[1]: libpod-e0982390840da3d89697045da6a961c41f71e221ae7f36789e0264dfbf89e7e4.scope: Deactivated successfully.
Mar  1 05:15:47 np0005634532 podman[276759]: 2026-03-01 10:15:47.78141349 +0000 UTC m=+0.115233978 container died e0982390840da3d89697045da6a961c41f71e221ae7f36789e0264dfbf89e7e4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_banzai, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0)
Mar  1 05:15:47 np0005634532 podman[276759]: 2026-03-01 10:15:47.689174311 +0000 UTC m=+0.022994869 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Mar  1 05:15:47 np0005634532 systemd[1]: var-lib-containers-storage-overlay-a9c619d2fdfff4b314061db8c5599c6dce67bc0fbdcfc8041421bc65ace9c5f5-merged.mount: Deactivated successfully.
Mar  1 05:15:47 np0005634532 podman[276759]: 2026-03-01 10:15:47.820603148 +0000 UTC m=+0.154423626 container remove e0982390840da3d89697045da6a961c41f71e221ae7f36789e0264dfbf89e7e4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=priceless_banzai, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Mar  1 05:15:47 np0005634532 podman[276776]: 2026-03-01 10:15:47.821954682 +0000 UTC m=+0.076136692 container health_status eba5e7c5bf1cb0694498c3b5d0f9b901dec36f2482ec40aef98fed1e15d9f18d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:bf1f44b6bfa655654f556ae3adca2582892e72550d916a5b0a8dbacb0e210bbe, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_controller, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '08b9467e1b7e95537191fb2fa6825d176e65d7d6be048e9a0925db064969bbde-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:bf1f44b6bfa655654f556ae3adca2582892e72550d916a5b0a8dbacb0e210bbe', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Mar  1 05:15:47 np0005634532 systemd[1]: libpod-conmon-e0982390840da3d89697045da6a961c41f71e221ae7f36789e0264dfbf89e7e4.scope: Deactivated successfully.
Mar  1 05:15:47 np0005634532 podman[276828]: 2026-03-01 10:15:47.935699322 +0000 UTC m=+0.037552539 container create 0d83abc5a4041d540ab31bcb2bee9013be3bed481684c0cad5e8ceb3e7c9485f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_dirac, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Mar  1 05:15:47 np0005634532 systemd[1]: Started libpod-conmon-0d83abc5a4041d540ab31bcb2bee9013be3bed481684c0cad5e8ceb3e7c9485f.scope.
Mar  1 05:15:47 np0005634532 nova_compute[257049]: 2026-03-01 10:15:47.976 257053 DEBUG oslo_service.periodic_task [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Mar  1 05:15:47 np0005634532 nova_compute[257049]: 2026-03-01 10:15:47.977 257053 DEBUG oslo_service.periodic_task [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Mar  1 05:15:47 np0005634532 nova_compute[257049]: 2026-03-01 10:15:47.977 257053 DEBUG oslo_service.periodic_task [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Mar  1 05:15:47 np0005634532 systemd[1]: Started libcrun container.
Mar  1 05:15:47 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d1c233b585d09140eec377c04f9b777f1b1506e8c99b13099c34fa1aa4e79d88/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Mar  1 05:15:47 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d1c233b585d09140eec377c04f9b777f1b1506e8c99b13099c34fa1aa4e79d88/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Mar  1 05:15:47 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d1c233b585d09140eec377c04f9b777f1b1506e8c99b13099c34fa1aa4e79d88/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Mar  1 05:15:47 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d1c233b585d09140eec377c04f9b777f1b1506e8c99b13099c34fa1aa4e79d88/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
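The four kernel warnings above fire each time podman remounts paths on this container overlay: the backing xfs filesystem was created without the bigtime feature, so its inode timestamps stop at the 32-bit signed limit 0x7fffffff the message cites. A two-line check of what that limit means in calendar terms (pure arithmetic, no host assumptions):

    from datetime import datetime, timezone

    # 0x7fffffff is the largest 32-bit signed time_t, per the kernel message.
    print(datetime.fromtimestamp(0x7FFFFFFF, tz=timezone.utc))
    # -> 2038-01-19 03:14:07+00:00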
Mar  1 05:15:48 np0005634532 nova_compute[257049]: 2026-03-01 10:15:48.003 257053 DEBUG oslo_concurrency.lockutils [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Mar  1 05:15:48 np0005634532 nova_compute[257049]: 2026-03-01 10:15:48.004 257053 DEBUG oslo_concurrency.lockutils [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Mar  1 05:15:48 np0005634532 nova_compute[257049]: 2026-03-01 10:15:48.004 257053 DEBUG oslo_concurrency.lockutils [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Mar  1 05:15:48 np0005634532 nova_compute[257049]: 2026-03-01 10:15:48.004 257053 DEBUG nova.compute.resource_tracker [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Mar  1 05:15:48 np0005634532 nova_compute[257049]: 2026-03-01 10:15:48.005 257053 DEBUG oslo_concurrency.processutils [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
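The resource audit probes Ceph-backed disk by shelling out to ceph df, exactly as the processutils line above records. A minimal re-run of that probe, assuming the same client id and conf path exist on the host; the stats field names match recent Ceph JSON output but are an assumption here:

    import json
    import subprocess

    def ceph_avail_bytes() -> int:
        """Mirror nova's 'ceph df' probe and return the cluster's free bytes."""
        out = subprocess.run(
            ["ceph", "df", "--format=json",
             "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"],
            check=True, capture_output=True, text=True,
        ).stdout
        return json.loads(out)["stats"]["total_avail_bytes"]

    print(ceph_avail_bytes() / 2**30, "GiB free")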
Mar  1 05:15:48 np0005634532 podman[276828]: 2026-03-01 10:15:48.008466689 +0000 UTC m=+0.110319896 container init 0d83abc5a4041d540ab31bcb2bee9013be3bed481684c0cad5e8ceb3e7c9485f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_dirac, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Mar  1 05:15:48 np0005634532 podman[276828]: 2026-03-01 10:15:47.920801803 +0000 UTC m=+0.022655000 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Mar  1 05:15:48 np0005634532 podman[276828]: 2026-03-01 10:15:48.015153244 +0000 UTC m=+0.117006421 container start 0d83abc5a4041d540ab31bcb2bee9013be3bed481684c0cad5e8ceb3e7c9485f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_dirac, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Mar  1 05:15:48 np0005634532 podman[276828]: 2026-03-01 10:15:48.019729037 +0000 UTC m=+0.121582254 container attach 0d83abc5a4041d540ab31bcb2bee9013be3bed481684c0cad5e8ceb3e7c9485f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_dirac, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Mar  1 05:15:48 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:15:48 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:15:48 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:15:48.112 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
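These anonymous "HEAD / HTTP/1.0" requests recur every second or two from 192.168.122.100 and .102, the signature of load-balancer health probes rather than client traffic. A small parser for the beast access lines; the field layout is inferred from this log, not from radosgw documentation:

    import re

    # Groups inferred from the beast lines above: client IP, user, timestamp,
    # request line, status, body bytes, latency.
    BEAST = re.compile(
        r'beast: \S+: (?P<ip>\S+) - (?P<user>\S+) \[(?P<ts>[^\]]+)\] '
        r'"(?P<method>\S+) (?P<path>\S+) [^"]*" (?P<status>\d+) (?P<bytes>\d+)'
        r'.*latency=(?P<latency>[\d.]+)s'
    )

    line = ('beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous '
            '[01/Mar/2026:10:15:48.112 +0000] "HEAD / HTTP/1.0" 200 0 '
            '- - - latency=0.000000000s')
    m = BEAST.search(line)
    print(m["ip"], m["method"], m["path"], m["status"], m["latency"])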
Mar  1 05:15:48 np0005634532 funny_dirac[276845]: {
Mar  1 05:15:48 np0005634532 funny_dirac[276845]:    "0": [
Mar  1 05:15:48 np0005634532 funny_dirac[276845]:        {
Mar  1 05:15:48 np0005634532 funny_dirac[276845]:            "devices": [
Mar  1 05:15:48 np0005634532 funny_dirac[276845]:                "/dev/loop3"
Mar  1 05:15:48 np0005634532 funny_dirac[276845]:            ],
Mar  1 05:15:48 np0005634532 funny_dirac[276845]:            "lv_name": "ceph_lv0",
Mar  1 05:15:48 np0005634532 funny_dirac[276845]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Mar  1 05:15:48 np0005634532 funny_dirac[276845]:            "lv_size": "21470642176",
Mar  1 05:15:48 np0005634532 funny_dirac[276845]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=0RN1y3-WDwf-vmRn-3Uec-9cdU-WGcX-8Z6LEG,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=437b1e74-f995-5d64-af1d-257ce01d77ab,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=e5da778e-73b7-4ea1-8a91-750fe3f6aa68,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Mar  1 05:15:48 np0005634532 funny_dirac[276845]:            "lv_uuid": "0RN1y3-WDwf-vmRn-3Uec-9cdU-WGcX-8Z6LEG",
Mar  1 05:15:48 np0005634532 funny_dirac[276845]:            "name": "ceph_lv0",
Mar  1 05:15:48 np0005634532 funny_dirac[276845]:            "path": "/dev/ceph_vg0/ceph_lv0",
Mar  1 05:15:48 np0005634532 funny_dirac[276845]:            "tags": {
Mar  1 05:15:48 np0005634532 funny_dirac[276845]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Mar  1 05:15:48 np0005634532 funny_dirac[276845]:                "ceph.block_uuid": "0RN1y3-WDwf-vmRn-3Uec-9cdU-WGcX-8Z6LEG",
Mar  1 05:15:48 np0005634532 funny_dirac[276845]:                "ceph.cephx_lockbox_secret": "",
Mar  1 05:15:48 np0005634532 funny_dirac[276845]:                "ceph.cluster_fsid": "437b1e74-f995-5d64-af1d-257ce01d77ab",
Mar  1 05:15:48 np0005634532 funny_dirac[276845]:                "ceph.cluster_name": "ceph",
Mar  1 05:15:48 np0005634532 funny_dirac[276845]:                "ceph.crush_device_class": "",
Mar  1 05:15:48 np0005634532 funny_dirac[276845]:                "ceph.encrypted": "0",
Mar  1 05:15:48 np0005634532 funny_dirac[276845]:                "ceph.osd_fsid": "e5da778e-73b7-4ea1-8a91-750fe3f6aa68",
Mar  1 05:15:48 np0005634532 funny_dirac[276845]:                "ceph.osd_id": "0",
Mar  1 05:15:48 np0005634532 funny_dirac[276845]:                "ceph.osdspec_affinity": "default_drive_group",
Mar  1 05:15:48 np0005634532 funny_dirac[276845]:                "ceph.type": "block",
Mar  1 05:15:48 np0005634532 funny_dirac[276845]:                "ceph.vdo": "0",
Mar  1 05:15:48 np0005634532 funny_dirac[276845]:                "ceph.with_tpm": "0"
Mar  1 05:15:48 np0005634532 funny_dirac[276845]:            },
Mar  1 05:15:48 np0005634532 funny_dirac[276845]:            "type": "block",
Mar  1 05:15:48 np0005634532 funny_dirac[276845]:            "vg_name": "ceph_vg0"
Mar  1 05:15:48 np0005634532 funny_dirac[276845]:        }
Mar  1 05:15:48 np0005634532 funny_dirac[276845]:    ]
Mar  1 05:15:48 np0005634532 funny_dirac[276845]: }
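The JSON block printed by funny_dirac is an LVM listing keyed by OSD id, in the shape ceph-volume's "lvm list --format json" produces for cephadm. A sketch reducing it to an OSD-to-device map, using a trimmed copy of the document above (the full listing carries more tags):

    import json

    # Trimmed from the funny_dirac output above.
    raw = '''{"0": [{"devices": ["/dev/loop3"],
                     "lv_path": "/dev/ceph_vg0/ceph_lv0",
                     "tags": {"ceph.osd_fsid":
                              "e5da778e-73b7-4ea1-8a91-750fe3f6aa68"}}]}'''

    for osd_id, lvs in json.loads(raw).items():
        for lv in lvs:
            print(f"osd.{osd_id}", lv["lv_path"], lv["devices"],
                  lv["tags"]["ceph.osd_fsid"])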
Mar  1 05:15:48 np0005634532 systemd[1]: libpod-0d83abc5a4041d540ab31bcb2bee9013be3bed481684c0cad5e8ceb3e7c9485f.scope: Deactivated successfully.
Mar  1 05:15:48 np0005634532 podman[276828]: 2026-03-01 10:15:48.764412024 +0000 UTC m=+0.866265201 container died 0d83abc5a4041d540ab31bcb2bee9013be3bed481684c0cad5e8ceb3e7c9485f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_dirac, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Mar  1 05:15:48 np0005634532 nova_compute[257049]: 2026-03-01 10:15:48.775 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:15:48 np0005634532 systemd[1]: var-lib-containers-storage-overlay-d1c233b585d09140eec377c04f9b777f1b1506e8c99b13099c34fa1aa4e79d88-merged.mount: Deactivated successfully.
Mar  1 05:15:48 np0005634532 podman[276828]: 2026-03-01 10:15:48.802678479 +0000 UTC m=+0.904531676 container remove 0d83abc5a4041d540ab31bcb2bee9013be3bed481684c0cad5e8ceb3e7c9485f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_dirac, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True)
Mar  1 05:15:48 np0005634532 systemd[1]: libpod-conmon-0d83abc5a4041d540ab31bcb2bee9013be3bed481684c0cad5e8ceb3e7c9485f.scope: Deactivated successfully.
Mar  1 05:15:48 np0005634532 ceph-mon[75825]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #63. Immutable memtables: 0.
Mar  1 05:15:48 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-10:15:48.878086) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Mar  1 05:15:48 np0005634532 ceph-mon[75825]: rocksdb: [db/flush_job.cc:856] [default] [JOB 33] Flushing memtable with next log file: 63
Mar  1 05:15:48 np0005634532 ceph-mon[75825]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772360148878131, "job": 33, "event": "flush_started", "num_memtables": 1, "num_entries": 1086, "num_deletes": 501, "total_data_size": 1313749, "memory_usage": 1347040, "flush_reason": "Manual Compaction"}
Mar  1 05:15:48 np0005634532 ceph-mon[75825]: rocksdb: [db/flush_job.cc:885] [default] [JOB 33] Level-0 flush table #64: started
Mar  1 05:15:48 np0005634532 ceph-mon[75825]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772360148884601, "cf_name": "default", "job": 33, "event": "table_file_creation", "file_number": 64, "file_size": 1023763, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 28402, "largest_seqno": 29487, "table_properties": {"data_size": 1019253, "index_size": 1586, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1861, "raw_key_size": 14156, "raw_average_key_size": 19, "raw_value_size": 1008041, "raw_average_value_size": 1400, "num_data_blocks": 67, "num_entries": 720, "num_filter_entries": 720, "num_deletions": 501, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772360088, "oldest_key_time": 1772360088, "file_creation_time": 1772360148, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d85a3bc5-3dc5-432f-9fab-fa926ce32d3d", "db_session_id": "FJWJGIYC2V5ZEQGX709M", "orig_file_number": 64, "seqno_to_time_mapping": "N/A"}}
Mar  1 05:15:48 np0005634532 ceph-mon[75825]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 33] Flush lasted 6569 microseconds, and 3496 cpu microseconds.
Mar  1 05:15:48 np0005634532 ceph-mon[75825]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Mar  1 05:15:48 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-10:15:48.884653) [db/flush_job.cc:967] [default] [JOB 33] Level-0 flush table #64: 1023763 bytes OK
Mar  1 05:15:48 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-10:15:48.884673) [db/memtable_list.cc:519] [default] Level-0 commit table #64 started
Mar  1 05:15:48 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-10:15:48.886567) [db/memtable_list.cc:722] [default] Level-0 commit table #64: memtable #1 done
Mar  1 05:15:48 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-10:15:48.886586) EVENT_LOG_v1 {"time_micros": 1772360148886580, "job": 33, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Mar  1 05:15:48 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-10:15:48.886606) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Mar  1 05:15:48 np0005634532 ceph-mon[75825]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 33] Try to delete WAL files size 1307686, prev total WAL file size 1307686, number of live WAL files 2.
Mar  1 05:15:48 np0005634532 ceph-mon[75825]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000060.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Mar  1 05:15:48 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-10:15:48.887070) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032323539' seq:72057594037927935, type:22 .. '7061786F730032353131' seq:0, type:0; will stop at (end)
Mar  1 05:15:48 np0005634532 ceph-mon[75825]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 34] Compacting 1@0 + 1@6 files to L6, score -1.00
Mar  1 05:15:48 np0005634532 ceph-mon[75825]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 33 Base level 0, inputs: [64(999KB)], [62(16MB)]
Mar  1 05:15:48 np0005634532 ceph-mon[75825]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772360148887138, "job": 34, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [64], "files_L6": [62], "score": -1, "input_data_size": 18279200, "oldest_snapshot_seqno": -1}
Mar  1 05:15:48 np0005634532 ceph-mon[75825]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 34] Generated table #65: 5752 keys, 12428263 bytes, temperature: kUnknown
Mar  1 05:15:48 np0005634532 ceph-mon[75825]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772360148930437, "cf_name": "default", "job": 34, "event": "table_file_creation", "file_number": 65, "file_size": 12428263, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12391958, "index_size": 20835, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 14405, "raw_key_size": 148839, "raw_average_key_size": 25, "raw_value_size": 12290334, "raw_average_value_size": 2136, "num_data_blocks": 835, "num_entries": 5752, "num_filter_entries": 5752, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772358052, "oldest_key_time": 0, "file_creation_time": 1772360148, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d85a3bc5-3dc5-432f-9fab-fa926ce32d3d", "db_session_id": "FJWJGIYC2V5ZEQGX709M", "orig_file_number": 65, "seqno_to_time_mapping": "N/A"}}
Mar  1 05:15:48 np0005634532 ceph-mon[75825]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Mar  1 05:15:48 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-10:15:48.931062) [db/compaction/compaction_job.cc:1663] [default] [JOB 34] Compacted 1@0 + 1@6 files to L6 => 12428263 bytes
Mar  1 05:15:48 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-10:15:48.932300) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 417.7 rd, 284.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.0, 16.5 +0.0 blob) out(11.9 +0.0 blob), read-write-amplify(30.0) write-amplify(12.1) OK, records in: 6756, records dropped: 1004 output_compression: NoCompression
Mar  1 05:15:48 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-10:15:48.932316) EVENT_LOG_v1 {"time_micros": 1772360148932309, "job": 34, "event": "compaction_finished", "compaction_time_micros": 43757, "compaction_time_cpu_micros": 21124, "output_level": 6, "num_output_files": 1, "total_output_size": 12428263, "num_input_records": 6756, "num_output_records": 5752, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Mar  1 05:15:48 np0005634532 ceph-mon[75825]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000064.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Mar  1 05:15:48 np0005634532 ceph-mon[75825]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772360148932488, "job": 34, "event": "table_file_deletion", "file_number": 64}
Mar  1 05:15:48 np0005634532 ceph-mon[75825]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000062.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Mar  1 05:15:48 np0005634532 ceph-mon[75825]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772360148933686, "job": 34, "event": "table_file_deletion", "file_number": 62}
Mar  1 05:15:48 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-10:15:48.886965) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Mar  1 05:15:48 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-10:15:48.933795) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Mar  1 05:15:48 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-10:15:48.933801) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Mar  1 05:15:48 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-10:15:48.933803) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Mar  1 05:15:48 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-10:15:48.933805) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Mar  1 05:15:48 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-10:15:48.933807) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
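The amplification figures rocksdb printed for job 34 can be rechecked from the byte counts in its own EVENT_LOG_v1 records: flush #64 wrote 1,023,763 bytes, the compaction read 18,279,200 bytes in total, and output table #65 is 12,428,263 bytes. The formulas below are inferred from those values and reproduce the logged 12.1 and 30.0:

    # Byte counts copied from the rocksdb events above (ceph-mon store.db).
    l0_in = 1_023_763       # flushed L0 table #64
    total_in = 18_279_200   # compaction input_data_size (#64 plus L6 #62)
    out = 12_428_263        # compacted L6 table #65

    print(f"write-amplify      {out / l0_in:.1f}")               # 12.1
    print(f"read-write-amplify {(total_in + out) / l0_in:.1f}")  # 30.0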
Mar  1 05:15:48 np0005634532 nova_compute[257049]: 2026-03-01 10:15:48.969 257053 DEBUG oslo_concurrency.processutils [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.964s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Mar  1 05:15:49 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:15:48 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Mar  1 05:15:49 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:15:49 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Mar  1 05:15:49 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:15:49 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Mar  1 05:15:49 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:15:49 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
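The ganesha lines show the NFS cluster re-entering its 90-second grace window and immediately checking whether it can be lifted, since no clients hold reclaimable state (clid count(0)); the ret=-45 from rados_cluster_grace_enforcing is left as logged. A throwaway extractor for the :STATE :EVENT transitions, with the pattern inferred from these lines only:

    import re

    GRACE = re.compile(r'ganesha\.nfsd-\d+\[\w+\] (\w+) :STATE :EVENT :(.*)')

    line = ('01/03/2026 10:15:48 : epoch 69a4110b : compute-0 : '
            'ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT '
            ':NFS Server Now IN GRACE, duration 90')
    fn, msg = GRACE.search(line).groups()
    print(fn, "->", msg)  # nfs_start_grace -> NFS Server Now IN GRACE, ...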
Mar  1 05:15:49 np0005634532 nova_compute[257049]: 2026-03-01 10:15:49.124 257053 WARNING nova.virt.libvirt.driver [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Mar  1 05:15:49 np0005634532 nova_compute[257049]: 2026-03-01 10:15:49.125 257053 DEBUG nova.compute.resource_tracker [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4521MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Mar  1 05:15:49 np0005634532 nova_compute[257049]: 2026-03-01 10:15:49.125 257053 DEBUG oslo_concurrency.lockutils [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Mar  1 05:15:49 np0005634532 nova_compute[257049]: 2026-03-01 10:15:49.125 257053 DEBUG oslo_concurrency.lockutils [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Mar  1 05:15:49 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1000: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Mar  1 05:15:49 np0005634532 nova_compute[257049]: 2026-03-01 10:15:49.228 257053 DEBUG nova.compute.resource_tracker [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Mar  1 05:15:49 np0005634532 nova_compute[257049]: 2026-03-01 10:15:49.229 257053 DEBUG nova.compute.resource_tracker [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Mar  1 05:15:49 np0005634532 nova_compute[257049]: 2026-03-01 10:15:49.304 257053 DEBUG oslo_concurrency.processutils [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Mar  1 05:15:49 np0005634532 podman[276979]: 2026-03-01 10:15:49.355818113 +0000 UTC m=+0.040183234 container create 03b5a2d3c985b1c26dd446f226b5dc570d55d6b6212d62cd2f3aadb106b24078 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_fermat, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Mar  1 05:15:49 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:15:49 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:15:49 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:15:49.380 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:15:49 np0005634532 systemd[1]: Started libpod-conmon-03b5a2d3c985b1c26dd446f226b5dc570d55d6b6212d62cd2f3aadb106b24078.scope.
Mar  1 05:15:49 np0005634532 systemd[1]: Started libcrun container.
Mar  1 05:15:49 np0005634532 podman[276979]: 2026-03-01 10:15:49.33668054 +0000 UTC m=+0.021045711 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Mar  1 05:15:49 np0005634532 podman[276979]: 2026-03-01 10:15:49.442792041 +0000 UTC m=+0.127157182 container init 03b5a2d3c985b1c26dd446f226b5dc570d55d6b6212d62cd2f3aadb106b24078 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_fermat, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Mar  1 05:15:49 np0005634532 podman[276979]: 2026-03-01 10:15:49.447881047 +0000 UTC m=+0.132246168 container start 03b5a2d3c985b1c26dd446f226b5dc570d55d6b6212d62cd2f3aadb106b24078 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_fermat, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default)
Mar  1 05:15:49 np0005634532 podman[276979]: 2026-03-01 10:15:49.450454291 +0000 UTC m=+0.134819412 container attach 03b5a2d3c985b1c26dd446f226b5dc570d55d6b6212d62cd2f3aadb106b24078 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_fermat, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Mar  1 05:15:49 np0005634532 friendly_fermat[276996]: 167 167
Mar  1 05:15:49 np0005634532 systemd[1]: libpod-03b5a2d3c985b1c26dd446f226b5dc570d55d6b6212d62cd2f3aadb106b24078.scope: Deactivated successfully.
Mar  1 05:15:49 np0005634532 conmon[276996]: conmon 03b5a2d3c985b1c26dd4 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-03b5a2d3c985b1c26dd446f226b5dc570d55d6b6212d62cd2f3aadb106b24078.scope/container/memory.events
Mar  1 05:15:49 np0005634532 podman[276979]: 2026-03-01 10:15:49.454789628 +0000 UTC m=+0.139154749 container died 03b5a2d3c985b1c26dd446f226b5dc570d55d6b6212d62cd2f3aadb106b24078 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_fermat, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Mar  1 05:15:49 np0005634532 systemd[1]: var-lib-containers-storage-overlay-6316410884b51f27a833567b71aee240cb25a844b913c0c49937b0b6d2ec9dc4-merged.mount: Deactivated successfully.
Mar  1 05:15:49 np0005634532 podman[276979]: 2026-03-01 10:15:49.486979823 +0000 UTC m=+0.171344944 container remove 03b5a2d3c985b1c26dd446f226b5dc570d55d6b6212d62cd2f3aadb106b24078 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=friendly_fermat, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Mar  1 05:15:49 np0005634532 systemd[1]: libpod-conmon-03b5a2d3c985b1c26dd446f226b5dc570d55d6b6212d62cd2f3aadb106b24078.scope: Deactivated successfully.
Mar  1 05:15:49 np0005634532 podman[277039]: 2026-03-01 10:15:49.624564582 +0000 UTC m=+0.039617810 container create b3407d651016c948a1f86cd8dd50cc0ffff24f458dc10f92efd9febd7b942fb9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_haibt, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Mar  1 05:15:49 np0005634532 systemd[1]: Started libpod-conmon-b3407d651016c948a1f86cd8dd50cc0ffff24f458dc10f92efd9febd7b942fb9.scope.
Mar  1 05:15:49 np0005634532 systemd[1]: Started libcrun container.
Mar  1 05:15:49 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb7633dcaf4121c76ba99ab3355ba4c58737dc6a04035ac7b51650e42faa1e1e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Mar  1 05:15:49 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb7633dcaf4121c76ba99ab3355ba4c58737dc6a04035ac7b51650e42faa1e1e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Mar  1 05:15:49 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb7633dcaf4121c76ba99ab3355ba4c58737dc6a04035ac7b51650e42faa1e1e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Mar  1 05:15:49 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb7633dcaf4121c76ba99ab3355ba4c58737dc6a04035ac7b51650e42faa1e1e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Mar  1 05:15:49 np0005634532 podman[277039]: 2026-03-01 10:15:49.606745752 +0000 UTC m=+0.021799000 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Mar  1 05:15:49 np0005634532 podman[277039]: 2026-03-01 10:15:49.713639132 +0000 UTC m=+0.128692390 container init b3407d651016c948a1f86cd8dd50cc0ffff24f458dc10f92efd9febd7b942fb9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_haibt, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Mar  1 05:15:49 np0005634532 podman[277039]: 2026-03-01 10:15:49.718293867 +0000 UTC m=+0.133347115 container start b3407d651016c948a1f86cd8dd50cc0ffff24f458dc10f92efd9febd7b942fb9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_haibt, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Mar  1 05:15:49 np0005634532 podman[277039]: 2026-03-01 10:15:49.721544728 +0000 UTC m=+0.136597956 container attach b3407d651016c948a1f86cd8dd50cc0ffff24f458dc10f92efd9febd7b942fb9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_haibt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1)
Mar  1 05:15:49 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Mar  1 05:15:49 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2790994237' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Mar  1 05:15:49 np0005634532 nova_compute[257049]: 2026-03-01 10:15:49.758 257053 DEBUG oslo_concurrency.processutils [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Mar  1 05:15:49 np0005634532 nova_compute[257049]: 2026-03-01 10:15:49.765 257053 DEBUG nova.compute.provider_tree [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Inventory has not changed in ProviderTree for provider: 018d246d-1e01-4168-9128-598c5501111b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Mar  1 05:15:49 np0005634532 nova_compute[257049]: 2026-03-01 10:15:49.779 257053 DEBUG nova.scheduler.client.report [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Inventory has not changed for provider 018d246d-1e01-4168-9128-598c5501111b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Mar  1 05:15:49 np0005634532 nova_compute[257049]: 2026-03-01 10:15:49.781 257053 DEBUG nova.compute.resource_tracker [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Mar  1 05:15:49 np0005634532 nova_compute[257049]: 2026-03-01 10:15:49.781 257053 DEBUG oslo_concurrency.lockutils [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.656s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Mar  1 05:15:49 np0005634532 nova_compute[257049]: 2026-03-01 10:15:49.782 257053 DEBUG oslo_service.periodic_task [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Mar  1 05:15:49 np0005634532 nova_compute[257049]: 2026-03-01 10:15:49.782 257053 DEBUG nova.compute.manager [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
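The inventory nova reported to placement above fixes this node's schedulable capacity; placement derives capacity per resource class as (total - reserved) * allocation_ratio. Working that through with the logged values as a quick sanity check:

    # Inventory copied from the set_inventory_for_provider line above.
    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, capacity)  # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2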
Mar  1 05:15:50 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:15:50 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:15:50 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:15:50.114 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:15:50 np0005634532 lvm[277133]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Mar  1 05:15:50 np0005634532 lvm[277133]: VG ceph_vg0 finished
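The two lvm messages are udev event activation: once the last PV (/dev/loop3) comes online, ceph_vg0 is complete and its LVs are autoactivated, which is what let the earlier container enumerate ceph_lv0. A hedged way to requery that VG, assuming the lvm2 CLI is present (--reportformat json is standard in current lvm2):

    import json
    import subprocess

    out = subprocess.run(
        ["lvs", "--reportformat", "json",
         "-o", "lv_name,vg_name,lv_size,devices", "ceph_vg0"],
        check=True, capture_output=True, text=True,
    ).stdout
    # Expected: one row for ceph_lv0 backed by /dev/loop3.
    print(json.loads(out)["report"][0]["lv"])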
Mar  1 05:15:50 np0005634532 determined_haibt[277055]: {}
Mar  1 05:15:50 np0005634532 systemd[1]: libpod-b3407d651016c948a1f86cd8dd50cc0ffff24f458dc10f92efd9febd7b942fb9.scope: Deactivated successfully.
Mar  1 05:15:50 np0005634532 podman[277039]: 2026-03-01 10:15:50.355860228 +0000 UTC m=+0.770913456 container died b3407d651016c948a1f86cd8dd50cc0ffff24f458dc10f92efd9febd7b942fb9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_haibt, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Mar  1 05:15:50 np0005634532 systemd[1]: var-lib-containers-storage-overlay-eb7633dcaf4121c76ba99ab3355ba4c58737dc6a04035ac7b51650e42faa1e1e-merged.mount: Deactivated successfully.
Mar  1 05:15:50 np0005634532 podman[277039]: 2026-03-01 10:15:50.393976199 +0000 UTC m=+0.809029427 container remove b3407d651016c948a1f86cd8dd50cc0ffff24f458dc10f92efd9febd7b942fb9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_haibt, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Mar  1 05:15:50 np0005634532 systemd[1]: libpod-conmon-b3407d651016c948a1f86cd8dd50cc0ffff24f458dc10f92efd9febd7b942fb9.scope: Deactivated successfully.
Mar  1 05:15:50 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Mar  1 05:15:50 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 05:15:50 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Mar  1 05:15:50 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 05:15:50 np0005634532 podman[277150]: 2026-03-01 10:15:50.491837887 +0000 UTC m=+0.052118669 container health_status 1df4dec302d8b40bce93714986cfc892519e8a31436399020d0034ae17ac64d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a5baae9e21dffd42c66f546b0b62f95c6af9f409d05e01e7483de42d5dd2a382, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260223, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '08b9467e1b7e95537191fb2fa6825d176e65d7d6be048e9a0925db064969bbde-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a5baae9e21dffd42c66f546b0b62f95c6af9f409d05e01e7483de42d5dd2a382', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Mar  1 05:15:51 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1001: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Mar  1 05:15:51 np0005634532 nova_compute[257049]: 2026-03-01 10:15:51.253 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:15:51 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:15:51 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000024s ======
Mar  1 05:15:51 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:15:51.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Mar  1 05:15:51 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 05:15:51 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 05:15:52 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:15:52 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:15:52 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000024s ======
Mar  1 05:15:52 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:15:52.116 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Mar  1 05:15:52 np0005634532 nova_compute[257049]: 2026-03-01 10:15:52.791 257053 DEBUG oslo_service.periodic_task [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Mar  1 05:15:52 np0005634532 nova_compute[257049]: 2026-03-01 10:15:52.791 257053 DEBUG nova.compute.manager [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Mar  1 05:15:52 np0005634532 nova_compute[257049]: 2026-03-01 10:15:52.791 257053 DEBUG nova.compute.manager [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Mar  1 05:15:52 np0005634532 nova_compute[257049]: 2026-03-01 10:15:52.809 257053 DEBUG nova.compute.manager [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Mar  1 05:15:52 np0005634532 nova_compute[257049]: 2026-03-01 10:15:52.810 257053 DEBUG oslo_service.periodic_task [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Mar  1 05:15:53 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1002: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Mar  1 05:15:53 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:15:53 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:15:53 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:15:53.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:15:53 np0005634532 nova_compute[257049]: 2026-03-01 10:15:53.779 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:15:54 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:15:53 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Mar  1 05:15:54 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:15:53 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Mar  1 05:15:54 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:15:53 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Mar  1 05:15:54 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:15:54 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Mar  1 05:15:54 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:15:54 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:15:54 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:15:54.118 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:15:55 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1003: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Mar  1 05:15:55 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:15:55 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:15:55 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:15:55.391 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:15:56 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:15:56 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:15:56 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:15:56.120 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:15:56 np0005634532 nova_compute[257049]: 2026-03-01 10:15:56.320 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:15:57 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:15:57 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: ::ffff:192.168.122.100 - - [01/Mar/2026:10:15:57] "GET /metrics HTTP/1.1" 200 48454 "" "Prometheus/2.51.0"
Mar  1 05:15:57 np0005634532 ceph-mgr[76134]: [prometheus INFO cherrypy.access.140604152982112] ::ffff:192.168.122.100 - - [01/Mar/2026:10:15:57] "GET /metrics HTTP/1.1" 200 48454 "" "Prometheus/2.51.0"
Mar  1 05:15:57 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1004: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Mar  1 05:15:57 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:15:57.271Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Mar  1 05:15:57 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:15:57.271Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Mar  1 05:15:57 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:15:57.271Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Mar  1 05:15:57 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:15:57 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 05:15:57 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:15:57.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 05:15:58 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:15:58 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:15:58 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:15:58.122 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:15:58 np0005634532 nova_compute[257049]: 2026-03-01 10:15:58.783 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:15:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:15:58 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Mar  1 05:15:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:15:58 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Mar  1 05:15:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:15:58 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Mar  1 05:15:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:15:59 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Mar  1 05:15:59 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1005: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Mar  1 05:15:59 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:15:59 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:15:59 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:15:59.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:16:00 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:16:00 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:16:00 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:16:00.124 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:16:01 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1006: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Mar  1 05:16:01 np0005634532 nova_compute[257049]: 2026-03-01 10:16:01.322 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:16:01 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:16:01 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:16:01 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:16:01.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:16:02 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:16:02 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:16:02 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 05:16:02 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:16:02.126 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 05:16:02 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Mar  1 05:16:02 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Mar  1 05:16:03 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1007: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Mar  1 05:16:03 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:16:03 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:16:03 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:16:03.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:16:03 np0005634532 nova_compute[257049]: 2026-03-01 10:16:03.786 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:16:04 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:16:03 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Mar  1 05:16:04 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:16:03 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Mar  1 05:16:04 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:16:03 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Mar  1 05:16:04 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:16:04 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Mar  1 05:16:04 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:16:04 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 05:16:04 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:16:04.128 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 05:16:05 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1008: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Mar  1 05:16:05 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:16:05 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 05:16:05 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:16:05.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 05:16:06 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:16:06 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:16:06 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:16:06.131 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:16:06 np0005634532 nova_compute[257049]: 2026-03-01 10:16:06.324 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:16:07 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:16:07 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: ::ffff:192.168.122.100 - - [01/Mar/2026:10:16:07] "GET /metrics HTTP/1.1" 200 48457 "" "Prometheus/2.51.0"
Mar  1 05:16:07 np0005634532 ceph-mgr[76134]: [prometheus INFO cherrypy.access.140604152982112] ::ffff:192.168.122.100 - - [01/Mar/2026:10:16:07] "GET /metrics HTTP/1.1" 200 48457 "" "Prometheus/2.51.0"
Mar  1 05:16:07 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1009: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Mar  1 05:16:07 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:16:07.272Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Mar  1 05:16:07 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:16:07.272Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Mar  1 05:16:07 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:16:07 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:16:07 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:16:07.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:16:08 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:16:08 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 05:16:08 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:16:08.133 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 05:16:08 np0005634532 nova_compute[257049]: 2026-03-01 10:16:08.790 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:16:09 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:16:08 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Mar  1 05:16:09 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:16:08 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Mar  1 05:16:09 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:16:08 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Mar  1 05:16:09 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:16:09 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Mar  1 05:16:09 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1010: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Mar  1 05:16:09 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:16:09 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 05:16:09 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:16:09.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 05:16:10 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:16:10 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 05:16:10 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:16:10.135 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 05:16:11 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1011: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Mar  1 05:16:11 np0005634532 nova_compute[257049]: 2026-03-01 10:16:11.326 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:16:11 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:16:11 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:16:11 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:16:11.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:16:12 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:16:12 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:16:12 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000024s ======
Mar  1 05:16:12 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:16:12.137 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Mar  1 05:16:13 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1012: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Mar  1 05:16:13 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:16:13 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:16:13 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:16:13.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:16:13 np0005634532 nova_compute[257049]: 2026-03-01 10:16:13.794 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:16:14 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:16:14 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Mar  1 05:16:14 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:16:14 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Mar  1 05:16:14 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:16:14 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Mar  1 05:16:14 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:16:14 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Mar  1 05:16:14 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:16:14 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:16:14 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:16:14.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:16:15 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1013: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Mar  1 05:16:15 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:16:15 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:16:15 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:16:15.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:16:16 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:16:16 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:16:16 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:16:16.142 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:16:16 np0005634532 nova_compute[257049]: 2026-03-01 10:16:16.327 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:16:17 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:16:17 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: ::ffff:192.168.122.100 - - [01/Mar/2026:10:16:17] "GET /metrics HTTP/1.1" 200 48457 "" "Prometheus/2.51.0"
Mar  1 05:16:17 np0005634532 ceph-mgr[76134]: [prometheus INFO cherrypy.access.140604152982112] ::ffff:192.168.122.100 - - [01/Mar/2026:10:16:17] "GET /metrics HTTP/1.1" 200 48457 "" "Prometheus/2.51.0"
Mar  1 05:16:17 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1014: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Mar  1 05:16:17 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:16:17.273Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Mar  1 05:16:17 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:16:17 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:16:17 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:16:17.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:16:17 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Mar  1 05:16:17 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Mar  1 05:16:17 np0005634532 ceph-mgr[76134]: [balancer INFO root] Optimize plan auto_2026-03-01_10:16:17
Mar  1 05:16:17 np0005634532 ceph-mgr[76134]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Mar  1 05:16:17 np0005634532 ceph-mgr[76134]: [balancer INFO root] do_upmap
Mar  1 05:16:17 np0005634532 ceph-mgr[76134]: [balancer INFO root] pools ['backups', 'cephfs.cephfs.meta', 'default.rgw.control', '.rgw.root', '.nfs', 'volumes', 'vms', 'default.rgw.meta', '.mgr', 'images', 'cephfs.cephfs.data', 'default.rgw.log']
Mar  1 05:16:17 np0005634532 ceph-mgr[76134]: [balancer INFO root] prepared 0/10 upmap changes
Mar  1 05:16:17 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Mar  1 05:16:17 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Mar  1 05:16:17 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Mar  1 05:16:17 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Mar  1 05:16:17 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Mar  1 05:16:17 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Mar  1 05:16:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] _maybe_adjust
Mar  1 05:16:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:16:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Mar  1 05:16:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:16:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Mar  1 05:16:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:16:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Mar  1 05:16:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:16:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Mar  1 05:16:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:16:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Mar  1 05:16:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:16:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Mar  1 05:16:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:16:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Mar  1 05:16:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:16:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Mar  1 05:16:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:16:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Mar  1 05:16:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:16:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Mar  1 05:16:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:16:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Mar  1 05:16:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:16:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Mar  1 05:16:18 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:16:18 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 05:16:18 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:16:18.144 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 05:16:18 np0005634532 podman[277251]: 2026-03-01 10:16:18.389271625 +0000 UTC m=+0.081000712 container health_status eba5e7c5bf1cb0694498c3b5d0f9b901dec36f2482ec40aef98fed1e15d9f18d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:bf1f44b6bfa655654f556ae3adca2582892e72550d916a5b0a8dbacb0e210bbe, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '08b9467e1b7e95537191fb2fa6825d176e65d7d6be048e9a0925db064969bbde-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:bf1f44b6bfa655654f556ae3adca2582892e72550d916a5b0a8dbacb0e210bbe', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image)
Mar  1 05:16:18 np0005634532 nova_compute[257049]: 2026-03-01 10:16:18.797 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:16:19 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:16:18 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Mar  1 05:16:19 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:16:18 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Mar  1 05:16:19 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:16:18 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Mar  1 05:16:19 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:16:19 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Mar  1 05:16:19 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1015: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Mar  1 05:16:19 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:16:19 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:16:19 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:16:19.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:16:19 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Mar  1 05:16:19 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: vms, start_after=
Mar  1 05:16:19 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Mar  1 05:16:19 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: vms, start_after=
Mar  1 05:16:19 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: volumes, start_after=
Mar  1 05:16:19 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: backups, start_after=
Mar  1 05:16:19 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: volumes, start_after=
Mar  1 05:16:19 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: images, start_after=
Mar  1 05:16:19 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: backups, start_after=
Mar  1 05:16:19 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: images, start_after=
Mar  1 05:16:20 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:16:20 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:16:20 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:16:20.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:16:21 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1016: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Mar  1 05:16:21 np0005634532 nova_compute[257049]: 2026-03-01 10:16:21.329 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:16:21 np0005634532 podman[277304]: 2026-03-01 10:16:21.341810765 +0000 UTC m=+0.037501747 container health_status 1df4dec302d8b40bce93714986cfc892519e8a31436399020d0034ae17ac64d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a5baae9e21dffd42c66f546b0b62f95c6af9f409d05e01e7483de42d5dd2a382, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '08b9467e1b7e95537191fb2fa6825d176e65d7d6be048e9a0925db064969bbde-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a5baae9e21dffd42c66f546b0b62f95c6af9f409d05e01e7483de42d5dd2a382', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.build-date=20260223, tcib_managed=true, container_name=ovn_metadata_agent)
Mar  1 05:16:21 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:16:21 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000024s ======
Mar  1 05:16:21 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:16:21.441 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Mar  1 05:16:22 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:16:22 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:16:22 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 05:16:22 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:16:22.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 05:16:23 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1017: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Mar  1 05:16:23 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:16:23 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 05:16:23 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:16:23.444 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 05:16:23 np0005634532 nova_compute[257049]: 2026-03-01 10:16:23.801 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:16:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:16:23.889 167541 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Mar  1 05:16:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:16:23.890 167541 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Mar  1 05:16:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:16:23.890 167541 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Mar  1 05:16:24 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:16:23 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Mar  1 05:16:24 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:16:23 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Mar  1 05:16:24 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:16:23 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Mar  1 05:16:24 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:16:24 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Mar  1 05:16:24 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:16:24 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:16:24 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:16:24.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:16:25 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1018: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Mar  1 05:16:25 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:16:25 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000024s ======
Mar  1 05:16:25 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:16:25.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Mar  1 05:16:26 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:16:26 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 05:16:26 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:16:26.152 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 05:16:26 np0005634532 nova_compute[257049]: 2026-03-01 10:16:26.332 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:16:27 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:16:27 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: ::ffff:192.168.122.100 - - [01/Mar/2026:10:16:27] "GET /metrics HTTP/1.1" 200 48453 "" "Prometheus/2.51.0"
Mar  1 05:16:27 np0005634532 ceph-mgr[76134]: [prometheus INFO cherrypy.access.140604152982112] ::ffff:192.168.122.100 - - [01/Mar/2026:10:16:27] "GET /metrics HTTP/1.1" 200 48453 "" "Prometheus/2.51.0"
Mar  1 05:16:27 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1019: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Mar  1 05:16:27 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:16:27.274Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Mar  1 05:16:27 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:16:27 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:16:27 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:16:27.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:16:27 np0005634532 systemd-logind[832]: New session 56 of user zuul.
Mar  1 05:16:27 np0005634532 systemd[1]: Started Session 56 of User zuul.
Mar  1 05:16:28 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:16:28 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 05:16:28 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:16:28.155 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 05:16:28 np0005634532 nova_compute[257049]: 2026-03-01 10:16:28.806 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:16:29 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:16:28 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Mar  1 05:16:29 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:16:28 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Mar  1 05:16:29 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:16:28 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Mar  1 05:16:29 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:16:29 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Mar  1 05:16:29 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1020: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Mar  1 05:16:29 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:16:29 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:16:29 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:16:29.457 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:16:29 np0005634532 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.16194 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Mar  1 05:16:30 np0005634532 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.25675 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Mar  1 05:16:30 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:16:30 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:16:30 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:16:30.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:16:30 np0005634532 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.25589 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Mar  1 05:16:30 np0005634532 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.16203 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Mar  1 05:16:30 np0005634532 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.25681 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Mar  1 05:16:30 np0005634532 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.25595 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Mar  1 05:16:31 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "status"} v 0)
Mar  1 05:16:31 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3817333647' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Mar  1 05:16:31 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1021: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Mar  1 05:16:31 np0005634532 nova_compute[257049]: 2026-03-01 10:16:31.334 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:16:31 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:16:31 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:16:31 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:16:31.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:16:32 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:16:32 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:16:32 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:16:32 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:16:32.159 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:16:32 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Mar  1 05:16:32 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Mar  1 05:16:33 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1022: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Mar  1 05:16:33 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:16:33 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:16:33 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:16:33.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:16:33 np0005634532 nova_compute[257049]: 2026-03-01 10:16:33.809 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:16:34 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:16:33 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Mar  1 05:16:34 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:16:33 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Mar  1 05:16:34 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:16:33 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Mar  1 05:16:34 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:16:34 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Mar  1 05:16:34 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:16:34 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:16:34 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:16:34.161 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:16:35 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1023: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Mar  1 05:16:35 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:16:35 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:16:35 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:16:35.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:16:36 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:16:36 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:16:36 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:16:36.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:16:36 np0005634532 ovs-vsctl[277711]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Mar  1 05:16:36 np0005634532 nova_compute[257049]: 2026-03-01 10:16:36.335 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:16:36 np0005634532 virtqemud[256058]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Mar  1 05:16:37 np0005634532 virtqemud[256058]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Mar  1 05:16:37 np0005634532 virtqemud[256058]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Mar  1 05:16:37 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: ::ffff:192.168.122.100 - - [01/Mar/2026:10:16:37] "GET /metrics HTTP/1.1" 200 48456 "" "Prometheus/2.51.0"
Mar  1 05:16:37 np0005634532 ceph-mgr[76134]: [prometheus INFO cherrypy.access.140604152982112] ::ffff:192.168.122.100 - - [01/Mar/2026:10:16:37] "GET /metrics HTTP/1.1" 200 48456 "" "Prometheus/2.51.0"
Mar  1 05:16:37 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:16:37 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1024: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Mar  1 05:16:37 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:16:37.276Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Mar  1 05:16:37 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:16:37 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.002000049s ======
Mar  1 05:16:37 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:16:37.471 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000049s
Mar  1 05:16:37 np0005634532 ceph-mds[97825]: mds.cephfs.compute-0.qvzeqa asok_command: cache status {prefix=cache status} (starting...)
Mar  1 05:16:37 np0005634532 ceph-mds[97825]: mds.cephfs.compute-0.qvzeqa Can't run that command on an inactive MDS!
Mar  1 05:16:37 np0005634532 lvm[278061]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Mar  1 05:16:37 np0005634532 lvm[278061]: VG ceph_vg0 finished
Mar  1 05:16:37 np0005634532 ceph-mds[97825]: mds.cephfs.compute-0.qvzeqa asok_command: client ls {prefix=client ls} (starting...)
Mar  1 05:16:37 np0005634532 ceph-mds[97825]: mds.cephfs.compute-0.qvzeqa Can't run that command on an inactive MDS!
Mar  1 05:16:38 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:16:38 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:16:38 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:16:38.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:16:38 np0005634532 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.16221 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Mar  1 05:16:38 np0005634532 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.25696 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Mar  1 05:16:38 np0005634532 ceph-mds[97825]: mds.cephfs.compute-0.qvzeqa asok_command: damage ls {prefix=damage ls} (starting...)
Mar  1 05:16:38 np0005634532 ceph-mds[97825]: mds.cephfs.compute-0.qvzeqa Can't run that command on an inactive MDS!
Mar  1 05:16:38 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "report"} v 0)
Mar  1 05:16:38 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3661180824' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Mar  1 05:16:38 np0005634532 ceph-mds[97825]: mds.cephfs.compute-0.qvzeqa asok_command: dump loads {prefix=dump loads} (starting...)
Mar  1 05:16:38 np0005634532 ceph-mds[97825]: mds.cephfs.compute-0.qvzeqa Can't run that command on an inactive MDS!
Mar  1 05:16:38 np0005634532 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.25607 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Mar  1 05:16:38 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "report"} v 0)
Mar  1 05:16:38 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='client.? ' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Mar  1 05:16:38 np0005634532 ceph-mds[97825]: mds.cephfs.compute-0.qvzeqa asok_command: dump tree {prefix=dump tree,root=/} (starting...)
Mar  1 05:16:38 np0005634532 ceph-mds[97825]: mds.cephfs.compute-0.qvzeqa Can't run that command on an inactive MDS!
Mar  1 05:16:38 np0005634532 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.16233 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Mar  1 05:16:38 np0005634532 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.25714 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Mar  1 05:16:38 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Mar  1 05:16:38 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/926501535' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Mar  1 05:16:38 np0005634532 ceph-mds[97825]: mds.cephfs.compute-0.qvzeqa asok_command: dump_blocked_ops {prefix=dump_blocked_ops} (starting...)
Mar  1 05:16:38 np0005634532 ceph-mds[97825]: mds.cephfs.compute-0.qvzeqa Can't run that command on an inactive MDS!
Mar  1 05:16:38 np0005634532 nova_compute[257049]: 2026-03-01 10:16:38.812 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:16:38 np0005634532 ceph-mds[97825]: mds.cephfs.compute-0.qvzeqa asok_command: dump_historic_ops {prefix=dump_historic_ops} (starting...)
Mar  1 05:16:38 np0005634532 ceph-mds[97825]: mds.cephfs.compute-0.qvzeqa Can't run that command on an inactive MDS!
Mar  1 05:16:38 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "report"} v 0)
Mar  1 05:16:38 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='client.? ' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Mar  1 05:16:38 np0005634532 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.25628 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Mar  1 05:16:39 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:16:38 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Mar  1 05:16:39 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:16:38 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Mar  1 05:16:39 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:16:38 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Mar  1 05:16:39 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:16:39 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Mar  1 05:16:39 np0005634532 ceph-mds[97825]: mds.cephfs.compute-0.qvzeqa asok_command: dump_historic_ops_by_duration {prefix=dump_historic_ops_by_duration} (starting...)
Mar  1 05:16:39 np0005634532 ceph-mds[97825]: mds.cephfs.compute-0.qvzeqa Can't run that command on an inactive MDS!
Mar  1 05:16:39 np0005634532 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.25732 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Mar  1 05:16:39 np0005634532 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.25726 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Mar  1 05:16:39 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config log"} v 0)
Mar  1 05:16:39 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1478693284' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Mar  1 05:16:39 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1025: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Mar  1 05:16:39 np0005634532 ceph-mds[97825]: mds.cephfs.compute-0.qvzeqa asok_command: dump_ops_in_flight {prefix=dump_ops_in_flight} (starting...)
Mar  1 05:16:39 np0005634532 ceph-mds[97825]: mds.cephfs.compute-0.qvzeqa Can't run that command on an inactive MDS!
Mar  1 05:16:39 np0005634532 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.25646 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Mar  1 05:16:39 np0005634532 ceph-mds[97825]: mds.cephfs.compute-0.qvzeqa asok_command: get subtrees {prefix=get subtrees} (starting...)
Mar  1 05:16:39 np0005634532 ceph-mds[97825]: mds.cephfs.compute-0.qvzeqa Can't run that command on an inactive MDS!
Mar  1 05:16:39 np0005634532 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.25750 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Mar  1 05:16:39 np0005634532 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.16266 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Mar  1 05:16:39 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:16:39 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:16:39 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:16:39.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:16:39 np0005634532 ceph-mds[97825]: mds.cephfs.compute-0.qvzeqa asok_command: ops {prefix=ops} (starting...)
Mar  1 05:16:39 np0005634532 ceph-mds[97825]: mds.cephfs.compute-0.qvzeqa Can't run that command on an inactive MDS!
Mar  1 05:16:39 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config-key dump"} v 0)
Mar  1 05:16:39 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2797313202' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Mar  1 05:16:39 np0005634532 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.25655 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Mar  1 05:16:39 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "log last", "channel": "cephadm"} v 0)
Mar  1 05:16:39 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3876243032' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Mar  1 05:16:39 np0005634532 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.16284 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Mar  1 05:16:40 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:16:40 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:16:40 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:16:40.167 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:16:40 np0005634532 ceph-mds[97825]: mds.cephfs.compute-0.qvzeqa asok_command: session ls {prefix=session ls} (starting...)
Mar  1 05:16:40 np0005634532 ceph-mds[97825]: mds.cephfs.compute-0.qvzeqa Can't run that command on an inactive MDS!
Mar  1 05:16:40 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr dump"} v 0)
Mar  1 05:16:40 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3182808290' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Mar  1 05:16:40 np0005634532 ceph-mds[97825]: mds.cephfs.compute-0.qvzeqa asok_command: status {prefix=status} (starting...)
Mar  1 05:16:40 np0005634532 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.16296 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Mar  1 05:16:40 np0005634532 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.25783 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Mar  1 05:16:40 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata"} v 0)
Mar  1 05:16:40 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1316216741' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Mar  1 05:16:40 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "features"} v 0)
Mar  1 05:16:40 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/930858400' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Mar  1 05:16:41 np0005634532 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.25801 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Mar  1 05:16:41 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1026: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Mar  1 05:16:41 np0005634532 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.25694 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Mar  1 05:16:41 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module ls"} v 0)
Mar  1 05:16:41 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1357952941' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Mar  1 05:16:41 np0005634532 nova_compute[257049]: 2026-03-01 10:16:41.337 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:16:41 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0)
Mar  1 05:16:41 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2963975377' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Mar  1 05:16:41 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "features"} v 0)
Mar  1 05:16:41 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='client.? ' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Mar  1 05:16:41 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:16:41 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:16:41 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:16:41.477 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:16:41 np0005634532 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.25709 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Mar  1 05:16:41 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services"} v 0)
Mar  1 05:16:41 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3743712702' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Mar  1 05:16:41 np0005634532 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.16341 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Mar  1 05:16:41 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: 2026-03-01T10:16:41.756+0000 7fe1142d4640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Mar  1 05:16:41 np0005634532 ceph-mgr[76134]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Mar  1 05:16:42 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:16:42 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:16:42 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:16:42 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:16:42.169 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:16:42 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr stat"} v 0)
Mar  1 05:16:42 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4015699920' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Mar  1 05:16:42 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "features"} v 0)
Mar  1 05:16:42 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='client.? ' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Mar  1 05:16:42 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} v 0)
Mar  1 05:16:42 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/338732522' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Mar  1 05:16:42 np0005634532 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.25840 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Mar  1 05:16:42 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: 2026-03-01T10:16:42.323+0000 7fe1142d4640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Mar  1 05:16:42 np0005634532 ceph-mgr[76134]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Mar  1 05:16:42 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr versions"} v 0)
Mar  1 05:16:42 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2862411483' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Mar  1 05:16:42 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} v 0)
Mar  1 05:16:42 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/50099810' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Mar  1 05:16:43 np0005634532 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.16377 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Mar  1 05:16:43 np0005634532 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.25751 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Mar  1 05:16:43 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: 2026-03-01T10:16:43.073+0000 7fe1142d4640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Mar  1 05:16:43 np0005634532 ceph-mgr[76134]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Mar  1 05:16:43 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr dump"} v 0)
Mar  1 05:16:43 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2348799250' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Mar  1 05:16:43 np0005634532 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.25876 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Mar  1 05:16:43 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1027: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Mar  1 05:16:43 np0005634532 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.16392 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Mar  1 05:16:43 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:16:43 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000024s ======
Mar  1 05:16:43 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:16:43.480 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Mar  1 05:16:43 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata"} v 0)
Mar  1 05:16:43 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1590242849' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Mar  1 05:16:43 np0005634532 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.25775 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Mar  1 05:16:43 np0005634532 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.25891 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Mar  1 05:16:43 np0005634532 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.16404 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Mar  1 05:16:43 np0005634532 nova_compute[257049]: 2026-03-01 10:16:43.814 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 100 pg[9.12( v 42'1010 (0'0,42'1010] local-lis/les=98/99 n=4 ec=52/35 lis/c=98/52 les/c/f=99/53/0 sis=100 pruub=15.115015984s) [1] r=-1 lpr=100 pi=[52,100)/1 crt=42'1010 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 242.738540649s@ mbc={}] exit Reset 0.000108 1 0.000168
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 100 pg[9.12( v 42'1010 (0'0,42'1010] local-lis/les=98/99 n=4 ec=52/35 lis/c=98/52 les/c/f=99/53/0 sis=100 pruub=15.115015984s) [1] r=-1 lpr=100 pi=[52,100)/1 crt=42'1010 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 242.738540649s@ mbc={}] enter Started
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 100 pg[9.12( v 42'1010 (0'0,42'1010] local-lis/les=98/99 n=4 ec=52/35 lis/c=98/52 les/c/f=99/53/0 sis=100 pruub=15.115015984s) [1] r=-1 lpr=100 pi=[52,100)/1 crt=42'1010 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 242.738540649s@ mbc={}] enter Start
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 100 pg[9.12( v 42'1010 (0'0,42'1010] local-lis/les=98/99 n=4 ec=52/35 lis/c=98/52 les/c/f=99/53/0 sis=100 pruub=15.115015984s) [1] r=-1 lpr=100 pi=[52,100)/1 crt=42'1010 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 242.738540649s@ mbc={}] state<Start>: transitioning to Stray
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 100 pg[9.12( v 42'1010 (0'0,42'1010] local-lis/les=98/99 n=4 ec=52/35 lis/c=98/52 les/c/f=99/53/0 sis=100 pruub=15.115015984s) [1] r=-1 lpr=100 pi=[52,100)/1 crt=42'1010 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 242.738540649s@ mbc={}] exit Start 0.000008 0 0.000000
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 100 pg[9.12( v 42'1010 (0'0,42'1010] local-lis/les=98/99 n=4 ec=52/35 lis/c=98/52 les/c/f=99/53/0 sis=100 pruub=15.115015984s) [1] r=-1 lpr=100 pi=[52,100)/1 crt=42'1010 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 242.738540649s@ mbc={}] enter Started/Stray
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 100 handle_osd_map epochs [100,100], i have 100, src has [1,100]
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 79953920 unmapped: 4939776 heap: 84893696 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 100 handle_osd_map epochs [100,101], i have 100, src has [1,101]
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 101 pg[9.12( v 42'1010 (0'0,42'1010] local-lis/les=98/99 n=4 ec=52/35 lis/c=98/52 les/c/f=99/53/0 sis=100) [1] r=-1 lpr=100 pi=[52,100)/1 crt=42'1010 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/Stray 1.033631 7 0.000116
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 101 pg[9.12( v 42'1010 (0'0,42'1010] local-lis/les=98/99 n=4 ec=52/35 lis/c=98/52 les/c/f=99/53/0 sis=100) [1] r=-1 lpr=100 pi=[52,100)/1 crt=42'1010 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 101 pg[9.12( v 42'1010 (0'0,42'1010] local-lis/les=98/99 n=4 ec=52/35 lis/c=98/52 les/c/f=99/53/0 sis=100) [1] r=-1 lpr=100 pi=[52,100)/1 crt=42'1010 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/WaitDeleteReserved
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 101 pg[9.12( v 42'1010 (0'0,42'1010] local-lis/les=98/99 n=4 ec=52/35 lis/c=98/52 les/c/f=99/53/0 sis=100) [1] r=-1 lpr=100 pi=[52,100)/1 crt=42'1010 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/WaitDeleteReserved 0.000112 1 0.000071
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 101 pg[9.12( v 42'1010 (0'0,42'1010] local-lis/les=98/99 n=4 ec=52/35 lis/c=98/52 les/c/f=99/53/0 sis=100) [1] r=-1 lpr=100 pi=[52,100)/1 crt=42'1010 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] enter Started/ToDelete/Deleting
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 101 pg[9.12( v 42'1010 (0'0,42'1010] lb MIN local-lis/les=98/99 n=4 ec=52/35 lis/c=98/52 les/c/f=99/53/0 sis=100) [1] r=-1 lpr=100 DELETING pi=[52,100)/1 crt=42'1010 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete/Deleting 0.034818 2 0.000339
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 101 pg[9.12( v 42'1010 (0'0,42'1010] lb MIN local-lis/les=98/99 n=4 ec=52/35 lis/c=98/52 les/c/f=99/53/0 sis=100) [1] r=-1 lpr=100 pi=[52,100)/1 crt=42'1010 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started/ToDelete 0.034974 0 0.000000
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 101 pg[9.12( v 42'1010 (0'0,42'1010] lb MIN local-lis/les=98/99 n=4 ec=52/35 lis/c=98/52 les/c/f=99/53/0 sis=100) [1] r=-1 lpr=100 pi=[52,100)/1 crt=42'1010 lcod 0'0 mlcod 0'0 unknown NOTIFY mbc={}] exit Started 1.068666 0 0.000000
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 101 heartbeat osd_stat(store_statfs(0x4fcae9000/0x0/0x4ffc00000, data 0xaa092/0x131000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 81043456 unmapped: 4898816 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 81043456 unmapped: 4898816 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 812381 data_alloc: 218103808 data_used: 126976
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 81043456 unmapped: 4898816 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 81051648 unmapped: 4890624 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 81051648 unmapped: 4890624 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 101 heartbeat osd_stat(store_statfs(0x4fcae9000/0x0/0x4ffc00000, data 0xaa092/0x131000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 81059840 unmapped: 4882432 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 81068032 unmapped: 4874240 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 812381 data_alloc: 218103808 data_used: 126976
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 101 heartbeat osd_stat(store_statfs(0x4fcae9000/0x0/0x4ffc00000, data 0xaa092/0x131000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 101 handle_osd_map epochs [102,102], i have 101, src has [1,102]
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 101 handle_osd_map epochs [102,102], i have 102, src has [1,102]
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.404260635s of 10.463026047s, submitted: 17
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 81100800 unmapped: 4841472 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 81100800 unmapped: 4841472 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 102 handle_osd_map epochs [102,103], i have 102, src has [1,103]
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 103 heartbeat osd_stat(store_statfs(0x4fcae4000/0x0/0x4ffc00000, data 0xae26a/0x137000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 81108992 unmapped: 4833280 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 103 heartbeat osd_stat(store_statfs(0x4fcae4000/0x0/0x4ffc00000, data 0xae26a/0x137000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 103 handle_osd_map epochs [103,104], i have 103, src has [1,104]
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 81117184 unmapped: 4825088 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 104 handle_osd_map epochs [104,105], i have 104, src has [1,105]
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 81125376 unmapped: 4816896 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 105 handle_osd_map epochs [105,106], i have 105, src has [1,106]
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 825390 data_alloc: 218103808 data_used: 131072
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 81133568 unmapped: 4808704 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 81133568 unmapped: 4808704 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 81133568 unmapped: 4808704 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 106 heartbeat osd_stat(store_statfs(0x4fcada000/0x0/0x4ffc00000, data 0xb41b0/0x140000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 106 handle_osd_map epochs [107,107], i have 106, src has [1,107]
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 106 handle_osd_map epochs [107,107], i have 107, src has [1,107]
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 81149952 unmapped: 4792320 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 81158144 unmapped: 4784128 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 107 handle_osd_map epochs [108,109], i have 107, src has [1,109]
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 833928 data_alloc: 218103808 data_used: 135168
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 81149952 unmapped: 4792320 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 109 handle_osd_map epochs [109,110], i have 109, src has [1,110]
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.472195625s of 10.552146912s, submitted: 24
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 110 heartbeat osd_stat(store_statfs(0x4fcace000/0x0/0x4ffc00000, data 0xbc1cd/0x14c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 81158144 unmapped: 4784128 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 110 handle_osd_map epochs [110,111], i have 110, src has [1,111]
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 81166336 unmapped: 4775936 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 111 handle_osd_map epochs [112,112], i have 111, src has [1,112]
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 81174528 unmapped: 4767744 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 112 heartbeat osd_stat(store_statfs(0x4fcaca000/0x0/0x4ffc00000, data 0xbe2d6/0x14f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 81190912 unmapped: 4751360 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 844998 data_alloc: 218103808 data_used: 135168
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 81199104 unmapped: 4743168 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 81199104 unmapped: 4743168 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 112 heartbeat osd_stat(store_statfs(0x4fcac6000/0x0/0x4ffc00000, data 0xc0278/0x152000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 4734976 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 81207296 unmapped: 4734976 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 81215488 unmapped: 4726784 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 112 handle_osd_map epochs [113,113], i have 112, src has [1,113]
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 845128 data_alloc: 218103808 data_used: 135168
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 81272832 unmapped: 4669440 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 81272832 unmapped: 4669440 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 113 handle_osd_map epochs [114,114], i have 113, src has [1,114]
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.718708038s of 10.844810486s, submitted: 11
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 114 pg[9.19(unlocked)] enter Initial
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 114 pg[9.19( empty local-lis/les=0/0 n=0 ec=52/35 lis/c=75/75 les/c/f=76/76/0 sis=114) [0] r=0 lpr=0 pi=[75,114)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000086 0 0.000000
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 114 pg[9.19( empty local-lis/les=0/0 n=0 ec=52/35 lis/c=75/75 les/c/f=76/76/0 sis=114) [0] r=0 lpr=0 pi=[75,114)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 114 pg[9.19( empty local-lis/les=0/0 n=0 ec=52/35 lis/c=75/75 les/c/f=76/76/0 sis=114) [0] r=0 lpr=114 pi=[75,114)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000020 1 0.000042
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 114 pg[9.19( empty local-lis/les=0/0 n=0 ec=52/35 lis/c=75/75 les/c/f=76/76/0 sis=114) [0] r=0 lpr=114 pi=[75,114)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 114 pg[9.19( empty local-lis/les=0/0 n=0 ec=52/35 lis/c=75/75 les/c/f=76/76/0 sis=114) [0] r=0 lpr=114 pi=[75,114)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 114 pg[9.19( empty local-lis/les=0/0 n=0 ec=52/35 lis/c=75/75 les/c/f=76/76/0 sis=114) [0] r=0 lpr=114 pi=[75,114)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 114 pg[9.19( empty local-lis/les=0/0 n=0 ec=52/35 lis/c=75/75 les/c/f=76/76/0 sis=114) [0] r=0 lpr=114 pi=[75,114)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000007 0 0.000000
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 114 pg[9.19( empty local-lis/les=0/0 n=0 ec=52/35 lis/c=75/75 les/c/f=76/76/0 sis=114) [0] r=0 lpr=114 pi=[75,114)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 114 pg[9.19( empty local-lis/les=0/0 n=0 ec=52/35 lis/c=75/75 les/c/f=76/76/0 sis=114) [0] r=0 lpr=114 pi=[75,114)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 114 pg[9.19( empty local-lis/les=0/0 n=0 ec=52/35 lis/c=75/75 les/c/f=76/76/0 sis=114) [0] r=0 lpr=114 pi=[75,114)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 114 pg[9.19( empty local-lis/les=0/0 n=0 ec=52/35 lis/c=75/75 les/c/f=76/76/0 sis=114) [0] r=0 lpr=114 pi=[75,114)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000236 1 0.000071
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 114 pg[9.19( empty local-lis/les=0/0 n=0 ec=52/35 lis/c=75/75 les/c/f=76/76/0 sis=114) [0] r=0 lpr=114 pi=[75,114)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 114 pg[9.19( empty local-lis/les=0/0 n=0 ec=52/35 lis/c=75/75 les/c/f=76/76/0 sis=114) [0] r=0 lpr=114 pi=[75,114)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000051 0 0.000000
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 114 pg[9.19( empty local-lis/les=0/0 n=0 ec=52/35 lis/c=75/75 les/c/f=76/76/0 sis=114) [0] r=0 lpr=114 pi=[75,114)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000321 0 0.000000
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 114 pg[9.19( empty local-lis/les=0/0 n=0 ec=52/35 lis/c=75/75 les/c/f=76/76/0 sis=114) [0] r=0 lpr=114 pi=[75,114)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 114 heartbeat osd_stat(store_statfs(0x4fcac6000/0x0/0x4ffc00000, data 0xc2364/0x155000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 81289216 unmapped: 4653056 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 114 handle_osd_map epochs [115,115], i have 114, src has [1,115]
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 114 handle_osd_map epochs [114,115], i have 115, src has [1,115]
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 114 handle_osd_map epochs [114,115], i have 115, src has [1,115]
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 115 pg[9.19( empty local-lis/les=0/0 n=0 ec=52/35 lis/c=75/75 les/c/f=76/76/0 sis=114) [0] r=0 lpr=114 pi=[75,114)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 1.504416 2 0.000101
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 115 pg[9.19( empty local-lis/les=0/0 n=0 ec=52/35 lis/c=75/75 les/c/f=76/76/0 sis=114) [0] r=0 lpr=114 pi=[75,114)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 1.504793 0 0.000000
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 115 pg[9.19( empty local-lis/les=0/0 n=0 ec=52/35 lis/c=75/75 les/c/f=76/76/0 sis=114) [0] r=0 lpr=114 pi=[75,114)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 1.504831 0 0.000000
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 115 pg[9.19( empty local-lis/les=0/0 n=0 ec=52/35 lis/c=75/75 les/c/f=76/76/0 sis=114) [0] r=0 lpr=114 pi=[75,114)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 115 pg[9.19( empty local-lis/les=0/0 n=0 ec=52/35 lis/c=75/75 les/c/f=76/76/0 sis=115) [0]/[2] r=-1 lpr=115 pi=[75,115)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 115 pg[9.19( empty local-lis/les=0/0 n=0 ec=52/35 lis/c=75/75 les/c/f=76/76/0 sis=115) [0]/[2] r=-1 lpr=115 pi=[75,115)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Reset 0.000126 1 0.000200
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 115 pg[9.19( empty local-lis/les=0/0 n=0 ec=52/35 lis/c=75/75 les/c/f=76/76/0 sis=115) [0]/[2] r=-1 lpr=115 pi=[75,115)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 115 pg[9.19( empty local-lis/les=0/0 n=0 ec=52/35 lis/c=75/75 les/c/f=76/76/0 sis=115) [0]/[2] r=-1 lpr=115 pi=[75,115)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Start
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 115 pg[9.19( empty local-lis/les=0/0 n=0 ec=52/35 lis/c=75/75 les/c/f=76/76/0 sis=115) [0]/[2] r=-1 lpr=115 pi=[75,115)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 115 pg[9.19( empty local-lis/les=0/0 n=0 ec=52/35 lis/c=75/75 les/c/f=76/76/0 sis=115) [0]/[2] r=-1 lpr=115 pi=[75,115)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Start 0.000011 0 0.000000
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 115 pg[9.19( empty local-lis/les=0/0 n=0 ec=52/35 lis/c=75/75 les/c/f=76/76/0 sis=115) [0]/[2] r=-1 lpr=115 pi=[75,115)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started/Stray
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 81297408 unmapped: 4644864 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 115 handle_osd_map epochs [116,116], i have 115, src has [1,116]
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 116 pg[9.19( v 42'1010 lc 0'0 (0'0,42'1010] local-lis/les=0/0 n=7 ec=52/35 lis/c=75/75 les/c/f=76/76/0 sis=115) [0]/[2] r=-1 lpr=115 pi=[75,115)/1 crt=42'1010 mlcod 0'0 remapped NOTIFY m=7 mbc={}] exit Started/Stray 1.164777 5 0.000101
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 116 pg[9.19( v 42'1010 lc 0'0 (0'0,42'1010] local-lis/les=0/0 n=7 ec=52/35 lis/c=75/75 les/c/f=76/76/0 sis=115) [0]/[2] r=-1 lpr=115 pi=[75,115)/1 crt=42'1010 mlcod 0'0 remapped NOTIFY m=7 mbc={}] enter Started/ReplicaActive
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 116 pg[9.19( v 42'1010 lc 0'0 (0'0,42'1010] local-lis/les=0/0 n=7 ec=52/35 lis/c=75/75 les/c/f=76/76/0 sis=115) [0]/[2] r=-1 lpr=115 pi=[75,115)/1 crt=42'1010 mlcod 0'0 remapped NOTIFY m=7 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 116 pg[9.19( v 42'1010 lc 41'179 (0'0,42'1010] local-lis/les=0/0 n=7 ec=52/35 lis/c=115/75 les/c/f=116/76/0 sis=115) [0]/[2] r=-1 lpr=115 pi=[75,115)/1 luod=0'0 crt=42'1010 lcod 0'0 mlcod 0'0 active+remapped m=7 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.013484 4 0.000535
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 116 pg[9.19( v 42'1010 lc 41'179 (0'0,42'1010] local-lis/les=0/0 n=7 ec=52/35 lis/c=115/75 les/c/f=116/76/0 sis=115) [0]/[2] r=-1 lpr=115 pi=[75,115)/1 luod=0'0 crt=42'1010 lcod 0'0 mlcod 0'0 active+remapped m=7 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 116 pg[9.19( v 42'1010 lc 41'179 (0'0,42'1010] local-lis/les=0/0 n=7 ec=52/35 lis/c=115/75 les/c/f=116/76/0 sis=115) [0]/[2] r=-1 lpr=115 pi=[75,115)/1 luod=0'0 crt=42'1010 lcod 0'0 mlcod 0'0 active+remapped m=7 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000143 1 0.000102
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 116 pg[9.19( v 42'1010 lc 41'179 (0'0,42'1010] local-lis/les=0/0 n=7 ec=52/35 lis/c=115/75 les/c/f=116/76/0 sis=115) [0]/[2] r=-1 lpr=115 pi=[75,115)/1 luod=0'0 crt=42'1010 lcod 0'0 mlcod 0'0 active+remapped m=7 mbc={}] enter Started/ReplicaActive/RepRecovering
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 116 pg[9.19( v 42'1010 (0'0,42'1010] local-lis/les=0/0 n=7 ec=52/35 lis/c=115/75 les/c/f=116/76/0 sis=115) [0]/[2] r=-1 lpr=115 pi=[75,115)/1 luod=0'0 crt=42'1010 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.082476 1 0.000105
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 116 pg[9.19( v 42'1010 (0'0,42'1010] local-lis/les=0/0 n=7 ec=52/35 lis/c=115/75 les/c/f=116/76/0 sis=115) [0]/[2] r=-1 lpr=115 pi=[75,115)/1 luod=0'0 crt=42'1010 mlcod 0'0 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 81379328 unmapped: 4562944 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 116 heartbeat osd_stat(store_statfs(0x4fcabf000/0x0/0x4ffc00000, data 0xc6424/0x15b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 116 handle_osd_map epochs [116,117], i have 116, src has [1,117]
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 117 handle_osd_map epochs [117,117], i have 117, src has [1,117]
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 117 pg[9.19( v 42'1010 (0'0,42'1010] local-lis/les=0/0 n=7 ec=52/35 lis/c=115/75 les/c/f=116/76/0 sis=115) [0]/[2] r=-1 lpr=115 pi=[75,115)/1 luod=0'0 crt=42'1010 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.519689 1 0.000049
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 117 pg[9.19( v 42'1010 (0'0,42'1010] local-lis/les=0/0 n=7 ec=52/35 lis/c=115/75 les/c/f=116/76/0 sis=115) [0]/[2] r=-1 lpr=115 pi=[75,115)/1 luod=0'0 crt=42'1010 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive 0.615971 0 0.000000
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 117 pg[9.19( v 42'1010 (0'0,42'1010] local-lis/les=0/0 n=7 ec=52/35 lis/c=115/75 les/c/f=116/76/0 sis=115) [0]/[2] r=-1 lpr=115 pi=[75,115)/1 luod=0'0 crt=42'1010 mlcod 0'0 active+remapped mbc={}] exit Started 1.780825 0 0.000000
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 117 pg[9.19( v 42'1010 (0'0,42'1010] local-lis/les=0/0 n=7 ec=52/35 lis/c=115/75 les/c/f=116/76/0 sis=115) [0]/[2] r=-1 lpr=115 pi=[75,115)/1 luod=0'0 crt=42'1010 mlcod 0'0 active+remapped mbc={}] enter Reset
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 117 pg[9.19( v 42'1010 (0'0,42'1010] local-lis/les=0/0 n=7 ec=52/35 lis/c=115/75 les/c/f=116/76/0 sis=117) [0] r=0 lpr=117 pi=[75,117)/1 luod=0'0 crt=42'1010 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 117 pg[9.19( v 42'1010 (0'0,42'1010] local-lis/les=0/0 n=7 ec=52/35 lis/c=115/75 les/c/f=116/76/0 sis=117) [0] r=0 lpr=117 pi=[75,117)/1 crt=42'1010 mlcod 0'0 unknown mbc={}] exit Reset 0.000101 1 0.000161
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 117 pg[9.19( v 42'1010 (0'0,42'1010] local-lis/les=0/0 n=7 ec=52/35 lis/c=115/75 les/c/f=116/76/0 sis=117) [0] r=0 lpr=117 pi=[75,117)/1 crt=42'1010 mlcod 0'0 unknown mbc={}] enter Started
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 117 pg[9.19( v 42'1010 (0'0,42'1010] local-lis/les=0/0 n=7 ec=52/35 lis/c=115/75 les/c/f=116/76/0 sis=117) [0] r=0 lpr=117 pi=[75,117)/1 crt=42'1010 mlcod 0'0 unknown mbc={}] enter Start
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 117 pg[9.19( v 42'1010 (0'0,42'1010] local-lis/les=0/0 n=7 ec=52/35 lis/c=115/75 les/c/f=116/76/0 sis=117) [0] r=0 lpr=117 pi=[75,117)/1 crt=42'1010 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 117 pg[9.19( v 42'1010 (0'0,42'1010] local-lis/les=0/0 n=7 ec=52/35 lis/c=115/75 les/c/f=116/76/0 sis=117) [0] r=0 lpr=117 pi=[75,117)/1 crt=42'1010 mlcod 0'0 unknown mbc={}] exit Start 0.000010 0 0.000000
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 117 pg[9.19( v 42'1010 (0'0,42'1010] local-lis/les=0/0 n=7 ec=52/35 lis/c=115/75 les/c/f=116/76/0 sis=117) [0] r=0 lpr=117 pi=[75,117)/1 crt=42'1010 mlcod 0'0 unknown mbc={}] enter Started/Primary
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 117 pg[9.19( v 42'1010 (0'0,42'1010] local-lis/les=0/0 n=7 ec=52/35 lis/c=115/75 les/c/f=116/76/0 sis=117) [0] r=0 lpr=117 pi=[75,117)/1 crt=42'1010 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 117 pg[9.19( v 42'1010 (0'0,42'1010] local-lis/les=0/0 n=7 ec=52/35 lis/c=115/75 les/c/f=116/76/0 sis=117) [0] r=0 lpr=117 pi=[75,117)/1 crt=42'1010 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 117 pg[9.19( v 42'1010 (0'0,42'1010] local-lis/les=0/0 n=7 ec=52/35 lis/c=115/75 les/c/f=116/76/0 sis=117) [0] r=0 lpr=117 pi=[75,117)/1 crt=42'1010 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000378 2 0.000100
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 117 pg[9.19( v 42'1010 (0'0,42'1010] local-lis/les=0/0 n=7 ec=52/35 lis/c=115/75 les/c/f=116/76/0 sis=117) [0] r=0 lpr=117 pi=[75,117)/1 crt=42'1010 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: merge_log_dups log.dups.size()=0 olog.dups.size()=40
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=40
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 117 pg[9.19( v 42'1010 (0'0,42'1010] local-lis/les=115/116 n=7 ec=52/35 lis/c=115/75 les/c/f=116/76/0 sis=117) [0] r=0 lpr=117 pi=[75,117)/1 crt=42'1010 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000983 2 0.000105
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 117 pg[9.19( v 42'1010 (0'0,42'1010] local-lis/les=115/116 n=7 ec=52/35 lis/c=115/75 les/c/f=116/76/0 sis=117) [0] r=0 lpr=117 pi=[75,117)/1 crt=42'1010 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 117 pg[9.19( v 42'1010 (0'0,42'1010] local-lis/les=115/116 n=7 ec=52/35 lis/c=115/75 les/c/f=116/76/0 sis=117) [0] r=0 lpr=117 pi=[75,117)/1 crt=42'1010 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000007 0 0.000000
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 117 pg[9.19( v 42'1010 (0'0,42'1010] local-lis/les=115/116 n=7 ec=52/35 lis/c=115/75 les/c/f=116/76/0 sis=117) [0] r=0 lpr=117 pi=[75,117)/1 crt=42'1010 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 868917 data_alloc: 218103808 data_used: 135168
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 81420288 unmapped: 4521984 heap: 85942272 old mem: 2845415832 new mem: 2845415832
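The prioritycache/bluestore lines above show the OSD memory autotuner at work: tune_memory compares mapped heap (~81 MB) against the target (4294967296, presumably osd_memory_target = 4 GiB) and leaves the aggregate cache budget unchanged (old mem == new mem == 2845415832), while _resize_shards shows how that budget is split across the kv/onode/meta/data shards. A hedged sketch for tracking the ratio over time (field layout inferred from these lines):

    import re

    # Layout taken from the "prioritycache tune_memory ..." lines above.
    TUNE_RE = re.compile(
        r"tune_memory target: (?P<target>\d+) mapped: (?P<mapped>\d+) "
        r"unmapped: (?P<unmapped>\d+) heap: (?P<heap>\d+) "
        r"old mem: (?P<old>\d+) new mem: (?P<new>\d+)"
    )

    def tune_samples(lines):
        """Yield (mapped/target ratio, budget_changed) per tune_memory line."""
        for line in lines:
            m = TUNE_RE.search(line)
            if m:
                yield int(m["mapped"]) / int(m["target"]), m["old"] != m["new"]

Every sample in this section sits near 81e6/4.29e9 ≈ 1.9% of target, which is why the tuner never resizes the cache budget here.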
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 117 handle_osd_map epochs [117,118], i have 117, src has [1,118]
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 117 handle_osd_map epochs [118,118], i have 118, src has [1,118]
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 117 handle_osd_map epochs [117,118], i have 118, src has [1,118]
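In the handle_osd_map lines, [117,118] is the epoch range carried by the incoming message, "i have" is the OSD's current map epoch, and "src has [1,118]" is the sender's full range; the three lines above show osd.0 stepping from epoch 117 to 118. A small sketch that measures how far the OSD trails the sender (format assumed from these samples):

    import re

    # Format assumed from the handle_osd_map lines above.
    MAP_RE = re.compile(
        r"osd\.(?P<osd>\d+) \d+ handle_osd_map epochs "
        r"\[(?P<lo>\d+),(?P<hi>\d+)\], i have (?P<have>\d+), "
        r"src has \[\d+,(?P<src_hi>\d+)\]"
    )

    def map_lag(lines):
        """Yield (osd_id, epochs_behind_sender) per handle_osd_map line."""
        for line in lines:
            m = MAP_RE.search(line)
            if m:
                yield int(m["osd"]), int(m["src_hi"]) - int(m["have"])

A persistently positive lag would mean the OSD cannot keep up with map churn; in this section it only oscillates between 1 and 0 as each new epoch is consumed.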
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 118 pg[9.19( v 42'1010 (0'0,42'1010] local-lis/les=115/116 n=7 ec=52/35 lis/c=115/75 les/c/f=116/76/0 sis=117) [0] r=0 lpr=117 pi=[75,117)/1 crt=42'1010 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.022240 2 0.000113
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 118 pg[9.19( v 42'1010 (0'0,42'1010] local-lis/les=115/116 n=7 ec=52/35 lis/c=115/75 les/c/f=116/76/0 sis=117) [0] r=0 lpr=117 pi=[75,117)/1 crt=42'1010 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.023742 0 0.000000
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 118 pg[9.19( v 42'1010 (0'0,42'1010] local-lis/les=115/116 n=7 ec=52/35 lis/c=115/75 les/c/f=116/76/0 sis=117) [0] r=0 lpr=117 pi=[75,117)/1 crt=42'1010 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 118 pg[9.19( v 42'1010 (0'0,42'1010] local-lis/les=117/118 n=7 ec=52/35 lis/c=115/75 les/c/f=116/76/0 sis=117) [0] r=0 lpr=117 pi=[75,117)/1 crt=42'1010 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 118 pg[9.19( v 42'1010 (0'0,42'1010] local-lis/les=117/118 n=7 ec=52/35 lis/c=115/75 les/c/f=116/76/0 sis=117) [0] r=0 lpr=117 pi=[75,117)/1 crt=42'1010 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 118 pg[9.19( v 42'1010 (0'0,42'1010] local-lis/les=117/118 n=7 ec=52/35 lis/c=117/75 les/c/f=118/76/0 sis=117) [0] r=0 lpr=117 pi=[75,117)/1 crt=42'1010 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.004271 4 0.000399
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 118 pg[9.19( v 42'1010 (0'0,42'1010] local-lis/les=117/118 n=7 ec=52/35 lis/c=117/75 les/c/f=118/76/0 sis=117) [0] r=0 lpr=117 pi=[75,117)/1 crt=42'1010 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 118 pg[9.19( v 42'1010 (0'0,42'1010] local-lis/les=117/118 n=7 ec=52/35 lis/c=117/75 les/c/f=118/76/0 sis=117) [0] r=0 lpr=117 pi=[75,117)/1 crt=42'1010 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000028 0 0.000000
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 118 pg[9.19( v 42'1010 (0'0,42'1010] local-lis/les=117/118 n=7 ec=52/35 lis/c=117/75 les/c/f=118/76/0 sis=117) [0] r=0 lpr=117 pi=[75,117)/1 crt=42'1010 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
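The enter/exit pairs above walk pg 9.19 through one full peering pass: Reset -> Started -> Start -> Started/Primary -> Peering (GetInfo -> GetLog -> GetMissing -> WaitUpThru) -> Active (Activating -> Recovered -> Clean). Each exit line carries three trailing numbers, which read as seconds spent in the state, events handled there, and time spent handling those events (an interpretation inferred from the samples, not a documented format), so the hot states can be totalled offline with a sketch like this:

    import re
    from collections import defaultdict

    # Line shape assumed from the ceph-osd output above (not a stable API):
    #   "... pg[9.19( ... ] exit Started/Primary/Peering/GetInfo 0.000378 2 0.000100"
    EXIT_RE = re.compile(
        r"pg\[(?P<pgid>[0-9a-f]+\.[0-9a-f]+)\(.*\] exit (?P<state>[\w/]+) "
        r"(?P<secs>[\d.]+) (?P<events>\d+) (?P<event_secs>[\d.]+)$"
    )

    def state_durations(lines):
        """Total seconds spent per (pgid, state), summed from 'exit' lines."""
        totals = defaultdict(float)
        for line in lines:
            m = EXIT_RE.search(line)
            if m:
                totals[(m["pgid"], m["state"])] += float(m["secs"])
        return totals

    if __name__ == "__main__":
        import sys
        for (pgid, state), secs in sorted(state_durations(sys.stdin).items()):
            print(f"{pgid:6s} {state:45s} {secs:9.6f}s")

On this pass WaitUpThru dominates at ~1.02 s, the primary waiting for the monitors to record its up_thru, while the info/log/missing exchanges each finish in under a millisecond.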
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 81428480 unmapped: 4513792 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 81436672 unmapped: 4505600 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 81436672 unmapped: 4505600 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 81436672 unmapped: 4505600 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 118 handle_osd_map epochs [118,119], i have 118, src has [1,119]
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 874769 data_alloc: 218103808 data_used: 135168
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 81444864 unmapped: 4497408 heap: 85942272 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 119 pg[9.1a(unlocked)] enter Initial
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 119 pg[9.1a( empty local-lis/les=0/0 n=0 ec=52/35 lis/c=77/77 les/c/f=78/78/0 sis=119) [0] r=0 lpr=0 pi=[77,119)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000075 0 0.000000
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 119 pg[9.1a( empty local-lis/les=0/0 n=0 ec=52/35 lis/c=77/77 les/c/f=78/78/0 sis=119) [0] r=0 lpr=0 pi=[77,119)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 119 pg[9.1a( empty local-lis/les=0/0 n=0 ec=52/35 lis/c=77/77 les/c/f=78/78/0 sis=119) [0] r=0 lpr=119 pi=[77,119)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000022 1 0.000037
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 119 pg[9.1a( empty local-lis/les=0/0 n=0 ec=52/35 lis/c=77/77 les/c/f=78/78/0 sis=119) [0] r=0 lpr=119 pi=[77,119)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 119 pg[9.1a( empty local-lis/les=0/0 n=0 ec=52/35 lis/c=77/77 les/c/f=78/78/0 sis=119) [0] r=0 lpr=119 pi=[77,119)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 119 pg[9.1a( empty local-lis/les=0/0 n=0 ec=52/35 lis/c=77/77 les/c/f=78/78/0 sis=119) [0] r=0 lpr=119 pi=[77,119)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 119 pg[9.1a( empty local-lis/les=0/0 n=0 ec=52/35 lis/c=77/77 les/c/f=78/78/0 sis=119) [0] r=0 lpr=119 pi=[77,119)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000008 0 0.000000
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 119 pg[9.1a( empty local-lis/les=0/0 n=0 ec=52/35 lis/c=77/77 les/c/f=78/78/0 sis=119) [0] r=0 lpr=119 pi=[77,119)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 119 pg[9.1a( empty local-lis/les=0/0 n=0 ec=52/35 lis/c=77/77 les/c/f=78/78/0 sis=119) [0] r=0 lpr=119 pi=[77,119)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 119 pg[9.1a( empty local-lis/les=0/0 n=0 ec=52/35 lis/c=77/77 les/c/f=78/78/0 sis=119) [0] r=0 lpr=119 pi=[77,119)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 119 pg[9.1a( empty local-lis/les=0/0 n=0 ec=52/35 lis/c=77/77 les/c/f=78/78/0 sis=119) [0] r=0 lpr=119 pi=[77,119)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000172 1 0.000068
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 119 pg[9.1a( empty local-lis/les=0/0 n=0 ec=52/35 lis/c=77/77 les/c/f=78/78/0 sis=119) [0] r=0 lpr=119 pi=[77,119)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 119 pg[9.1a( empty local-lis/les=0/0 n=0 ec=52/35 lis/c=77/77 les/c/f=78/78/0 sis=119) [0] r=0 lpr=119 pi=[77,119)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000041 0 0.000000
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 119 pg[9.1a( empty local-lis/les=0/0 n=0 ec=52/35 lis/c=77/77 les/c/f=78/78/0 sis=119) [0] r=0 lpr=119 pi=[77,119)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000249 0 0.000000
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 119 pg[9.1a( empty local-lis/les=0/0 n=0 ec=52/35 lis/c=77/77 les/c/f=78/78/0 sis=119) [0] r=0 lpr=119 pi=[77,119)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 119 handle_osd_map epochs [119,119], i have 119, src has [1,119]
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 119 heartbeat osd_stat(store_statfs(0x4fcab3000/0x0/0x4ffc00000, data 0xce492/0x168000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
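The heartbeat osd_stat line embeds a store_statfs triple; by magnitude the three hex fields read as available / internally reserved / total bytes (an assumption worth checking against your Ceph release), so 0x4fcab3000 of 0x4ffc00000 puts this ~20 GiB OSD at roughly 99.8% free. A converter sketch under that assumption:

    import re

    GiB = 1024 ** 3

    # Field order (available/reserved/total) is assumed from magnitudes above.
    STATFS_RE = re.compile(
        r"heartbeat osd_stat\(store_statfs\(0x(?P<avail>[0-9a-f]+)"
        r"/0x(?P<reserved>[0-9a-f]+)/0x(?P<total>[0-9a-f]+)"
    )

    def fullness(line):
        """Return (used_fraction, total_gib) for a heartbeat line, else None."""
        m = STATFS_RE.search(line)
        if not m:
            return None
        avail, total = int(m["avail"], 16), int(m["total"], 16)
        return 1 - avail / total, total / GiB

Applied to the line above it yields roughly (0.002, 20.0).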
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 119 handle_osd_map epochs [119,120], i have 119, src has [1,120]
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 119 handle_osd_map epochs [120,120], i have 120, src has [1,120]
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 120 pg[9.1a( empty local-lis/les=0/0 n=0 ec=52/35 lis/c=77/77 les/c/f=78/78/0 sis=119) [0] r=0 lpr=119 pi=[77,119)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 0.244620 2 0.000092
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 120 pg[9.1a( empty local-lis/les=0/0 n=0 ec=52/35 lis/c=77/77 les/c/f=78/78/0 sis=119) [0] r=0 lpr=119 pi=[77,119)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 0.244912 0 0.000000
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 120 pg[9.1a( empty local-lis/les=0/0 n=0 ec=52/35 lis/c=77/77 les/c/f=78/78/0 sis=119) [0] r=0 lpr=119 pi=[77,119)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 0.244943 0 0.000000
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 120 pg[9.1a( empty local-lis/les=0/0 n=0 ec=52/35 lis/c=77/77 les/c/f=78/78/0 sis=119) [0] r=0 lpr=119 pi=[77,119)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 120 pg[9.1a( empty local-lis/les=0/0 n=0 ec=52/35 lis/c=77/77 les/c/f=78/78/0 sis=120) [0]/[1] r=-1 lpr=120 pi=[77,120)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 120 pg[9.1a( empty local-lis/les=0/0 n=0 ec=52/35 lis/c=77/77 les/c/f=78/78/0 sis=120) [0]/[1] r=-1 lpr=120 pi=[77,120)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Reset 0.000104 1 0.000143
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 120 pg[9.1a( empty local-lis/les=0/0 n=0 ec=52/35 lis/c=77/77 les/c/f=78/78/0 sis=120) [0]/[1] r=-1 lpr=120 pi=[77,120)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 120 pg[9.1a( empty local-lis/les=0/0 n=0 ec=52/35 lis/c=77/77 les/c/f=78/78/0 sis=120) [0]/[1] r=-1 lpr=120 pi=[77,120)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Start
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 120 pg[9.1a( empty local-lis/les=0/0 n=0 ec=52/35 lis/c=77/77 les/c/f=78/78/0 sis=120) [0]/[1] r=-1 lpr=120 pi=[77,120)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 120 pg[9.1a( empty local-lis/les=0/0 n=0 ec=52/35 lis/c=77/77 les/c/f=78/78/0 sis=120) [0]/[1] r=-1 lpr=120 pi=[77,120)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Start 0.000008 0 0.000000
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 120 pg[9.1a( empty local-lis/les=0/0 n=0 ec=52/35 lis/c=77/77 les/c/f=78/78/0 sis=120) [0]/[1] r=-1 lpr=120 pi=[77,120)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started/Stray
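The start_peering_interval line above is the pivot of each of these transitions: when the acting set for 9.1a moves from [0] to [1], osd.0's role drops from 0 (primary) to -1, so the PG re-enters the state machine as Started/Stray and merely notifies the new acting primary. Extracting those pivots gives a compact ownership history per PG (layout assumed from these lines):

    import re

    # Layout assumed from the start_peering_interval lines in this log:
    #   "... pg[9.1a( ... ] PeeringState::start_peering_interval up [0] -> [0],
    #    acting [0] -> [1], acting_primary 0 -> 1, ..., role 0 -> -1, ..."
    INTERVAL_RE = re.compile(
        r"pg\[(?P<pgid>\S+?)\(.*start_peering_interval "
        r"up \[(?P<up_old>[\d,]*)\] -> \[(?P<up_new>[\d,]*)\], "
        r"acting \[(?P<act_old>[\d,]*)\] -> \[(?P<act_new>[\d,]*)\]"
        r".*role (?P<role_old>-?\d+) -> (?P<role_new>-?\d+)"
    )

    def interval_changes(lines):
        """Yield one record per new peering interval found in the log."""
        for line in lines:
            m = INTERVAL_RE.search(line)
            if m:
                yield {
                    "pgid": m["pgid"],
                    "acting": (m["act_old"] or "-", m["act_new"] or "-"),
                    "went_stray": m["role_new"] == "-1",
                }

For this section it shows 9.19, 9.1a, 9.1b and 9.1e each bouncing between osd.0 and osds 1/2 as the remap settles.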
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 81469440 unmapped: 5521408 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 120 handle_osd_map epochs [120,121], i have 120, src has [1,121]
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.655364990s of 10.096305847s, submitted: 47
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 120 handle_osd_map epochs [120,121], i have 121, src has [1,121]
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 121 pg[9.1b(unlocked)] enter Initial
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 121 pg[9.1b( empty local-lis/les=0/0 n=0 ec=52/35 lis/c=62/62 les/c/f=63/63/0 sis=121) [0] r=0 lpr=0 pi=[62,121)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000119 0 0.000000
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 121 pg[9.1b( empty local-lis/les=0/0 n=0 ec=52/35 lis/c=62/62 les/c/f=63/63/0 sis=121) [0] r=0 lpr=0 pi=[62,121)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 121 pg[9.1b( empty local-lis/les=0/0 n=0 ec=52/35 lis/c=62/62 les/c/f=63/63/0 sis=121) [0] r=0 lpr=121 pi=[62,121)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000029 1 0.000058
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 121 pg[9.1b( empty local-lis/les=0/0 n=0 ec=52/35 lis/c=62/62 les/c/f=63/63/0 sis=121) [0] r=0 lpr=121 pi=[62,121)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 121 pg[9.1b( empty local-lis/les=0/0 n=0 ec=52/35 lis/c=62/62 les/c/f=63/63/0 sis=121) [0] r=0 lpr=121 pi=[62,121)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 121 pg[9.1b( empty local-lis/les=0/0 n=0 ec=52/35 lis/c=62/62 les/c/f=63/63/0 sis=121) [0] r=0 lpr=121 pi=[62,121)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 121 pg[9.1b( empty local-lis/les=0/0 n=0 ec=52/35 lis/c=62/62 les/c/f=63/63/0 sis=121) [0] r=0 lpr=121 pi=[62,121)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000024 0 0.000000
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 121 pg[9.1b( empty local-lis/les=0/0 n=0 ec=52/35 lis/c=62/62 les/c/f=63/63/0 sis=121) [0] r=0 lpr=121 pi=[62,121)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 121 pg[9.1b( empty local-lis/les=0/0 n=0 ec=52/35 lis/c=62/62 les/c/f=63/63/0 sis=121) [0] r=0 lpr=121 pi=[62,121)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 121 pg[9.1b( empty local-lis/les=0/0 n=0 ec=52/35 lis/c=62/62 les/c/f=63/63/0 sis=121) [0] r=0 lpr=121 pi=[62,121)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 121 pg[9.1b( empty local-lis/les=0/0 n=0 ec=52/35 lis/c=62/62 les/c/f=63/63/0 sis=121) [0] r=0 lpr=121 pi=[62,121)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000648 1 0.000110
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 121 pg[9.1b( empty local-lis/les=0/0 n=0 ec=52/35 lis/c=62/62 les/c/f=63/63/0 sis=121) [0] r=0 lpr=121 pi=[62,121)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 121 pg[9.1b( empty local-lis/les=0/0 n=0 ec=52/35 lis/c=62/62 les/c/f=63/63/0 sis=121) [0] r=0 lpr=121 pi=[62,121)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000070 0 0.000000
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 121 pg[9.1b( empty local-lis/les=0/0 n=0 ec=52/35 lis/c=62/62 les/c/f=63/63/0 sis=121) [0] r=0 lpr=121 pi=[62,121)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000765 0 0.000000
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 121 pg[9.1b( empty local-lis/les=0/0 n=0 ec=52/35 lis/c=62/62 les/c/f=63/63/0 sis=121) [0] r=0 lpr=121 pi=[62,121)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 121 pg[9.1a( v 42'1010 lc 0'0 (0'0,42'1010] local-lis/les=0/0 n=4 ec=52/35 lis/c=77/77 les/c/f=78/78/0 sis=120) [0]/[1] r=-1 lpr=120 pi=[77,120)/1 crt=42'1010 mlcod 0'0 remapped NOTIFY m=4 mbc={}] exit Started/Stray 1.039680 6 0.000068
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 121 pg[9.1a( v 42'1010 lc 0'0 (0'0,42'1010] local-lis/les=0/0 n=4 ec=52/35 lis/c=77/77 les/c/f=78/78/0 sis=120) [0]/[1] r=-1 lpr=120 pi=[77,120)/1 crt=42'1010 mlcod 0'0 remapped NOTIFY m=4 mbc={}] enter Started/ReplicaActive
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 121 pg[9.1a( v 42'1010 lc 0'0 (0'0,42'1010] local-lis/les=0/0 n=4 ec=52/35 lis/c=77/77 les/c/f=78/78/0 sis=120) [0]/[1] r=-1 lpr=120 pi=[77,120)/1 crt=42'1010 mlcod 0'0 remapped NOTIFY m=4 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 121 pg[9.1a( v 42'1010 lc 41'292 (0'0,42'1010] local-lis/les=0/0 n=4 ec=52/35 lis/c=120/77 les/c/f=121/78/0 sis=120) [0]/[1] r=-1 lpr=120 pi=[77,120)/1 luod=0'0 crt=42'1010 lcod 0'0 mlcod 0'0 active+remapped m=4 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.017826 3 0.000189
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 121 pg[9.1a( v 42'1010 lc 41'292 (0'0,42'1010] local-lis/les=0/0 n=4 ec=52/35 lis/c=120/77 les/c/f=121/78/0 sis=120) [0]/[1] r=-1 lpr=120 pi=[77,120)/1 luod=0'0 crt=42'1010 lcod 0'0 mlcod 0'0 active+remapped m=4 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 121 pg[9.1a( v 42'1010 lc 41'292 (0'0,42'1010] local-lis/les=0/0 n=4 ec=52/35 lis/c=120/77 les/c/f=121/78/0 sis=120) [0]/[1] r=-1 lpr=120 pi=[77,120)/1 luod=0'0 crt=42'1010 lcod 0'0 mlcod 0'0 active+remapped m=4 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000115 1 0.000100
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 121 pg[9.1a( v 42'1010 lc 41'292 (0'0,42'1010] local-lis/les=0/0 n=4 ec=52/35 lis/c=120/77 les/c/f=121/78/0 sis=120) [0]/[1] r=-1 lpr=120 pi=[77,120)/1 luod=0'0 crt=42'1010 lcod 0'0 mlcod 0'0 active+remapped m=4 mbc={}] enter Started/ReplicaActive/RepRecovering
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 121 pg[9.1a( v 42'1010 (0'0,42'1010] local-lis/les=0/0 n=4 ec=52/35 lis/c=120/77 les/c/f=121/78/0 sis=120) [0]/[1] r=-1 lpr=120 pi=[77,120)/1 luod=0'0 crt=42'1010 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.076269 1 0.000102
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 121 pg[9.1a( v 42'1010 (0'0,42'1010] local-lis/les=0/0 n=4 ec=52/35 lis/c=120/77 les/c/f=121/78/0 sis=120) [0]/[1] r=-1 lpr=120 pi=[77,120)/1 luod=0'0 crt=42'1010 mlcod 0'0 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 121 heartbeat osd_stat(store_statfs(0x4fcaab000/0x0/0x4ffc00000, data 0xd26fb/0x16f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 81518592 unmapped: 5472256 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 121 handle_osd_map epochs [122,122], i have 121, src has [1,122]
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 121 handle_osd_map epochs [121,122], i have 122, src has [1,122]
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 121 handle_osd_map epochs [121,122], i have 122, src has [1,122]
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 121 handle_osd_map epochs [122,122], i have 122, src has [1,122]
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 122 pg[9.1b( empty local-lis/les=0/0 n=0 ec=52/35 lis/c=62/62 les/c/f=63/63/0 sis=121) [0] r=0 lpr=121 pi=[62,121)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 1.137738 2 0.000149
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 122 pg[9.1b( empty local-lis/les=0/0 n=0 ec=52/35 lis/c=62/62 les/c/f=63/63/0 sis=121) [0] r=0 lpr=121 pi=[62,121)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 1.138556 0 0.000000
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 122 pg[9.1b( empty local-lis/les=0/0 n=0 ec=52/35 lis/c=62/62 les/c/f=63/63/0 sis=121) [0] r=0 lpr=121 pi=[62,121)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 1.138615 0 0.000000
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 122 pg[9.1b( empty local-lis/les=0/0 n=0 ec=52/35 lis/c=62/62 les/c/f=63/63/0 sis=121) [0] r=0 lpr=121 pi=[62,121)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 122 pg[9.1b( empty local-lis/les=0/0 n=0 ec=52/35 lis/c=62/62 les/c/f=63/63/0 sis=122) [0]/[2] r=-1 lpr=122 pi=[62,122)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 122 pg[9.1b( empty local-lis/les=0/0 n=0 ec=52/35 lis/c=62/62 les/c/f=63/63/0 sis=122) [0]/[2] r=-1 lpr=122 pi=[62,122)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Reset 0.000122 1 0.000209
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 122 pg[9.1b( empty local-lis/les=0/0 n=0 ec=52/35 lis/c=62/62 les/c/f=63/63/0 sis=122) [0]/[2] r=-1 lpr=122 pi=[62,122)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 122 pg[9.1b( empty local-lis/les=0/0 n=0 ec=52/35 lis/c=62/62 les/c/f=63/63/0 sis=122) [0]/[2] r=-1 lpr=122 pi=[62,122)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Start
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 122 pg[9.1b( empty local-lis/les=0/0 n=0 ec=52/35 lis/c=62/62 les/c/f=63/63/0 sis=122) [0]/[2] r=-1 lpr=122 pi=[62,122)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 122 pg[9.1b( empty local-lis/les=0/0 n=0 ec=52/35 lis/c=62/62 les/c/f=63/63/0 sis=122) [0]/[2] r=-1 lpr=122 pi=[62,122)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Start 0.000014 0 0.000000
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 122 pg[9.1b( empty local-lis/les=0/0 n=0 ec=52/35 lis/c=62/62 les/c/f=63/63/0 sis=122) [0]/[2] r=-1 lpr=122 pi=[62,122)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started/Stray
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 122 pg[9.1a( v 42'1010 (0'0,42'1010] local-lis/les=0/0 n=4 ec=52/35 lis/c=120/77 les/c/f=121/78/0 sis=120) [0]/[1] r=-1 lpr=120 pi=[77,120)/1 luod=0'0 crt=42'1010 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 1.039877 1 0.000047
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 122 pg[9.1a( v 42'1010 (0'0,42'1010] local-lis/les=0/0 n=4 ec=52/35 lis/c=120/77 les/c/f=121/78/0 sis=120) [0]/[1] r=-1 lpr=120 pi=[77,120)/1 luod=0'0 crt=42'1010 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive 1.134256 0 0.000000
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 122 pg[9.1a( v 42'1010 (0'0,42'1010] local-lis/les=0/0 n=4 ec=52/35 lis/c=120/77 les/c/f=121/78/0 sis=120) [0]/[1] r=-1 lpr=120 pi=[77,120)/1 luod=0'0 crt=42'1010 mlcod 0'0 active+remapped mbc={}] exit Started 2.173982 0 0.000000
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 122 pg[9.1a( v 42'1010 (0'0,42'1010] local-lis/les=0/0 n=4 ec=52/35 lis/c=120/77 les/c/f=121/78/0 sis=120) [0]/[1] r=-1 lpr=120 pi=[77,120)/1 luod=0'0 crt=42'1010 mlcod 0'0 active+remapped mbc={}] enter Reset
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 122 pg[9.1a( v 42'1010 (0'0,42'1010] local-lis/les=0/0 n=4 ec=52/35 lis/c=120/77 les/c/f=121/78/0 sis=122) [0] r=0 lpr=122 pi=[77,122)/1 luod=0'0 crt=42'1010 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 122 pg[9.1a( v 42'1010 (0'0,42'1010] local-lis/les=0/0 n=4 ec=52/35 lis/c=120/77 les/c/f=121/78/0 sis=122) [0] r=0 lpr=122 pi=[77,122)/1 crt=42'1010 mlcod 0'0 unknown mbc={}] exit Reset 0.000072 1 0.000121
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 122 pg[9.1a( v 42'1010 (0'0,42'1010] local-lis/les=0/0 n=4 ec=52/35 lis/c=120/77 les/c/f=121/78/0 sis=122) [0] r=0 lpr=122 pi=[77,122)/1 crt=42'1010 mlcod 0'0 unknown mbc={}] enter Started
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 122 pg[9.1a( v 42'1010 (0'0,42'1010] local-lis/les=0/0 n=4 ec=52/35 lis/c=120/77 les/c/f=121/78/0 sis=122) [0] r=0 lpr=122 pi=[77,122)/1 crt=42'1010 mlcod 0'0 unknown mbc={}] enter Start
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 122 pg[9.1a( v 42'1010 (0'0,42'1010] local-lis/les=0/0 n=4 ec=52/35 lis/c=120/77 les/c/f=121/78/0 sis=122) [0] r=0 lpr=122 pi=[77,122)/1 crt=42'1010 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 122 pg[9.1a( v 42'1010 (0'0,42'1010] local-lis/les=0/0 n=4 ec=52/35 lis/c=120/77 les/c/f=121/78/0 sis=122) [0] r=0 lpr=122 pi=[77,122)/1 crt=42'1010 mlcod 0'0 unknown mbc={}] exit Start 0.000015 0 0.000000
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 122 pg[9.1a( v 42'1010 (0'0,42'1010] local-lis/les=0/0 n=4 ec=52/35 lis/c=120/77 les/c/f=121/78/0 sis=122) [0] r=0 lpr=122 pi=[77,122)/1 crt=42'1010 mlcod 0'0 unknown mbc={}] enter Started/Primary
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 122 pg[9.1a( v 42'1010 (0'0,42'1010] local-lis/les=0/0 n=4 ec=52/35 lis/c=120/77 les/c/f=121/78/0 sis=122) [0] r=0 lpr=122 pi=[77,122)/1 crt=42'1010 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 122 pg[9.1a( v 42'1010 (0'0,42'1010] local-lis/les=0/0 n=4 ec=52/35 lis/c=120/77 les/c/f=121/78/0 sis=122) [0] r=0 lpr=122 pi=[77,122)/1 crt=42'1010 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 122 pg[9.1a( v 42'1010 (0'0,42'1010] local-lis/les=0/0 n=4 ec=52/35 lis/c=120/77 les/c/f=121/78/0 sis=122) [0] r=0 lpr=122 pi=[77,122)/1 crt=42'1010 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000071 1 0.000099
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 122 pg[9.1a( v 42'1010 (0'0,42'1010] local-lis/les=0/0 n=4 ec=52/35 lis/c=120/77 les/c/f=121/78/0 sis=122) [0] r=0 lpr=122 pi=[77,122)/1 crt=42'1010 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: merge_log_dups log.dups.size()=0 olog.dups.size()=23
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=23
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 122 pg[9.1a( v 42'1010 (0'0,42'1010] local-lis/les=120/121 n=4 ec=52/35 lis/c=120/77 les/c/f=121/78/0 sis=122) [0] r=0 lpr=122 pi=[77,122)/1 crt=42'1010 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.002827 3 0.000086
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 122 pg[9.1a( v 42'1010 (0'0,42'1010] local-lis/les=120/121 n=4 ec=52/35 lis/c=120/77 les/c/f=121/78/0 sis=122) [0] r=0 lpr=122 pi=[77,122)/1 crt=42'1010 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 122 pg[9.1a( v 42'1010 (0'0,42'1010] local-lis/les=120/121 n=4 ec=52/35 lis/c=120/77 les/c/f=121/78/0 sis=122) [0] r=0 lpr=122 pi=[77,122)/1 crt=42'1010 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000021 0 0.000000
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 122 pg[9.1a( v 42'1010 (0'0,42'1010] local-lis/les=120/121 n=4 ec=52/35 lis/c=120/77 les/c/f=121/78/0 sis=122) [0] r=0 lpr=122 pi=[77,122)/1 crt=42'1010 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 81526784 unmapped: 5464064 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 122 heartbeat osd_stat(store_statfs(0x4fcaa7000/0x0/0x4ffc00000, data 0xd46ec/0x172000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 81534976 unmapped: 5455872 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 894486 data_alloc: 218103808 data_used: 135168
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 81534976 unmapped: 5455872 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 122 ms_handle_reset con 0x55d023a0f000 session 0x55d025aa03c0
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 122 ms_handle_reset con 0x55d023758800 session 0x55d026230960
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: mgrc handle_mgr_map Got map version 34
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: mgrc handle_mgr_map Active mgr is now [v2:192.168.122.100:6800/2106645066,v1:192.168.122.100:6801/2106645066]
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 122 handle_osd_map epochs [123,123], i have 122, src has [1,123]
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 123 pg[9.1b( v 42'1010 lc 0'0 (0'0,42'1010] local-lis/les=0/0 n=2 ec=52/35 lis/c=62/62 les/c/f=63/63/0 sis=122) [0]/[2] r=-1 lpr=122 pi=[62,122)/1 crt=42'1010 mlcod 0'0 remapped NOTIFY m=2 mbc={}] exit Started/Stray 3.918018 5 0.000083
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 123 pg[9.1b( v 42'1010 lc 0'0 (0'0,42'1010] local-lis/les=0/0 n=2 ec=52/35 lis/c=62/62 les/c/f=63/63/0 sis=122) [0]/[2] r=-1 lpr=122 pi=[62,122)/1 crt=42'1010 mlcod 0'0 remapped NOTIFY m=2 mbc={}] enter Started/ReplicaActive
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 123 pg[9.1b( v 42'1010 lc 0'0 (0'0,42'1010] local-lis/les=0/0 n=2 ec=52/35 lis/c=62/62 les/c/f=63/63/0 sis=122) [0]/[2] r=-1 lpr=122 pi=[62,122)/1 crt=42'1010 mlcod 0'0 remapped NOTIFY m=2 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 123 pg[9.1a( v 42'1010 (0'0,42'1010] local-lis/les=120/121 n=4 ec=52/35 lis/c=120/77 les/c/f=121/78/0 sis=122) [0] r=0 lpr=122 pi=[77,122)/1 crt=42'1010 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 3.915180 2 0.001071
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 123 pg[9.1a( v 42'1010 (0'0,42'1010] local-lis/les=120/121 n=4 ec=52/35 lis/c=120/77 les/c/f=121/78/0 sis=122) [0] r=0 lpr=122 pi=[77,122)/1 crt=42'1010 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 3.918225 0 0.000000
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 123 pg[9.1a( v 42'1010 (0'0,42'1010] local-lis/les=120/121 n=4 ec=52/35 lis/c=120/77 les/c/f=121/78/0 sis=122) [0] r=0 lpr=122 pi=[77,122)/1 crt=42'1010 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 123 pg[9.1a( v 42'1010 (0'0,42'1010] local-lis/les=122/123 n=4 ec=52/35 lis/c=120/77 les/c/f=121/78/0 sis=122) [0] r=0 lpr=122 pi=[77,122)/1 crt=42'1010 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 123 handle_osd_map epochs [123,123], i have 123, src has [1,123]
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 123 pg[9.1a( v 42'1010 (0'0,42'1010] local-lis/les=122/123 n=4 ec=52/35 lis/c=120/77 les/c/f=121/78/0 sis=122) [0] r=0 lpr=122 pi=[77,122)/1 crt=42'1010 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 123 pg[9.1a( v 42'1010 (0'0,42'1010] local-lis/les=122/123 n=4 ec=52/35 lis/c=122/77 les/c/f=123/78/0 sis=122) [0] r=0 lpr=122 pi=[77,122)/1 crt=42'1010 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.006148 4 0.000302
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 123 pg[9.1a( v 42'1010 (0'0,42'1010] local-lis/les=122/123 n=4 ec=52/35 lis/c=122/77 les/c/f=123/78/0 sis=122) [0] r=0 lpr=122 pi=[77,122)/1 crt=42'1010 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 123 pg[9.1a( v 42'1010 (0'0,42'1010] local-lis/les=122/123 n=4 ec=52/35 lis/c=122/77 les/c/f=123/78/0 sis=122) [0] r=0 lpr=122 pi=[77,122)/1 crt=42'1010 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000017 0 0.000000
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 123 pg[9.1a( v 42'1010 (0'0,42'1010] local-lis/les=122/123 n=4 ec=52/35 lis/c=122/77 les/c/f=123/78/0 sis=122) [0] r=0 lpr=122 pi=[77,122)/1 crt=42'1010 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 123 pg[9.1b( v 42'1010 lc 41'531 (0'0,42'1010] local-lis/les=0/0 n=2 ec=52/35 lis/c=122/62 les/c/f=123/63/0 sis=122) [0]/[2] r=-1 lpr=122 pi=[62,122)/1 luod=0'0 crt=42'1010 lcod 0'0 mlcod 0'0 active+remapped m=2 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.007747 4 0.000181
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 123 pg[9.1b( v 42'1010 lc 41'531 (0'0,42'1010] local-lis/les=0/0 n=2 ec=52/35 lis/c=122/62 les/c/f=123/63/0 sis=122) [0]/[2] r=-1 lpr=122 pi=[62,122)/1 luod=0'0 crt=42'1010 lcod 0'0 mlcod 0'0 active+remapped m=2 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 123 pg[9.1b( v 42'1010 lc 41'531 (0'0,42'1010] local-lis/les=0/0 n=2 ec=52/35 lis/c=122/62 les/c/f=123/63/0 sis=122) [0]/[2] r=-1 lpr=122 pi=[62,122)/1 luod=0'0 crt=42'1010 lcod 0'0 mlcod 0'0 active+remapped m=2 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000091 1 0.000045
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 123 pg[9.1b( v 42'1010 lc 41'531 (0'0,42'1010] local-lis/les=0/0 n=2 ec=52/35 lis/c=122/62 les/c/f=123/63/0 sis=122) [0]/[2] r=-1 lpr=122 pi=[62,122)/1 luod=0'0 crt=42'1010 lcod 0'0 mlcod 0'0 active+remapped m=2 mbc={}] enter Started/ReplicaActive/RepRecovering
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 81616896 unmapped: 5373952 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 123 pg[9.1b( v 42'1010 (0'0,42'1010] local-lis/les=0/0 n=2 ec=52/35 lis/c=122/62 les/c/f=123/63/0 sis=122) [0]/[2] r=-1 lpr=122 pi=[62,122)/1 luod=0'0 crt=42'1010 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.026261 1 0.000041
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 123 pg[9.1b( v 42'1010 (0'0,42'1010] local-lis/les=0/0 n=2 ec=52/35 lis/c=122/62 les/c/f=123/63/0 sis=122) [0]/[2] r=-1 lpr=122 pi=[62,122)/1 luod=0'0 crt=42'1010 mlcod 0'0 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 123 heartbeat osd_stat(store_statfs(0x4fcaa5000/0x0/0x4ffc00000, data 0xd671d/0x176000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 123 handle_osd_map epochs [124,124], i have 123, src has [1,124]
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 124 pg[9.1b( v 42'1010 (0'0,42'1010] local-lis/les=0/0 n=2 ec=52/35 lis/c=122/62 les/c/f=123/63/0 sis=122) [0]/[2] r=-1 lpr=122 pi=[62,122)/1 luod=0'0 crt=42'1010 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.973796 1 0.000133
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 124 pg[9.1b( v 42'1010 (0'0,42'1010] local-lis/les=0/0 n=2 ec=52/35 lis/c=122/62 les/c/f=123/63/0 sis=122) [0]/[2] r=-1 lpr=122 pi=[62,122)/1 luod=0'0 crt=42'1010 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive 1.008045 0 0.000000
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 124 pg[9.1b( v 42'1010 (0'0,42'1010] local-lis/les=0/0 n=2 ec=52/35 lis/c=122/62 les/c/f=123/63/0 sis=122) [0]/[2] r=-1 lpr=122 pi=[62,122)/1 luod=0'0 crt=42'1010 mlcod 0'0 active+remapped mbc={}] exit Started 4.926132 0 0.000000
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 124 pg[9.1b( v 42'1010 (0'0,42'1010] local-lis/les=0/0 n=2 ec=52/35 lis/c=122/62 les/c/f=123/63/0 sis=122) [0]/[2] r=-1 lpr=122 pi=[62,122)/1 luod=0'0 crt=42'1010 mlcod 0'0 active+remapped mbc={}] enter Reset
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 124 pg[9.1b( v 42'1010 (0'0,42'1010] local-lis/les=0/0 n=2 ec=52/35 lis/c=122/62 les/c/f=123/63/0 sis=124) [0] r=0 lpr=124 pi=[62,124)/1 luod=0'0 crt=42'1010 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 124 pg[9.1b( v 42'1010 (0'0,42'1010] local-lis/les=0/0 n=2 ec=52/35 lis/c=122/62 les/c/f=123/63/0 sis=124) [0] r=0 lpr=124 pi=[62,124)/1 crt=42'1010 mlcod 0'0 unknown mbc={}] exit Reset 0.000090 1 0.000146
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 124 pg[9.1b( v 42'1010 (0'0,42'1010] local-lis/les=0/0 n=2 ec=52/35 lis/c=122/62 les/c/f=123/63/0 sis=124) [0] r=0 lpr=124 pi=[62,124)/1 crt=42'1010 mlcod 0'0 unknown mbc={}] enter Started
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 124 pg[9.1b( v 42'1010 (0'0,42'1010] local-lis/les=0/0 n=2 ec=52/35 lis/c=122/62 les/c/f=123/63/0 sis=124) [0] r=0 lpr=124 pi=[62,124)/1 crt=42'1010 mlcod 0'0 unknown mbc={}] enter Start
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 124 pg[9.1b( v 42'1010 (0'0,42'1010] local-lis/les=0/0 n=2 ec=52/35 lis/c=122/62 les/c/f=123/63/0 sis=124) [0] r=0 lpr=124 pi=[62,124)/1 crt=42'1010 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 124 pg[9.1b( v 42'1010 (0'0,42'1010] local-lis/les=0/0 n=2 ec=52/35 lis/c=122/62 les/c/f=123/63/0 sis=124) [0] r=0 lpr=124 pi=[62,124)/1 crt=42'1010 mlcod 0'0 unknown mbc={}] exit Start 0.000007 0 0.000000
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 124 pg[9.1b( v 42'1010 (0'0,42'1010] local-lis/les=0/0 n=2 ec=52/35 lis/c=122/62 les/c/f=123/63/0 sis=124) [0] r=0 lpr=124 pi=[62,124)/1 crt=42'1010 mlcod 0'0 unknown mbc={}] enter Started/Primary
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 124 pg[9.1b( v 42'1010 (0'0,42'1010] local-lis/les=0/0 n=2 ec=52/35 lis/c=122/62 les/c/f=123/63/0 sis=124) [0] r=0 lpr=124 pi=[62,124)/1 crt=42'1010 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 124 pg[9.1b( v 42'1010 (0'0,42'1010] local-lis/les=0/0 n=2 ec=52/35 lis/c=122/62 les/c/f=123/63/0 sis=124) [0] r=0 lpr=124 pi=[62,124)/1 crt=42'1010 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 124 pg[9.1b( v 42'1010 (0'0,42'1010] local-lis/les=0/0 n=2 ec=52/35 lis/c=122/62 les/c/f=123/63/0 sis=124) [0] r=0 lpr=124 pi=[62,124)/1 crt=42'1010 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000046 1 0.000047
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 124 pg[9.1b( v 42'1010 (0'0,42'1010] local-lis/les=0/0 n=2 ec=52/35 lis/c=122/62 les/c/f=123/63/0 sis=124) [0] r=0 lpr=124 pi=[62,124)/1 crt=42'1010 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: merge_log_dups log.dups.size()=0 olog.dups.size()=14
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=14
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 124 pg[9.1b( v 42'1010 (0'0,42'1010] local-lis/les=122/123 n=2 ec=52/35 lis/c=122/62 les/c/f=123/63/0 sis=124) [0] r=0 lpr=124 pi=[62,124)/1 crt=42'1010 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.001759 3 0.000176
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 124 pg[9.1b( v 42'1010 (0'0,42'1010] local-lis/les=122/123 n=2 ec=52/35 lis/c=122/62 les/c/f=123/63/0 sis=124) [0] r=0 lpr=124 pi=[62,124)/1 crt=42'1010 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 124 pg[9.1b( v 42'1010 (0'0,42'1010] local-lis/les=122/123 n=2 ec=52/35 lis/c=122/62 les/c/f=123/63/0 sis=124) [0] r=0 lpr=124 pi=[62,124)/1 crt=42'1010 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000013 0 0.000000
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 124 pg[9.1b( v 42'1010 (0'0,42'1010] local-lis/les=122/123 n=2 ec=52/35 lis/c=122/62 les/c/f=123/63/0 sis=124) [0] r=0 lpr=124 pi=[62,124)/1 crt=42'1010 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 81625088 unmapped: 5365760 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 124 heartbeat osd_stat(store_statfs(0x4fcaa2000/0x0/0x4ffc00000, data 0xd8826/0x179000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 124 handle_osd_map epochs [124,125], i have 124, src has [1,125]
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 124 handle_osd_map epochs [125,125], i have 125, src has [1,125]
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 125 pg[9.1b( v 42'1010 (0'0,42'1010] local-lis/les=122/123 n=2 ec=52/35 lis/c=122/62 les/c/f=123/63/0 sis=124) [0] r=0 lpr=124 pi=[62,124)/1 crt=42'1010 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.010062 2 0.000108
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 125 pg[9.1b( v 42'1010 (0'0,42'1010] local-lis/les=122/123 n=2 ec=52/35 lis/c=122/62 les/c/f=123/63/0 sis=124) [0] r=0 lpr=124 pi=[62,124)/1 crt=42'1010 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.011984 0 0.000000
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 125 pg[9.1b( v 42'1010 (0'0,42'1010] local-lis/les=122/123 n=2 ec=52/35 lis/c=122/62 les/c/f=123/63/0 sis=124) [0] r=0 lpr=124 pi=[62,124)/1 crt=42'1010 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 125 pg[9.1b( v 42'1010 (0'0,42'1010] local-lis/les=124/125 n=2 ec=52/35 lis/c=122/62 les/c/f=123/63/0 sis=124) [0] r=0 lpr=124 pi=[62,124)/1 crt=42'1010 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 125 pg[9.1b( v 42'1010 (0'0,42'1010] local-lis/les=124/125 n=2 ec=52/35 lis/c=122/62 les/c/f=123/63/0 sis=124) [0] r=0 lpr=124 pi=[62,124)/1 crt=42'1010 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 125 pg[9.1b( v 42'1010 (0'0,42'1010] local-lis/les=124/125 n=2 ec=52/35 lis/c=124/62 les/c/f=125/63/0 sis=124) [0] r=0 lpr=124 pi=[62,124)/1 crt=42'1010 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.004832 4 0.000332
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 125 pg[9.1b( v 42'1010 (0'0,42'1010] local-lis/les=124/125 n=2 ec=52/35 lis/c=124/62 les/c/f=125/63/0 sis=124) [0] r=0 lpr=124 pi=[62,124)/1 crt=42'1010 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 125 pg[9.1b( v 42'1010 (0'0,42'1010] local-lis/les=124/125 n=2 ec=52/35 lis/c=124/62 les/c/f=125/63/0 sis=124) [0] r=0 lpr=124 pi=[62,124)/1 crt=42'1010 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000022 0 0.000000
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 125 pg[9.1b( v 42'1010 (0'0,42'1010] local-lis/les=124/125 n=2 ec=52/35 lis/c=124/62 les/c/f=125/63/0 sis=124) [0] r=0 lpr=124 pi=[62,124)/1 crt=42'1010 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 81625088 unmapped: 5365760 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 125 handle_osd_map epochs [125,126], i have 125, src has [1,126]
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 81625088 unmapped: 5365760 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 908981 data_alloc: 218103808 data_used: 135168
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 126 heartbeat osd_stat(store_statfs(0x4fca9a000/0x0/0x4ffc00000, data 0xdc8b4/0x17f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 126 handle_osd_map epochs [127,127], i have 126, src has [1,127]
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 126 handle_osd_map epochs [127,127], i have 127, src has [1,127]
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 81633280 unmapped: 5357568 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 127 handle_osd_map epochs [127,128], i have 127, src has [1,128]
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 128 handle_osd_map epochs [128,128], i have 128, src has [1,128]
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 128 pg[9.1e(unlocked)] enter Initial
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 128 pg[9.1e( empty local-lis/les=0/0 n=0 ec=52/35 lis/c=69/69 les/c/f=70/70/0 sis=128) [0] r=0 lpr=0 pi=[69,128)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000066 0 0.000000
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 128 pg[9.1e( empty local-lis/les=0/0 n=0 ec=52/35 lis/c=69/69 les/c/f=70/70/0 sis=128) [0] r=0 lpr=0 pi=[69,128)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 128 pg[9.1e( empty local-lis/les=0/0 n=0 ec=52/35 lis/c=69/69 les/c/f=70/70/0 sis=128) [0] r=0 lpr=128 pi=[69,128)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000015 1 0.000032
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 128 pg[9.1e( empty local-lis/les=0/0 n=0 ec=52/35 lis/c=69/69 les/c/f=70/70/0 sis=128) [0] r=0 lpr=128 pi=[69,128)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 128 pg[9.1e( empty local-lis/les=0/0 n=0 ec=52/35 lis/c=69/69 les/c/f=70/70/0 sis=128) [0] r=0 lpr=128 pi=[69,128)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 128 pg[9.1e( empty local-lis/les=0/0 n=0 ec=52/35 lis/c=69/69 les/c/f=70/70/0 sis=128) [0] r=0 lpr=128 pi=[69,128)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 128 pg[9.1e( empty local-lis/les=0/0 n=0 ec=52/35 lis/c=69/69 les/c/f=70/70/0 sis=128) [0] r=0 lpr=128 pi=[69,128)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000009 0 0.000000
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 128 pg[9.1e( empty local-lis/les=0/0 n=0 ec=52/35 lis/c=69/69 les/c/f=70/70/0 sis=128) [0] r=0 lpr=128 pi=[69,128)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 128 pg[9.1e( empty local-lis/les=0/0 n=0 ec=52/35 lis/c=69/69 les/c/f=70/70/0 sis=128) [0] r=0 lpr=128 pi=[69,128)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 128 pg[9.1e( empty local-lis/les=0/0 n=0 ec=52/35 lis/c=69/69 les/c/f=70/70/0 sis=128) [0] r=0 lpr=128 pi=[69,128)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 128 pg[9.1e( empty local-lis/les=0/0 n=0 ec=52/35 lis/c=69/69 les/c/f=70/70/0 sis=128) [0] r=0 lpr=128 pi=[69,128)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000142 1 0.000051
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 128 pg[9.1e( empty local-lis/les=0/0 n=0 ec=52/35 lis/c=69/69 les/c/f=70/70/0 sis=128) [0] r=0 lpr=128 pi=[69,128)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 128 pg[9.1e( empty local-lis/les=0/0 n=0 ec=52/35 lis/c=69/69 les/c/f=70/70/0 sis=128) [0] r=0 lpr=128 pi=[69,128)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000033 0 0.000000
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 128 pg[9.1e( empty local-lis/les=0/0 n=0 ec=52/35 lis/c=69/69 les/c/f=70/70/0 sis=128) [0] r=0 lpr=128 pi=[69,128)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000198 0 0.000000
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 128 pg[9.1e( empty local-lis/les=0/0 n=0 ec=52/35 lis/c=69/69 les/c/f=70/70/0 sis=128) [0] r=0 lpr=128 pi=[69,128)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 128 heartbeat osd_stat(store_statfs(0x4fca99000/0x0/0x4ffc00000, data 0xde888/0x182000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 81674240 unmapped: 5316608 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 128 handle_osd_map epochs [128,129], i have 128, src has [1,129]
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.115310669s of 10.383176804s, submitted: 59
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 129 pg[9.1e( empty local-lis/les=0/0 n=0 ec=52/35 lis/c=69/69 les/c/f=70/70/0 sis=128) [0] r=0 lpr=128 pi=[69,128)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 1.003556 2 0.000066
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 129 pg[9.1e( empty local-lis/les=0/0 n=0 ec=52/35 lis/c=69/69 les/c/f=70/70/0 sis=128) [0] r=0 lpr=128 pi=[69,128)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 1.003802 0 0.000000
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 129 pg[9.1e( empty local-lis/les=0/0 n=0 ec=52/35 lis/c=69/69 les/c/f=70/70/0 sis=128) [0] r=0 lpr=128 pi=[69,128)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 1.003832 0 0.000000
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 129 pg[9.1e( empty local-lis/les=0/0 n=0 ec=52/35 lis/c=69/69 les/c/f=70/70/0 sis=128) [0] r=0 lpr=128 pi=[69,128)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 129 pg[9.1e( empty local-lis/les=0/0 n=0 ec=52/35 lis/c=69/69 les/c/f=70/70/0 sis=129) [0]/[1] r=-1 lpr=129 pi=[69,129)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 129 pg[9.1e( empty local-lis/les=0/0 n=0 ec=52/35 lis/c=69/69 les/c/f=70/70/0 sis=129) [0]/[1] r=-1 lpr=129 pi=[69,129)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Reset 0.000085 1 0.000144
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 129 pg[9.1e( empty local-lis/les=0/0 n=0 ec=52/35 lis/c=69/69 les/c/f=70/70/0 sis=129) [0]/[1] r=-1 lpr=129 pi=[69,129)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 129 pg[9.1e( empty local-lis/les=0/0 n=0 ec=52/35 lis/c=69/69 les/c/f=70/70/0 sis=129) [0]/[1] r=-1 lpr=129 pi=[69,129)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Start
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 129 pg[9.1e( empty local-lis/les=0/0 n=0 ec=52/35 lis/c=69/69 les/c/f=70/70/0 sis=129) [0]/[1] r=-1 lpr=129 pi=[69,129)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 129 pg[9.1e( empty local-lis/les=0/0 n=0 ec=52/35 lis/c=69/69 les/c/f=70/70/0 sis=129) [0]/[1] r=-1 lpr=129 pi=[69,129)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Start 0.000008 0 0.000000
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 129 pg[9.1e( empty local-lis/les=0/0 n=0 ec=52/35 lis/c=69/69 les/c/f=70/70/0 sis=129) [0]/[1] r=-1 lpr=129 pi=[69,129)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started/Stray
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 129 handle_osd_map epochs [129,129], i have 129, src has [1,129]
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 81690624 unmapped: 5300224 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 129 handle_osd_map epochs [130,130], i have 129, src has [1,130]
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 130 pg[9.1e( v 42'1010 lc 0'0 (0'0,42'1010] local-lis/les=0/0 n=5 ec=52/35 lis/c=69/69 les/c/f=70/70/0 sis=129) [0]/[1] r=-1 lpr=129 pi=[69,129)/1 crt=42'1010 mlcod 0'0 remapped NOTIFY m=5 mbc={}] exit Started/Stray 1.251107 5 0.000062
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 130 pg[9.1e( v 42'1010 lc 0'0 (0'0,42'1010] local-lis/les=0/0 n=5 ec=52/35 lis/c=69/69 les/c/f=70/70/0 sis=129) [0]/[1] r=-1 lpr=129 pi=[69,129)/1 crt=42'1010 mlcod 0'0 remapped NOTIFY m=5 mbc={}] enter Started/ReplicaActive
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 130 pg[9.1e( v 42'1010 lc 0'0 (0'0,42'1010] local-lis/les=0/0 n=5 ec=52/35 lis/c=69/69 les/c/f=70/70/0 sis=129) [0]/[1] r=-1 lpr=129 pi=[69,129)/1 crt=42'1010 mlcod 0'0 remapped NOTIFY m=5 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 130 pg[9.1e( v 42'1010 lc 41'632 (0'0,42'1010] local-lis/les=0/0 n=5 ec=52/35 lis/c=129/69 les/c/f=130/70/0 sis=129) [0]/[1] r=-1 lpr=129 pi=[69,129)/1 luod=0'0 crt=42'1010 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.007773 4 0.000199
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 130 pg[9.1e( v 42'1010 lc 41'632 (0'0,42'1010] local-lis/les=0/0 n=5 ec=52/35 lis/c=129/69 les/c/f=130/70/0 sis=129) [0]/[1] r=-1 lpr=129 pi=[69,129)/1 luod=0'0 crt=42'1010 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 130 pg[9.1e( v 42'1010 lc 41'632 (0'0,42'1010] local-lis/les=0/0 n=5 ec=52/35 lis/c=129/69 les/c/f=130/70/0 sis=129) [0]/[1] r=-1 lpr=129 pi=[69,129)/1 luod=0'0 crt=42'1010 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.000061 1 0.000108
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 130 pg[9.1e( v 42'1010 lc 41'632 (0'0,42'1010] local-lis/les=0/0 n=5 ec=52/35 lis/c=129/69 les/c/f=130/70/0 sis=129) [0]/[1] r=-1 lpr=129 pi=[69,129)/1 luod=0'0 crt=42'1010 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] enter Started/ReplicaActive/RepRecovering
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 130 pg[9.1e( v 42'1010 (0'0,42'1010] local-lis/les=0/0 n=5 ec=52/35 lis/c=129/69 les/c/f=130/70/0 sis=129) [0]/[1] r=-1 lpr=129 pi=[69,129)/1 luod=0'0 crt=42'1010 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.043725 1 0.000045
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 130 pg[9.1e( v 42'1010 (0'0,42'1010] local-lis/les=0/0 n=5 ec=52/35 lis/c=129/69 les/c/f=130/70/0 sis=129) [0]/[1] r=-1 lpr=129 pi=[69,129)/1 luod=0'0 crt=42'1010 mlcod 0'0 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 130 heartbeat osd_stat(store_statfs(0x4fca91000/0x0/0x4ffc00000, data 0xe2986/0x188000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
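
The heartbeat's store_statfs fields are hex byte counts, and decoded they show this OSD is almost empty: total 0x4ffc00000 = 21,470,642,176 bytes (~20 GiB), available 0x4fca91000 = 21,418,807,296 bytes, with 0xe2986 = 928,134 bytes of object data stored in 0x188000 = 1,605,632 bytes of allocations. peers [1,2] are the OSDs it exchanges heartbeats with, and the empty op hist reflects no recent client I/O. A quick way to decode such fields (plain Python, nothing Ceph-specific; the labels are just for readability):

    # Decode the hex byte counts from a store_statfs(...) heartbeat line.
    for name, value in (("total", "0x4ffc00000"),
                        ("available", "0x4fca91000"),
                        ("data stored", "0xe2986"),
                        ("data allocated", "0x188000")):
        n = int(value, 16)
        print(f"{name:>14}: {n:>14,} bytes ({n / 2**30:.3f} GiB)")
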
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 81666048 unmapped: 5324800 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 130 handle_osd_map epochs [131,131], i have 130, src has [1,131]
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 131 pg[9.1e( v 42'1010 (0'0,42'1010] local-lis/les=0/0 n=5 ec=52/35 lis/c=129/69 les/c/f=130/70/0 sis=129) [0]/[1] r=-1 lpr=129 pi=[69,129)/1 luod=0'0 crt=42'1010 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.880585 1 0.000198
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 131 pg[9.1e( v 42'1010 (0'0,42'1010] local-lis/les=0/0 n=5 ec=52/35 lis/c=129/69 les/c/f=130/70/0 sis=129) [0]/[1] r=-1 lpr=129 pi=[69,129)/1 luod=0'0 crt=42'1010 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive 0.932431 0 0.000000
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 131 pg[9.1e( v 42'1010 (0'0,42'1010] local-lis/les=0/0 n=5 ec=52/35 lis/c=129/69 les/c/f=130/70/0 sis=129) [0]/[1] r=-1 lpr=129 pi=[69,129)/1 luod=0'0 crt=42'1010 mlcod 0'0 active+remapped mbc={}] exit Started 2.183592 0 0.000000
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 131 pg[9.1e( v 42'1010 (0'0,42'1010] local-lis/les=0/0 n=5 ec=52/35 lis/c=129/69 les/c/f=130/70/0 sis=129) [0]/[1] r=-1 lpr=129 pi=[69,129)/1 luod=0'0 crt=42'1010 mlcod 0'0 active+remapped mbc={}] enter Reset
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 131 pg[9.1e( v 42'1010 (0'0,42'1010] local-lis/les=0/0 n=5 ec=52/35 lis/c=129/69 les/c/f=130/70/0 sis=131) [0] r=0 lpr=131 pi=[69,131)/1 luod=0'0 crt=42'1010 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 131 pg[9.1e( v 42'1010 (0'0,42'1010] local-lis/les=0/0 n=5 ec=52/35 lis/c=129/69 les/c/f=130/70/0 sis=131) [0] r=0 lpr=131 pi=[69,131)/1 crt=42'1010 mlcod 0'0 unknown mbc={}] exit Reset 0.000124 1 0.000209
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 131 pg[9.1e( v 42'1010 (0'0,42'1010] local-lis/les=0/0 n=5 ec=52/35 lis/c=129/69 les/c/f=130/70/0 sis=131) [0] r=0 lpr=131 pi=[69,131)/1 crt=42'1010 mlcod 0'0 unknown mbc={}] enter Started
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 131 pg[9.1e( v 42'1010 (0'0,42'1010] local-lis/les=0/0 n=5 ec=52/35 lis/c=129/69 les/c/f=130/70/0 sis=131) [0] r=0 lpr=131 pi=[69,131)/1 crt=42'1010 mlcod 0'0 unknown mbc={}] enter Start
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 131 pg[9.1e( v 42'1010 (0'0,42'1010] local-lis/les=0/0 n=5 ec=52/35 lis/c=129/69 les/c/f=130/70/0 sis=131) [0] r=0 lpr=131 pi=[69,131)/1 crt=42'1010 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 131 pg[9.1e( v 42'1010 (0'0,42'1010] local-lis/les=0/0 n=5 ec=52/35 lis/c=129/69 les/c/f=130/70/0 sis=131) [0] r=0 lpr=131 pi=[69,131)/1 crt=42'1010 mlcod 0'0 unknown mbc={}] exit Start 0.000012 0 0.000000
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 131 pg[9.1e( v 42'1010 (0'0,42'1010] local-lis/les=0/0 n=5 ec=52/35 lis/c=129/69 les/c/f=130/70/0 sis=131) [0] r=0 lpr=131 pi=[69,131)/1 crt=42'1010 mlcod 0'0 unknown mbc={}] enter Started/Primary
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 131 pg[9.1e( v 42'1010 (0'0,42'1010] local-lis/les=0/0 n=5 ec=52/35 lis/c=129/69 les/c/f=130/70/0 sis=131) [0] r=0 lpr=131 pi=[69,131)/1 crt=42'1010 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 131 pg[9.1e( v 42'1010 (0'0,42'1010] local-lis/les=0/0 n=5 ec=52/35 lis/c=129/69 les/c/f=130/70/0 sis=131) [0] r=0 lpr=131 pi=[69,131)/1 crt=42'1010 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 131 pg[9.1e( v 42'1010 (0'0,42'1010] local-lis/les=0/0 n=5 ec=52/35 lis/c=129/69 les/c/f=130/70/0 sis=131) [0] r=0 lpr=131 pi=[69,131)/1 crt=42'1010 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000116 1 0.000117
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 131 pg[9.1e( v 42'1010 (0'0,42'1010] local-lis/les=0/0 n=5 ec=52/35 lis/c=129/69 les/c/f=130/70/0 sis=131) [0] r=0 lpr=131 pi=[69,131)/1 crt=42'1010 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: merge_log_dups log.dups.size()=0 olog.dups.size()=25
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=25
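
The dup entries merged here are the PG log's record of recently completed requests, kept for duplicate-op detection on replay: the authoritative log fetched during GetLog carries 25 of them while osd.0's local log has none, and changed=1 flags that the merge altered local state, unlike the changed=0 merges of empty dup lists elsewhere in this burst.
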
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 131 pg[9.1e( v 42'1010 (0'0,42'1010] local-lis/les=129/130 n=5 ec=52/35 lis/c=129/69 les/c/f=130/70/0 sis=131) [0] r=0 lpr=131 pi=[69,131)/1 crt=42'1010 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.001534 3 0.000111
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 131 pg[9.1e( v 42'1010 (0'0,42'1010] local-lis/les=129/130 n=5 ec=52/35 lis/c=129/69 les/c/f=130/70/0 sis=131) [0] r=0 lpr=131 pi=[69,131)/1 crt=42'1010 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 131 pg[9.1e( v 42'1010 (0'0,42'1010] local-lis/les=129/130 n=5 ec=52/35 lis/c=129/69 les/c/f=130/70/0 sis=131) [0] r=0 lpr=131 pi=[69,131)/1 crt=42'1010 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000015 0 0.000000
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 131 pg[9.1e( v 42'1010 (0'0,42'1010] local-lis/les=129/130 n=5 ec=52/35 lis/c=129/69 les/c/f=130/70/0 sis=131) [0] r=0 lpr=131 pi=[69,131)/1 crt=42'1010 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 81682432 unmapped: 5308416 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 931623 data_alloc: 218103808 data_used: 135168
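
Taken together, the prioritycache / rocksdb / _resize_shards lines above are one pass of the BlueStore memory autotuner. The tuning target is 4294967296 bytes (4 GiB); with the process heap at only ~87 MB, the tuner leaves the aggregate cache at 2845415832 bytes (~2.65 GiB), and _resize_shards carves that into kv 1207959552 + onode 234881024 + meta 1140850688 + data 218103808 = 2,801,795,072 bytes, about 98.5% of the budget. The two High Pri Pool Ratio values printed on every pass, 0.285714 and 0.0555556, are numerically ~2/7 and ~1/18, recomputed as the cache shares shift.
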
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 131 handle_osd_map epochs [131,132], i have 131, src has [1,132]
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 132 handle_osd_map epochs [132,132], i have 132, src has [1,132]
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 132 pg[9.1e( v 42'1010 (0'0,42'1010] local-lis/les=129/130 n=5 ec=52/35 lis/c=129/69 les/c/f=130/70/0 sis=131) [0] r=0 lpr=131 pi=[69,131)/1 crt=42'1010 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.017580 2 0.000123
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 132 pg[9.1e( v 42'1010 (0'0,42'1010] local-lis/les=129/130 n=5 ec=52/35 lis/c=129/69 les/c/f=130/70/0 sis=131) [0] r=0 lpr=131 pi=[69,131)/1 crt=42'1010 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.019358 0 0.000000
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 132 pg[9.1e( v 42'1010 (0'0,42'1010] local-lis/les=129/130 n=5 ec=52/35 lis/c=129/69 les/c/f=130/70/0 sis=131) [0] r=0 lpr=131 pi=[69,131)/1 crt=42'1010 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 132 pg[9.1e( v 42'1010 (0'0,42'1010] local-lis/les=131/132 n=5 ec=52/35 lis/c=129/69 les/c/f=130/70/0 sis=131) [0] r=0 lpr=131 pi=[69,131)/1 crt=42'1010 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 132 pg[9.1e( v 42'1010 (0'0,42'1010] local-lis/les=131/132 n=5 ec=52/35 lis/c=129/69 les/c/f=130/70/0 sis=131) [0] r=0 lpr=131 pi=[69,131)/1 crt=42'1010 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 132 pg[9.1e( v 42'1010 (0'0,42'1010] local-lis/les=131/132 n=5 ec=52/35 lis/c=131/69 les/c/f=132/70/0 sis=131) [0] r=0 lpr=131 pi=[69,131)/1 crt=42'1010 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.005949 4 0.000204
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 132 pg[9.1e( v 42'1010 (0'0,42'1010] local-lis/les=131/132 n=5 ec=52/35 lis/c=131/69 les/c/f=132/70/0 sis=131) [0] r=0 lpr=131 pi=[69,131)/1 crt=42'1010 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 132 pg[9.1e( v 42'1010 (0'0,42'1010] local-lis/les=131/132 n=5 ec=52/35 lis/c=131/69 les/c/f=132/70/0 sis=131) [0] r=0 lpr=131 pi=[69,131)/1 crt=42'1010 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000017 0 0.000000
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 132 pg[9.1e( v 42'1010 (0'0,42'1010] local-lis/les=131/132 n=5 ec=52/35 lis/c=131/69 les/c/f=132/70/0 sis=131) [0] r=0 lpr=131 pi=[69,131)/1 crt=42'1010 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
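
Between epochs 128 and 132, pg 9.1e walks the full primary-side peering path visible above: Reset -> Started/Start -> Started/Primary/Peering (GetInfo -> GetLog -> GetMissing -> WaitUpThru) -> Active/Activating -> Recovered -> Clean, where the trailing numbers on each exit line appear to give seconds spent in the state, the event count, and event-handling time. A minimal sketch of how one might reconstruct such a history from this syslog (the regex and function names are illustrative assumptions about this log format, not Ceph code):

    import re

    # Match the state-machine breadcrumbs ceph-osd prints at high debug
    # levels, e.g. "... mbc={}] enter Started/Primary/Peering".
    TRANSITION = re.compile(
        r"pg\[(?P<pgid>\d+\.[0-9a-f]+)\(.*\] "
        r"(?P<op>enter|exit) (?P<state>\S+)")

    def peering_history(lines):
        """Yield (pgid, 'enter'/'exit', state) for each transition line."""
        for line in lines:
            m = TRANSITION.search(line)
            if m:
                yield m.group("pgid"), m.group("op"), m.group("state")

    # Example: reconstruct pg 9.1e's path through the state machine.
    with open("/var/log/messages") as f:
        for pgid, op, state in peering_history(f):
            if pgid == "9.1e" and op == "enter":
                print(state)
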
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 81690624 unmapped: 5300224 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 81690624 unmapped: 5300224 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 81698816 unmapped: 5292032 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 132 handle_osd_map epochs [133,133], i have 132, src has [1,133]
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 133 pg[9.1f(unlocked)] enter Initial
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 133 pg[9.1f( empty local-lis/les=0/0 n=0 ec=52/35 lis/c=87/87 les/c/f=88/88/0 sis=133) [0] r=0 lpr=0 pi=[87,133)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Initial 0.000084 0 0.000000
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 133 pg[9.1f( empty local-lis/les=0/0 n=0 ec=52/35 lis/c=87/87 les/c/f=88/88/0 sis=133) [0] r=0 lpr=0 pi=[87,133)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 133 pg[9.1f( empty local-lis/les=0/0 n=0 ec=52/35 lis/c=87/87 les/c/f=88/88/0 sis=133) [0] r=0 lpr=133 pi=[87,133)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Reset 0.000024 1 0.000041
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 133 pg[9.1f( empty local-lis/les=0/0 n=0 ec=52/35 lis/c=87/87 les/c/f=88/88/0 sis=133) [0] r=0 lpr=133 pi=[87,133)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 133 pg[9.1f( empty local-lis/les=0/0 n=0 ec=52/35 lis/c=87/87 les/c/f=88/88/0 sis=133) [0] r=0 lpr=133 pi=[87,133)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Start
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 133 pg[9.1f( empty local-lis/les=0/0 n=0 ec=52/35 lis/c=87/87 les/c/f=88/88/0 sis=133) [0] r=0 lpr=133 pi=[87,133)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 133 pg[9.1f( empty local-lis/les=0/0 n=0 ec=52/35 lis/c=87/87 les/c/f=88/88/0 sis=133) [0] r=0 lpr=133 pi=[87,133)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Start 0.000007 0 0.000000
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 133 pg[9.1f( empty local-lis/les=0/0 n=0 ec=52/35 lis/c=87/87 les/c/f=88/88/0 sis=133) [0] r=0 lpr=133 pi=[87,133)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 133 pg[9.1f( empty local-lis/les=0/0 n=0 ec=52/35 lis/c=87/87 les/c/f=88/88/0 sis=133) [0] r=0 lpr=133 pi=[87,133)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 133 pg[9.1f( empty local-lis/les=0/0 n=0 ec=52/35 lis/c=87/87 les/c/f=88/88/0 sis=133) [0] r=0 lpr=133 pi=[87,133)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 133 pg[9.1f( empty local-lis/les=0/0 n=0 ec=52/35 lis/c=87/87 les/c/f=88/88/0 sis=133) [0] r=0 lpr=133 pi=[87,133)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000239 1 0.000065
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 133 pg[9.1f( empty local-lis/les=0/0 n=0 ec=52/35 lis/c=87/87 les/c/f=88/88/0 sis=133) [0] r=0 lpr=133 pi=[87,133)/1 crt=0'0 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 133 pg[9.1f( empty local-lis/les=0/0 n=0 ec=52/35 lis/c=87/87 les/c/f=88/88/0 sis=133) [0] r=0 lpr=133 pi=[87,133)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.000044 0 0.000000
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 133 pg[9.1f( empty local-lis/les=0/0 n=0 ec=52/35 lis/c=87/87 les/c/f=88/88/0 sis=133) [0] r=0 lpr=133 pi=[87,133)/1 crt=0'0 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 0.000342 0 0.000000
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 133 pg[9.1f( empty local-lis/les=0/0 n=0 ec=52/35 lis/c=87/87 les/c/f=88/88/0 sis=133) [0] r=0 lpr=133 pi=[87,133)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Started/Primary/WaitActingChange
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 81715200 unmapped: 5275648 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 133 heartbeat osd_stat(store_statfs(0x4fca88000/0x0/0x4ffc00000, data 0xe88a2/0x191000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 133 handle_osd_map epochs [133,134], i have 133, src has [1,134]
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 134 pg[9.1f( empty local-lis/les=0/0 n=0 ec=52/35 lis/c=87/87 les/c/f=88/88/0 sis=133) [0] r=0 lpr=133 pi=[87,133)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary/WaitActingChange 1.017206 2 0.000142
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 134 pg[9.1f( empty local-lis/les=0/0 n=0 ec=52/35 lis/c=87/87 les/c/f=88/88/0 sis=133) [0] r=0 lpr=133 pi=[87,133)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started/Primary 1.017671 0 0.000000
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 134 pg[9.1f( empty local-lis/les=0/0 n=0 ec=52/35 lis/c=87/87 les/c/f=88/88/0 sis=133) [0] r=0 lpr=133 pi=[87,133)/1 crt=0'0 mlcod 0'0 unknown mbc={}] exit Started 1.017718 0 0.000000
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 134 pg[9.1f( empty local-lis/les=0/0 n=0 ec=52/35 lis/c=87/87 les/c/f=88/88/0 sis=133) [0] r=0 lpr=133 pi=[87,133)/1 crt=0'0 mlcod 0'0 unknown mbc={}] enter Reset
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 134 pg[9.1f( empty local-lis/les=0/0 n=0 ec=52/35 lis/c=87/87 les/c/f=88/88/0 sis=134) [0]/[1] r=-1 lpr=134 pi=[87,134)/1 crt=0'0 mlcod 0'0 remapped mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540701547738038271 upacting 4540701547738038271
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 134 pg[9.1f( empty local-lis/les=0/0 n=0 ec=52/35 lis/c=87/87 les/c/f=88/88/0 sis=134) [0]/[1] r=-1 lpr=134 pi=[87,134)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Reset 0.000377 1 0.000547
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 134 pg[9.1f( empty local-lis/les=0/0 n=0 ec=52/35 lis/c=87/87 les/c/f=88/88/0 sis=134) [0]/[1] r=-1 lpr=134 pi=[87,134)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 134 pg[9.1f( empty local-lis/les=0/0 n=0 ec=52/35 lis/c=87/87 les/c/f=88/88/0 sis=134) [0]/[1] r=-1 lpr=134 pi=[87,134)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Start
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 134 pg[9.1f( empty local-lis/les=0/0 n=0 ec=52/35 lis/c=87/87 les/c/f=88/88/0 sis=134) [0]/[1] r=-1 lpr=134 pi=[87,134)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 134 pg[9.1f( empty local-lis/les=0/0 n=0 ec=52/35 lis/c=87/87 les/c/f=88/88/0 sis=134) [0]/[1] r=-1 lpr=134 pi=[87,134)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] exit Start 0.000061 0 0.000000
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 134 pg[9.1f( empty local-lis/les=0/0 n=0 ec=52/35 lis/c=87/87 les/c/f=88/88/0 sis=134) [0]/[1] r=-1 lpr=134 pi=[87,134)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] enter Started/Stray
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 134 handle_osd_map epochs [134,134], i have 134, src has [1,134]
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 81731584 unmapped: 5259264 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 941123 data_alloc: 218103808 data_used: 135168
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 81747968 unmapped: 5242880 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 134 handle_osd_map epochs [135,135], i have 134, src has [1,135]
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: merge_log_dups log.dups.size()=0 olog.dups.size()=0
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: end of merge_log_dups changed=0 log.dups.size()=0 olog.dups.size()=0
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 135 pg[9.1f( v 42'1010 lc 0'0 (0'0,42'1010] local-lis/les=0/0 n=5 ec=52/35 lis/c=87/87 les/c/f=88/88/0 sis=134) [0]/[1] r=-1 lpr=134 pi=[87,134)/1 crt=42'1010 mlcod 0'0 remapped NOTIFY m=5 mbc={}] exit Started/Stray 1.192398 5 0.000158
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 135 pg[9.1f( v 42'1010 lc 0'0 (0'0,42'1010] local-lis/les=0/0 n=5 ec=52/35 lis/c=87/87 les/c/f=88/88/0 sis=134) [0]/[1] r=-1 lpr=134 pi=[87,134)/1 crt=42'1010 mlcod 0'0 remapped NOTIFY m=5 mbc={}] enter Started/ReplicaActive
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 135 pg[9.1f( v 42'1010 lc 0'0 (0'0,42'1010] local-lis/les=0/0 n=5 ec=52/35 lis/c=87/87 les/c/f=88/88/0 sis=134) [0]/[1] r=-1 lpr=134 pi=[87,134)/1 crt=42'1010 mlcod 0'0 remapped NOTIFY m=5 mbc={}] enter Started/ReplicaActive/RepNotRecovering
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 135 pg[9.1f( v 42'1010 lc 41'516 (0'0,42'1010] local-lis/les=0/0 n=5 ec=52/35 lis/c=134/87 les/c/f=135/88/0 sis=134) [0]/[1] r=-1 lpr=134 pi=[87,134)/1 luod=0'0 crt=42'1010 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.005783 4 0.000339
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 135 pg[9.1f( v 42'1010 lc 41'516 (0'0,42'1010] local-lis/les=0/0 n=5 ec=52/35 lis/c=134/87 les/c/f=135/88/0 sis=134) [0]/[1] r=-1 lpr=134 pi=[87,134)/1 luod=0'0 crt=42'1010 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] enter Started/ReplicaActive/RepWaitRecoveryReserved
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 135 pg[9.1f( v 42'1010 lc 41'516 (0'0,42'1010] local-lis/les=0/0 n=5 ec=52/35 lis/c=134/87 les/c/f=135/88/0 sis=134) [0]/[1] r=-1 lpr=134 pi=[87,134)/1 luod=0'0 crt=42'1010 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] exit Started/ReplicaActive/RepWaitRecoveryReserved 0.001169 1 0.000091
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 135 pg[9.1f( v 42'1010 lc 41'516 (0'0,42'1010] local-lis/les=0/0 n=5 ec=52/35 lis/c=134/87 les/c/f=135/88/0 sis=134) [0]/[1] r=-1 lpr=134 pi=[87,134)/1 luod=0'0 crt=42'1010 lcod 0'0 mlcod 0'0 active+remapped m=5 mbc={}] enter Started/ReplicaActive/RepRecovering
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 135 pg[9.1f( v 42'1010 (0'0,42'1010] local-lis/les=0/0 n=5 ec=52/35 lis/c=134/87 les/c/f=135/88/0 sis=134) [0]/[1] r=-1 lpr=134 pi=[87,134)/1 luod=0'0 crt=42'1010 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepRecovering 0.044662 1 0.000105
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 135 pg[9.1f( v 42'1010 (0'0,42'1010] local-lis/les=0/0 n=5 ec=52/35 lis/c=134/87 les/c/f=135/88/0 sis=134) [0]/[1] r=-1 lpr=134 pi=[87,134)/1 luod=0'0 crt=42'1010 mlcod 0'0 active+remapped mbc={}] enter Started/ReplicaActive/RepNotRecovering
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 135 heartbeat osd_stat(store_statfs(0x4fca84000/0x0/0x4ffc00000, data 0xec962/0x197000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 81764352 unmapped: 5226496 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 135 handle_osd_map epochs [136,136], i have 135, src has [1,136]
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 136 pg[9.1f( v 42'1010 (0'0,42'1010] local-lis/les=0/0 n=5 ec=52/35 lis/c=134/87 les/c/f=135/88/0 sis=134) [0]/[1] r=-1 lpr=134 pi=[87,134)/1 luod=0'0 crt=42'1010 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive/RepNotRecovering 0.813985 1 0.000104
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 136 pg[9.1f( v 42'1010 (0'0,42'1010] local-lis/les=0/0 n=5 ec=52/35 lis/c=134/87 les/c/f=135/88/0 sis=134) [0]/[1] r=-1 lpr=134 pi=[87,134)/1 luod=0'0 crt=42'1010 mlcod 0'0 active+remapped mbc={}] exit Started/ReplicaActive 0.866053 0 0.000000
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 136 pg[9.1f( v 42'1010 (0'0,42'1010] local-lis/les=0/0 n=5 ec=52/35 lis/c=134/87 les/c/f=135/88/0 sis=134) [0]/[1] r=-1 lpr=134 pi=[87,134)/1 luod=0'0 crt=42'1010 mlcod 0'0 active+remapped mbc={}] exit Started 2.058611 0 0.000000
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 136 pg[9.1f( v 42'1010 (0'0,42'1010] local-lis/les=0/0 n=5 ec=52/35 lis/c=134/87 les/c/f=135/88/0 sis=134) [0]/[1] r=-1 lpr=134 pi=[87,134)/1 luod=0'0 crt=42'1010 mlcod 0'0 active+remapped mbc={}] enter Reset
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 136 pg[9.1f( v 42'1010 (0'0,42'1010] local-lis/les=0/0 n=5 ec=52/35 lis/c=134/87 les/c/f=135/88/0 sis=136) [0] r=0 lpr=136 pi=[87,136)/1 luod=0'0 crt=42'1010 mlcod 0'0 active mbc={}] PeeringState::start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540701547738038271 upacting 4540701547738038271
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 136 pg[9.1f( v 42'1010 (0'0,42'1010] local-lis/les=0/0 n=5 ec=52/35 lis/c=134/87 les/c/f=135/88/0 sis=136) [0] r=0 lpr=136 pi=[87,136)/1 crt=42'1010 mlcod 0'0 unknown mbc={}] exit Reset 0.000502 1 0.000797
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 136 pg[9.1f( v 42'1010 (0'0,42'1010] local-lis/les=0/0 n=5 ec=52/35 lis/c=134/87 les/c/f=135/88/0 sis=136) [0] r=0 lpr=136 pi=[87,136)/1 crt=42'1010 mlcod 0'0 unknown mbc={}] enter Started
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 136 pg[9.1f( v 42'1010 (0'0,42'1010] local-lis/les=0/0 n=5 ec=52/35 lis/c=134/87 les/c/f=135/88/0 sis=136) [0] r=0 lpr=136 pi=[87,136)/1 crt=42'1010 mlcod 0'0 unknown mbc={}] enter Start
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 136 pg[9.1f( v 42'1010 (0'0,42'1010] local-lis/les=0/0 n=5 ec=52/35 lis/c=134/87 les/c/f=135/88/0 sis=136) [0] r=0 lpr=136 pi=[87,136)/1 crt=42'1010 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 136 pg[9.1f( v 42'1010 (0'0,42'1010] local-lis/les=0/0 n=5 ec=52/35 lis/c=134/87 les/c/f=135/88/0 sis=136) [0] r=0 lpr=136 pi=[87,136)/1 crt=42'1010 mlcod 0'0 unknown mbc={}] exit Start 0.000048 0 0.000000
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 136 pg[9.1f( v 42'1010 (0'0,42'1010] local-lis/les=0/0 n=5 ec=52/35 lis/c=134/87 les/c/f=135/88/0 sis=136) [0] r=0 lpr=136 pi=[87,136)/1 crt=42'1010 mlcod 0'0 unknown mbc={}] enter Started/Primary
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 136 pg[9.1f( v 42'1010 (0'0,42'1010] local-lis/les=0/0 n=5 ec=52/35 lis/c=134/87 les/c/f=135/88/0 sis=136) [0] r=0 lpr=136 pi=[87,136)/1 crt=42'1010 mlcod 0'0 unknown mbc={}] enter Started/Primary/Peering
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 136 pg[9.1f( v 42'1010 (0'0,42'1010] local-lis/les=0/0 n=5 ec=52/35 lis/c=134/87 les/c/f=135/88/0 sis=136) [0] r=0 lpr=136 pi=[87,136)/1 crt=42'1010 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetInfo
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 136 pg[9.1f( v 42'1010 (0'0,42'1010] local-lis/les=0/0 n=5 ec=52/35 lis/c=134/87 les/c/f=135/88/0 sis=136) [0] r=0 lpr=136 pi=[87,136)/1 crt=42'1010 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetInfo 0.000064 1 0.000391
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 136 pg[9.1f( v 42'1010 (0'0,42'1010] local-lis/les=0/0 n=5 ec=52/35 lis/c=134/87 les/c/f=135/88/0 sis=136) [0] r=0 lpr=136 pi=[87,136)/1 crt=42'1010 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetLog
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: merge_log_dups log.dups.size()=0 olog.dups.size()=29
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: end of merge_log_dups changed=1 log.dups.size()=0 olog.dups.size()=29
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 136 pg[9.1f( v 42'1010 (0'0,42'1010] local-lis/les=134/135 n=5 ec=52/35 lis/c=134/87 les/c/f=135/88/0 sis=136) [0] r=0 lpr=136 pi=[87,136)/1 crt=42'1010 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetLog 0.002331 3 0.000106
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 136 pg[9.1f( v 42'1010 (0'0,42'1010] local-lis/les=134/135 n=5 ec=52/35 lis/c=134/87 les/c/f=135/88/0 sis=136) [0] r=0 lpr=136 pi=[87,136)/1 crt=42'1010 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/GetMissing
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 136 pg[9.1f( v 42'1010 (0'0,42'1010] local-lis/les=134/135 n=5 ec=52/35 lis/c=134/87 les/c/f=135/88/0 sis=136) [0] r=0 lpr=136 pi=[87,136)/1 crt=42'1010 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/GetMissing 0.000018 0 0.000000
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 136 pg[9.1f( v 42'1010 (0'0,42'1010] local-lis/les=134/135 n=5 ec=52/35 lis/c=134/87 les/c/f=135/88/0 sis=136) [0] r=0 lpr=136 pi=[87,136)/1 crt=42'1010 mlcod 0'0 peering mbc={}] enter Started/Primary/Peering/WaitUpThru
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 81772544 unmapped: 5218304 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 136 handle_osd_map epochs [136,137], i have 136, src has [1,137]
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.686526299s of 10.819543839s, submitted: 73
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 137 pg[9.1f( v 42'1010 (0'0,42'1010] local-lis/les=134/135 n=5 ec=52/35 lis/c=134/87 les/c/f=135/88/0 sis=136) [0] r=0 lpr=136 pi=[87,136)/1 crt=42'1010 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering/WaitUpThru 1.016312 2 0.000183
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 137 pg[9.1f( v 42'1010 (0'0,42'1010] local-lis/les=134/135 n=5 ec=52/35 lis/c=134/87 les/c/f=135/88/0 sis=136) [0] r=0 lpr=136 pi=[87,136)/1 crt=42'1010 mlcod 0'0 peering mbc={}] exit Started/Primary/Peering 1.018893 0 0.000000
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 137 pg[9.1f( v 42'1010 (0'0,42'1010] local-lis/les=134/135 n=5 ec=52/35 lis/c=134/87 les/c/f=135/88/0 sis=136) [0] r=0 lpr=136 pi=[87,136)/1 crt=42'1010 mlcod 0'0 unknown mbc={}] enter Started/Primary/Active
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 137 pg[9.1f( v 42'1010 (0'0,42'1010] local-lis/les=136/137 n=5 ec=52/35 lis/c=134/87 les/c/f=135/88/0 sis=136) [0] r=0 lpr=136 pi=[87,136)/1 crt=42'1010 mlcod 0'0 activating mbc={}] enter Started/Primary/Active/Activating
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 handle_osd_map epochs [137,137], i have 137, src has [1,137]
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 137 pg[9.1f( v 42'1010 (0'0,42'1010] local-lis/les=136/137 n=5 ec=52/35 lis/c=134/87 les/c/f=135/88/0 sis=136) [0] r=0 lpr=136 pi=[87,136)/1 crt=42'1010 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 137 pg[9.1f( v 42'1010 (0'0,42'1010] local-lis/les=136/137 n=5 ec=52/35 lis/c=136/87 les/c/f=137/88/0 sis=136) [0] r=0 lpr=136 pi=[87,136)/1 crt=42'1010 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Activating 0.009796 4 0.000423
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 137 pg[9.1f( v 42'1010 (0'0,42'1010] local-lis/les=136/137 n=5 ec=52/35 lis/c=136/87 les/c/f=137/88/0 sis=136) [0] r=0 lpr=136 pi=[87,136)/1 crt=42'1010 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Recovered
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 137 pg[9.1f( v 42'1010 (0'0,42'1010] local-lis/les=136/137 n=5 ec=52/35 lis/c=136/87 les/c/f=137/88/0 sis=136) [0] r=0 lpr=136 pi=[87,136)/1 crt=42'1010 mlcod 0'0 active mbc={}] exit Started/Primary/Active/Recovered 0.000019 0 0.000000
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 pg_epoch: 137 pg[9.1f( v 42'1010 (0'0,42'1010] local-lis/les=136/137 n=5 ec=52/35 lis/c=136/87 les/c/f=137/88/0 sis=136) [0] r=0 lpr=136 pi=[87,136)/1 crt=42'1010 mlcod 0'0 active mbc={}] enter Started/Primary/Active/Clean
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 81780736 unmapped: 5210112 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 81788928 unmapped: 5201920 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 959874 data_alloc: 218103808 data_used: 135168
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 81788928 unmapped: 5201920 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fca79000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 81788928 unmapped: 5201920 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 81797120 unmapped: 5193728 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 81797120 unmapped: 5193728 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 81805312 unmapped: 5185536 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 959874 data_alloc: 218103808 data_used: 135168
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 81813504 unmapped: 5177344 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 81821696 unmapped: 5169152 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fca79000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 81821696 unmapped: 5169152 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 81821696 unmapped: 5169152 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 81829888 unmapped: 5160960 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 959742 data_alloc: 218103808 data_used: 135168
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 81829888 unmapped: 5160960 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 5152768 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fca79000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 81846272 unmapped: 5144576 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fca79000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 81854464 unmapped: 5136384 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fca79000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 81854464 unmapped: 5136384 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 959742 data_alloc: 218103808 data_used: 135168
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 81862656 unmapped: 5128192 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 81862656 unmapped: 5128192 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 81870848 unmapped: 5120000 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 81879040 unmapped: 5111808 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fca79000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 ms_handle_reset con 0x55d0232d9000 session 0x55d0257d34a0
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 ms_handle_reset con 0x55d02657e000 session 0x55d0251dd0e0
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 81879040 unmapped: 5111808 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 959742 data_alloc: 218103808 data_used: 135168
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 81879040 unmapped: 5111808 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 81887232 unmapped: 5103616 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 81887232 unmapped: 5103616 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 81895424 unmapped: 5095424 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fca79000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 81903616 unmapped: 5087232 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 959742 data_alloc: 218103808 data_used: 135168
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 81911808 unmapped: 5079040 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 81911808 unmapped: 5079040 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fca79000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 81920000 unmapped: 5070848 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 81920000 unmapped: 5070848 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 81920000 unmapped: 5070848 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 959742 data_alloc: 218103808 data_used: 135168
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 32.560096741s of 32.589424133s, submitted: 7
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 5152768 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fca7b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 5152768 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fca7b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 81838080 unmapped: 5152768 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 81846272 unmapped: 5144576 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 81846272 unmapped: 5144576 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 959782 data_alloc: 218103808 data_used: 135168
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fca7b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 81854464 unmapped: 5136384 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 81854464 unmapped: 5136384 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 81870848 unmapped: 5120000 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 81870848 unmapped: 5120000 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 81879040 unmapped: 5111808 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 959191 data_alloc: 218103808 data_used: 135168
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fca7b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 81879040 unmapped: 5111808 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 81879040 unmapped: 5111808 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fca7b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 81887232 unmapped: 5103616 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 81887232 unmapped: 5103616 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 81887232 unmapped: 5103616 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 959191 data_alloc: 218103808 data_used: 135168
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 81895424 unmapped: 5095424 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 16.019577026s of 16.072942734s, submitted: 3
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 81903616 unmapped: 5087232 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fca7b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 81911808 unmapped: 5079040 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 81911808 unmapped: 5079040 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fca7b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 81920000 unmapped: 5070848 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 959059 data_alloc: 218103808 data_used: 135168
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 81920000 unmapped: 5070848 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 81928192 unmapped: 5062656 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fca7b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 81936384 unmapped: 5054464 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 81936384 unmapped: 5054464 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fca7b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 5046272 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 959059 data_alloc: 218103808 data_used: 135168
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 ms_handle_reset con 0x55d024642000 session 0x55d02592b4a0
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 5046272 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 81944576 unmapped: 5046272 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 5029888 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fca7b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 81960960 unmapped: 5029888 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 81969152 unmapped: 5021696 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 959059 data_alloc: 218103808 data_used: 135168
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 81969152 unmapped: 5021696 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 81969152 unmapped: 5021696 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 5013504 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fca7b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 5013504 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 81977344 unmapped: 5013504 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 959059 data_alloc: 218103808 data_used: 135168
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fca7b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 81985536 unmapped: 5005312 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 20.007560730s of 20.012639999s, submitted: 1
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 81985536 unmapped: 5005312 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 81993728 unmapped: 4997120 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fca7b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 81993728 unmapped: 4997120 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 81993728 unmapped: 4997120 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 959191 data_alloc: 218103808 data_used: 135168
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fca7b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 4972544 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 82018304 unmapped: 4972544 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 4964352 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 82026496 unmapped: 4964352 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 4956160 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 959191 data_alloc: 218103808 data_used: 135168
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 4956160 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fca7b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 4956160 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 82034688 unmapped: 4956160 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 82042880 unmapped: 4947968 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 82042880 unmapped: 4947968 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.898074150s of 13.905799866s, submitted: 2
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 958468 data_alloc: 218103808 data_used: 135168
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fca7b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 82051072 unmapped: 4939776 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 82051072 unmapped: 4939776 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fca7b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fca7b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 82067456 unmapped: 4923392 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 82067456 unmapped: 4923392 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 82067456 unmapped: 4923392 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 958468 data_alloc: 218103808 data_used: 135168
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 82075648 unmapped: 4915200 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 82075648 unmapped: 4915200 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fca7b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 4907008 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 82083840 unmapped: 4907008 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 82092032 unmapped: 4898816 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fca7b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 958468 data_alloc: 218103808 data_used: 135168
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 82092032 unmapped: 4898816 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 82092032 unmapped: 4898816 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 82100224 unmapped: 4890624 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fca7b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 82100224 unmapped: 4890624 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 82108416 unmapped: 4882432 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 958468 data_alloc: 218103808 data_used: 135168
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 82108416 unmapped: 4882432 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 82108416 unmapped: 4882432 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fca7b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 82116608 unmapped: 4874240 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 82116608 unmapped: 4874240 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 82116608 unmapped: 4874240 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 958468 data_alloc: 218103808 data_used: 135168
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 82124800 unmapped: 4866048 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 82132992 unmapped: 4857856 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fca7b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 82141184 unmapped: 4849664 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 82141184 unmapped: 4849664 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 ms_handle_reset con 0x55d023759800 session 0x55d026230f00
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 ms_handle_reset con 0x55d022c0b400 session 0x55d025d85e00
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 82141184 unmapped: 4849664 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 958468 data_alloc: 218103808 data_used: 135168
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 ms_handle_reset con 0x55d023759c00 session 0x55d02595cd20
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 ms_handle_reset con 0x55d0241ee400 session 0x55d02565e5a0
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fca7b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 82149376 unmapped: 4841472 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 82149376 unmapped: 4841472 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 82157568 unmapped: 4833280 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 82157568 unmapped: 4833280 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fca7b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 82157568 unmapped: 4833280 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 958468 data_alloc: 218103808 data_used: 135168
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 82165760 unmapped: 4825088 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 82165760 unmapped: 4825088 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 82173952 unmapped: 4816896 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 82173952 unmapped: 4816896 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fca7b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 82173952 unmapped: 4816896 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 958468 data_alloc: 218103808 data_used: 135168
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 35.875640869s of 35.879329681s, submitted: 1
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 82182144 unmapped: 4808704 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 82182144 unmapped: 4808704 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 82190336 unmapped: 4800512 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 82190336 unmapped: 4800512 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fca7b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 82198528 unmapped: 4792320 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 958732 data_alloc: 218103808 data_used: 135168
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 82198528 unmapped: 4792320 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 82206720 unmapped: 4784128 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 82206720 unmapped: 4784128 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fca7b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 82206720 unmapped: 4784128 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 82214912 unmapped: 4775936 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 960244 data_alloc: 218103808 data_used: 135168
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 82214912 unmapped: 4775936 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 82214912 unmapped: 4775936 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fca7b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 82223104 unmapped: 4767744 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 82223104 unmapped: 4767744 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 4759552 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.244351387s of 14.254385948s, submitted: 3
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 960112 data_alloc: 218103808 data_used: 135168
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 82231296 unmapped: 4759552 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 82239488 unmapped: 4751360 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fca7b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 82239488 unmapped: 4751360 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 82239488 unmapped: 4751360 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fca7b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 82247680 unmapped: 4743168 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 959980 data_alloc: 218103808 data_used: 135168
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 82247680 unmapped: 4743168 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 82255872 unmapped: 4734976 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fca7b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 82255872 unmapped: 4734976 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 82255872 unmapped: 4734976 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 82264064 unmapped: 4726784 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 959980 data_alloc: 218103808 data_used: 135168
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 82264064 unmapped: 4726784 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 82272256 unmapped: 4718592 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 82272256 unmapped: 4718592 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fca7b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 82280448 unmapped: 4710400 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 82288640 unmapped: 4702208 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 959980 data_alloc: 218103808 data_used: 135168
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 82288640 unmapped: 4702208 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 82296832 unmapped: 4694016 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 82296832 unmapped: 4694016 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fca7b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 82296832 unmapped: 4694016 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fca7b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 82305024 unmapped: 4685824 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 959980 data_alloc: 218103808 data_used: 135168
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 82305024 unmapped: 4685824 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 82305024 unmapped: 4685824 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fca7b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 82313216 unmapped: 4677632 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 82313216 unmapped: 4677632 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 82321408 unmapped: 4669440 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 959980 data_alloc: 218103808 data_used: 135168
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 82321408 unmapped: 4669440 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 82321408 unmapped: 4669440 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 82329600 unmapped: 4661248 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fca7b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 82272256 unmapped: 4718592 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 82280448 unmapped: 4710400 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 959980 data_alloc: 218103808 data_used: 135168
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 82280448 unmapped: 4710400 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fca7b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 82288640 unmapped: 4702208 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 82288640 unmapped: 4702208 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 82288640 unmapped: 4702208 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 82296832 unmapped: 4694016 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 959980 data_alloc: 218103808 data_used: 135168
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fca7b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 82296832 unmapped: 4694016 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 82305024 unmapped: 4685824 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 82305024 unmapped: 4685824 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fca7b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 82305024 unmapped: 4685824 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 82313216 unmapped: 4677632 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 959980 data_alloc: 218103808 data_used: 135168
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fca7b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 82313216 unmapped: 4677632 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 82321408 unmapped: 4669440 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fca7b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 82321408 unmapped: 4669440 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 82321408 unmapped: 4669440 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 82329600 unmapped: 4661248 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 959980 data_alloc: 218103808 data_used: 135168
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 82329600 unmapped: 4661248 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 82329600 unmapped: 4661248 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fca7b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 82337792 unmapped: 4653056 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 82337792 unmapped: 4653056 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 82345984 unmapped: 4644864 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 959980 data_alloc: 218103808 data_used: 135168
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 82329600 unmapped: 4661248 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 82329600 unmapped: 4661248 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fca7b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 82329600 unmapped: 4661248 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 82329600 unmapped: 4661248 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 82329600 unmapped: 4661248 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 959980 data_alloc: 218103808 data_used: 135168
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 82329600 unmapped: 4661248 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 ms_handle_reset con 0x55d023758800 session 0x55d02592bc20
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 82329600 unmapped: 4661248 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fca7b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 82329600 unmapped: 4661248 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 82337792 unmapped: 4653056 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 82337792 unmapped: 4653056 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 959980 data_alloc: 218103808 data_used: 135168
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 82345984 unmapped: 4644864 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fca7b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 82345984 unmapped: 4644864 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 82345984 unmapped: 4644864 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 82354176 unmapped: 4636672 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 82354176 unmapped: 4636672 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fca7b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 959980 data_alloc: 218103808 data_used: 135168
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 82354176 unmapped: 4636672 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fca7b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 82362368 unmapped: 4628480 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 67.801437378s of 67.809219360s, submitted: 2
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 82362368 unmapped: 4628480 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 82370560 unmapped: 4620288 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 82370560 unmapped: 4620288 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 960112 data_alloc: 218103808 data_used: 135168
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 82370560 unmapped: 4620288 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fca7b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 4612096 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 82378752 unmapped: 4612096 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 82386944 unmapped: 4603904 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fca7b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 82386944 unmapped: 4603904 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 961624 data_alloc: 218103808 data_used: 135168
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 82395136 unmapped: 4595712 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 4587520 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 82403328 unmapped: 4587520 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fca7b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 4579328 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 ms_handle_reset con 0x55d024642000 session 0x55d026230960
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 ms_handle_reset con 0x55d0232d9000 session 0x55d0257ad860
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 82411520 unmapped: 4579328 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 961033 data_alloc: 218103808 data_used: 135168
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 82419712 unmapped: 4571136 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 82419712 unmapped: 4571136 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fca7b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fca7b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 82419712 unmapped: 4571136 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 82427904 unmapped: 4562944 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 82427904 unmapped: 4562944 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 961033 data_alloc: 218103808 data_used: 135168
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 18.001464844s of 18.014617920s, submitted: 3
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 82444288 unmapped: 4546560 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fca7b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 82444288 unmapped: 4546560 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 82444288 unmapped: 4546560 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fca7b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 82452480 unmapped: 4538368 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 82452480 unmapped: 4538368 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 960901 data_alloc: 218103808 data_used: 135168
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 82460672 unmapped: 4530176 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 82460672 unmapped: 4530176 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fca7b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 82468864 unmapped: 4521984 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fca7b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 82468864 unmapped: 4521984 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 82477056 unmapped: 4513792 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 962545 data_alloc: 218103808 data_used: 135168
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 82477056 unmapped: 4513792 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fca7b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 82485248 unmapped: 4505600 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 82493440 unmapped: 4497408 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fca7b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fca7b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 82493440 unmapped: 4497408 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fca7b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 82501632 unmapped: 4489216 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 962545 data_alloc: 218103808 data_used: 135168
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 82501632 unmapped: 4489216 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 82509824 unmapped: 4481024 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 17.475652695s of 17.560913086s, submitted: 3
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 82518016 unmapped: 4472832 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 82518016 unmapped: 4472832 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fca7b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 82526208 unmapped: 4464640 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 961954 data_alloc: 218103808 data_used: 135168
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 82526208 unmapped: 4464640 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 82526208 unmapped: 4464640 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fca7b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 82534400 unmapped: 4456448 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 82534400 unmapped: 4456448 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fca7b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 82542592 unmapped: 4448256 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 961822 data_alloc: 218103808 data_used: 135168
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 82542592 unmapped: 4448256 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 82550784 unmapped: 4440064 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 82550784 unmapped: 4440064 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fca7b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 82558976 unmapped: 4431872 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 82558976 unmapped: 4431872 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 961822 data_alloc: 218103808 data_used: 135168
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 82558976 unmapped: 4431872 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 82567168 unmapped: 4423680 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 82567168 unmapped: 4423680 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fca7b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 82567168 unmapped: 4423680 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 82575360 unmapped: 4415488 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 961822 data_alloc: 218103808 data_used: 135168
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 82575360 unmapped: 4415488 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 82591744 unmapped: 4399104 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 82591744 unmapped: 4399104 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fca7b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 82591744 unmapped: 4399104 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 82599936 unmapped: 4390912 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 961822 data_alloc: 218103808 data_used: 135168
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 82599936 unmapped: 4390912 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 82608128 unmapped: 4382720 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fca7b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 82608128 unmapped: 4382720 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 82608128 unmapped: 4382720 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 82616320 unmapped: 4374528 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 961822 data_alloc: 218103808 data_used: 135168
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fca7b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 82616320 unmapped: 4374528 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 82624512 unmapped: 4366336 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 82624512 unmapped: 4366336 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fca7b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 82632704 unmapped: 4358144 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 82632704 unmapped: 4358144 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 961822 data_alloc: 218103808 data_used: 135168
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 82632704 unmapped: 4358144 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 82640896 unmapped: 4349952 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 82640896 unmapped: 4349952 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 82640896 unmapped: 4349952 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fca7b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 82649088 unmapped: 4341760 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 961822 data_alloc: 218103808 data_used: 135168
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 ms_handle_reset con 0x55d022c0b400 session 0x55d02642af00
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 ms_handle_reset con 0x55d02599d800 session 0x55d02642a000
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 82657280 unmapped: 4333568 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 82657280 unmapped: 4333568 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 82657280 unmapped: 4333568 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fca7b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 82665472 unmapped: 4325376 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 82665472 unmapped: 4325376 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 961822 data_alloc: 218103808 data_used: 135168
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 82673664 unmapped: 4317184 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 82673664 unmapped: 4317184 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 82681856 unmapped: 4308992 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 82681856 unmapped: 4308992 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fca7b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 ms_handle_reset con 0x55d023758800 session 0x55d0269c2000
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 ms_handle_reset con 0x55d024642000 session 0x55d02592bc20
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 82681856 unmapped: 4308992 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 961822 data_alloc: 218103808 data_used: 135168
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 82690048 unmapped: 4300800 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 48.605010986s of 48.613098145s, submitted: 2
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 82690048 unmapped: 4300800 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 82690048 unmapped: 4300800 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 82698240 unmapped: 4292608 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fca7b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 82698240 unmapped: 4292608 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 963466 data_alloc: 218103808 data_used: 135168
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 82706432 unmapped: 4284416 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 82706432 unmapped: 4284416 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fca7b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 82714624 unmapped: 4276224 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 82714624 unmapped: 4276224 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 82714624 unmapped: 4276224 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 963466 data_alloc: 218103808 data_used: 135168
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 82722816 unmapped: 4268032 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 82722816 unmapped: 4268032 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fca7b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 82731008 unmapped: 4259840 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 82731008 unmapped: 4259840 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 82731008 unmapped: 4259840 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 963598 data_alloc: 218103808 data_used: 135168
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.962794304s of 13.972179413s, submitted: 3
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 82739200 unmapped: 4251648 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fca7b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 82739200 unmapped: 4251648 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fca7b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fca7b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 82747392 unmapped: 4243456 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 82747392 unmapped: 4243456 heap: 86990848 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84844544 unmapped: 3194880 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 963796 data_alloc: 218103808 data_used: 135168
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84852736 unmapped: 3186688 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fca7b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84852736 unmapped: 3186688 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 4227072 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 4227072 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 83812352 unmapped: 4227072 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 963796 data_alloc: 218103808 data_used: 135168
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.188635826s of 10.271074295s, submitted: 5
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fca7b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 83828736 unmapped: 4210688 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 83828736 unmapped: 4210688 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 600.1 total, 600.0 interval
Cumulative writes: 8024 writes, 33K keys, 8024 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s
Cumulative WAL: 8024 writes, 1544 syncs, 5.20 writes per sync, written: 0.02 GB, 0.03 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 8024 writes, 33K keys, 8024 commit groups, 1.0 writes per commit group, ingest: 20.93 MB, 0.03 MB/s
Interval WAL: 8024 writes, 1544 syncs, 5.20 writes per sync, written: 0.02 GB, 0.03 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 600.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x55d021e81350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.4e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **

** Compaction Stats [m-0] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-0] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 600.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x55d021e81350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.4e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [m-0] **

** Compaction Stats [m-1] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-1] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 600.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 83894272 unmapped: 4145152 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 83894272 unmapped: 4145152 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 83894272 unmapped: 4145152 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 963664 data_alloc: 218103808 data_used: 135168
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 83902464 unmapped: 4136960 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fca7b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 ms_handle_reset con 0x55d0232d9000 session 0x55d025aa0d20
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 ms_handle_reset con 0x55d024642000 session 0x55d0257d3c20
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 83902464 unmapped: 4136960 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 83910656 unmapped: 4128768 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fca7b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 83910656 unmapped: 4128768 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 83918848 unmapped: 4120576 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 963664 data_alloc: 218103808 data_used: 135168
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 83918848 unmapped: 4120576 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 83918848 unmapped: 4120576 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fca7b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 83927040 unmapped: 4112384 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 83927040 unmapped: 4112384 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 83927040 unmapped: 4112384 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 963664 data_alloc: 218103808 data_used: 135168
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 83935232 unmapped: 4104192 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fca7b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 16.479162216s of 16.484296799s, submitted: 1
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 83935232 unmapped: 4104192 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 83943424 unmapped: 4096000 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 83943424 unmapped: 4096000 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 83951616 unmapped: 4087808 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 965308 data_alloc: 218103808 data_used: 135168
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 83951616 unmapped: 4087808 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 83951616 unmapped: 4087808 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fca7b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 83976192 unmapped: 4063232 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 83976192 unmapped: 4063232 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fca7b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 83984384 unmapped: 4055040 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 966820 data_alloc: 218103808 data_used: 135168
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 83984384 unmapped: 4055040 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 83984384 unmapped: 4055040 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 83992576 unmapped: 4046848 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 83992576 unmapped: 4046848 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.077636719s of 12.092287064s, submitted: 3
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fca7b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84000768 unmapped: 4038656 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 966229 data_alloc: 218103808 data_used: 135168
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84000768 unmapped: 4038656 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fca7b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84000768 unmapped: 4038656 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84008960 unmapped: 4030464 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84008960 unmapped: 4030464 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 ms_handle_reset con 0x55d023758800 session 0x55d0266bf680
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 ms_handle_reset con 0x55d023759c00 session 0x55d0269c25a0
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84017152 unmapped: 4022272 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fca7b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 966097 data_alloc: 218103808 data_used: 135168
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84017152 unmapped: 4022272 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84017152 unmapped: 4022272 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84025344 unmapped: 4014080 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84025344 unmapped: 4014080 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84025344 unmapped: 4014080 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 966097 data_alloc: 218103808 data_used: 135168
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fca7b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84033536 unmapped: 4005888 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84033536 unmapped: 4005888 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 3997696 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fca7b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84041728 unmapped: 3997696 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84049920 unmapped: 3989504 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 966097 data_alloc: 218103808 data_used: 135168
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 16.578451157s of 16.585638046s, submitted: 2
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 3981312 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84058112 unmapped: 3981312 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fca7b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 3973120 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84066304 unmapped: 3973120 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 ms_handle_reset con 0x55d022c0b400 session 0x55d0252765a0
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 ms_handle_reset con 0x55d02657e000 session 0x55d025246960
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84074496 unmapped: 3964928 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 966229 data_alloc: 218103808 data_used: 135168
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fca7b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84082688 unmapped: 3956736 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84082688 unmapped: 3956736 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84090880 unmapped: 3948544 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84090880 unmapped: 3948544 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84090880 unmapped: 3948544 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 967741 data_alloc: 218103808 data_used: 135168
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fca7b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84099072 unmapped: 3940352 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fca7b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84099072 unmapped: 3940352 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fca7b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84107264 unmapped: 3932160 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84107264 unmapped: 3932160 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84115456 unmapped: 3923968 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.526356697s of 14.533052444s, submitted: 2
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 967873 data_alloc: 218103808 data_used: 135168
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84123648 unmapped: 3915776 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fca7b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84123648 unmapped: 3915776 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84123648 unmapped: 3915776 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84131840 unmapped: 3907584 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84131840 unmapped: 3907584 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 967150 data_alloc: 218103808 data_used: 135168
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84140032 unmapped: 3899392 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84140032 unmapped: 3899392 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fca7b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84148224 unmapped: 3891200 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84148224 unmapped: 3891200 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84148224 unmapped: 3891200 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 967150 data_alloc: 218103808 data_used: 135168
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84156416 unmapped: 3883008 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fca7b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x2fdf9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84156416 unmapped: 3883008 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.902188301s of 12.914635658s, submitted: 3
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84156416 unmapped: 3883008 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84164608 unmapped: 3874816 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 ms_handle_reset con 0x55d022c0b400 session 0x55d025adb680
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 ms_handle_reset con 0x55d0241ef800 session 0x55d0269c2d20
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84164608 unmapped: 3874816 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 966559 data_alloc: 218103808 data_used: 135168
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84189184 unmapped: 3850240 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84385792 unmapped: 3653632 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [0,0,0,0,0,0,1])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84484096 unmapped: 3555328 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84484096 unmapped: 3555328 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84484096 unmapped: 3555328 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 966427 data_alloc: 218103808 data_used: 135168
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84484096 unmapped: 3555328 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84484096 unmapped: 3555328 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84484096 unmapped: 3555328 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84484096 unmapped: 3555328 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84484096 unmapped: 3555328 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 966427 data_alloc: 218103808 data_used: 135168
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 ms_handle_reset con 0x55d0241eec00 session 0x55d026231860
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 ms_handle_reset con 0x55d02599d800 session 0x55d02571f2c0
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.128757477s of 12.625885010s, submitted: 317
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 3547136 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 3547136 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 3547136 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 3547136 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 3547136 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 966559 data_alloc: 218103808 data_used: 135168
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 3547136 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 3547136 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 3547136 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 3547136 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 3547136 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 966559 data_alloc: 218103808 data_used: 135168
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 3547136 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.739382744s of 10.744400978s, submitted: 1
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 3547136 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 3547136 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 3547136 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84492288 unmapped: 3547136 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 966691 data_alloc: 218103808 data_used: 135168
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84500480 unmapped: 3538944 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84500480 unmapped: 3538944 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84500480 unmapped: 3538944 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84500480 unmapped: 3538944 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84500480 unmapped: 3538944 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 969583 data_alloc: 218103808 data_used: 135168
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84500480 unmapped: 3538944 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84500480 unmapped: 3538944 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84500480 unmapped: 3538944 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84500480 unmapped: 3538944 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84500480 unmapped: 3538944 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 969583 data_alloc: 218103808 data_used: 135168
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84500480 unmapped: 3538944 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.483121872s of 15.498337746s, submitted: 4
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84508672 unmapped: 3530752 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84508672 unmapped: 3530752 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84508672 unmapped: 3530752 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84516864 unmapped: 3522560 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 969451 data_alloc: 218103808 data_used: 135168
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84516864 unmapped: 3522560 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84516864 unmapped: 3522560 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 3514368 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 3514368 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84525056 unmapped: 3514368 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 969451 data_alloc: 218103808 data_used: 135168
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84533248 unmapped: 3506176 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84533248 unmapped: 3506176 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84541440 unmapped: 3497984 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84541440 unmapped: 3497984 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84541440 unmapped: 3497984 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 ms_handle_reset con 0x55d024642000 session 0x55d02565e5a0
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 ms_handle_reset con 0x55d023759c00 session 0x55d024084780
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 969451 data_alloc: 218103808 data_used: 135168
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84541440 unmapped: 3497984 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84541440 unmapped: 3497984 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84541440 unmapped: 3497984 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84541440 unmapped: 3497984 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84541440 unmapped: 3497984 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 969451 data_alloc: 218103808 data_used: 135168
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84549632 unmapped: 3489792 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84549632 unmapped: 3489792 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84549632 unmapped: 3489792 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84549632 unmapped: 3489792 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84549632 unmapped: 3489792 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 969451 data_alloc: 218103808 data_used: 135168
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 24.292909622s of 24.296884537s, submitted: 1
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84557824 unmapped: 3481600 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [0,0,0,0,0,0,0,0,1])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84557824 unmapped: 3481600 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84557824 unmapped: 3481600 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84557824 unmapped: 3481600 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84557824 unmapped: 3481600 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 969583 data_alloc: 218103808 data_used: 135168
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84557824 unmapped: 3481600 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84557824 unmapped: 3481600 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84557824 unmapped: 3481600 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84557824 unmapped: 3481600 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84566016 unmapped: 3473408 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 971095 data_alloc: 218103808 data_used: 135168
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84566016 unmapped: 3473408 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84566016 unmapped: 3473408 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84566016 unmapped: 3473408 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.107984543s of 12.569246292s, submitted: 3
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84566016 unmapped: 3473408 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84566016 unmapped: 3473408 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 969913 data_alloc: 218103808 data_used: 135168
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84557824 unmapped: 3481600 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84557824 unmapped: 3481600 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 ms_handle_reset con 0x55d023758800 session 0x55d025d54960
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84557824 unmapped: 3481600 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84557824 unmapped: 3481600 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84557824 unmapped: 3481600 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 969781 data_alloc: 218103808 data_used: 135168
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84557824 unmapped: 3481600 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84557824 unmapped: 3481600 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84557824 unmapped: 3481600 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84557824 unmapped: 3481600 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84557824 unmapped: 3481600 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 969781 data_alloc: 218103808 data_used: 135168
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84557824 unmapped: 3481600 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84557824 unmapped: 3481600 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84557824 unmapped: 3481600 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.769716263s of 14.783215523s, submitted: 2
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84566016 unmapped: 3473408 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84574208 unmapped: 3465216 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 969913 data_alloc: 218103808 data_used: 135168
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84574208 unmapped: 3465216 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84574208 unmapped: 3465216 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84574208 unmapped: 3465216 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84574208 unmapped: 3465216 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84574208 unmapped: 3465216 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 969913 data_alloc: 218103808 data_used: 135168
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84574208 unmapped: 3465216 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84574208 unmapped: 3465216 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84574208 unmapped: 3465216 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84574208 unmapped: 3465216 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84574208 unmapped: 3465216 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 969322 data_alloc: 218103808 data_used: 135168
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84574208 unmapped: 3465216 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84574208 unmapped: 3465216 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84574208 unmapped: 3465216 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84574208 unmapped: 3465216 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84574208 unmapped: 3465216 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 17.095451355s of 17.105062485s, submitted: 2
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 969190 data_alloc: 218103808 data_used: 135168
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84582400 unmapped: 3457024 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84582400 unmapped: 3457024 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84582400 unmapped: 3457024 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84582400 unmapped: 3457024 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84582400 unmapped: 3457024 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 969190 data_alloc: 218103808 data_used: 135168
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84582400 unmapped: 3457024 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84582400 unmapped: 3457024 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84582400 unmapped: 3457024 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84582400 unmapped: 3457024 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 ms_handle_reset con 0x55d022c0b400 session 0x55d02642b4a0
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 ms_handle_reset con 0x55d024642000 session 0x55d0269c2d20
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84582400 unmapped: 3457024 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 969190 data_alloc: 218103808 data_used: 135168
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84582400 unmapped: 3457024 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84582400 unmapped: 3457024 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84590592 unmapped: 3448832 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84590592 unmapped: 3448832 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84590592 unmapped: 3448832 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 969190 data_alloc: 218103808 data_used: 135168
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84598784 unmapped: 3440640 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84598784 unmapped: 3440640 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84598784 unmapped: 3440640 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84598784 unmapped: 3440640 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 19.799724579s of 19.803125381s, submitted: 1
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84606976 unmapped: 3432448 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 969322 data_alloc: 218103808 data_used: 135168
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84606976 unmapped: 3432448 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84606976 unmapped: 3432448 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84606976 unmapped: 3432448 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84606976 unmapped: 3432448 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84606976 unmapped: 3432448 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 970834 data_alloc: 218103808 data_used: 135168
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84606976 unmapped: 3432448 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84606976 unmapped: 3432448 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84606976 unmapped: 3432448 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84606976 unmapped: 3432448 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84606976 unmapped: 3432448 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 970834 data_alloc: 218103808 data_used: 135168
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84606976 unmapped: 3432448 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84606976 unmapped: 3432448 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84606976 unmapped: 3432448 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84606976 unmapped: 3432448 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84615168 unmapped: 3424256 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 970834 data_alloc: 218103808 data_used: 135168
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.368650436s of 15.730206490s, submitted: 2
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84615168 unmapped: 3424256 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84615168 unmapped: 3424256 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84615168 unmapped: 3424256 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84615168 unmapped: 3424256 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84615168 unmapped: 3424256 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 970702 data_alloc: 218103808 data_used: 135168
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 ms_handle_reset con 0x55d0241ee400 session 0x55d0269c3680
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 ms_handle_reset con 0x55d0232d9000 session 0x55d025aa0780
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84615168 unmapped: 3424256 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84615168 unmapped: 3424256 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84615168 unmapped: 3424256 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84615168 unmapped: 3424256 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84615168 unmapped: 3424256 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 970702 data_alloc: 218103808 data_used: 135168
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84615168 unmapped: 3424256 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84615168 unmapped: 3424256 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84615168 unmapped: 3424256 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84615168 unmapped: 3424256 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84615168 unmapped: 3424256 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 970702 data_alloc: 218103808 data_used: 135168
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84615168 unmapped: 3424256 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.969473839s of 15.973288536s, submitted: 1
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84615168 unmapped: 3424256 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3416064 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3416064 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3416064 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 972346 data_alloc: 218103808 data_used: 135168
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3416064 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3416064 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3416064 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3416064 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3416064 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 972346 data_alloc: 218103808 data_used: 135168
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3416064 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3416064 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3416064 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3416064 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3416064 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 971755 data_alloc: 218103808 data_used: 135168
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3416064 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3416064 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3416064 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3416064 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 17.940380096s of 17.952461243s, submitted: 3
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3416064 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 971623 data_alloc: 218103808 data_used: 135168
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3416064 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3416064 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3416064 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84631552 unmapped: 3407872 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84631552 unmapped: 3407872 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 971623 data_alloc: 218103808 data_used: 135168
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84631552 unmapped: 3407872 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84631552 unmapped: 3407872 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84631552 unmapped: 3407872 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84631552 unmapped: 3407872 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84631552 unmapped: 3407872 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 971623 data_alloc: 218103808 data_used: 135168
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84631552 unmapped: 3407872 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84631552 unmapped: 3407872 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84631552 unmapped: 3407872 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84631552 unmapped: 3407872 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84631552 unmapped: 3407872 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 971623 data_alloc: 218103808 data_used: 135168
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84631552 unmapped: 3407872 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84639744 unmapped: 3399680 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84639744 unmapped: 3399680 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84639744 unmapped: 3399680 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84639744 unmapped: 3399680 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 971623 data_alloc: 218103808 data_used: 135168
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84639744 unmapped: 3399680 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84639744 unmapped: 3399680 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84639744 unmapped: 3399680 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84639744 unmapped: 3399680 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84647936 unmapped: 3391488 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 971623 data_alloc: 218103808 data_used: 135168
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84647936 unmapped: 3391488 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84647936 unmapped: 3391488 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84647936 unmapped: 3391488 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84647936 unmapped: 3391488 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84647936 unmapped: 3391488 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 971623 data_alloc: 218103808 data_used: 135168
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84647936 unmapped: 3391488 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84647936 unmapped: 3391488 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84647936 unmapped: 3391488 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84647936 unmapped: 3391488 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84647936 unmapped: 3391488 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 971623 data_alloc: 218103808 data_used: 135168
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84647936 unmapped: 3391488 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84647936 unmapped: 3391488 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84647936 unmapped: 3391488 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84647936 unmapped: 3391488 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84647936 unmapped: 3391488 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 971623 data_alloc: 218103808 data_used: 135168
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84647936 unmapped: 3391488 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84647936 unmapped: 3391488 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84647936 unmapped: 3391488 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84647936 unmapped: 3391488 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84647936 unmapped: 3391488 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 971623 data_alloc: 218103808 data_used: 135168
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84647936 unmapped: 3391488 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84647936 unmapped: 3391488 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84647936 unmapped: 3391488 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84647936 unmapped: 3391488 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84656128 unmapped: 3383296 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 971623 data_alloc: 218103808 data_used: 135168
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84656128 unmapped: 3383296 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84656128 unmapped: 3383296 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84656128 unmapped: 3383296 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84664320 unmapped: 3375104 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84664320 unmapped: 3375104 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 971623 data_alloc: 218103808 data_used: 135168
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84664320 unmapped: 3375104 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 ms_handle_reset con 0x55d0232d9000 session 0x55d02694fc20
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84664320 unmapped: 3375104 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84664320 unmapped: 3375104 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84664320 unmapped: 3375104 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84664320 unmapped: 3375104 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 971623 data_alloc: 218103808 data_used: 135168
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84664320 unmapped: 3375104 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84664320 unmapped: 3375104 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84664320 unmapped: 3375104 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 ms_handle_reset con 0x55d02599d800 session 0x55d02694e5a0
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 ms_handle_reset con 0x55d0241eec00 session 0x55d02642a960
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84664320 unmapped: 3375104 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84664320 unmapped: 3375104 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 971623 data_alloc: 218103808 data_used: 135168
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84664320 unmapped: 3375104 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84664320 unmapped: 3375104 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 68.266105652s of 68.270225525s, submitted: 1
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84664320 unmapped: 3375104 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84664320 unmapped: 3375104 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84664320 unmapped: 3375104 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 971755 data_alloc: 218103808 data_used: 135168
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84672512 unmapped: 3366912 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84672512 unmapped: 3366912 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84672512 unmapped: 3366912 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84680704 unmapped: 3358720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84688896 unmapped: 3350528 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 974911 data_alloc: 218103808 data_used: 135168
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84688896 unmapped: 3350528 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84688896 unmapped: 3350528 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84688896 unmapped: 3350528 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84688896 unmapped: 3350528 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84688896 unmapped: 3350528 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 974911 data_alloc: 218103808 data_used: 135168
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.691431999s of 13.773061752s, submitted: 4
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84688896 unmapped: 3350528 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84688896 unmapped: 3350528 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84688896 unmapped: 3350528 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84688896 unmapped: 3350528 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84688896 unmapped: 3350528 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 974188 data_alloc: 218103808 data_used: 135168
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84688896 unmapped: 3350528 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84688896 unmapped: 3350528 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84688896 unmapped: 3350528 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84688896 unmapped: 3350528 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84697088 unmapped: 3342336 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 973597 data_alloc: 218103808 data_used: 135168
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84705280 unmapped: 3334144 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84705280 unmapped: 3334144 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.385815620s of 11.396687508s, submitted: 3
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84705280 unmapped: 3334144 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84705280 unmapped: 3334144 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 ms_handle_reset con 0x55d023758800 session 0x55d024084780
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 ms_handle_reset con 0x55d0241ef800 session 0x55d02694ad20
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84705280 unmapped: 3334144 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 973465 data_alloc: 218103808 data_used: 135168
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84705280 unmapped: 3334144 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84705280 unmapped: 3334144 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84705280 unmapped: 3334144 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84705280 unmapped: 3334144 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84705280 unmapped: 3334144 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 973465 data_alloc: 218103808 data_used: 135168
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84705280 unmapped: 3334144 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84705280 unmapped: 3334144 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84705280 unmapped: 3334144 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84705280 unmapped: 3334144 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.788563728s of 12.791935921s, submitted: 1
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84705280 unmapped: 3334144 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 973597 data_alloc: 218103808 data_used: 135168
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84705280 unmapped: 3334144 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84705280 unmapped: 3334144 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84705280 unmapped: 3334144 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84713472 unmapped: 3325952 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84713472 unmapped: 3325952 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 975109 data_alloc: 218103808 data_used: 135168
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84713472 unmapped: 3325952 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84721664 unmapped: 3317760 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84721664 unmapped: 3317760 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84721664 unmapped: 3317760 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84721664 unmapped: 3317760 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 975109 data_alloc: 218103808 data_used: 135168
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84721664 unmapped: 3317760 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84721664 unmapped: 3317760 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84721664 unmapped: 3317760 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 ms_handle_reset con 0x55d022cbd400 session 0x55d0252c1e00
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.101604462s of 14.110105515s, submitted: 2
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84729856 unmapped: 3309568 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84729856 unmapped: 3309568 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 974977 data_alloc: 218103808 data_used: 135168
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84729856 unmapped: 3309568 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84729856 unmapped: 3309568 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84729856 unmapped: 3309568 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84729856 unmapped: 3309568 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84729856 unmapped: 3309568 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 974977 data_alloc: 218103808 data_used: 135168
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84729856 unmapped: 3309568 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84729856 unmapped: 3309568 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84729856 unmapped: 3309568 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84729856 unmapped: 3309568 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84746240 unmapped: 3293184 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 974977 data_alloc: 218103808 data_used: 135168
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84746240 unmapped: 3293184 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84746240 unmapped: 3293184 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84746240 unmapped: 3293184 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 ms_handle_reset con 0x55d0232d9000 session 0x55d0257d3c20
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84746240 unmapped: 3293184 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84746240 unmapped: 3293184 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 974977 data_alloc: 218103808 data_used: 135168
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84746240 unmapped: 3293184 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84746240 unmapped: 3293184 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84746240 unmapped: 3293184 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84746240 unmapped: 3293184 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84746240 unmapped: 3293184 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 974977 data_alloc: 218103808 data_used: 135168
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84746240 unmapped: 3293184 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84746240 unmapped: 3293184 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84746240 unmapped: 3293184 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84746240 unmapped: 3293184 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 25.731632233s of 25.737331390s, submitted: 1
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84746240 unmapped: 3293184 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 975109 data_alloc: 218103808 data_used: 135168
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84746240 unmapped: 3293184 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84746240 unmapped: 3293184 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84754432 unmapped: 3284992 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84754432 unmapped: 3284992 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84754432 unmapped: 3284992 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 976621 data_alloc: 218103808 data_used: 135168
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84754432 unmapped: 3284992 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84762624 unmapped: 3276800 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84762624 unmapped: 3276800 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84762624 unmapped: 3276800 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84762624 unmapped: 3276800 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 976030 data_alloc: 218103808 data_used: 135168
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84762624 unmapped: 3276800 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84762624 unmapped: 3276800 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84762624 unmapped: 3276800 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84762624 unmapped: 3276800 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.129507065s of 15.140318871s, submitted: 3
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84762624 unmapped: 3276800 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 975898 data_alloc: 218103808 data_used: 135168
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84762624 unmapped: 3276800 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84762624 unmapped: 3276800 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84762624 unmapped: 3276800 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84762624 unmapped: 3276800 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84762624 unmapped: 3276800 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 975898 data_alloc: 218103808 data_used: 135168
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84762624 unmapped: 3276800 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84762624 unmapped: 3276800 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84762624 unmapped: 3276800 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84762624 unmapped: 3276800 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 975898 data_alloc: 218103808 data_used: 135168
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84770816 unmapped: 3268608 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84770816 unmapped: 3268608 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84770816 unmapped: 3268608 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84770816 unmapped: 3268608 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84779008 unmapped: 3260416 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 975898 data_alloc: 218103808 data_used: 135168
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84779008 unmapped: 3260416 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84779008 unmapped: 3260416 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84779008 unmapped: 3260416 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84779008 unmapped: 3260416 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84779008 unmapped: 3260416 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 975898 data_alloc: 218103808 data_used: 135168
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84779008 unmapped: 3260416 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84779008 unmapped: 3260416 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84779008 unmapped: 3260416 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84779008 unmapped: 3260416 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84779008 unmapped: 3260416 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 975898 data_alloc: 218103808 data_used: 135168
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84779008 unmapped: 3260416 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84779008 unmapped: 3260416 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84779008 unmapped: 3260416 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84779008 unmapped: 3260416 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84779008 unmapped: 3260416 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 975898 data_alloc: 218103808 data_used: 135168
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84779008 unmapped: 3260416 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84779008 unmapped: 3260416 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84779008 unmapped: 3260416 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84787200 unmapped: 3252224 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84787200 unmapped: 3252224 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 975898 data_alloc: 218103808 data_used: 135168
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84787200 unmapped: 3252224 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84787200 unmapped: 3252224 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84787200 unmapped: 3252224 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 ms_handle_reset con 0x55d0241ee400 session 0x55d0269c3680
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 ms_handle_reset con 0x55d022c0b400 session 0x55d026793c20
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84787200 unmapped: 3252224 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84787200 unmapped: 3252224 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 975898 data_alloc: 218103808 data_used: 135168
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84787200 unmapped: 3252224 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84787200 unmapped: 3252224 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84787200 unmapped: 3252224 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84787200 unmapped: 3252224 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84787200 unmapped: 3252224 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 975898 data_alloc: 218103808 data_used: 135168
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84787200 unmapped: 3252224 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84787200 unmapped: 3252224 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84787200 unmapped: 3252224 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84787200 unmapped: 3252224 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 49.404827118s of 49.407947540s, submitted: 1
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84787200 unmapped: 3252224 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 976030 data_alloc: 218103808 data_used: 135168
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84787200 unmapped: 3252224 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84787200 unmapped: 3252224 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84787200 unmapped: 3252224 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84787200 unmapped: 3252224 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84787200 unmapped: 3252224 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 976030 data_alloc: 218103808 data_used: 135168
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84787200 unmapped: 3252224 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84795392 unmapped: 3244032 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84803584 unmapped: 3235840 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84803584 unmapped: 3235840 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84803584 unmapped: 3235840 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977542 data_alloc: 218103808 data_used: 135168
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84803584 unmapped: 3235840 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84803584 unmapped: 3235840 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.359548569s of 12.369632721s, submitted: 2
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84811776 unmapped: 3227648 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84811776 unmapped: 3227648 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84828160 unmapped: 3211264 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 976951 data_alloc: 218103808 data_used: 135168
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84828160 unmapped: 3211264 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84828160 unmapped: 3211264 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84828160 unmapped: 3211264 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84828160 unmapped: 3211264 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84828160 unmapped: 3211264 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 976819 data_alloc: 218103808 data_used: 135168
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84828160 unmapped: 3211264 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84828160 unmapped: 3211264 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84828160 unmapped: 3211264 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84836352 unmapped: 3203072 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84836352 unmapped: 3203072 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 976819 data_alloc: 218103808 data_used: 135168
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84836352 unmapped: 3203072 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84836352 unmapped: 3203072 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84844544 unmapped: 3194880 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84844544 unmapped: 3194880 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84844544 unmapped: 3194880 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 976819 data_alloc: 218103808 data_used: 135168
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84844544 unmapped: 3194880 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84844544 unmapped: 3194880 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84844544 unmapped: 3194880 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84844544 unmapped: 3194880 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84844544 unmapped: 3194880 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 976819 data_alloc: 218103808 data_used: 135168
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84844544 unmapped: 3194880 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84844544 unmapped: 3194880 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84844544 unmapped: 3194880 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84844544 unmapped: 3194880 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84844544 unmapped: 3194880 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 976819 data_alloc: 218103808 data_used: 135168
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84844544 unmapped: 3194880 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84844544 unmapped: 3194880 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84844544 unmapped: 3194880 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84844544 unmapped: 3194880 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread fragmentation_score=0.000029 took=0.000054s
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84844544 unmapped: 3194880 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 976819 data_alloc: 218103808 data_used: 135168
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84844544 unmapped: 3194880 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84844544 unmapped: 3194880 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84844544 unmapped: 3194880 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84844544 unmapped: 3194880 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84844544 unmapped: 3194880 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 976819 data_alloc: 218103808 data_used: 135168
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84844544 unmapped: 3194880 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84844544 unmapped: 3194880 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84844544 unmapped: 3194880 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84844544 unmapped: 3194880 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84852736 unmapped: 3186688 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 976819 data_alloc: 218103808 data_used: 135168
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84860928 unmapped: 3178496 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84860928 unmapped: 3178496 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84860928 unmapped: 3178496 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84860928 unmapped: 3178496 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84860928 unmapped: 3178496 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 976819 data_alloc: 218103808 data_used: 135168
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84860928 unmapped: 3178496 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84860928 unmapped: 3178496 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84860928 unmapped: 3178496 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84860928 unmapped: 3178496 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84860928 unmapped: 3178496 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 976819 data_alloc: 218103808 data_used: 135168
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84860928 unmapped: 3178496 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84860928 unmapped: 3178496 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84860928 unmapped: 3178496 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84860928 unmapped: 3178496 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84860928 unmapped: 3178496 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 976819 data_alloc: 218103808 data_used: 135168
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84860928 unmapped: 3178496 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84860928 unmapped: 3178496 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84860928 unmapped: 3178496 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84860928 unmapped: 3178496 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84860928 unmapped: 3178496 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 976819 data_alloc: 218103808 data_used: 135168
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84860928 unmapped: 3178496 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84860928 unmapped: 3178496 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84860928 unmapped: 3178496 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84860928 unmapped: 3178496 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84860928 unmapped: 3178496 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 976819 data_alloc: 218103808 data_used: 135168
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84860928 unmapped: 3178496 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84860928 unmapped: 3178496 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84860928 unmapped: 3178496 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84860928 unmapped: 3178496 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84860928 unmapped: 3178496 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 976819 data_alloc: 218103808 data_used: 135168
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84860928 unmapped: 3178496 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84860928 unmapped: 3178496 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84860928 unmapped: 3178496 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84860928 unmapped: 3178496 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84860928 unmapped: 3178496 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 976819 data_alloc: 218103808 data_used: 135168
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84860928 unmapped: 3178496 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84860928 unmapped: 3178496 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84860928 unmapped: 3178496 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84860928 unmapped: 3178496 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84869120 unmapped: 3170304 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 976819 data_alloc: 218103808 data_used: 135168
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84869120 unmapped: 3170304 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84869120 unmapped: 3170304 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84869120 unmapped: 3170304 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84869120 unmapped: 3170304 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84877312 unmapped: 3162112 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 ms_handle_reset con 0x55d0268c0c00 session 0x55d0245f5680
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 ms_handle_reset con 0x55d025992400 session 0x55d02553dc20
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 976819 data_alloc: 218103808 data_used: 135168
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84877312 unmapped: 3162112 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84877312 unmapped: 3162112 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84877312 unmapped: 3162112 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84877312 unmapped: 3162112 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84877312 unmapped: 3162112 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 976819 data_alloc: 218103808 data_used: 135168
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84877312 unmapped: 3162112 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84877312 unmapped: 3162112 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84877312 unmapped: 3162112 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84877312 unmapped: 3162112 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84877312 unmapped: 3162112 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 976819 data_alloc: 218103808 data_used: 135168
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84877312 unmapped: 3162112 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 99.858451843s of 99.866027832s, submitted: 2
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84885504 unmapped: 3153920 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84885504 unmapped: 3153920 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84893696 unmapped: 3145728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84893696 unmapped: 3145728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 978463 data_alloc: 218103808 data_used: 135168
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84893696 unmapped: 3145728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84893696 unmapped: 3145728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84893696 unmapped: 3145728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84893696 unmapped: 3145728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84893696 unmapped: 3145728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979975 data_alloc: 218103808 data_used: 135168
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84893696 unmapped: 3145728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84893696 unmapped: 3145728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84893696 unmapped: 3145728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.084491730s of 12.096027374s, submitted: 3
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84893696 unmapped: 3145728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84893696 unmapped: 3145728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979384 data_alloc: 218103808 data_used: 135168
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84893696 unmapped: 3145728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84893696 unmapped: 3145728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84893696 unmapped: 3145728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84893696 unmapped: 3145728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84893696 unmapped: 3145728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979252 data_alloc: 218103808 data_used: 135168
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84893696 unmapped: 3145728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84893696 unmapped: 3145728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84893696 unmapped: 3145728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84893696 unmapped: 3145728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84893696 unmapped: 3145728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979252 data_alloc: 218103808 data_used: 135168
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84893696 unmapped: 3145728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84893696 unmapped: 3145728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84893696 unmapped: 3145728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.25784 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84893696 unmapped: 3145728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84893696 unmapped: 3145728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979252 data_alloc: 218103808 data_used: 135168
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84893696 unmapped: 3145728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1200.1 total, 600.0 interval#012Cumulative writes: 8728 writes, 34K keys, 8728 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s#012Cumulative WAL: 8728 writes, 1876 syncs, 4.65 writes per sync, written: 0.02 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 704 writes, 1104 keys, 704 commit groups, 1.0 writes per commit group, ingest: 0.36 MB, 0.00 MB/s#012Interval WAL: 704 writes, 332 syncs, 2.12 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55d021e81350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.8e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55d021e81350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.8e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84901888 unmapped: 3137536 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84901888 unmapped: 3137536 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84901888 unmapped: 3137536 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84901888 unmapped: 3137536 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979252 data_alloc: 218103808 data_used: 135168
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84901888 unmapped: 3137536 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84901888 unmapped: 3137536 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84901888 unmapped: 3137536 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84901888 unmapped: 3137536 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84901888 unmapped: 3137536 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979252 data_alloc: 218103808 data_used: 135168
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84901888 unmapped: 3137536 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84901888 unmapped: 3137536 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84901888 unmapped: 3137536 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84901888 unmapped: 3137536 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84901888 unmapped: 3137536 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979252 data_alloc: 218103808 data_used: 135168
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84901888 unmapped: 3137536 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84901888 unmapped: 3137536 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84901888 unmapped: 3137536 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84901888 unmapped: 3137536 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84901888 unmapped: 3137536 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979252 data_alloc: 218103808 data_used: 135168
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84901888 unmapped: 3137536 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84901888 unmapped: 3137536 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84901888 unmapped: 3137536 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84901888 unmapped: 3137536 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84901888 unmapped: 3137536 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979252 data_alloc: 218103808 data_used: 135168
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84901888 unmapped: 3137536 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84901888 unmapped: 3137536 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84901888 unmapped: 3137536 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84901888 unmapped: 3137536 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84901888 unmapped: 3137536 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979252 data_alloc: 218103808 data_used: 135168
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84901888 unmapped: 3137536 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84901888 unmapped: 3137536 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84901888 unmapped: 3137536 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84901888 unmapped: 3137536 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84910080 unmapped: 3129344 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979252 data_alloc: 218103808 data_used: 135168
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84910080 unmapped: 3129344 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84910080 unmapped: 3129344 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84910080 unmapped: 3129344 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84910080 unmapped: 3129344 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84910080 unmapped: 3129344 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979252 data_alloc: 218103808 data_used: 135168
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84910080 unmapped: 3129344 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84910080 unmapped: 3129344 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84910080 unmapped: 3129344 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84910080 unmapped: 3129344 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84910080 unmapped: 3129344 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979252 data_alloc: 218103808 data_used: 135168
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84910080 unmapped: 3129344 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84910080 unmapped: 3129344 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84910080 unmapped: 3129344 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84910080 unmapped: 3129344 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84910080 unmapped: 3129344 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979252 data_alloc: 218103808 data_used: 135168
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84910080 unmapped: 3129344 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84910080 unmapped: 3129344 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84910080 unmapped: 3129344 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84910080 unmapped: 3129344 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84910080 unmapped: 3129344 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979252 data_alloc: 218103808 data_used: 135168
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84910080 unmapped: 3129344 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84910080 unmapped: 3129344 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84910080 unmapped: 3129344 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84926464 unmapped: 3112960 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84926464 unmapped: 3112960 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979252 data_alloc: 218103808 data_used: 135168
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84926464 unmapped: 3112960 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84926464 unmapped: 3112960 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84926464 unmapped: 3112960 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84926464 unmapped: 3112960 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84926464 unmapped: 3112960 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979252 data_alloc: 218103808 data_used: 135168
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84926464 unmapped: 3112960 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84926464 unmapped: 3112960 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84926464 unmapped: 3112960 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84926464 unmapped: 3112960 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84926464 unmapped: 3112960 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979252 data_alloc: 218103808 data_used: 135168
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84926464 unmapped: 3112960 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84926464 unmapped: 3112960 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84926464 unmapped: 3112960 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84926464 unmapped: 3112960 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 91.027595520s of 91.035415649s, submitted: 2
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [1,1,0,1])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84951040 unmapped: 3088384 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979252 data_alloc: 218103808 data_used: 135168
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 2826240 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 85303296 unmapped: 2736128 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 85303296 unmapped: 2736128 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 85303296 unmapped: 2736128 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 85303296 unmapped: 2736128 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979252 data_alloc: 218103808 data_used: 135168
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 85303296 unmapped: 2736128 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 85303296 unmapped: 2736128 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 85303296 unmapped: 2736128 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 85303296 unmapped: 2736128 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 85303296 unmapped: 2736128 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979252 data_alloc: 218103808 data_used: 135168
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 85303296 unmapped: 2736128 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 85303296 unmapped: 2736128 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 85303296 unmapped: 2736128 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 85303296 unmapped: 2736128 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 85303296 unmapped: 2736128 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979252 data_alloc: 218103808 data_used: 135168
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 85303296 unmapped: 2736128 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 85303296 unmapped: 2736128 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 85303296 unmapped: 2736128 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 85303296 unmapped: 2736128 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 85303296 unmapped: 2736128 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979252 data_alloc: 218103808 data_used: 135168
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 85303296 unmapped: 2736128 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 85303296 unmapped: 2736128 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 85303296 unmapped: 2736128 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 85303296 unmapped: 2736128 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 85303296 unmapped: 2736128 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979252 data_alloc: 218103808 data_used: 135168
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 85303296 unmapped: 2736128 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 85303296 unmapped: 2736128 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 85303296 unmapped: 2736128 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 85303296 unmapped: 2736128 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 85303296 unmapped: 2736128 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 ms_handle_reset con 0x55d02599d800 session 0x55d025277860
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979252 data_alloc: 218103808 data_used: 135168
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 85303296 unmapped: 2736128 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 85311488 unmapped: 2727936 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 ms_handle_reset con 0x55d022c0b400 session 0x55d02642b860
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 ms_handle_reset con 0x55d025992400 session 0x55d025d523c0
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 85311488 unmapped: 2727936 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 85311488 unmapped: 2727936 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 85311488 unmapped: 2727936 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979252 data_alloc: 218103808 data_used: 135168
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 85319680 unmapped: 2719744 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 85319680 unmapped: 2719744 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 85319680 unmapped: 2719744 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 85319680 unmapped: 2719744 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 85319680 unmapped: 2719744 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979252 data_alloc: 218103808 data_used: 135168
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 85319680 unmapped: 2719744 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 41.413551331s of 42.533706665s, submitted: 343
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 85319680 unmapped: 2719744 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 85319680 unmapped: 2719744 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 85319680 unmapped: 2719744 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 85336064 unmapped: 2703360 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979516 data_alloc: 218103808 data_used: 135168
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 85336064 unmapped: 2703360 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 85336064 unmapped: 2703360 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 85336064 unmapped: 2703360 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 85336064 unmapped: 2703360 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 85336064 unmapped: 2703360 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 981028 data_alloc: 218103808 data_used: 135168
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 85336064 unmapped: 2703360 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 85336064 unmapped: 2703360 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.067756653s of 11.079182625s, submitted: 3
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 85336064 unmapped: 2703360 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 85336064 unmapped: 2703360 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 85336064 unmapped: 2703360 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979846 data_alloc: 218103808 data_used: 135168
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 85336064 unmapped: 2703360 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 85336064 unmapped: 2703360 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 85336064 unmapped: 2703360 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 85336064 unmapped: 2703360 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 85336064 unmapped: 2703360 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979714 data_alloc: 218103808 data_used: 135168
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 85336064 unmapped: 2703360 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 85336064 unmapped: 2703360 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 85336064 unmapped: 2703360 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 137 handle_osd_map epochs [138,138], i have 137, src has [1,138]
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.885664940s of 10.899056435s, submitted: 4
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 85336064 unmapped: 2703360 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 138 handle_osd_map epochs [138,139], i have 138, src has [1,139]
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 139 heartbeat osd_stat(store_statfs(0x4fc667000/0x0/0x4ffc00000, data 0xf49d0/0x1a4000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 85360640 unmapped: 2678784 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 139 handle_osd_map epochs [140,140], i have 139, src has [1,140]
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 140 ms_handle_reset con 0x55d025d6a400 session 0x55d02571e1e0
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1020537 data_alloc: 218103808 data_used: 135168
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 85442560 unmapped: 11911168 heap: 97353728 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 85557248 unmapped: 20193280 heap: 105750528 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 140 ms_handle_reset con 0x55d026947400 session 0x55d0244a2000
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 140 handle_osd_map epochs [141,141], i have 140, src has [1,141]
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 85557248 unmapped: 20193280 heap: 105750528 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 85557248 unmapped: 20193280 heap: 105750528 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fb57d000/0x0/0x4ffc00000, data 0x11dad20/0x128d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 85565440 unmapped: 20185088 heap: 105750528 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1107019 data_alloc: 218103808 data_used: 139264
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 85565440 unmapped: 20185088 heap: 105750528 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 85565440 unmapped: 20185088 heap: 105750528 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 141 handle_osd_map epochs [141,142], i have 141, src has [1,142]
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 85581824 unmapped: 20168704 heap: 105750528 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 85581824 unmapped: 20168704 heap: 105750528 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 85581824 unmapped: 20168704 heap: 105750528 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fb57b000/0x0/0x4ffc00000, data 0x11dccf2/0x1290000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1109181 data_alloc: 218103808 data_used: 139264
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 85581824 unmapped: 20168704 heap: 105750528 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 85581824 unmapped: 20168704 heap: 105750528 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 85581824 unmapped: 20168704 heap: 105750528 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 85581824 unmapped: 20168704 heap: 105750528 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fb57b000/0x0/0x4ffc00000, data 0x11dccf2/0x1290000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 85581824 unmapped: 20168704 heap: 105750528 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1109181 data_alloc: 218103808 data_used: 139264
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 85581824 unmapped: 20168704 heap: 105750528 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 85581824 unmapped: 20168704 heap: 105750528 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 85581824 unmapped: 20168704 heap: 105750528 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fb57b000/0x0/0x4ffc00000, data 0x11dccf2/0x1290000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 85590016 unmapped: 20160512 heap: 105750528 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 85590016 unmapped: 20160512 heap: 105750528 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1109181 data_alloc: 218103808 data_used: 139264
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 85590016 unmapped: 20160512 heap: 105750528 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fb57b000/0x0/0x4ffc00000, data 0x11dccf2/0x1290000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 85590016 unmapped: 20160512 heap: 105750528 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 85598208 unmapped: 20152320 heap: 105750528 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 85598208 unmapped: 20152320 heap: 105750528 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 85598208 unmapped: 20152320 heap: 105750528 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1109181 data_alloc: 218103808 data_used: 139264
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fb57b000/0x0/0x4ffc00000, data 0x11dccf2/0x1290000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 85598208 unmapped: 20152320 heap: 105750528 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fb57b000/0x0/0x4ffc00000, data 0x11dccf2/0x1290000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 85598208 unmapped: 20152320 heap: 105750528 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 85598208 unmapped: 20152320 heap: 105750528 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fb57b000/0x0/0x4ffc00000, data 0x11dccf2/0x1290000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 85598208 unmapped: 20152320 heap: 105750528 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 85598208 unmapped: 20152320 heap: 105750528 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1109181 data_alloc: 218103808 data_used: 139264
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 85598208 unmapped: 20152320 heap: 105750528 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 85598208 unmapped: 20152320 heap: 105750528 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 85598208 unmapped: 20152320 heap: 105750528 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 85598208 unmapped: 20152320 heap: 105750528 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fb57b000/0x0/0x4ffc00000, data 0x11dccf2/0x1290000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 85598208 unmapped: 20152320 heap: 105750528 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1109181 data_alloc: 218103808 data_used: 139264
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 85598208 unmapped: 20152320 heap: 105750528 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 85598208 unmapped: 20152320 heap: 105750528 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 85598208 unmapped: 20152320 heap: 105750528 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 85598208 unmapped: 20152320 heap: 105750528 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 85598208 unmapped: 20152320 heap: 105750528 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fb57b000/0x0/0x4ffc00000, data 0x11dccf2/0x1290000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1109181 data_alloc: 218103808 data_used: 139264
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 85598208 unmapped: 20152320 heap: 105750528 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 85598208 unmapped: 20152320 heap: 105750528 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 43.782966614s of 43.987041473s, submitted: 44
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 142 ms_handle_reset con 0x55d022c0b400 session 0x55d025d52960
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 142 ms_handle_reset con 0x55d025992400 session 0x55d0251dc3c0
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 90136576 unmapped: 15613952 heap: 105750528 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fb57c000/0x0/0x4ffc00000, data 0x11dccf2/0x1290000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 90136576 unmapped: 15613952 heap: 105750528 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 142 handle_osd_map epochs [143,143], i have 142, src has [1,143]
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 90152960 unmapped: 15597568 heap: 105750528 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 143 handle_osd_map epochs [143,144], i have 143, src has [1,144]
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 144 ms_handle_reset con 0x55d02599d800 session 0x55d0239ba000
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 144 ms_handle_reset con 0x55d025d6a400 session 0x55d0245f54a0
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 144 ms_handle_reset con 0x55d026946000 session 0x55d0257d2f00
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1136514 data_alloc: 218103808 data_used: 4792320
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 144 ms_handle_reset con 0x55d022c0b400 session 0x55d025aa0d20
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 144 ms_handle_reset con 0x55d025992400 session 0x55d025ada5a0
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 91324416 unmapped: 14426112 heap: 105750528 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 144 ms_handle_reset con 0x55d02599d800 session 0x55d02642a5a0
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 91324416 unmapped: 14426112 heap: 105750528 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 91324416 unmapped: 14426112 heap: 105750528 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fb4ed000/0x0/0x4ffc00000, data 0x1268f1e/0x131e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 144 ms_handle_reset con 0x55d025d6a400 session 0x55d025d52000
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 91324416 unmapped: 14426112 heap: 105750528 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 144 ms_handle_reset con 0x55d02678bc00 session 0x55d0269c30e0
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 144 ms_handle_reset con 0x55d0268c0c00 session 0x55d02595c960
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 144 ms_handle_reset con 0x55d0232d9000 session 0x55d026376f00
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 144 ms_handle_reset con 0x55d022c0b400 session 0x55d0267d10e0
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 91127808 unmapped: 14622720 heap: 105750528 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1138521 data_alloc: 218103808 data_used: 4792320
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 91152384 unmapped: 14598144 heap: 105750528 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 91947008 unmapped: 13803520 heap: 105750528 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 91947008 unmapped: 13803520 heap: 105750528 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 144 handle_osd_map epochs [145,145], i have 144, src has [1,145]
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.072201729s of 11.222368240s, submitted: 37
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4fb4ed000/0x0/0x4ffc00000, data 0x1268f2e/0x131f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 91963392 unmapped: 13787136 heap: 105750528 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 91963392 unmapped: 13787136 heap: 105750528 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4fb4e9000/0x0/0x4ffc00000, data 0x126af00/0x1322000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1146375 data_alloc: 218103808 data_used: 5349376
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 91963392 unmapped: 13787136 heap: 105750528 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 91963392 unmapped: 13787136 heap: 105750528 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 91963392 unmapped: 13787136 heap: 105750528 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 91963392 unmapped: 13787136 heap: 105750528 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4fb4e9000/0x0/0x4ffc00000, data 0x126af00/0x1322000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 91963392 unmapped: 13787136 heap: 105750528 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1146507 data_alloc: 218103808 data_used: 5349376
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 92299264 unmapped: 13451264 heap: 105750528 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4faf6e000/0x0/0x4ffc00000, data 0x17e0f00/0x1898000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 95428608 unmapped: 10321920 heap: 105750528 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4faf66000/0x0/0x4ffc00000, data 0x17e6f00/0x189e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 98402304 unmapped: 7348224 heap: 105750528 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 98402304 unmapped: 7348224 heap: 105750528 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 98402304 unmapped: 7348224 heap: 105750528 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1195887 data_alloc: 218103808 data_used: 5496832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 98402304 unmapped: 7348224 heap: 105750528 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9db2000/0x0/0x4ffc00000, data 0x17f4f00/0x18ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 98402304 unmapped: 7348224 heap: 105750528 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9db2000/0x0/0x4ffc00000, data 0x17f4f00/0x18ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.400364876s of 13.582426071s, submitted: 83
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 98402304 unmapped: 7348224 heap: 105750528 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 98402304 unmapped: 7348224 heap: 105750528 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 98402304 unmapped: 7348224 heap: 105750528 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1197399 data_alloc: 218103808 data_used: 5496832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 98402304 unmapped: 7348224 heap: 105750528 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 98402304 unmapped: 7348224 heap: 105750528 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9db2000/0x0/0x4ffc00000, data 0x17f4f00/0x18ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 98402304 unmapped: 7348224 heap: 105750528 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 97558528 unmapped: 8192000 heap: 105750528 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9dc0000/0x0/0x4ffc00000, data 0x17f4f00/0x18ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 97558528 unmapped: 8192000 heap: 105750528 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9dc0000/0x0/0x4ffc00000, data 0x17f4f00/0x18ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1189384 data_alloc: 218103808 data_used: 5496832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 97558528 unmapped: 8192000 heap: 105750528 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9dc0000/0x0/0x4ffc00000, data 0x17f4f00/0x18ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 97558528 unmapped: 8192000 heap: 105750528 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9dc0000/0x0/0x4ffc00000, data 0x17f4f00/0x18ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 97558528 unmapped: 8192000 heap: 105750528 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 97558528 unmapped: 8192000 heap: 105750528 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9dc0000/0x0/0x4ffc00000, data 0x17f4f00/0x18ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 97558528 unmapped: 8192000 heap: 105750528 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1189252 data_alloc: 218103808 data_used: 5496832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 97558528 unmapped: 8192000 heap: 105750528 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 97558528 unmapped: 8192000 heap: 105750528 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 97558528 unmapped: 8192000 heap: 105750528 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9dc0000/0x0/0x4ffc00000, data 0x17f4f00/0x18ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d0263d7c00 session 0x55d02694bc20
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d0263d7800 session 0x55d02694e5a0
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d022c0b400 session 0x55d0257d2000
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 16.670677185s of 16.765102386s, submitted: 3
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d0232d9000 session 0x55d02396e780
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 97566720 unmapped: 8183808 heap: 105750528 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d0263d7c00 session 0x55d025d554a0
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d0268c0c00 session 0x55d025aa01e0
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 97828864 unmapped: 9543680 heap: 107372544 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d0263d7400 session 0x55d02694b860
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d022c0b400 session 0x55d02694b2c0
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1233037 data_alloc: 218103808 data_used: 5496832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 97837056 unmapped: 9535488 heap: 107372544 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 97837056 unmapped: 9535488 heap: 107372544 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 97837056 unmapped: 9535488 heap: 107372544 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9821000/0x0/0x4ffc00000, data 0x1d92f62/0x1e4b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9821000/0x0/0x4ffc00000, data 0x1d92f62/0x1e4b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 97902592 unmapped: 9469952 heap: 107372544 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 97902592 unmapped: 9469952 heap: 107372544 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1233037 data_alloc: 218103808 data_used: 5496832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 97910784 unmapped: 9461760 heap: 107372544 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 97910784 unmapped: 9461760 heap: 107372544 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d0232d9000 session 0x55d0252c0b40
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 97976320 unmapped: 9396224 heap: 107372544 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f97fd000/0x0/0x4ffc00000, data 0x1db6f62/0x1e6f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 97976320 unmapped: 9396224 heap: 107372544 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f97fd000/0x0/0x4ffc00000, data 0x1db6f62/0x1e6f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 97976320 unmapped: 9396224 heap: 107372544 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f97fd000/0x0/0x4ffc00000, data 0x1db6f62/0x1e6f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1267161 data_alloc: 234881024 data_used: 10264576
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 101949440 unmapped: 5423104 heap: 107372544 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f97fd000/0x0/0x4ffc00000, data 0x1db6f62/0x1e6f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 101949440 unmapped: 5423104 heap: 107372544 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f97fd000/0x0/0x4ffc00000, data 0x1db6f62/0x1e6f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 101982208 unmapped: 5390336 heap: 107372544 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f97fd000/0x0/0x4ffc00000, data 0x1db6f62/0x1e6f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 101982208 unmapped: 5390336 heap: 107372544 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 101982208 unmapped: 5390336 heap: 107372544 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f97fd000/0x0/0x4ffc00000, data 0x1db6f62/0x1e6f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1267161 data_alloc: 234881024 data_used: 10264576
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 101982208 unmapped: 5390336 heap: 107372544 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f97fd000/0x0/0x4ffc00000, data 0x1db6f62/0x1e6f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 102014976 unmapped: 5357568 heap: 107372544 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 102014976 unmapped: 5357568 heap: 107372544 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 102014976 unmapped: 5357568 heap: 107372544 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f97fd000/0x0/0x4ffc00000, data 0x1db6f62/0x1e6f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 102047744 unmapped: 5324800 heap: 107372544 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 21.693861008s of 21.804162979s, submitted: 36
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1306909 data_alloc: 234881024 data_used: 10526720
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 104513536 unmapped: 3915776 heap: 108429312 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 104054784 unmapped: 4374528 heap: 108429312 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f935c000/0x0/0x4ffc00000, data 0x2256f62/0x230f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 104054784 unmapped: 4374528 heap: 108429312 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 104054784 unmapped: 4374528 heap: 108429312 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d023758800 session 0x55d02694ed20
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d024642000 session 0x55d0267961e0
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 104054784 unmapped: 4374528 heap: 108429312 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f935c000/0x0/0x4ffc00000, data 0x2256f62/0x230f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1318345 data_alloc: 234881024 data_used: 11190272
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 104054784 unmapped: 4374528 heap: 108429312 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 104054784 unmapped: 4374528 heap: 108429312 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 104054784 unmapped: 4374528 heap: 108429312 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f935c000/0x0/0x4ffc00000, data 0x2256f62/0x230f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 104054784 unmapped: 4374528 heap: 108429312 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 104054784 unmapped: 4374528 heap: 108429312 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.011808395s of 10.120314598s, submitted: 48
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d0263d7c00 session 0x55d0239b81e0
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d0268c0c00 session 0x55d0245f4780
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1315489 data_alloc: 234881024 data_used: 11194368
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 100491264 unmapped: 7938048 heap: 108429312 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d022c0b400 session 0x55d026230f00
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 100491264 unmapped: 7938048 heap: 108429312 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9dbf000/0x0/0x4ffc00000, data 0x17f4f00/0x18ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 100499456 unmapped: 7929856 heap: 108429312 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9dbf000/0x0/0x4ffc00000, data 0x17f4f00/0x18ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9dbf000/0x0/0x4ffc00000, data 0x17f4f00/0x18ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 100499456 unmapped: 7929856 heap: 108429312 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 100499456 unmapped: 7929856 heap: 108429312 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1195521 data_alloc: 218103808 data_used: 5496832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 100499456 unmapped: 7929856 heap: 108429312 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 100499456 unmapped: 7929856 heap: 108429312 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9dbf000/0x0/0x4ffc00000, data 0x17f4f00/0x18ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d025992400 session 0x55d02642a3c0
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d02599d800 session 0x55d026792f00
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 100483072 unmapped: 7946240 heap: 108429312 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d023758800 session 0x55d02642a960
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9dc0000/0x0/0x4ffc00000, data 0x17f4f00/0x18ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 99713024 unmapped: 8716288 heap: 108429312 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4fa3d3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 99713024 unmapped: 8716288 heap: 108429312 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1142380 data_alloc: 218103808 data_used: 4792320
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 99713024 unmapped: 8716288 heap: 108429312 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 99713024 unmapped: 8716288 heap: 108429312 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 99713024 unmapped: 8716288 heap: 108429312 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 99713024 unmapped: 8716288 heap: 108429312 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4fa3d3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.356325150s of 14.717723846s, submitted: 57
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 99713024 unmapped: 8716288 heap: 108429312 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1141789 data_alloc: 218103808 data_used: 4792320
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 99713024 unmapped: 8716288 heap: 108429312 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 99713024 unmapped: 8716288 heap: 108429312 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4fa3d3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 99713024 unmapped: 8716288 heap: 108429312 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 99713024 unmapped: 8716288 heap: 108429312 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d025f07400 session 0x55d02595de00
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d0241ee400 session 0x55d0267d0000
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 99713024 unmapped: 8716288 heap: 108429312 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1141657 data_alloc: 218103808 data_used: 4792320
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 99713024 unmapped: 8716288 heap: 108429312 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4fa3d3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 99713024 unmapped: 8716288 heap: 108429312 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 99713024 unmapped: 8716288 heap: 108429312 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 99713024 unmapped: 8716288 heap: 108429312 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 99713024 unmapped: 8716288 heap: 108429312 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d022c0b400 session 0x55d0245f5860
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d025992400 session 0x55d0257d34a0
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d02599d800 session 0x55d02694fc20
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1141657 data_alloc: 218103808 data_used: 4792320
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 99713024 unmapped: 8716288 heap: 108429312 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.380292892s of 11.387769699s, submitted: 2
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d022c0b400 session 0x55d02694f680
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d0241ee400 session 0x55d0244a3a40
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d026405000 session 0x55d0257d3680
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d025d6a400 session 0x55d026377680
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4fa3d3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 99721216 unmapped: 8708096 heap: 108429312 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d025992400 session 0x55d0266beb40
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d022c0b400 session 0x55d0241bb860
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d0241ee400 session 0x55d0239b85a0
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d025d6a400 session 0x55d0239ba1e0
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d026405000 session 0x55d0241ba780
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d025f07400 session 0x55d02565e1e0
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 99917824 unmapped: 11673600 heap: 111591424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 99917824 unmapped: 11673600 heap: 111591424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9deb000/0x0/0x4ffc00000, data 0x17c9f00/0x1881000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d022c0b400 session 0x55d0252765a0
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 99917824 unmapped: 11673600 heap: 111591424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9deb000/0x0/0x4ffc00000, data 0x17c9f00/0x1881000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1199859 data_alloc: 218103808 data_used: 4792320
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9deb000/0x0/0x4ffc00000, data 0x17c9f00/0x1881000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 99917824 unmapped: 11673600 heap: 111591424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d025d6a400 session 0x55d025d54780
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 99917824 unmapped: 11673600 heap: 111591424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d026405000 session 0x55d0245f43c0
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d0268c0c00 session 0x55d025d54960
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 99360768 unmapped: 12230656 heap: 111591424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 99360768 unmapped: 12230656 heap: 111591424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9dc7000/0x0/0x4ffc00000, data 0x17edf00/0x18a5000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 99082240 unmapped: 12509184 heap: 111591424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1234283 data_alloc: 234881024 data_used: 9641984
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9dc7000/0x0/0x4ffc00000, data 0x17edf00/0x18a5000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 101040128 unmapped: 10551296 heap: 111591424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 101040128 unmapped: 10551296 heap: 111591424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.860521317s of 11.145645142s, submitted: 27
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 101040128 unmapped: 10551296 heap: 111591424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9dc7000/0x0/0x4ffc00000, data 0x17edf00/0x18a5000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 101040128 unmapped: 10551296 heap: 111591424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9dc7000/0x0/0x4ffc00000, data 0x17edf00/0x18a5000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 101040128 unmapped: 10551296 heap: 111591424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1234567 data_alloc: 234881024 data_used: 9650176
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 101040128 unmapped: 10551296 heap: 111591424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 101040128 unmapped: 10551296 heap: 111591424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9dc7000/0x0/0x4ffc00000, data 0x17edf00/0x18a5000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 101040128 unmapped: 10551296 heap: 111591424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 101040128 unmapped: 10551296 heap: 111591424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9dc7000/0x0/0x4ffc00000, data 0x17edf00/0x18a5000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [0,0,2])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 105127936 unmapped: 6463488 heap: 111591424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d0241ee400 session 0x55d026793680
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1287077 data_alloc: 234881024 data_used: 10244096
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f97ad000/0x0/0x4ffc00000, data 0x1e01f00/0x1eb9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 105357312 unmapped: 6234112 heap: 111591424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 106135552 unmapped: 5455872 heap: 111591424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.006898880s of 10.291774750s, submitted: 88
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9710000/0x0/0x4ffc00000, data 0x1e95f00/0x1f4d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 106168320 unmapped: 5423104 heap: 111591424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 106168320 unmapped: 5423104 heap: 111591424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 105259008 unmapped: 6332416 heap: 111591424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1294045 data_alloc: 234881024 data_used: 10051584
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 105259008 unmapped: 6332416 heap: 111591424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 105259008 unmapped: 6332416 heap: 111591424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f971f000/0x0/0x4ffc00000, data 0x1e95f00/0x1f4d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 105398272 unmapped: 6193152 heap: 111591424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f96fe000/0x0/0x4ffc00000, data 0x1eb6f00/0x1f6e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 105398272 unmapped: 6193152 heap: 111591424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 105398272 unmapped: 6193152 heap: 111591424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1294969 data_alloc: 234881024 data_used: 10051584
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 105398272 unmapped: 6193152 heap: 111591424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 105398272 unmapped: 6193152 heap: 111591424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f96fe000/0x0/0x4ffc00000, data 0x1eb6f00/0x1f6e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 105398272 unmapped: 6193152 heap: 111591424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.323903084s of 11.456132889s, submitted: 6
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 105504768 unmapped: 6086656 heap: 111591424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 105504768 unmapped: 6086656 heap: 111591424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1296861 data_alloc: 234881024 data_used: 10051584
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 105504768 unmapped: 6086656 heap: 111591424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 105504768 unmapped: 6086656 heap: 111591424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 105504768 unmapped: 6086656 heap: 111591424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f96f8000/0x0/0x4ffc00000, data 0x1ebcf00/0x1f74000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 105504768 unmapped: 6086656 heap: 111591424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 105504768 unmapped: 6086656 heap: 111591424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1297870 data_alloc: 234881024 data_used: 10051584
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 105504768 unmapped: 6086656 heap: 111591424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f96f5000/0x0/0x4ffc00000, data 0x1ebff00/0x1f77000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 105512960 unmapped: 6078464 heap: 111591424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 105512960 unmapped: 6078464 heap: 111591424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 105512960 unmapped: 6078464 heap: 111591424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 105512960 unmapped: 6078464 heap: 111591424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1298174 data_alloc: 234881024 data_used: 10059776
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.626700401s of 11.654762268s, submitted: 7
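Note on the _kv_sync_thread utilization entries: they report how long BlueStore's kv-sync thread sat idle over its sampling window. For the line above, 11.626700401 s / 11.654762268 s ≈ 0.9976, so the thread was roughly 99.8% idle while handling 7 submitted transactions over that window, consistent with a mostly idle OSD.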
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d024642000 session 0x55d024243a40
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d0263d7000 session 0x55d0257d34a0
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 105521152 unmapped: 6070272 heap: 111591424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f96f5000/0x0/0x4ffc00000, data 0x1ebff00/0x1f77000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d025d6a400 session 0x55d02553de00
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 102105088 unmapped: 9486336 heap: 111591424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 101621760 unmapped: 9969664 heap: 111591424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 101621760 unmapped: 9969664 heap: 111591424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4fa3d3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 101621760 unmapped: 9969664 heap: 111591424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4fa3d3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1158004 data_alloc: 218103808 data_used: 4792320
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 101621760 unmapped: 9969664 heap: 111591424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4fa3d3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 101621760 unmapped: 9969664 heap: 111591424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4fa3d3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 101621760 unmapped: 9969664 heap: 111591424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 101621760 unmapped: 9969664 heap: 111591424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 101621760 unmapped: 9969664 heap: 111591424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1158004 data_alloc: 218103808 data_used: 4792320
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 101621760 unmapped: 9969664 heap: 111591424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 101621760 unmapped: 9969664 heap: 111591424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4fa3d3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 101580800 unmapped: 10010624 heap: 111591424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 101580800 unmapped: 10010624 heap: 111591424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4fa3d3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 101580800 unmapped: 10010624 heap: 111591424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1158004 data_alloc: 218103808 data_used: 4792320
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 101580800 unmapped: 10010624 heap: 111591424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4fa3d3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d0232d9000 session 0x55d024084780
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 101580800 unmapped: 10010624 heap: 111591424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 101580800 unmapped: 10010624 heap: 111591424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4fa3d3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 101580800 unmapped: 10010624 heap: 111591424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 101580800 unmapped: 10010624 heap: 111591424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1158004 data_alloc: 218103808 data_used: 4792320
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 101580800 unmapped: 10010624 heap: 111591424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 101580800 unmapped: 10010624 heap: 111591424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 101580800 unmapped: 10010624 heap: 111591424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4fa3d3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 101580800 unmapped: 10010624 heap: 111591424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 101580800 unmapped: 10010624 heap: 111591424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 24.700908661s of 24.738805771s, submitted: 16
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d026405000 session 0x55d0266beb40
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1179146 data_alloc: 218103808 data_used: 4792320
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 101588992 unmapped: 12173312 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4fa141000/0x0/0x4ffc00000, data 0x1474ef0/0x152b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 101588992 unmapped: 12173312 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 101588992 unmapped: 12173312 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4fa141000/0x0/0x4ffc00000, data 0x1474ef0/0x152b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 101588992 unmapped: 12173312 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d0232d9000 session 0x55d026993c20
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 101588992 unmapped: 12173312 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d024642000 session 0x55d026992960
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1179278 data_alloc: 218103808 data_used: 4792320
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4fa141000/0x0/0x4ffc00000, data 0x1474ef0/0x152b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d025d6a400 session 0x55d0251dd0e0
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 101613568 unmapped: 12148736 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d0263d7000 session 0x55d0239d4d20
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4fa11c000/0x0/0x4ffc00000, data 0x1498f13/0x1550000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 101408768 unmapped: 12353536 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 101457920 unmapped: 12304384 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4fa11c000/0x0/0x4ffc00000, data 0x1498f13/0x1550000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 101457920 unmapped: 12304384 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 101457920 unmapped: 12304384 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1203991 data_alloc: 218103808 data_used: 7491584
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 101457920 unmapped: 12304384 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.616955757s of 11.661013603s, submitted: 10
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 101457920 unmapped: 12304384 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4fa11c000/0x0/0x4ffc00000, data 0x1498f13/0x1550000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 101457920 unmapped: 12304384 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 101457920 unmapped: 12304384 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 101457920 unmapped: 12304384 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4fa11c000/0x0/0x4ffc00000, data 0x1498f13/0x1550000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1202761 data_alloc: 218103808 data_used: 7495680
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 101457920 unmapped: 12304384 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 107839488 unmapped: 7200768 heap: 115040256 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 107700224 unmapped: 7340032 heap: 115040256 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f96c8000/0x0/0x4ffc00000, data 0x1eecf13/0x1fa4000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 107700224 unmapped: 7340032 heap: 115040256 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 106496000 unmapped: 8544256 heap: 115040256 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f96c8000/0x0/0x4ffc00000, data 0x1eecf13/0x1fa4000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1282211 data_alloc: 218103808 data_used: 7958528
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 106496000 unmapped: 8544256 heap: 115040256 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 106496000 unmapped: 8544256 heap: 115040256 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 106496000 unmapped: 8544256 heap: 115040256 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 106496000 unmapped: 8544256 heap: 115040256 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f96c8000/0x0/0x4ffc00000, data 0x1eecf13/0x1fa4000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 106496000 unmapped: 8544256 heap: 115040256 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1282211 data_alloc: 218103808 data_used: 7958528
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 106496000 unmapped: 8544256 heap: 115040256 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 106496000 unmapped: 8544256 heap: 115040256 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 106504192 unmapped: 8536064 heap: 115040256 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 106504192 unmapped: 8536064 heap: 115040256 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f96c8000/0x0/0x4ffc00000, data 0x1eecf13/0x1fa4000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 106504192 unmapped: 8536064 heap: 115040256 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1282211 data_alloc: 218103808 data_used: 7958528
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 106504192 unmapped: 8536064 heap: 115040256 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 106504192 unmapped: 8536064 heap: 115040256 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 106504192 unmapped: 8536064 heap: 115040256 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f96c8000/0x0/0x4ffc00000, data 0x1eecf13/0x1fa4000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 106504192 unmapped: 8536064 heap: 115040256 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 106504192 unmapped: 8536064 heap: 115040256 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f96c8000/0x0/0x4ffc00000, data 0x1eecf13/0x1fa4000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 23.128217697s of 23.291627884s, submitted: 63
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d025a0a000 session 0x55d0262303c0
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1327211 data_alloc: 218103808 data_used: 7958528
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 106864640 unmapped: 15523840 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9115000/0x0/0x4ffc00000, data 0x249ff13/0x2557000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 106864640 unmapped: 15523840 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 106864640 unmapped: 15523840 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 106864640 unmapped: 15523840 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d026405000 session 0x55d0245f45a0
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 106864640 unmapped: 15523840 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1327211 data_alloc: 218103808 data_used: 7958528
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9115000/0x0/0x4ffc00000, data 0x249ff13/0x2557000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d0232d9000 session 0x55d025d85860
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 106864640 unmapped: 15523840 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d024642000 session 0x55d0266beb40
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 106872832 unmapped: 15515648 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d025d6a400 session 0x55d0266bfe00
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d0263d7000 session 0x55d024084780
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 107028480 unmapped: 15360000 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 107028480 unmapped: 15360000 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f90f0000/0x0/0x4ffc00000, data 0x24c3f23/0x257c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 110460928 unmapped: 11927552 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1365834 data_alloc: 234881024 data_used: 12726272
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f90f0000/0x0/0x4ffc00000, data 0x24c3f23/0x257c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 110493696 unmapped: 11894784 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 110493696 unmapped: 11894784 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 110493696 unmapped: 11894784 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 110493696 unmapped: 11894784 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 110493696 unmapped: 11894784 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.282575607s of 15.351916313s, submitted: 21
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1365966 data_alloc: 234881024 data_used: 12726272
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 110493696 unmapped: 11894784 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f90f0000/0x0/0x4ffc00000, data 0x24c3f23/0x257c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 110526464 unmapped: 11862016 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 110624768 unmapped: 11763712 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f90f0000/0x0/0x4ffc00000, data 0x24c3f23/0x257c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f90f0000/0x0/0x4ffc00000, data 0x24c3f23/0x257c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 111894528 unmapped: 10493952 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 110886912 unmapped: 11501568 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1394822 data_alloc: 234881024 data_used: 13090816
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 110960640 unmapped: 11427840 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 110960640 unmapped: 11427840 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f8e4f000/0x0/0x4ffc00000, data 0x2763f23/0x281c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 110960640 unmapped: 11427840 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f8e4f000/0x0/0x4ffc00000, data 0x2763f23/0x281c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 110960640 unmapped: 11427840 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 110960640 unmapped: 11427840 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1395895 data_alloc: 234881024 data_used: 13094912
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 110960640 unmapped: 11427840 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 110960640 unmapped: 11427840 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f8e4f000/0x0/0x4ffc00000, data 0x2763f23/0x281c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d0232d9000 session 0x55d02595cd20
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.433647156s of 12.555562019s, submitted: 42
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d024642000 session 0x55d0252c1a40
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d023a0ec00 session 0x55d025277860
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d0263d6c00 session 0x55d025d854a0
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 110968832 unmapped: 11419648 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d025a0a400 session 0x55d025adb4a0
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 108773376 unmapped: 13615104 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 108773376 unmapped: 13615104 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1291666 data_alloc: 218103808 data_used: 7958528
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 108773376 unmapped: 13615104 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d0263f0000 session 0x55d0239ba000
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d0268c0c00 session 0x55d025aa1a40
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 108773376 unmapped: 13615104 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d0232d9000 session 0x55d025d54b40
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f96c8000/0x0/0x4ffc00000, data 0x1eecf13/0x1fa4000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 106962944 unmapped: 15425536 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 106962944 unmapped: 15425536 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 106962944 unmapped: 15425536 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4fa3d2000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1175042 data_alloc: 218103808 data_used: 4792320
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 106962944 unmapped: 15425536 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 106962944 unmapped: 15425536 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 106962944 unmapped: 15425536 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.597115517s of 10.756122589s, submitted: 43
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 106962944 unmapped: 15425536 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 106962944 unmapped: 15425536 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1175174 data_alloc: 218103808 data_used: 4792320
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4fa3d2000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 106962944 unmapped: 15425536 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 106430464 unmapped: 15958016 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 106430464 unmapped: 15958016 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4fa3d3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 106430464 unmapped: 15958016 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 106430464 unmapped: 15958016 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1177906 data_alloc: 218103808 data_used: 4792320
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 106430464 unmapped: 15958016 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 106430464 unmapped: 15958016 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 106430464 unmapped: 15958016 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 106430464 unmapped: 15958016 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4fa3d3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 106430464 unmapped: 15958016 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1177315 data_alloc: 218103808 data_used: 4792320
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d024642000 session 0x55d026230960
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d0263d6c00 session 0x55d0239bb4a0
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d0232d9000 session 0x55d0266be1e0
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d024642000 session 0x55d0257d3c20
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.379542351s of 12.390668869s, submitted: 4
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 106430464 unmapped: 15958016 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d0263f0000 session 0x55d025725a40
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d0268c0c00 session 0x55d0244a3680
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d025a0a800 session 0x55d025aa1e00
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d0232d9000 session 0x55d026376d20
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d024642000 session 0x55d026376780
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 106430464 unmapped: 15958016 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9dd2000/0x0/0x4ffc00000, data 0x17e2f00/0x189a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9dd2000/0x0/0x4ffc00000, data 0x17e2f00/0x189a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 106430464 unmapped: 15958016 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 106430464 unmapped: 15958016 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 106430464 unmapped: 15958016 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1220087 data_alloc: 218103808 data_used: 4792320
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 106438656 unmapped: 15949824 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d0263f0000 session 0x55d026377e00
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d0268c0c00 session 0x55d0239ced20
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 106438656 unmapped: 15949824 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9dd2000/0x0/0x4ffc00000, data 0x17e2f00/0x189a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d025a0ac00 session 0x55d0257d3a40
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d0232d9000 session 0x55d02595c3c0
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 106438656 unmapped: 15949824 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 106438656 unmapped: 15949824 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 108765184 unmapped: 13623296 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1265794 data_alloc: 234881024 data_used: 11087872
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 108765184 unmapped: 13623296 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d023a0ec00 session 0x55d0267972c0
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9dd0000/0x0/0x4ffc00000, data 0x17e2f33/0x189c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 108765184 unmapped: 13623296 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 108765184 unmapped: 13623296 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 108765184 unmapped: 13623296 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 108765184 unmapped: 13623296 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1265794 data_alloc: 234881024 data_used: 11087872
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9dd0000/0x0/0x4ffc00000, data 0x17e2f33/0x189c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 108765184 unmapped: 13623296 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 108765184 unmapped: 13623296 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 108765184 unmapped: 13623296 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 17.460260391s of 17.508676529s, submitted: 11
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 110329856 unmapped: 12058624 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 110329856 unmapped: 12058624 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1293816 data_alloc: 234881024 data_used: 11091968
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 110329856 unmapped: 12058624 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9a38000/0x0/0x4ffc00000, data 0x1b7af33/0x1c34000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9a38000/0x0/0x4ffc00000, data 0x1b7af33/0x1c34000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 110329856 unmapped: 12058624 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 110329856 unmapped: 12058624 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 110329856 unmapped: 12058624 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 110329856 unmapped: 12058624 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1293948 data_alloc: 234881024 data_used: 11091968
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 110338048 unmapped: 12050432 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 110338048 unmapped: 12050432 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9a38000/0x0/0x4ffc00000, data 0x1b7af33/0x1c34000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 110338048 unmapped: 12050432 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 110338048 unmapped: 12050432 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 110338048 unmapped: 12050432 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1295460 data_alloc: 234881024 data_used: 11091968
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 110338048 unmapped: 12050432 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.885341644s of 12.964467049s, submitted: 25
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9a38000/0x0/0x4ffc00000, data 0x1b7af33/0x1c34000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 110338048 unmapped: 12050432 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 110338048 unmapped: 12050432 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 110338048 unmapped: 12050432 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9a38000/0x0/0x4ffc00000, data 0x1b7af33/0x1c34000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 110338048 unmapped: 12050432 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1293942 data_alloc: 234881024 data_used: 11091968
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 110338048 unmapped: 12050432 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 110338048 unmapped: 12050432 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9a38000/0x0/0x4ffc00000, data 0x1b7af33/0x1c34000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 110338048 unmapped: 12050432 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9a38000/0x0/0x4ffc00000, data 0x1b7af33/0x1c34000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 110338048 unmapped: 12050432 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 110338048 unmapped: 12050432 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1293810 data_alloc: 234881024 data_used: 11091968
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9a38000/0x0/0x4ffc00000, data 0x1b7af33/0x1c34000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 110338048 unmapped: 12050432 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d026405000 session 0x55d02595d680
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d025d6a400 session 0x55d02396f4a0
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 110338048 unmapped: 12050432 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9a38000/0x0/0x4ffc00000, data 0x1b7af33/0x1c34000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 110338048 unmapped: 12050432 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 110338048 unmapped: 12050432 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 110338048 unmapped: 12050432 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.902921677s of 13.914952278s, submitted: 3
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d025a0b400 session 0x55d025aa1c20
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1327002 data_alloc: 234881024 data_used: 11091968
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 110567424 unmapped: 11821056 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 110567424 unmapped: 11821056 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9698000/0x0/0x4ffc00000, data 0x1f1af33/0x1fd4000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 110567424 unmapped: 11821056 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 110567424 unmapped: 11821056 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 110567424 unmapped: 11821056 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9698000/0x0/0x4ffc00000, data 0x1f1af33/0x1fd4000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1327002 data_alloc: 234881024 data_used: 11091968
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 110567424 unmapped: 11821056 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d022c0b400 session 0x55d0241bb680
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d0241ee400 session 0x55d0241bb860
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 110567424 unmapped: 11821056 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 110690304 unmapped: 11698176 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 112058368 unmapped: 10330112 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9698000/0x0/0x4ffc00000, data 0x1f1af33/0x1fd4000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 112058368 unmapped: 10330112 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1345374 data_alloc: 234881024 data_used: 13758464
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 112058368 unmapped: 10330112 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:43 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 112058368 unmapped: 10330112 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9698000/0x0/0x4ffc00000, data 0x1f1af33/0x1fd4000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 112058368 unmapped: 10330112 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.468021393s of 13.508556366s, submitted: 10
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9698000/0x0/0x4ffc00000, data 0x1f1af33/0x1fd4000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 112058368 unmapped: 10330112 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d025a0b000 session 0x55d0267974a0
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d0268c0c00 session 0x55d026796000
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 112058368 unmapped: 10330112 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1346214 data_alloc: 234881024 data_used: 13758464
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 112058368 unmapped: 10330112 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 112058368 unmapped: 10330112 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 115392512 unmapped: 6995968 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 115752960 unmapped: 6635520 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f8d07000/0x0/0x4ffc00000, data 0x28abf33/0x2965000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 115982336 unmapped: 6406144 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1426818 data_alloc: 234881024 data_used: 14209024
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 115982336 unmapped: 6406144 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 115982336 unmapped: 6406144 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f8cf6000/0x0/0x4ffc00000, data 0x28bbf33/0x2975000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [0,0,1])
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 115982336 unmapped: 6406144 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 115982336 unmapped: 6406144 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f8cf6000/0x0/0x4ffc00000, data 0x28bbf33/0x2975000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 115982336 unmapped: 6406144 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1428330 data_alloc: 234881024 data_used: 14209024
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f8cf6000/0x0/0x4ffc00000, data 0x28bbf33/0x2975000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f8cf6000/0x0/0x4ffc00000, data 0x28bbf33/0x2975000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 115982336 unmapped: 6406144 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 115982336 unmapped: 6406144 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 115982336 unmapped: 6406144 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.083793640s of 15.273586273s, submitted: 71
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 115990528 unmapped: 6397952 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f8cf7000/0x0/0x4ffc00000, data 0x28bbf33/0x2975000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 115654656 unmapped: 6733824 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1422887 data_alloc: 234881024 data_used: 14209024
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 115654656 unmapped: 6733824 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d0232d9000 session 0x55d0251dde00
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 115654656 unmapped: 6733824 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d0232d9000 session 0x55d02694a960
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 112517120 unmapped: 9871360 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 112517120 unmapped: 9871360 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 112517120 unmapped: 9871360 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9a38000/0x0/0x4ffc00000, data 0x1b7af33/0x1c34000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1305051 data_alloc: 234881024 data_used: 11091968
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9a38000/0x0/0x4ffc00000, data 0x1b7af33/0x1c34000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 112517120 unmapped: 9871360 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9a38000/0x0/0x4ffc00000, data 0x1b7af33/0x1c34000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 112517120 unmapped: 9871360 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 112517120 unmapped: 9871360 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d024642000 session 0x55d025d85a40
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d0263f0000 session 0x55d026377680
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.733242989s of 10.052184105s, submitted: 14
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 108257280 unmapped: 14131200 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d022c0b400 session 0x55d0239d5a40
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 108257280 unmapped: 14131200 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1195886 data_alloc: 218103808 data_used: 4792320
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 108257280 unmapped: 14131200 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 108257280 unmapped: 14131200 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4fa3d2000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 108257280 unmapped: 14131200 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 108257280 unmapped: 14131200 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 108257280 unmapped: 14131200 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1195754 data_alloc: 218103808 data_used: 4792320
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 108257280 unmapped: 14131200 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 108257280 unmapped: 14131200 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4fa3d2000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 108257280 unmapped: 14131200 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 108257280 unmapped: 14131200 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 108257280 unmapped: 14131200 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1195754 data_alloc: 218103808 data_used: 4792320
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 108257280 unmapped: 14131200 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 108257280 unmapped: 14131200 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 108257280 unmapped: 14131200 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4fa3d2000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 108257280 unmapped: 14131200 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.950378418s of 16.011283875s, submitted: 18
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d0241ee400 session 0x55d02396e000
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d022c0b400 session 0x55d0251dc960
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d0232d9000 session 0x55d025adb680
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 109346816 unmapped: 17334272 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d024642000 session 0x55d0267d0b40
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d0263f0000 session 0x55d02642bc20
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4fa3d2000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1252353 data_alloc: 218103808 data_used: 4792320
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 109346816 unmapped: 17334272 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 109346816 unmapped: 17334272 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 109346816 unmapped: 17334272 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 109346816 unmapped: 17334272 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 109346816 unmapped: 17334272 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9955000/0x0/0x4ffc00000, data 0x1850ef0/0x1907000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1252353 data_alloc: 218103808 data_used: 4792320
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 109346816 unmapped: 17334272 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 109674496 unmapped: 17006592 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 110026752 unmapped: 16654336 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 110026752 unmapped: 16654336 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 110026752 unmapped: 16654336 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1288205 data_alloc: 234881024 data_used: 10080256
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9955000/0x0/0x4ffc00000, data 0x1850ef0/0x1907000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 110026752 unmapped: 16654336 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 110026752 unmapped: 16654336 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9955000/0x0/0x4ffc00000, data 0x1850ef0/0x1907000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 110026752 unmapped: 16654336 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.521683693s of 13.599659920s, submitted: 28
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d025a0b000 session 0x55d0269c2f00
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d025a0b000 session 0x55d026797c20
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 106708992 unmapped: 19972096 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 106708992 unmapped: 19972096 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1202239 data_alloc: 218103808 data_used: 4792320
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 106708992 unmapped: 19972096 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fc3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 106708992 unmapped: 19972096 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 106708992 unmapped: 19972096 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 106708992 unmapped: 19972096 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 106708992 unmapped: 19972096 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:44 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:16:43 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Mar  1 05:16:44 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:16:43 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Mar  1 05:16:44 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:16:43 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Mar  1 05:16:44 np0005634532 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.16419 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Mar  1 05:16:44 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:16:44 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1202239 data_alloc: 218103808 data_used: 4792320
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 106708992 unmapped: 19972096 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 106708992 unmapped: 19972096 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fc3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 106708992 unmapped: 19972096 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fc3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 106708992 unmapped: 19972096 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 106708992 unmapped: 19972096 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1202239 data_alloc: 218103808 data_used: 4792320
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 106708992 unmapped: 19972096 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.693157196s of 13.755032539s, submitted: 23
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 108511232 unmapped: 18169856 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d022c0b400 session 0x55d025724d20
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d0232d9000 session 0x55d025d84780
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d024642000 session 0x55d0269c2780
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d0263f0000 session 0x55d026230960
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d0263f0000 session 0x55d02694e780
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 106930176 unmapped: 22904832 heap: 129835008 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f978c000/0x0/0x4ffc00000, data 0x1a19ef0/0x1ad0000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 106930176 unmapped: 22904832 heap: 129835008 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d022c0b400 session 0x55d0251dc780
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 106930176 unmapped: 22904832 heap: 129835008 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1270808 data_alloc: 218103808 data_used: 4792320
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d0232d9000 session 0x55d0245f52c0
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d025a0b800 session 0x55d025adbc20
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d026405000 session 0x55d025d85e00
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 106930176 unmapped: 22904832 heap: 129835008 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d022c0b400 session 0x55d026231a40
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d0232d9000 session 0x55d0239ba1e0
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 106962944 unmapped: 22872064 heap: 129835008 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f978b000/0x0/0x4ffc00000, data 0x1a19f13/0x1ad1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 106962944 unmapped: 22872064 heap: 129835008 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 107487232 unmapped: 22347776 heap: 129835008 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 110624768 unmapped: 19210240 heap: 129835008 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1321441 data_alloc: 234881024 data_used: 12177408
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 110624768 unmapped: 19210240 heap: 129835008 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 110624768 unmapped: 19210240 heap: 129835008 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f978b000/0x0/0x4ffc00000, data 0x1a19f13/0x1ad1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 110624768 unmapped: 19210240 heap: 129835008 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.020494461s of 11.124375343s, submitted: 30
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d025a0b800 session 0x55d02595cd20
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d0263f0000 session 0x55d02545b2c0
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 108404736 unmapped: 21430272 heap: 129835008 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fc2000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 108404736 unmapped: 21430272 heap: 129835008 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1210137 data_alloc: 218103808 data_used: 4792320
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 108404736 unmapped: 21430272 heap: 129835008 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 108404736 unmapped: 21430272 heap: 129835008 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 108404736 unmapped: 21430272 heap: 129835008 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 108404736 unmapped: 21430272 heap: 129835008 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fc2000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 108404736 unmapped: 21430272 heap: 129835008 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1210137 data_alloc: 218103808 data_used: 4792320
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 108404736 unmapped: 21430272 heap: 129835008 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fc2000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 108404736 unmapped: 21430272 heap: 129835008 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 108404736 unmapped: 21430272 heap: 129835008 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 108404736 unmapped: 21430272 heap: 129835008 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fc2000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 108404736 unmapped: 21430272 heap: 129835008 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1210137 data_alloc: 218103808 data_used: 4792320
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 108404736 unmapped: 21430272 heap: 129835008 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 108404736 unmapped: 21430272 heap: 129835008 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 108404736 unmapped: 21430272 heap: 129835008 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fc2000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 108404736 unmapped: 21430272 heap: 129835008 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 108404736 unmapped: 21430272 heap: 129835008 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1210137 data_alloc: 218103808 data_used: 4792320
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 108404736 unmapped: 21430272 heap: 129835008 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 18.238668442s of 18.348218918s, submitted: 32
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 108404736 unmapped: 21430272 heap: 129835008 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fc2000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 108404736 unmapped: 21430272 heap: 129835008 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fc2000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 108404736 unmapped: 21430272 heap: 129835008 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fc2000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 108404736 unmapped: 21430272 heap: 129835008 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1210005 data_alloc: 218103808 data_used: 4792320
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d025d6a400 session 0x55d026797680
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d023a0ec00 session 0x55d0266bfc20
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 108404736 unmapped: 21430272 heap: 129835008 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 108404736 unmapped: 21430272 heap: 129835008 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 108404736 unmapped: 21430272 heap: 129835008 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 1800.1 total, 600.0 interval
Cumulative writes: 10K writes, 41K keys, 10K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.02 MB/s
Cumulative WAL: 10K writes, 2837 syncs, 3.86 writes per sync, written: 0.03 GB, 0.02 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 2214 writes, 6930 keys, 2214 commit groups, 1.0 writes per commit group, ingest: 6.65 MB, 0.01 MB/s
Interval WAL: 2214 writes, 961 syncs, 2.30 writes per sync, written: 0.01 GB, 0.01 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 108412928 unmapped: 21422080 heap: 129835008 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fc2000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 108412928 unmapped: 21422080 heap: 129835008 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d023a0ec00 session 0x55d0262310e0
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d022c0b400 session 0x55d0257d34a0
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d0232d9000 session 0x55d0267965a0
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1210005 data_alloc: 218103808 data_used: 4792320
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d025a0b800 session 0x55d0241ba3c0
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d025d6a400 session 0x55d0257d2960
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 108412928 unmapped: 21422080 heap: 129835008 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fbd000/0x0/0x4ffc00000, data 0x11e8ef0/0x129f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 108421120 unmapped: 21413888 heap: 129835008 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 108421120 unmapped: 21413888 heap: 129835008 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 108421120 unmapped: 21413888 heap: 129835008 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 108421120 unmapped: 21413888 heap: 129835008 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1212211 data_alloc: 218103808 data_used: 4792320
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.750989914s of 14.762865067s, submitted: 3
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 108421120 unmapped: 21413888 heap: 129835008 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 108421120 unmapped: 21413888 heap: 129835008 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fbd000/0x0/0x4ffc00000, data 0x11e8ef0/0x129f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 108421120 unmapped: 21413888 heap: 129835008 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 108421120 unmapped: 21413888 heap: 129835008 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fbd000/0x0/0x4ffc00000, data 0x11e8ef0/0x129f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 108421120 unmapped: 21413888 heap: 129835008 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1212343 data_alloc: 218103808 data_used: 4792320
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 108421120 unmapped: 21413888 heap: 129835008 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fbd000/0x0/0x4ffc00000, data 0x11e8ef0/0x129f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 108421120 unmapped: 21413888 heap: 129835008 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fbd000/0x0/0x4ffc00000, data 0x11e8ef0/0x129f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 108421120 unmapped: 21413888 heap: 129835008 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fbd000/0x0/0x4ffc00000, data 0x11e8ef0/0x129f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 108421120 unmapped: 21413888 heap: 129835008 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 108421120 unmapped: 21413888 heap: 129835008 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1213519 data_alloc: 218103808 data_used: 4792320
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 108421120 unmapped: 21413888 heap: 129835008 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fbd000/0x0/0x4ffc00000, data 0x11e8ef0/0x129f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 108429312 unmapped: 21405696 heap: 129835008 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fbd000/0x0/0x4ffc00000, data 0x11e8ef0/0x129f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 108429312 unmapped: 21405696 heap: 129835008 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fbd000/0x0/0x4ffc00000, data 0x11e8ef0/0x129f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 108429312 unmapped: 21405696 heap: 129835008 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 108429312 unmapped: 21405696 heap: 129835008 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1214127 data_alloc: 218103808 data_used: 4816896
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.393978119s of 14.400218010s, submitted: 2
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fbd000/0x0/0x4ffc00000, data 0x11e8ef0/0x129f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 110632960 unmapped: 22339584 heap: 132972544 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f96c6000/0x0/0x4ffc00000, data 0x1adfef0/0x1b96000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 110698496 unmapped: 22274048 heap: 132972544 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9653000/0x0/0x4ffc00000, data 0x1b44ef0/0x1bfb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 111304704 unmapped: 21667840 heap: 132972544 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 111304704 unmapped: 21667840 heap: 132972544 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 111304704 unmapped: 21667840 heap: 132972544 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1292579 data_alloc: 218103808 data_used: 5001216
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 111304704 unmapped: 21667840 heap: 132972544 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 111304704 unmapped: 21667840 heap: 132972544 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9653000/0x0/0x4ffc00000, data 0x1b44ef0/0x1bfb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 111304704 unmapped: 21667840 heap: 132972544 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9653000/0x0/0x4ffc00000, data 0x1b44ef0/0x1bfb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f965e000/0x0/0x4ffc00000, data 0x1b47ef0/0x1bfe000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 110428160 unmapped: 22544384 heap: 132972544 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f965e000/0x0/0x4ffc00000, data 0x1b47ef0/0x1bfe000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 110428160 unmapped: 22544384 heap: 132972544 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1286731 data_alloc: 218103808 data_used: 5001216
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 110428160 unmapped: 22544384 heap: 132972544 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 110428160 unmapped: 22544384 heap: 132972544 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 110428160 unmapped: 22544384 heap: 132972544 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f965e000/0x0/0x4ffc00000, data 0x1b47ef0/0x1bfe000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.739144325s of 13.909265518s, submitted: 66
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 110428160 unmapped: 22544384 heap: 132972544 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f965e000/0x0/0x4ffc00000, data 0x1b47ef0/0x1bfe000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 110428160 unmapped: 22544384 heap: 132972544 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1287019 data_alloc: 218103808 data_used: 5001216
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d0232d9000 session 0x55d02565f2c0
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f965c000/0x0/0x4ffc00000, data 0x1b49ef0/0x1c00000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 110428160 unmapped: 22544384 heap: 132972544 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d025a0b800 session 0x55d025d53e00
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 110444544 unmapped: 22528000 heap: 132972544 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 110444544 unmapped: 22528000 heap: 132972544 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fc3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 110444544 unmapped: 22528000 heap: 132972544 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 110444544 unmapped: 22528000 heap: 132972544 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1217337 data_alloc: 218103808 data_used: 4792320
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d0263f0000 session 0x55d025d523c0
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d026405000 session 0x55d0251dc780
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d024642000 session 0x55d0251dc5a0
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d024642000 session 0x55d0239d5a40
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 27.979179382s of 28.011842728s, submitted: 9
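The _kv_sync_thread line is a utilization report for BlueStore's RocksDB commit thread: over the last ~28 s window it was idle for 27.98 s and flushed 9 submitted transactions, i.e. it was busy about 0.12% of the time. The arithmetic, with the values copied from the line above:

    idle, elapsed, submitted = 27.979179382, 28.011842728, 9
    print(f"busy {1 - idle/elapsed:.2%}, {submitted/elapsed:.2f} commits/s")
    # busy 0.12%, 0.32 commits/s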
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 119152640 unmapped: 17498112 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d0232d9000 session 0x55d0262303c0
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d025a0b800 session 0x55d0251dd2c0
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d0263f0000 session 0x55d0244a21e0
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d026405000 session 0x55d0241832c0
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d026405000 session 0x55d0239b85a0
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f96d0000/0x0/0x4ffc00000, data 0x1ad4f00/0x1b8c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
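Each heartbeat line embeds a store_statfs snapshot; the hex triple reads as available/internally-reserved/total bytes and "data" as stored/allocated, so this OSD has a ~20 GiB device that is nearly empty (the field order follows BlueStore's store_statfs_t printer and is worth re-checking against the Ceph source for this release). "peers [1,2]" means osd.0 exchanges heartbeats with osd.1 and osd.2, and 145 is the osdmap epoch it is operating at. Decoded:

    avail, reserved, total = 0x4f96d0000, 0x0, 0x4ffc00000
    stored, allocated = 0x1ad4f00, 0x1b8c000
    print(f"{avail/2**30:.2f} GiB free of {total/2**30:.2f} GiB; "
          f"data {stored/2**20:.1f} MiB stored in {allocated/2**20:.1f} MiB allocated")
    # 19.90 GiB free of 20.00 GiB; data 26.8 MiB stored in 27.5 MiB allocated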
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 110723072 unmapped: 25927680 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d024232400 session 0x55d0252770e0
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d024232000 session 0x55d024084960
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 110723072 unmapped: 25927680 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d024232000 session 0x55d0245f4000
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 110739456 unmapped: 25911296 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d0241eec00 session 0x55d026796780
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1287020 data_alloc: 218103808 data_used: 4796416
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d025a0b800 session 0x55d0245f4d20
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d0263f0000 session 0x55d0245f5860
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d025a0b000 session 0x55d0245f43c0
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 110919680 unmapped: 25731072 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 110919680 unmapped: 25731072 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:44 np0005634532 rsyslogd[1019]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
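rsyslogd's imjournal module re-opening rotated journal files is a common source of replayed, byte-identical records in /var/log/messages, which would explain runs of verbatim-repeated ceph-osd message cycles within the same second. A throwaway filter that collapses back-to-back duplicates when reading such a file (a hypothetical helper, not part of rsyslog):

    import sys
    from itertools import groupby

    def collapse(lines):
        for line, run in groupby(lines):
            n = sum(1 for _ in run)
            yield line if n == 1 else f"{line.rstrip()}  [repeated {n}x]\n"

    sys.stdout.writelines(collapse(sys.stdin))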
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 115998720 unmapped: 20652032 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f96aa000/0x0/0x4ffc00000, data 0x1af8f32/0x1bb2000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 115998720 unmapped: 20652032 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f96aa000/0x0/0x4ffc00000, data 0x1af8f32/0x1bb2000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 116006912 unmapped: 20643840 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1356411 data_alloc: 234881024 data_used: 14090240
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 116006912 unmapped: 20643840 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 116056064 unmapped: 20594688 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.106419563s of 10.318478584s, submitted: 55
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 114933760 unmapped: 21716992 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 115105792 unmapped: 21544960 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 115105792 unmapped: 21544960 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1356195 data_alloc: 234881024 data_used: 14094336
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f96aa000/0x0/0x4ffc00000, data 0x1af8f32/0x1bb2000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 115105792 unmapped: 21544960 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d0263f0000 session 0x55d025aa01e0
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d026405000 session 0x55d025aa10e0
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 120381440 unmapped: 16269312 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d025a0bc00 session 0x55d0245f50e0
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d024643800 session 0x55d025246d20
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d0263d6000 session 0x55d0252c0f00
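The ms_handle_reset bursts are the messenger layer reporting connections reset by the remote side; each line names the connection and session objects by address, so the same con pointer can recur with several short-lived sessions. A quick way to see which connections reset most often, using a regex tailored to this format (illustrative only):

    import re
    from collections import Counter

    with open("messages") as f:          # assumption: this syslog file on disk
        cons = Counter(re.findall(r"ms_handle_reset con (0x[0-9a-f]+)", f.read()))
    for con, n in cons.most_common(5):
        print(con, n)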
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 119209984 unmapped: 17440768 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 119316480 unmapped: 17334272 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f8dce000/0x0/0x4ffc00000, data 0x23d4f32/0x248e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 119324672 unmapped: 17326080 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1430823 data_alloc: 234881024 data_used: 14409728
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.081660271s of 11.304382324s, submitted: 383
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 119332864 unmapped: 17317888 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 119332864 unmapped: 17317888 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d024643800 session 0x55d026230f00
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1434560 data_alloc: 234881024 data_used: 14409728
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f8da9000/0x0/0x4ffc00000, data 0x23f8f55/0x24b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 118456320 unmapped: 18194432 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f8da9000/0x0/0x4ffc00000, data 0x23f8f55/0x24b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 118456320 unmapped: 18194432 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 118464512 unmapped: 18186240 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 118464512 unmapped: 18186240 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 118464512 unmapped: 18186240 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1434712 data_alloc: 234881024 data_used: 14413824
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f8da9000/0x0/0x4ffc00000, data 0x23f8f55/0x24b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 118464512 unmapped: 18186240 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 118464512 unmapped: 18186240 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 118472704 unmapped: 18178048 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 118472704 unmapped: 18178048 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 118472704 unmapped: 18178048 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1434712 data_alloc: 234881024 data_used: 14413824
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f8da9000/0x0/0x4ffc00000, data 0x23f8f55/0x24b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 118480896 unmapped: 18169856 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 118480896 unmapped: 18169856 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f8da9000/0x0/0x4ffc00000, data 0x23f8f55/0x24b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.167691231s of 14.192552567s, submitted: 6
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 118562816 unmapped: 18087936 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 118562816 unmapped: 18087936 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 118562816 unmapped: 18087936 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1445462 data_alloc: 234881024 data_used: 14524416
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 118562816 unmapped: 18087936 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f8d9d000/0x0/0x4ffc00000, data 0x2404f55/0x24bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 118562816 unmapped: 18087936 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 118562816 unmapped: 18087936 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 118571008 unmapped: 18079744 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f8d9d000/0x0/0x4ffc00000, data 0x2404f55/0x24bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d023758c00 session 0x55d02595d0e0
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 118571008 unmapped: 18079744 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1445462 data_alloc: 234881024 data_used: 14524416
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 118571008 unmapped: 18079744 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 118571008 unmapped: 18079744 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f8d9d000/0x0/0x4ffc00000, data 0x2404f55/0x24bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 118571008 unmapped: 18079744 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.049817085s of 11.065047264s, submitted: 6
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f8d9d000/0x0/0x4ffc00000, data 0x2404f55/0x24bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 118611968 unmapped: 18038784 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 118620160 unmapped: 18030592 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:44 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata"} v 0)
Mar  1 05:16:44 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/4031128807' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
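The paired mon lines show command handling and its audit trail: mon.compute-0 (the leader) receives a "mgr metadata" mon_command from client.admin at 192.168.122.102 and logs the dispatch to the audit channel at debug level. The JSON in the log is the wire form of the ordinary CLI call:

    import json, subprocess

    out = subprocess.run(["ceph", "mgr", "metadata"],
                         capture_output=True, text=True, check=True).stdout
    print(json.loads(out)[0].get("hostname"))  # one record per registered mgr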
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1445766 data_alloc: 234881024 data_used: 14524416
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f8d9b000/0x0/0x4ffc00000, data 0x2405f55/0x24c0000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 118620160 unmapped: 18030592 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d0263f0000 session 0x55d02571e960
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d025a0bc00 session 0x55d0252765a0
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 118636544 unmapped: 18014208 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d0233a3400 session 0x55d02694ba40
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 118661120 unmapped: 17989632 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f8dcc000/0x0/0x4ffc00000, data 0x23d5f32/0x248f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 118669312 unmapped: 17981440 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 118669312 unmapped: 17981440 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1433816 data_alloc: 234881024 data_used: 14409728
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d024232000 session 0x55d02396f680
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d025a0b800 session 0x55d0239b0d20
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 118669312 unmapped: 17981440 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f8dcd000/0x0/0x4ffc00000, data 0x23d5f32/0x248f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d0233a3400 session 0x55d0267930e0
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 111632384 unmapped: 25018368 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fc2000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 111632384 unmapped: 25018368 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fc2000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 111632384 unmapped: 25018368 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 111632384 unmapped: 25018368 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1234913 data_alloc: 218103808 data_used: 4792320
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d024232000 session 0x55d0267921e0
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d024643800 session 0x55d0267925a0
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d025a0bc00 session 0x55d026792780
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d0233a3400 session 0x55d026793c20
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 31.304897308s of 31.492788315s, submitted: 55
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 114745344 unmapped: 21905408 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d024232000 session 0x55d0239ced20
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1261627 data_alloc: 218103808 data_used: 4792320
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9c41000/0x0/0x4ffc00000, data 0x1564ef0/0x161b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 111599616 unmapped: 25051136 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 111599616 unmapped: 25051136 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9c41000/0x0/0x4ffc00000, data 0x1564ef0/0x161b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1261627 data_alloc: 218103808 data_used: 4792320
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 111607808 unmapped: 25042944 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 111607808 unmapped: 25042944 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d024643800 session 0x55d02595c3c0
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d025a0b800 session 0x55d02595c000
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 111607808 unmapped: 25042944 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d025a0bc00 session 0x55d0251ddc20
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d0233a3400 session 0x55d0239d5a40
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 111607808 unmapped: 25042944 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9c40000/0x0/0x4ffc00000, data 0x1564f00/0x161c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 111607808 unmapped: 25042944 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263441 data_alloc: 218103808 data_used: 4792320
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 111673344 unmapped: 24977408 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9c40000/0x0/0x4ffc00000, data 0x1564f00/0x161c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1288369 data_alloc: 218103808 data_used: 8470528
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 20.375757217s of 20.401163101s, submitted: 4
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9c40000/0x0/0x4ffc00000, data 0x1564f00/0x161c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 116752384 unmapped: 19898368 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 116858880 unmapped: 19791872 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 116858880 unmapped: 19791872 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9764000/0x0/0x4ffc00000, data 0x1a40f00/0x1af8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 116858880 unmapped: 19791872 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9764000/0x0/0x4ffc00000, data 0x1a40f00/0x1af8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 116858880 unmapped: 19791872 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1329917 data_alloc: 218103808 data_used: 8667136
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 116858880 unmapped: 19791872 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 116858880 unmapped: 19791872 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9764000/0x0/0x4ffc00000, data 0x1a40f00/0x1af8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 116867072 unmapped: 19783680 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 116867072 unmapped: 19783680 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 116867072 unmapped: 19783680 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1328957 data_alloc: 218103808 data_used: 8667136
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9761000/0x0/0x4ffc00000, data 0x1a43f00/0x1afb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d024643800 session 0x55d0241bab40
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d024232000 session 0x55d026793680
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1328957 data_alloc: 218103808 data_used: 8667136
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.079201698s of 15.230097771s, submitted: 55
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: mgrc ms_handle_reset ms_handle_reset con 0x55d02599dc00
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/2106645066
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/2106645066,v1:192.168.122.100:6801/2106645066]
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: mgrc handle_mgr_configure stats_period=5
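The four mgrc lines above are the mgr-client recovery sequence in order: the connection to the active mgr resets, the OSD terminates the old session to v2:192.168.122.100:6800, dials again offering both v2 (6800) and v1 (6801) endpoints, and the mgr answers with a configure message telling it to report stats every 5 seconds. Ceph entity addresses follow protocol:ip:port/nonce, which splits cleanly:

    addr = "v2:192.168.122.100:6800/2106645066"
    proto, ip, rest = addr.split(":")
    port, nonce = rest.split("/")
    print(proto, ip, int(port), int(nonce))  # v2 192.168.122.100 6800 2106645066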
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d025a0b800 session 0x55d0241bb680
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113811456 unmapped: 22839296 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9761000/0x0/0x4ffc00000, data 0x1a43f00/0x1afb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113811456 unmapped: 22839296 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113811456 unmapped: 22839296 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113811456 unmapped: 22839296 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fc3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113811456 unmapped: 22839296 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1240718 data_alloc: 218103808 data_used: 4792320
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113811456 unmapped: 22839296 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113811456 unmapped: 22839296 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113811456 unmapped: 22839296 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113811456 unmapped: 22839296 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113811456 unmapped: 22839296 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1240718 data_alloc: 218103808 data_used: 4792320
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fc3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113811456 unmapped: 22839296 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1240718 data_alloc: 218103808 data_used: 4792320
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fc3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
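
The heartbeat osd_stat lines embed store_statfs counters in hex. Assuming the usual store_statfs_t print order (available / internally reserved / total for the first triple; this ordering is an inference, the log itself does not label the fields), the numbers decode to roughly 19.9 GiB free of a 20 GiB device per OSD, consistent with the 60 GiB cluster totals in the pgmap lines further down and with peers [1,2] implying three OSDs:

    # Hedged decoder for the store_statfs(...) triple; field order is assumed.
    def gib(hexstr):
        return int(hexstr, 16) / 2**30

    avail, reserved, total = "0x4f9fc3000", "0x0", "0x4ffc00000"
    print(f"total={gib(total):.1f} GiB, avail={gib(avail):.1f} GiB")
    # -> total=20.0 GiB, avail=19.9 GiB; x3 OSDs matches the 60 GiB pgmap figure
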
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113811456 unmapped: 22839296 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1240718 data_alloc: 218103808 data_used: 4792320
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113811456 unmapped: 22839296 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fc3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113811456 unmapped: 22839296 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113811456 unmapped: 22839296 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fc3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113811456 unmapped: 22839296 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113819648 unmapped: 22831104 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1240718 data_alloc: 218103808 data_used: 4792320
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113819648 unmapped: 22831104 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fc3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113819648 unmapped: 22831104 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fc3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113819648 unmapped: 22831104 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1240718 data_alloc: 218103808 data_used: 4792320
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113819648 unmapped: 22831104 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fc3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1240718 data_alloc: 218103808 data_used: 4792320
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113819648 unmapped: 22831104 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fc3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113819648 unmapped: 22831104 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113819648 unmapped: 22831104 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fc3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113819648 unmapped: 22831104 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113819648 unmapped: 22831104 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fc3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1240718 data_alloc: 218103808 data_used: 4792320
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113827840 unmapped: 22822912 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fc3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113827840 unmapped: 22822912 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1240718 data_alloc: 218103808 data_used: 4792320
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fc3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113827840 unmapped: 22822912 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1240718 data_alloc: 218103808 data_used: 4792320
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fc3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113827840 unmapped: 22822912 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113827840 unmapped: 22822912 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113836032 unmapped: 22814720 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fc3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113836032 unmapped: 22814720 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113836032 unmapped: 22814720 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1240718 data_alloc: 218103808 data_used: 4792320
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113836032 unmapped: 22814720 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fc3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113836032 unmapped: 22814720 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113836032 unmapped: 22814720 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1240718 data_alloc: 218103808 data_used: 4792320
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fc3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113836032 unmapped: 22814720 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fc3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113844224 unmapped: 22806528 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1240718 data_alloc: 218103808 data_used: 4792320
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113844224 unmapped: 22806528 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1240718 data_alloc: 218103808 data_used: 4792320
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fc3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113844224 unmapped: 22806528 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fc3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113844224 unmapped: 22806528 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1240718 data_alloc: 218103808 data_used: 4792320
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113844224 unmapped: 22806528 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113844224 unmapped: 22806528 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fc3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113844224 unmapped: 22806528 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113852416 unmapped: 22798336 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113852416 unmapped: 22798336 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1240718 data_alloc: 218103808 data_used: 4792320
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113852416 unmapped: 22798336 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fc3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113852416 unmapped: 22798336 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113958912 unmapped: 22691840 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: do_command 'config diff' '{prefix=config diff}'
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: do_command 'config show' '{prefix=config show}'
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: do_command 'counter dump' '{prefix=counter dump}'
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: do_command 'counter schema' '{prefix=counter schema}'
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
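
These do_command entries are admin-socket requests ("config diff", "config show", "counter dump", "counter schema") arriving on the OSD's asok file; the "result is N bytes" companion line records the reply size. The same queries can be reproduced with the stock CLI, as in this sketch (the daemon name osd.0 comes from the log; everything else is standard ceph tooling):

    import json, subprocess

    def asok(daemon, *cmd):
        # `ceph daemon <name> <command...>` talks to the daemon's admin socket.
        out = subprocess.run(["ceph", "daemon", daemon, *cmd],
                             capture_output=True, text=True, check=True)
        return json.loads(out.stdout)

    cfg = asok("osd.0", "config", "show")
    print(cfg.get("osd_memory_target"))  # should echo the 4 GiB target seen above
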
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 114171904 unmapped: 22478848 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 114286592 unmapped: 22364160 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1240718 data_alloc: 218103808 data_used: 4792320
Mar  1 05:16:44 np0005634532 ceph-osd[84309]: do_command 'log dump' '{prefix=log dump}'
Mar  1 05:16:44 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module ls"} v 0)
Mar  1 05:16:44 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1394550438' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Mar  1 05:16:44 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:16:44 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:16:44 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:16:44.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
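
The three radosgw lines are one request as seen by the beast frontend: request start, request end, then the access-log line with client IP, user (anonymous here), timestamp, request line, status, byte count and latency. An unauthenticated HEAD / arriving every second or so from 192.168.122.100/.102 looks like a load-balancer health probe (an inference; the log does not say who the caller is). A parsing sketch for the access-log format:

    import re

    BEAST_RE = re.compile(
        r'beast: \S+: (?P<ip>\S+) - (?P<user>\S+) \[(?P<ts>[^\]]+)\] '
        r'"(?P<req>[^"]+)" (?P<status>\d+) (?P<bytes>\d+).*latency=(?P<lat>[\d.]+)s'
    )

    sample = ('beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous '
              '[01/Mar/2026:10:16:44.171 +0000] "HEAD / HTTP/1.0" 200 0 - - - '
              'latency=0.000000000s')
    m = BEAST_RE.search(sample)
    print(m.group("ip"), m.group("req"), m.group("status"), m.group("lat"))
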
Mar  1 05:16:44 np0005634532 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.16425 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Mar  1 05:16:44 np0005634532 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.25799 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Mar  1 05:16:44 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services"} v 0)
Mar  1 05:16:44 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1915852614' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Mar  1 05:16:44 np0005634532 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.16443 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Mar  1 05:16:44 np0005634532 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.25909 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Mar  1 05:16:44 np0005634532 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.25811 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Mar  1 05:16:44 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr versions"} v 0)
Mar  1 05:16:44 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1315755056' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
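
Each handle_command / audit-dispatch pair is the monitor receiving one client command; the JSON shown ({"prefix": ..., "format": ...}) is exactly the structure librados submits. A sketch issuing one of these commands programmatically with python-rados (the conffile path and client name are assumptions for this cluster):

    import json, rados

    cluster = rados.Rados(conffile="/etc/ceph/ceph.conf", name="client.admin")
    cluster.connect()
    try:
        ret, out, errs = cluster.mon_command(
            json.dumps({"prefix": "mon stat", "format": "json"}), b"")
        print(ret, json.loads(out))
    finally:
        cluster.shutdown()
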
Mar  1 05:16:44 np0005634532 nova_compute[257049]: 2026-03-01 10:16:44.976 257053 DEBUG oslo_service.periodic_task [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Mar  1 05:16:44 np0005634532 nova_compute[257049]: 2026-03-01 10:16:44.976 257053 DEBUG oslo_service.periodic_task [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Mar  1 05:16:44 np0005634532 nova_compute[257049]: 2026-03-01 10:16:44.976 257053 DEBUG nova.compute.manager [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
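
The "Running periodic task" lines come from oslo.service's periodic-task machinery inside nova-compute: each decorated manager method is invoked on its own spacing, and the _reclaim_queued_deletes entry shows the early-exit path when reclaim_instance_interval <= 0. The mechanism in miniature (the class and task below are illustrative, not nova's real manager):

    from oslo_config import cfg
    from oslo_service import periodic_task

    CONF = cfg.CONF

    class DemoManager(periodic_task.PeriodicTasks):
        def __init__(self):
            super().__init__(CONF)

        @periodic_task.periodic_task(spacing=60)
        def _poll_something(self, context):
            # Runs at most once per 60s when run_periodic_tasks() is driven
            # by a timer loop, the way nova-compute drives its manager.
            print("periodic tick")

    DemoManager().run_periodic_tasks(context=None)
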
Mar  1 05:16:45 np0005634532 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.16455 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Mar  1 05:16:45 np0005634532 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.25930 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Mar  1 05:16:45 np0005634532 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.25823 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Mar  1 05:16:45 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1028: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
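
The pgmap line is the mgr's cluster-wide placement-group summary; 353 of 353 PGs active+clean with 60 GiB raw capacity squares with the per-OSD statfs decoded earlier. A quick health predicate over that summary text:

    import re

    line = ("pgmap v1028: 353 pgs: 353 active+clean; 41 MiB data, "
            "303 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s")
    total, states = re.search(r"(\d+) pgs: (.+?);", line).groups()
    clean = sum(int(n) for n, s in re.findall(r"(\d+) ([\w+]+)", states)
                if s == "active+clean")
    print(clean == int(total))  # True -> every PG is active+clean
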
Mar  1 05:16:45 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon stat"} v 0)
Mar  1 05:16:45 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/508979872' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Mar  1 05:16:45 np0005634532 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.16470 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Mar  1 05:16:45 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:16:45 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:16:45 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:16:45.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:16:45 np0005634532 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.25942 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Mar  1 05:16:45 np0005634532 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.25838 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Mar  1 05:16:45 np0005634532 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.16485 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Mar  1 05:16:45 np0005634532 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.25850 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Mar  1 05:16:45 np0005634532 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.25856 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Mar  1 05:16:45 np0005634532 nova_compute[257049]: 2026-03-01 10:16:45.977 257053 DEBUG oslo_service.periodic_task [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Mar  1 05:16:45 np0005634532 nova_compute[257049]: 2026-03-01 10:16:45.977 257053 DEBUG oslo_service.periodic_task [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Mar  1 05:16:46 np0005634532 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.16500 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Mar  1 05:16:46 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:16:46 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:16:46 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:16:46.173 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:16:46 np0005634532 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.25963 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Mar  1 05:16:46 np0005634532 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.25871 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Mar  1 05:16:46 np0005634532 nova_compute[257049]: 2026-03-01 10:16:46.339 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:16:46 np0005634532 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.16509 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Mar  1 05:16:46 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush class ls"} v 0)
Mar  1 05:16:46 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1288033669' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Mar  1 05:16:46 np0005634532 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.25886 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Mar  1 05:16:46 np0005634532 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.25975 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Mar  1 05:16:46 np0005634532 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.16527 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Mar  1 05:16:46 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush dump"} v 0)
Mar  1 05:16:46 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2512276867' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Mar  1 05:16:47 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: ::ffff:192.168.122.100 - - [01/Mar/2026:10:16:47] "GET /metrics HTTP/1.1" 200 48456 "" "Prometheus/2.51.0"
Mar  1 05:16:47 np0005634532 ceph-mgr[76134]: [prometheus INFO cherrypy.access.140604152982112] ::ffff:192.168.122.100 - - [01/Mar/2026:10:16:47] "GET /metrics HTTP/1.1" 200 48456 "" "Prometheus/2.51.0"
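
These two lines are the mgr prometheus module answering a scrape: 48456 bytes of text exposition returned to Prometheus/2.51.0. Fetching the same endpoint by hand is a one-liner; 9283 is the module's default port and an assumption for this host:

    import urllib.request

    url = "http://192.168.122.100:9283/metrics"  # default prometheus-module port (assumed)
    body = urllib.request.urlopen(url, timeout=5).read().decode()
    for line in body.splitlines():
        if line.startswith("ceph_health_status"):
            print(line)  # 0.0 corresponds to HEALTH_OK
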
Mar  1 05:16:47 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:16:47 np0005634532 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.25901 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Mar  1 05:16:47 np0005634532 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.25990 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Mar  1 05:16:47 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1029: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Mar  1 05:16:47 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:16:47.277Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
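
The alertmanager error above means both ceph-dashboard webhook receivers (compute-1 and compute-2, port 8443, path /api/prometheus_receiver) failed to answer a notification POST before the context deadline. A throwaway listener like the following can confirm basic reachability from the alertmanager host; it is purely a debugging stand-in, not the dashboard's real receiver, and the real endpoint is presumably TLS while this one is plain HTTP:

    from http.server import BaseHTTPRequestHandler, HTTPServer

    class Recv(BaseHTTPRequestHandler):
        def do_POST(self):
            n = int(self.headers.get("Content-Length", 0))
            print(self.path, self.rfile.read(n)[:200])  # peek at the alert payload
            self.send_response(200)
            self.end_headers()

    HTTPServer(("", 8443), Recv).serve_forever()  # same port as in the error
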
Mar  1 05:16:47 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush rule ls"} v 0)
Mar  1 05:16:47 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1451586685' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Mar  1 05:16:47 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} v 0)
Mar  1 05:16:47 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1889964252' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Mar  1 05:16:47 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:16:47 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:16:47 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:16:47.487 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:16:47 np0005634532 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.26002 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Mar  1 05:16:47 np0005634532 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.25919 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Mar  1 05:16:47 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Mar  1 05:16:47 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Mar  1 05:16:47 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Mar  1 05:16:47 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Mar  1 05:16:47 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush show-tunables"} v 0)
Mar  1 05:16:47 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1801696868' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Mar  1 05:16:47 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Mar  1 05:16:47 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Mar  1 05:16:47 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Mar  1 05:16:47 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Mar  1 05:16:47 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr dump", "format": "json-pretty"} v 0)
Mar  1 05:16:47 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2915547257' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Mar  1 05:16:48 np0005634532 kernel: /proc/cgroups lists only v1 controllers, use cgroup.controllers of root cgroup for v2 info
Mar  1 05:16:48 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush tree", "show_shadow": true} v 0)
Mar  1 05:16:48 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2434910501' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Mar  1 05:16:48 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "format": "json-pretty"} v 0)
Mar  1 05:16:48 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1545475637' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Mar  1 05:16:48 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:16:48 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:16:48 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:16:48.175 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:16:48 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd erasure-code-profile ls"} v 0)
Mar  1 05:16:48 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/996594090' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Mar  1 05:16:48 np0005634532 podman[279702]: 2026-03-01 10:16:48.542285704 +0000 UTC m=+0.064426613 container health_status eba5e7c5bf1cb0694498c3b5d0f9b901dec36f2482ec40aef98fed1e15d9f18d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:bf1f44b6bfa655654f556ae3adca2582892e72550d916a5b0a8dbacb0e210bbe, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '08b9467e1b7e95537191fb2fa6825d176e65d7d6be048e9a0925db064969bbde-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:bf1f44b6bfa655654f556ae3adca2582892e72550d916a5b0a8dbacb0e210bbe', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, container_name=ovn_controller, managed_by=edpm_ansible, config_id=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true)
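
The podman event above records a passing healthcheck for ovn_controller (health_status=healthy, failing streak 0), driven by the /openstack/healthcheck test mounted into the container. The same check can be run on demand; exit status 0 means healthy:

    import subprocess

    r = subprocess.run(["podman", "healthcheck", "run", "ovn_controller"])
    print("healthy" if r.returncode == 0 else f"unhealthy (rc={r.returncode})")
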
Mar  1 05:16:48 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module ls", "format": "json-pretty"} v 0)
Mar  1 05:16:48 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2258825016' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Mar  1 05:16:48 np0005634532 nova_compute[257049]: 2026-03-01 10:16:48.815 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:16:48 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata"} v 0)
Mar  1 05:16:48 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/587961909' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Mar  1 05:16:48 np0005634532 nova_compute[257049]: 2026-03-01 10:16:48.972 257053 DEBUG oslo_service.periodic_task [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Mar  1 05:16:48 np0005634532 nova_compute[257049]: 2026-03-01 10:16:48.976 257053 DEBUG oslo_service.periodic_task [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Mar  1 05:16:48 np0005634532 nova_compute[257049]: 2026-03-01 10:16:48.976 257053 DEBUG oslo_service.periodic_task [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Mar  1 05:16:49 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:16:48 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Mar  1 05:16:49 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:16:48 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Mar  1 05:16:49 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:16:48 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Mar  1 05:16:49 np0005634532 nova_compute[257049]: 2026-03-01 10:16:49.001 257053 DEBUG oslo_concurrency.lockutils [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Mar  1 05:16:49 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:16:49 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Mar  1 05:16:49 np0005634532 nova_compute[257049]: 2026-03-01 10:16:49.002 257053 DEBUG oslo_concurrency.lockutils [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Mar  1 05:16:49 np0005634532 nova_compute[257049]: 2026-03-01 10:16:49.002 257053 DEBUG oslo_concurrency.lockutils [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
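
The Acquiring/acquired/released trio around "compute_resources" is oslo.concurrency's lockutils serializing the resource tracker's critical sections (note the waited/held timings it logs for free). The pattern in miniature, reusing the lock name from the log with an illustrative function:

    from oslo_concurrency import lockutils

    @lockutils.synchronized("compute_resources")
    def clean_compute_node_cache():
        # Body runs under the same kind of in-process lock the resource
        # tracker uses; callers block until the holder releases it.
        pass

    clean_compute_node_cache()
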
Mar  1 05:16:49 np0005634532 nova_compute[257049]: 2026-03-01 10:16:49.002 257053 DEBUG nova.compute.resource_tracker [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Mar  1 05:16:49 np0005634532 nova_compute[257049]: 2026-03-01 10:16:49.002 257053 DEBUG oslo_concurrency.processutils [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
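
To size Ceph-backed storage, nova shells out to `ceph df --format=json` with the openstack keyring; the reply feeds the free_disk figure reported a few lines below. Reading the same numbers directly (the stats key names are the usual ceph df JSON fields, assumed rather than taken from this log):

    import json, subprocess

    out = subprocess.run(
        ["ceph", "df", "--format=json", "--id", "openstack",
         "--conf", "/etc/ceph/ceph.conf"],
        capture_output=True, text=True, check=True).stdout
    stats = json.loads(out)["stats"]
    print(stats["total_bytes"] / 2**30, "GiB total,",
          stats["total_avail_bytes"] / 2**30, "GiB avail")
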
Mar  1 05:16:49 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services", "format": "json-pretty"} v 0)
Mar  1 05:16:49 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3889701763' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Mar  1 05:16:49 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1030: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Mar  1 05:16:49 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd utilization"} v 0)
Mar  1 05:16:49 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/684208884' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Mar  1 05:16:49 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr stat", "format": "json-pretty"} v 0)
Mar  1 05:16:49 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/739711866' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Mar  1 05:16:49 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:16:49 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:16:49 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:16:49.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:16:49 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Mar  1 05:16:49 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3454616360' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Mar  1 05:16:49 np0005634532 nova_compute[257049]: 2026-03-01 10:16:49.525 257053 DEBUG oslo_concurrency.processutils [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.523s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Mar  1 05:16:49 np0005634532 nova_compute[257049]: 2026-03-01 10:16:49.658 257053 WARNING nova.virt.libvirt.driver [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Mar  1 05:16:49 np0005634532 nova_compute[257049]: 2026-03-01 10:16:49.660 257053 DEBUG nova.compute.resource_tracker [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4290MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Mar  1 05:16:49 np0005634532 nova_compute[257049]: 2026-03-01 10:16:49.660 257053 DEBUG oslo_concurrency.lockutils [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Mar  1 05:16:49 np0005634532 nova_compute[257049]: 2026-03-01 10:16:49.660 257053 DEBUG oslo_concurrency.lockutils [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Mar  1 05:16:49 np0005634532 nova_compute[257049]: 2026-03-01 10:16:49.729 257053 DEBUG nova.compute.resource_tracker [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Mar  1 05:16:49 np0005634532 nova_compute[257049]: 2026-03-01 10:16:49.729 257053 DEBUG nova.compute.resource_tracker [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Mar  1 05:16:49 np0005634532 nova_compute[257049]: 2026-03-01 10:16:49.745 257053 DEBUG nova.scheduler.client.report [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Refreshing inventories for resource provider 018d246d-1e01-4168-9128-598c5501111b _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Mar  1 05:16:49 np0005634532 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.16638 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Mar  1 05:16:49 np0005634532 nova_compute[257049]: 2026-03-01 10:16:49.830 257053 DEBUG nova.scheduler.client.report [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Updating ProviderTree inventory for provider 018d246d-1e01-4168-9128-598c5501111b from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Mar  1 05:16:49 np0005634532 nova_compute[257049]: 2026-03-01 10:16:49.830 257053 DEBUG nova.compute.provider_tree [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Updating inventory in ProviderTree for provider 018d246d-1e01-4168-9128-598c5501111b with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Mar  1 05:16:49 np0005634532 nova_compute[257049]: 2026-03-01 10:16:49.847 257053 DEBUG nova.scheduler.client.report [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Refreshing aggregate associations for resource provider 018d246d-1e01-4168-9128-598c5501111b, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Mar  1 05:16:49 np0005634532 nova_compute[257049]: 2026-03-01 10:16:49.880 257053 DEBUG nova.scheduler.client.report [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Refreshing trait associations for resource provider 018d246d-1e01-4168-9128-598c5501111b, traits: COMPUTE_SECURITY_TPM_1_2,COMPUTE_ACCELERATORS,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_F16C,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_SSE42,HW_CPU_X86_SVM,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_BMI2,HW_CPU_X86_BMI,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_SSSE3,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE2,HW_CPU_X86_CLMUL,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NODE,HW_CPU_X86_SSE,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_FMA3,HW_CPU_X86_AMD_SVM,COMPUTE_RESCUE_BFV,HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_ABM,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_TRUSTED_CERTS,COMPUTE_STORAGE_BUS_SATA,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_AESNI,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_MMX,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_AVX2,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_USB,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SHA,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_SSE41,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_SSE4A,COMPUTE_SECURITY_TPM_2_0,COMPUTE_STORAGE_BUS_IDE,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_PCNET _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Mar  1 05:16:49 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr versions", "format": "json-pretty"} v 0)
Mar  1 05:16:49 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3159306535' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Mar  1 05:16:49 np0005634532 nova_compute[257049]: 2026-03-01 10:16:49.905 257053 DEBUG oslo_concurrency.processutils [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Mar  1 05:16:50 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:16:50 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:16:50 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:16:50.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:16:50 np0005634532 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.16653 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Mar  1 05:16:50 np0005634532 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.16674 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Mar  1 05:16:50 np0005634532 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.26116 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Mar  1 05:16:50 np0005634532 systemd[1]: Starting Hostname Service...
Mar  1 05:16:50 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Mar  1 05:16:50 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/346825972' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Mar  1 05:16:50 np0005634532 systemd[1]: Started Hostname Service.
Mar  1 05:16:50 np0005634532 nova_compute[257049]: 2026-03-01 10:16:50.509 257053 DEBUG oslo_concurrency.processutils [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.603s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Mar  1 05:16:50 np0005634532 nova_compute[257049]: 2026-03-01 10:16:50.515 257053 DEBUG nova.compute.provider_tree [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Inventory has not changed in ProviderTree for provider: 018d246d-1e01-4168-9128-598c5501111b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Mar  1 05:16:50 np0005634532 nova_compute[257049]: 2026-03-01 10:16:50.535 257053 DEBUG nova.scheduler.client.report [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Inventory has not changed for provider 018d246d-1e01-4168-9128-598c5501111b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Mar  1 05:16:50 np0005634532 nova_compute[257049]: 2026-03-01 10:16:50.537 257053 DEBUG nova.compute.resource_tracker [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Mar  1 05:16:50 np0005634532 nova_compute[257049]: 2026-03-01 10:16:50.537 257053 DEBUG oslo_concurrency.lockutils [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.877s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Mar  1 05:16:50 np0005634532 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.16689 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Mar  1 05:16:50 np0005634532 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.26048 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Mar  1 05:16:50 np0005634532 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.26140 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Mar  1 05:16:51 np0005634532 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.26054 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Mar  1 05:16:51 np0005634532 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.16704 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Mar  1 05:16:51 np0005634532 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.26060 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Mar  1 05:16:51 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "quorum_status"} v 0)
Mar  1 05:16:51 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3060526446' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Mar  1 05:16:51 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1031: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Mar  1 05:16:51 np0005634532 nova_compute[257049]: 2026-03-01 10:16:51.340 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:16:51 np0005634532 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.26155 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Mar  1 05:16:51 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Mar  1 05:16:51 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Mar  1 05:16:51 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Mar  1 05:16:51 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Mar  1 05:16:51 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Mar  1 05:16:51 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 05:16:51 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Mar  1 05:16:51 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 05:16:51 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Mar  1 05:16:51 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Mar  1 05:16:51 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Mar  1 05:16:51 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Mar  1 05:16:51 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Mar  1 05:16:51 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Mar  1 05:16:51 np0005634532 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.26066 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Mar  1 05:16:51 np0005634532 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.26072 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Mar  1 05:16:51 np0005634532 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.16719 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Mar  1 05:16:51 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:16:51 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:16:51 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:16:51.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:16:51 np0005634532 podman[280425]: 2026-03-01 10:16:51.532243536 +0000 UTC m=+0.067959969 container health_status 1df4dec302d8b40bce93714986cfc892519e8a31436399020d0034ae17ac64d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a5baae9e21dffd42c66f546b0b62f95c6af9f409d05e01e7483de42d5dd2a382, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '08b9467e1b7e95537191fb2fa6825d176e65d7d6be048e9a0925db064969bbde-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a5baae9e21dffd42c66f546b0b62f95c6af9f409d05e01e7483de42d5dd2a382', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Mar  1 05:16:51 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "versions"} v 0)
Mar  1 05:16:51 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/897379359' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Mar  1 05:16:51 np0005634532 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.26167 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Mar  1 05:16:51 np0005634532 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.26081 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Mar  1 05:16:51 np0005634532 podman[280573]: 2026-03-01 10:16:51.881767901 +0000 UTC m=+0.051337289 container create e1d001fe2283a2bd5d9c89f68481eb742931f4b4d23db1b956f88bf9b8f275e2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_lederberg, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS)
Mar  1 05:16:51 np0005634532 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.16740 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Mar  1 05:16:51 np0005634532 systemd[1]: Started libpod-conmon-e1d001fe2283a2bd5d9c89f68481eb742931f4b4d23db1b956f88bf9b8f275e2.scope.
Mar  1 05:16:51 np0005634532 systemd[1]: Started libcrun container.
Mar  1 05:16:51 np0005634532 podman[280573]: 2026-03-01 10:16:51.852085978 +0000 UTC m=+0.021655406 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Mar  1 05:16:51 np0005634532 podman[280573]: 2026-03-01 10:16:51.954865317 +0000 UTC m=+0.124434735 container init e1d001fe2283a2bd5d9c89f68481eb742931f4b4d23db1b956f88bf9b8f275e2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_lederberg, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Mar  1 05:16:51 np0005634532 podman[280573]: 2026-03-01 10:16:51.959077551 +0000 UTC m=+0.128646949 container start e1d001fe2283a2bd5d9c89f68481eb742931f4b4d23db1b956f88bf9b8f275e2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_lederberg, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Mar  1 05:16:51 np0005634532 systemd[1]: libpod-e1d001fe2283a2bd5d9c89f68481eb742931f4b4d23db1b956f88bf9b8f275e2.scope: Deactivated successfully.
Mar  1 05:16:51 np0005634532 amazing_lederberg[280610]: 167 167
Mar  1 05:16:51 np0005634532 conmon[280610]: conmon e1d001fe2283a2bd5d9c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e1d001fe2283a2bd5d9c89f68481eb742931f4b4d23db1b956f88bf9b8f275e2.scope/container/memory.events
Mar  1 05:16:51 np0005634532 podman[280573]: 2026-03-01 10:16:51.963762157 +0000 UTC m=+0.133331555 container attach e1d001fe2283a2bd5d9c89f68481eb742931f4b4d23db1b956f88bf9b8f275e2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_lederberg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_REF=squid)
Mar  1 05:16:51 np0005634532 podman[280573]: 2026-03-01 10:16:51.964017453 +0000 UTC m=+0.133586851 container died e1d001fe2283a2bd5d9c89f68481eb742931f4b4d23db1b956f88bf9b8f275e2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_lederberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Mar  1 05:16:51 np0005634532 systemd[1]: var-lib-containers-storage-overlay-7110b919bb2c86293f6143565cb19e063a7ffb8810404a4d832cc96957be2d3f-merged.mount: Deactivated successfully.
Mar  1 05:16:52 np0005634532 podman[280573]: 2026-03-01 10:16:52.001483178 +0000 UTC m=+0.171052576 container remove e1d001fe2283a2bd5d9c89f68481eb742931f4b4d23db1b956f88bf9b8f275e2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_lederberg, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Mar  1 05:16:52 np0005634532 systemd[1]: libpod-conmon-e1d001fe2283a2bd5d9c89f68481eb742931f4b4d23db1b956f88bf9b8f275e2.scope: Deactivated successfully.
Mar  1 05:16:52 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:16:52 np0005634532 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.26182 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Mar  1 05:16:52 np0005634532 podman[280655]: 2026-03-01 10:16:52.129960312 +0000 UTC m=+0.036760789 container create 9a096ef63d002d3dc6b6593debb3e3612df157815f408a3ec7dac605608ffc41 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_greider, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default)
Mar  1 05:16:52 np0005634532 systemd[1]: Started libpod-conmon-9a096ef63d002d3dc6b6593debb3e3612df157815f408a3ec7dac605608ffc41.scope.
Mar  1 05:16:52 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:16:52 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:16:52 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:16:52.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:16:52 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "health", "detail": "detail", "format": "json-pretty"} v 0)
Mar  1 05:16:52 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3333711845' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Mar  1 05:16:52 np0005634532 systemd[1]: Started libcrun container.
Mar  1 05:16:52 np0005634532 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.26093 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Mar  1 05:16:52 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06d8532c134704afe4cc572e2d4138c8bf8d079d675801637658be0d6be53668/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Mar  1 05:16:52 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06d8532c134704afe4cc572e2d4138c8bf8d079d675801637658be0d6be53668/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Mar  1 05:16:52 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06d8532c134704afe4cc572e2d4138c8bf8d079d675801637658be0d6be53668/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Mar  1 05:16:52 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06d8532c134704afe4cc572e2d4138c8bf8d079d675801637658be0d6be53668/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Mar  1 05:16:52 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06d8532c134704afe4cc572e2d4138c8bf8d079d675801637658be0d6be53668/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Mar  1 05:16:52 np0005634532 podman[280655]: 2026-03-01 10:16:52.11406992 +0000 UTC m=+0.020870417 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Mar  1 05:16:52 np0005634532 podman[280655]: 2026-03-01 10:16:52.221813421 +0000 UTC m=+0.128613918 container init 9a096ef63d002d3dc6b6593debb3e3612df157815f408a3ec7dac605608ffc41 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_greider, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Mar  1 05:16:52 np0005634532 podman[280655]: 2026-03-01 10:16:52.230016504 +0000 UTC m=+0.136816981 container start 9a096ef63d002d3dc6b6593debb3e3612df157815f408a3ec7dac605608ffc41 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_greider, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1)
Mar  1 05:16:52 np0005634532 podman[280655]: 2026-03-01 10:16:52.233351116 +0000 UTC m=+0.140151623 container attach 9a096ef63d002d3dc6b6593debb3e3612df157815f408a3ec7dac605608ffc41 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_greider, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Mar  1 05:16:52 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Mar  1 05:16:52 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 05:16:52 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 05:16:52 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Mar  1 05:16:52 np0005634532 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.16755 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Mar  1 05:16:52 np0005634532 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.26191 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Mar  1 05:16:52 np0005634532 objective_greider[280678]: --> passed data devices: 0 physical, 1 LVM
Mar  1 05:16:52 np0005634532 objective_greider[280678]: --> All data devices are unavailable
Mar  1 05:16:52 np0005634532 systemd[1]: libpod-9a096ef63d002d3dc6b6593debb3e3612df157815f408a3ec7dac605608ffc41.scope: Deactivated successfully.
Mar  1 05:16:52 np0005634532 podman[280858]: 2026-03-01 10:16:52.542398801 +0000 UTC m=+0.020883687 container died 9a096ef63d002d3dc6b6593debb3e3612df157815f408a3ec7dac605608ffc41 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_greider, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Mar  1 05:16:52 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "format": "json-pretty"} v 0)
Mar  1 05:16:52 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/993238587' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Mar  1 05:16:52 np0005634532 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.26108 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Mar  1 05:16:52 np0005634532 systemd[1]: var-lib-containers-storage-overlay-06d8532c134704afe4cc572e2d4138c8bf8d079d675801637658be0d6be53668-merged.mount: Deactivated successfully.
Mar  1 05:16:52 np0005634532 podman[280858]: 2026-03-01 10:16:52.629055942 +0000 UTC m=+0.107540808 container remove 9a096ef63d002d3dc6b6593debb3e3612df157815f408a3ec7dac605608ffc41 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_greider, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Mar  1 05:16:52 np0005634532 systemd[1]: libpod-conmon-9a096ef63d002d3dc6b6593debb3e3612df157815f408a3ec7dac605608ffc41.scope: Deactivated successfully.
Mar  1 05:16:52 np0005634532 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.16767 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Mar  1 05:16:52 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Mar  1 05:16:52 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Mar  1 05:16:52 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Mar  1 05:16:52 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Mar  1 05:16:52 np0005634532 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.26206 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Mar  1 05:16:52 np0005634532 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.26120 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Mar  1 05:16:53 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Mar  1 05:16:53 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Mar  1 05:16:53 np0005634532 podman[281061]: 2026-03-01 10:16:53.173270536 +0000 UTC m=+0.045146966 container create 7e629512dbc010bd3fefd51c32acf08a0ab556e74048b3b279ac3b041e39ed83 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_mclean, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Mar  1 05:16:53 np0005634532 systemd[1]: Started libpod-conmon-7e629512dbc010bd3fefd51c32acf08a0ab556e74048b3b279ac3b041e39ed83.scope.
Mar  1 05:16:53 np0005634532 systemd[1]: Started libcrun container.
Mar  1 05:16:53 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1032: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Mar  1 05:16:53 np0005634532 podman[281061]: 2026-03-01 10:16:53.152932554 +0000 UTC m=+0.024809034 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Mar  1 05:16:53 np0005634532 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.26230 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Mar  1 05:16:53 np0005634532 podman[281061]: 2026-03-01 10:16:53.25884501 +0000 UTC m=+0.130721460 container init 7e629512dbc010bd3fefd51c32acf08a0ab556e74048b3b279ac3b041e39ed83 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_mclean, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Mar  1 05:16:53 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Mar  1 05:16:53 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Mar  1 05:16:53 np0005634532 podman[281061]: 2026-03-01 10:16:53.264748726 +0000 UTC m=+0.136625156 container start 7e629512dbc010bd3fefd51c32acf08a0ab556e74048b3b279ac3b041e39ed83 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_mclean, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Mar  1 05:16:53 np0005634532 youthful_mclean[281091]: 167 167
Mar  1 05:16:53 np0005634532 systemd[1]: libpod-7e629512dbc010bd3fefd51c32acf08a0ab556e74048b3b279ac3b041e39ed83.scope: Deactivated successfully.
Mar  1 05:16:53 np0005634532 podman[281061]: 2026-03-01 10:16:53.269342179 +0000 UTC m=+0.141218629 container attach 7e629512dbc010bd3fefd51c32acf08a0ab556e74048b3b279ac3b041e39ed83 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_mclean, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Mar  1 05:16:53 np0005634532 podman[281061]: 2026-03-01 10:16:53.269705938 +0000 UTC m=+0.141582368 container died 7e629512dbc010bd3fefd51c32acf08a0ab556e74048b3b279ac3b041e39ed83 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_mclean, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Mar  1 05:16:53 np0005634532 systemd[1]: var-lib-containers-storage-overlay-09f13e800de46faf6c32f42828e0b8bb4f7b633fe99cb9c4df4acfd86b9c4db5-merged.mount: Deactivated successfully.
Mar  1 05:16:53 np0005634532 podman[281061]: 2026-03-01 10:16:53.319482988 +0000 UTC m=+0.191359418 container remove 7e629512dbc010bd3fefd51c32acf08a0ab556e74048b3b279ac3b041e39ed83 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=youthful_mclean, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Mar  1 05:16:53 np0005634532 systemd[1]: libpod-conmon-7e629512dbc010bd3fefd51c32acf08a0ab556e74048b3b279ac3b041e39ed83.scope: Deactivated successfully.
Mar  1 05:16:53 np0005634532 podman[281140]: 2026-03-01 10:16:53.433226038 +0000 UTC m=+0.039245601 container create 8428896eba04b453fe11ed35d3844d834f0f9a8bd02a03a0bd7d977231d0649b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_proskuriakova, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Mar  1 05:16:53 np0005634532 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.26138 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Mar  1 05:16:53 np0005634532 systemd[1]: Started libpod-conmon-8428896eba04b453fe11ed35d3844d834f0f9a8bd02a03a0bd7d977231d0649b.scope.
Mar  1 05:16:53 np0005634532 systemd[1]: Started libcrun container.
Mar  1 05:16:53 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/535f37818bfe5906f9fbc0983fb9499e4835166c9e6551a34c233e4b249e0a46/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Mar  1 05:16:53 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:16:53 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:16:53 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:16:53.496 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:16:53 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/535f37818bfe5906f9fbc0983fb9499e4835166c9e6551a34c233e4b249e0a46/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Mar  1 05:16:53 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/535f37818bfe5906f9fbc0983fb9499e4835166c9e6551a34c233e4b249e0a46/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Mar  1 05:16:53 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/535f37818bfe5906f9fbc0983fb9499e4835166c9e6551a34c233e4b249e0a46/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Mar  1 05:16:53 np0005634532 podman[281140]: 2026-03-01 10:16:53.510174329 +0000 UTC m=+0.116193932 container init 8428896eba04b453fe11ed35d3844d834f0f9a8bd02a03a0bd7d977231d0649b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_proskuriakova, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Mar  1 05:16:53 np0005634532 podman[281140]: 2026-03-01 10:16:53.417214572 +0000 UTC m=+0.023234135 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Mar  1 05:16:53 np0005634532 podman[281140]: 2026-03-01 10:16:53.516084485 +0000 UTC m=+0.122104058 container start 8428896eba04b453fe11ed35d3844d834f0f9a8bd02a03a0bd7d977231d0649b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_proskuriakova, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Mar  1 05:16:53 np0005634532 podman[281140]: 2026-03-01 10:16:53.521807546 +0000 UTC m=+0.127827119 container attach 8428896eba04b453fe11ed35d3844d834f0f9a8bd02a03a0bd7d977231d0649b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_proskuriakova, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325)
Mar  1 05:16:53 np0005634532 nova_compute[257049]: 2026-03-01 10:16:53.538 257053 DEBUG oslo_service.periodic_task [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Mar  1 05:16:53 np0005634532 nova_compute[257049]: 2026-03-01 10:16:53.539 257053 DEBUG nova.compute.manager [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Mar  1 05:16:53 np0005634532 nova_compute[257049]: 2026-03-01 10:16:53.539 257053 DEBUG nova.compute.manager [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Mar  1 05:16:53 np0005634532 nova_compute[257049]: 2026-03-01 10:16:53.555 257053 DEBUG nova.compute.manager [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Mar  1 05:16:53 np0005634532 nova_compute[257049]: 2026-03-01 10:16:53.555 257053 DEBUG oslo_service.periodic_task [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Mar  1 05:16:53 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Mar  1 05:16:53 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Mar  1 05:16:53 np0005634532 agitated_proskuriakova[281168]: {
Mar  1 05:16:53 np0005634532 agitated_proskuriakova[281168]:    "0": [
Mar  1 05:16:53 np0005634532 agitated_proskuriakova[281168]:        {
Mar  1 05:16:53 np0005634532 agitated_proskuriakova[281168]:            "devices": [
Mar  1 05:16:53 np0005634532 agitated_proskuriakova[281168]:                "/dev/loop3"
Mar  1 05:16:53 np0005634532 agitated_proskuriakova[281168]:            ],
Mar  1 05:16:53 np0005634532 agitated_proskuriakova[281168]:            "lv_name": "ceph_lv0",
Mar  1 05:16:53 np0005634532 agitated_proskuriakova[281168]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Mar  1 05:16:53 np0005634532 agitated_proskuriakova[281168]:            "lv_size": "21470642176",
Mar  1 05:16:53 np0005634532 agitated_proskuriakova[281168]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=0RN1y3-WDwf-vmRn-3Uec-9cdU-WGcX-8Z6LEG,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=437b1e74-f995-5d64-af1d-257ce01d77ab,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=e5da778e-73b7-4ea1-8a91-750fe3f6aa68,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Mar  1 05:16:53 np0005634532 agitated_proskuriakova[281168]:            "lv_uuid": "0RN1y3-WDwf-vmRn-3Uec-9cdU-WGcX-8Z6LEG",
Mar  1 05:16:53 np0005634532 agitated_proskuriakova[281168]:            "name": "ceph_lv0",
Mar  1 05:16:53 np0005634532 agitated_proskuriakova[281168]:            "path": "/dev/ceph_vg0/ceph_lv0",
Mar  1 05:16:53 np0005634532 agitated_proskuriakova[281168]:            "tags": {
Mar  1 05:16:53 np0005634532 agitated_proskuriakova[281168]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Mar  1 05:16:53 np0005634532 agitated_proskuriakova[281168]:                "ceph.block_uuid": "0RN1y3-WDwf-vmRn-3Uec-9cdU-WGcX-8Z6LEG",
Mar  1 05:16:53 np0005634532 agitated_proskuriakova[281168]:                "ceph.cephx_lockbox_secret": "",
Mar  1 05:16:53 np0005634532 agitated_proskuriakova[281168]:                "ceph.cluster_fsid": "437b1e74-f995-5d64-af1d-257ce01d77ab",
Mar  1 05:16:53 np0005634532 agitated_proskuriakova[281168]:                "ceph.cluster_name": "ceph",
Mar  1 05:16:53 np0005634532 agitated_proskuriakova[281168]:                "ceph.crush_device_class": "",
Mar  1 05:16:53 np0005634532 agitated_proskuriakova[281168]:                "ceph.encrypted": "0",
Mar  1 05:16:53 np0005634532 agitated_proskuriakova[281168]:                "ceph.osd_fsid": "e5da778e-73b7-4ea1-8a91-750fe3f6aa68",
Mar  1 05:16:53 np0005634532 agitated_proskuriakova[281168]:                "ceph.osd_id": "0",
Mar  1 05:16:53 np0005634532 agitated_proskuriakova[281168]:                "ceph.osdspec_affinity": "default_drive_group",
Mar  1 05:16:53 np0005634532 agitated_proskuriakova[281168]:                "ceph.type": "block",
Mar  1 05:16:53 np0005634532 agitated_proskuriakova[281168]:                "ceph.vdo": "0",
Mar  1 05:16:53 np0005634532 agitated_proskuriakova[281168]:                "ceph.with_tpm": "0"
Mar  1 05:16:53 np0005634532 agitated_proskuriakova[281168]:            },
Mar  1 05:16:53 np0005634532 agitated_proskuriakova[281168]:            "type": "block",
Mar  1 05:16:53 np0005634532 agitated_proskuriakova[281168]:            "vg_name": "ceph_vg0"
Mar  1 05:16:53 np0005634532 agitated_proskuriakova[281168]:        }
Mar  1 05:16:53 np0005634532 agitated_proskuriakova[281168]:    ]
Mar  1 05:16:53 np0005634532 agitated_proskuriakova[281168]: }
Mar  1 05:16:53 np0005634532 systemd[1]: libpod-8428896eba04b453fe11ed35d3844d834f0f9a8bd02a03a0bd7d977231d0649b.scope: Deactivated successfully.
Mar  1 05:16:53 np0005634532 podman[281140]: 2026-03-01 10:16:53.814116976 +0000 UTC m=+0.420136559 container died 8428896eba04b453fe11ed35d3844d834f0f9a8bd02a03a0bd7d977231d0649b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_proskuriakova, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Mar  1 05:16:53 np0005634532 nova_compute[257049]: 2026-03-01 10:16:53.818 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:16:53 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config dump"} v 0)
Mar  1 05:16:53 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2851535326' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Mar  1 05:16:53 np0005634532 systemd[1]: var-lib-containers-storage-overlay-535f37818bfe5906f9fbc0983fb9499e4835166c9e6551a34c233e4b249e0a46-merged.mount: Deactivated successfully.
Mar  1 05:16:53 np0005634532 podman[281140]: 2026-03-01 10:16:53.859315233 +0000 UTC m=+0.465334786 container remove 8428896eba04b453fe11ed35d3844d834f0f9a8bd02a03a0bd7d977231d0649b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_proskuriakova, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Mar  1 05:16:53 np0005634532 systemd[1]: libpod-conmon-8428896eba04b453fe11ed35d3844d834f0f9a8bd02a03a0bd7d977231d0649b.scope: Deactivated successfully.
Mar  1 05:16:53 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Mar  1 05:16:53 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Mar  1 05:16:53 np0005634532 nova_compute[257049]: 2026-03-01 10:16:53.989 257053 DEBUG oslo_service.periodic_task [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Mar  1 05:16:54 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:16:54 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Mar  1 05:16:54 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:16:54 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Mar  1 05:16:54 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:16:54 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Mar  1 05:16:54 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:16:54 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Mar  1 05:16:54 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:16:54 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:16:54 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:16:54.182 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:16:54 np0005634532 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.16863 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Mar  1 05:16:54 np0005634532 podman[281378]: 2026-03-01 10:16:54.372411199 +0000 UTC m=+0.038144774 container create 511b4221995dbca99a1d37851206c2a71d89e6b1e06015c497547f25cd540040 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_mcnulty, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Mar  1 05:16:54 np0005634532 systemd[1]: Started libpod-conmon-511b4221995dbca99a1d37851206c2a71d89e6b1e06015c497547f25cd540040.scope.
Mar  1 05:16:54 np0005634532 systemd[1]: Started libcrun container.
Mar  1 05:16:54 np0005634532 podman[281378]: 2026-03-01 10:16:54.35465771 +0000 UTC m=+0.020391295 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Mar  1 05:16:54 np0005634532 podman[281378]: 2026-03-01 10:16:54.452438986 +0000 UTC m=+0.118172571 container init 511b4221995dbca99a1d37851206c2a71d89e6b1e06015c497547f25cd540040 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_mcnulty, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Mar  1 05:16:54 np0005634532 podman[281378]: 2026-03-01 10:16:54.459956872 +0000 UTC m=+0.125690437 container start 511b4221995dbca99a1d37851206c2a71d89e6b1e06015c497547f25cd540040 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_mcnulty, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Mar  1 05:16:54 np0005634532 podman[281378]: 2026-03-01 10:16:54.463323865 +0000 UTC m=+0.129057430 container attach 511b4221995dbca99a1d37851206c2a71d89e6b1e06015c497547f25cd540040 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_mcnulty, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Mar  1 05:16:54 np0005634532 frosty_mcnulty[281398]: 167 167
Mar  1 05:16:54 np0005634532 systemd[1]: libpod-511b4221995dbca99a1d37851206c2a71d89e6b1e06015c497547f25cd540040.scope: Deactivated successfully.
Mar  1 05:16:54 np0005634532 podman[281378]: 2026-03-01 10:16:54.467064997 +0000 UTC m=+0.132798572 container died 511b4221995dbca99a1d37851206c2a71d89e6b1e06015c497547f25cd540040 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_mcnulty, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Mar  1 05:16:54 np0005634532 systemd[1]: var-lib-containers-storage-overlay-b5d2bd28d6ea7d2aadca9eff0d5c90253ddaf97f747f48435b417c5ec6bb02c7-merged.mount: Deactivated successfully.
Mar  1 05:16:54 np0005634532 podman[281378]: 2026-03-01 10:16:54.51209945 +0000 UTC m=+0.177833015 container remove 511b4221995dbca99a1d37851206c2a71d89e6b1e06015c497547f25cd540040 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=frosty_mcnulty, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Mar  1 05:16:54 np0005634532 systemd[1]: libpod-conmon-511b4221995dbca99a1d37851206c2a71d89e6b1e06015c497547f25cd540040.scope: Deactivated successfully.
Mar  1 05:16:54 np0005634532 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.26332 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Mar  1 05:16:54 np0005634532 podman[281441]: 2026-03-01 10:16:54.63920921 +0000 UTC m=+0.045328261 container create 57e013c9ed29b4d89286a16dff0b1ebae4029a67f3ac64a4dc91aa78b878f72d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_elbakyan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Mar  1 05:16:54 np0005634532 systemd[1]: Started libpod-conmon-57e013c9ed29b4d89286a16dff0b1ebae4029a67f3ac64a4dc91aa78b878f72d.scope.
Mar  1 05:16:54 np0005634532 systemd[1]: Started libcrun container.
Mar  1 05:16:54 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3572c5d6e8b8e67de62011249d21cb8b68efb1e4daa1e341adb2723146d258cd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Mar  1 05:16:54 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3572c5d6e8b8e67de62011249d21cb8b68efb1e4daa1e341adb2723146d258cd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Mar  1 05:16:54 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3572c5d6e8b8e67de62011249d21cb8b68efb1e4daa1e341adb2723146d258cd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Mar  1 05:16:54 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3572c5d6e8b8e67de62011249d21cb8b68efb1e4daa1e341adb2723146d258cd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Mar  1 05:16:54 np0005634532 podman[281441]: 2026-03-01 10:16:54.623736408 +0000 UTC m=+0.029855489 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Mar  1 05:16:54 np0005634532 podman[281441]: 2026-03-01 10:16:54.731697815 +0000 UTC m=+0.137816896 container init 57e013c9ed29b4d89286a16dff0b1ebae4029a67f3ac64a4dc91aa78b878f72d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_elbakyan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Mar  1 05:16:54 np0005634532 podman[281441]: 2026-03-01 10:16:54.742581924 +0000 UTC m=+0.148700975 container start 57e013c9ed29b4d89286a16dff0b1ebae4029a67f3ac64a4dc91aa78b878f72d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_elbakyan, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Mar  1 05:16:54 np0005634532 podman[281441]: 2026-03-01 10:16:54.758083967 +0000 UTC m=+0.164203048 container attach 57e013c9ed29b4d89286a16dff0b1ebae4029a67f3ac64a4dc91aa78b878f72d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_elbakyan, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Mar  1 05:16:54 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "detail": "detail"} v 0)
Mar  1 05:16:54 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2513120077' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Mar  1 05:16:55 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df"} v 0)
Mar  1 05:16:55 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1256384266' entity='client.admin' cmd=[{"prefix": "df"}]: dispatch
Mar  1 05:16:55 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1033: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Mar  1 05:16:55 np0005634532 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.26234 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Mar  1 05:16:55 np0005634532 lvm[281599]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Mar  1 05:16:55 np0005634532 lvm[281599]: VG ceph_vg0 finished
Mar  1 05:16:55 np0005634532 affectionate_elbakyan[281465]: {}
Mar  1 05:16:55 np0005634532 systemd[1]: libpod-57e013c9ed29b4d89286a16dff0b1ebae4029a67f3ac64a4dc91aa78b878f72d.scope: Deactivated successfully.
Mar  1 05:16:55 np0005634532 podman[281441]: 2026-03-01 10:16:55.437631234 +0000 UTC m=+0.843750315 container died 57e013c9ed29b4d89286a16dff0b1ebae4029a67f3ac64a4dc91aa78b878f72d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_elbakyan, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True)
Mar  1 05:16:55 np0005634532 systemd[1]: var-lib-containers-storage-overlay-3572c5d6e8b8e67de62011249d21cb8b68efb1e4daa1e341adb2723146d258cd-merged.mount: Deactivated successfully.
Mar  1 05:16:55 np0005634532 podman[281441]: 2026-03-01 10:16:55.486104892 +0000 UTC m=+0.892223943 container remove 57e013c9ed29b4d89286a16dff0b1ebae4029a67f3ac64a4dc91aa78b878f72d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_elbakyan, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Mar  1 05:16:55 np0005634532 systemd[1]: libpod-conmon-57e013c9ed29b4d89286a16dff0b1ebae4029a67f3ac64a4dc91aa78b878f72d.scope: Deactivated successfully.
Mar  1 05:16:55 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:16:55 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:16:55 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:16:55.499 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:16:55 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Mar  1 05:16:55 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 05:16:55 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Mar  1 05:16:55 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 05:16:55 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "fs dump"} v 0)
Mar  1 05:16:55 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/249895074' entity='client.admin' cmd=[{"prefix": "fs dump"}]: dispatch
Mar  1 05:16:56 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "fs ls"} v 0)
Mar  1 05:16:56 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/890622109' entity='client.admin' cmd=[{"prefix": "fs ls"}]: dispatch
Mar  1 05:16:56 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:16:56 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:16:56 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:16:56.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:16:56 np0005634532 nova_compute[257049]: 2026-03-01 10:16:56.342 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:16:56 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 05:16:56 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 05:16:56 np0005634532 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.16911 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Mar  1 05:16:56 np0005634532 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.26371 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Mar  1 05:16:56 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds stat"} v 0)
Mar  1 05:16:56 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/841258963' entity='client.admin' cmd=[{"prefix": "mds stat"}]: dispatch
Mar  1 05:16:57 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: ::ffff:192.168.122.100 - - [01/Mar/2026:10:16:57] "GET /metrics HTTP/1.1" 200 48457 "" "Prometheus/2.51.0"
Mar  1 05:16:57 np0005634532 ceph-mgr[76134]: [prometheus INFO cherrypy.access.140604152982112] ::ffff:192.168.122.100 - - [01/Mar/2026:10:16:57] "GET /metrics HTTP/1.1" 200 48457 "" "Prometheus/2.51.0"
Mar  1 05:16:57 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:16:57 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1034: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Mar  1 05:16:57 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:16:57.279Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Mar  1 05:16:57 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:16:57.281Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Mar  1 05:16:57 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump"} v 0)
Mar  1 05:16:57 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3428480072' entity='client.admin' cmd=[{"prefix": "mon dump"}]: dispatch
Mar  1 05:16:57 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:16:57 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000024s ======
Mar  1 05:16:57 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:16:57.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Mar  1 05:16:57 np0005634532 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.26267 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""]}]: dispatch
Mar  1 05:16:57 np0005634532 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.16929 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Mar  1 05:16:57 np0005634532 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.26392 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Mar  1 05:16:58 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls"} v 0)
Mar  1 05:16:58 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2616361308' entity='client.admin' cmd=[{"prefix": "osd blocklist ls"}]: dispatch
Mar  1 05:16:58 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:16:58 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:16:58 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:16:58.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:16:58 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Mar  1 05:16:58 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1975418572' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Mar  1 05:16:58 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Mar  1 05:16:58 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1975418572' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Mar  1 05:16:58 np0005634532 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.16953 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Mar  1 05:16:58 np0005634532 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.26416 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Mar  1 05:16:58 np0005634532 nova_compute[257049]: 2026-03-01 10:16:58.820 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:16:58 np0005634532 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.16959 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Mar  1 05:16:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:16:58 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Mar  1 05:16:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:16:58 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Mar  1 05:16:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:16:58 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Mar  1 05:16:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:16:59 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Mar  1 05:16:59 np0005634532 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.26297 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""]}]: dispatch
Mar  1 05:16:59 np0005634532 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.26431 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Mar  1 05:16:59 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1035: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Mar  1 05:16:59 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd dump"} v 0)
Mar  1 05:16:59 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1344057761' entity='client.admin' cmd=[{"prefix": "osd dump"}]: dispatch
Mar  1 05:16:59 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:16:59 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:16:59 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:16:59.505 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:16:59 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd numa-status"} v 0)
Mar  1 05:16:59 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1437267122' entity='client.admin' cmd=[{"prefix": "osd numa-status"}]: dispatch
Mar  1 05:16:59 np0005634532 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.26309 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""]}]: dispatch
Mar  1 05:17:00 np0005634532 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.16983 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Mar  1 05:17:00 np0005634532 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.26458 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Mar  1 05:17:00 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:17:00 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:17:00 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:17:00.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:17:00 np0005634532 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.26315 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""]}]: dispatch
Mar  1 05:17:00 np0005634532 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.26461 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""]}]: dispatch
Mar  1 05:17:00 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:17:00 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Mar  1 05:17:00 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:17:00 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Mar  1 05:17:00 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:17:00 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Mar  1 05:17:00 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:17:00 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Mar  1 05:17:00 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:17:00 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Mar  1 05:17:00 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:17:00 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Mar  1 05:17:00 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:17:00 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Mar  1 05:17:00 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:17:00 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Mar  1 05:17:00 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:17:00 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Mar  1 05:17:00 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:17:00 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Mar  1 05:17:00 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:17:00 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Mar  1 05:17:00 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:17:00 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Mar  1 05:17:00 np0005634532 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.26467 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""]}]: dispatch
Mar  1 05:17:00 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:17:00 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Mar  1 05:17:00 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:17:00 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Mar  1 05:17:00 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:17:00 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Mar  1 05:17:00 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:17:00 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Mar  1 05:17:00 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:17:00 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Mar  1 05:17:00 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:17:00 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Mar  1 05:17:00 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:17:00 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Mar  1 05:17:00 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:17:00 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Mar  1 05:17:00 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:17:00 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Mar  1 05:17:00 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:17:00 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Mar  1 05:17:00 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:17:00 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Mar  1 05:17:00 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:17:00 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Mar  1 05:17:00 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool ls", "detail": "detail"} v 0)
Mar  1 05:17:00 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2791183496' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail"}]: dispatch
Mar  1 05:17:01 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd stat"} v 0)
Mar  1 05:17:01 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1040142114' entity='client.admin' cmd=[{"prefix": "osd stat"}]: dispatch
Mar  1 05:17:01 np0005634532 ovs-appctl[283012]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Mar  1 05:17:01 np0005634532 ovs-appctl[283021]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Mar  1 05:17:01 np0005634532 ovs-appctl[283025]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Mar  1 05:17:01 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1036: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Mar  1 05:17:01 np0005634532 nova_compute[257049]: 2026-03-01 10:17:01.343 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:17:01 np0005634532 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.26336 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""]}]: dispatch
Mar  1 05:17:01 np0005634532 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.17004 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""]}]: dispatch
Mar  1 05:17:01 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:17:01 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:17:01 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:17:01.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:17:01 np0005634532 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.26485 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""]}]: dispatch
Mar  1 05:17:01 np0005634532 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.26488 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""]}]: dispatch
Mar  1 05:17:01 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:17:01 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Mar  1 05:17:01 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:17:01 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Mar  1 05:17:01 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:17:01 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Mar  1 05:17:01 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:17:01 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Mar  1 05:17:01 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:17:01 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Mar  1 05:17:01 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:17:01 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Mar  1 05:17:01 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:17:01 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Mar  1 05:17:01 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:17:01 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Mar  1 05:17:01 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:17:01 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Mar  1 05:17:01 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:17:01 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Mar  1 05:17:01 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:17:01 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Mar  1 05:17:01 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:17:01 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Mar  1 05:17:01 np0005634532 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.17013 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""]}]: dispatch
Mar  1 05:17:02 np0005634532 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.26494 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""]}]: dispatch
Mar  1 05:17:02 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:17:02 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:17:02 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:17:02 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:17:02.190 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:17:02 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Mar  1 05:17:02 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Mar  1 05:17:02 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "time-sync-status"} v 0)
Mar  1 05:17:02 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3154569368' entity='client.admin' cmd=[{"prefix": "time-sync-status"}]: dispatch
Mar  1 05:17:03 np0005634532 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.26515 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""]}]: dispatch
Mar  1 05:17:03 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config dump", "format": "json-pretty"} v 0)
Mar  1 05:17:03 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2876491099' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json-pretty"}]: dispatch
Mar  1 05:17:03 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1037: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Mar  1 05:17:03 np0005634532 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.26363 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""]}]: dispatch
Mar  1 05:17:03 np0005634532 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.17043 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Mar  1 05:17:03 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:17:03 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:17:03 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:17:03.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:17:03 np0005634532 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.26527 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Mar  1 05:17:03 np0005634532 nova_compute[257049]: 2026-03-01 10:17:03.825 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:17:03 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "detail": "detail", "format": "json-pretty"} v 0)
Mar  1 05:17:03 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3843860808' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail", "format": "json-pretty"}]: dispatch
Mar  1 05:17:04 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:17:03 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Mar  1 05:17:04 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:17:03 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Mar  1 05:17:04 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:17:03 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Mar  1 05:17:04 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:17:04 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Mar  1 05:17:04 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:17:04 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:17:04 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:17:04.192 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:17:04 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json-pretty"} v 0)
Mar  1 05:17:04 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/370791824' entity='client.admin' cmd=[{"prefix": "df", "format": "json-pretty"}]: dispatch
Mar  1 05:17:04 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "fs dump", "format": "json-pretty"} v 0)
Mar  1 05:17:04 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1740005283' entity='client.admin' cmd=[{"prefix": "fs dump", "format": "json-pretty"}]: dispatch
Mar  1 05:17:05 np0005634532 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.26393 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Mar  1 05:17:05 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1038: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Mar  1 05:17:05 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:17:05 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:17:05 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:17:05.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:17:05 np0005634532 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.17082 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Mar  1 05:17:05 np0005634532 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.26575 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Mar  1 05:17:05 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mds stat", "format": "json-pretty"} v 0)
Mar  1 05:17:05 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1106365445' entity='client.admin' cmd=[{"prefix": "mds stat", "format": "json-pretty"}]: dispatch
Mar  1 05:17:06 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:17:06 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:17:06 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:17:06.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:17:06 np0005634532 nova_compute[257049]: 2026-03-01 10:17:06.345 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:17:06 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon dump", "format": "json-pretty"} v 0)
Mar  1 05:17:06 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3349347491' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json-pretty"}]: dispatch
Mar  1 05:17:06 np0005634532 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.17109 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Mar  1 05:17:07 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: ::ffff:192.168.122.100 - - [01/Mar/2026:10:17:07] "GET /metrics HTTP/1.1" 200 48456 "" "Prometheus/2.51.0"
Mar  1 05:17:07 np0005634532 ceph-mgr[76134]: [prometheus INFO cherrypy.access.140604152982112] ::ffff:192.168.122.100 - - [01/Mar/2026:10:17:07] "GET /metrics HTTP/1.1" 200 48456 "" "Prometheus/2.51.0"
Mar  1 05:17:07 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:17:07 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json-pretty"} v 0)
Mar  1 05:17:07 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/773029322' entity='client.admin' cmd=[{"prefix": "osd blocklist ls", "format": "json-pretty"}]: dispatch
Mar  1 05:17:07 np0005634532 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.26599 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Mar  1 05:17:07 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1039: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Mar  1 05:17:07 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:17:07.281Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Mar  1 05:17:07 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:17:07.282Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Mar  1 05:17:07 np0005634532 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.26429 -' entity='client.admin' cmd=[{"prefix": "fs status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Mar  1 05:17:07 np0005634532 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.17124 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Mar  1 05:17:07 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:17:07 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000024s ======
Mar  1 05:17:07 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:17:07.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Mar  1 05:17:07 np0005634532 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.17130 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Mar  1 05:17:07 np0005634532 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.26617 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Mar  1 05:17:08 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd dump", "format": "json-pretty"} v 0)
Mar  1 05:17:08 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/127709129' entity='client.admin' cmd=[{"prefix": "osd dump", "format": "json-pretty"}]: dispatch
Mar  1 05:17:08 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:17:08 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:17:08 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:17:08.197 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:17:08 np0005634532 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.26626 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Mar  1 05:17:08 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd numa-status", "format": "json-pretty"} v 0)
Mar  1 05:17:08 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3808972131' entity='client.admin' cmd=[{"prefix": "osd numa-status", "format": "json-pretty"}]: dispatch
Mar  1 05:17:08 np0005634532 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.26456 -' entity='client.admin' cmd=[{"prefix": "osd blocked-by", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Mar  1 05:17:08 np0005634532 nova_compute[257049]: 2026-03-01 10:17:08.829 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:17:08 np0005634532 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.26641 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Mar  1 05:17:09 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:17:08 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Mar  1 05:17:09 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:17:08 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Mar  1 05:17:09 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:17:08 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Mar  1 05:17:09 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:17:09 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Mar  1 05:17:09 np0005634532 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.17151 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Mar  1 05:17:09 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:17:09 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Mar  1 05:17:09 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:17:09 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Mar  1 05:17:09 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:17:09 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Mar  1 05:17:09 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:17:09 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Mar  1 05:17:09 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:17:09 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Mar  1 05:17:09 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:17:09 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Mar  1 05:17:09 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:17:09 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Mar  1 05:17:09 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:17:09 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Mar  1 05:17:09 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:17:09 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Mar  1 05:17:09 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:17:09 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Mar  1 05:17:09 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:17:09 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Mar  1 05:17:09 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:17:09 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Mar  1 05:17:09 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1040: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Mar  1 05:17:09 np0005634532 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.26477 -' entity='client.admin' cmd=[{"prefix": "osd df", "output_method": "tree", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Mar  1 05:17:09 np0005634532 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.26656 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Mar  1 05:17:09 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:17:09 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:17:09 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:17:09.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:17:09 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd pool ls", "detail": "detail", "format": "json-pretty"} v 0)
Mar  1 05:17:09 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1127520001' entity='client.admin' cmd=[{"prefix": "osd pool ls", "detail": "detail", "format": "json-pretty"}]: dispatch
Mar  1 05:17:09 np0005634532 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.26483 -' entity='client.admin' cmd=[{"prefix": "osd df", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Mar  1 05:17:09 np0005634532 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.26665 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Mar  1 05:17:09 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:17:09 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Mar  1 05:17:09 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:17:09 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Mar  1 05:17:09 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:17:09 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Mar  1 05:17:09 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:17:09 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Mar  1 05:17:09 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:17:09 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Mar  1 05:17:09 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:17:09 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Mar  1 05:17:09 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:17:09 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Mar  1 05:17:09 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:17:09 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Mar  1 05:17:09 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:17:09 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Mar  1 05:17:09 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:17:09 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Mar  1 05:17:09 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:17:09 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Mar  1 05:17:09 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:17:09 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Mar  1 05:17:09 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd stat", "format": "json-pretty"} v 0)
Mar  1 05:17:09 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3815902286' entity='client.admin' cmd=[{"prefix": "osd stat", "format": "json-pretty"}]: dispatch
Mar  1 05:17:10 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:17:10 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 05:17:10 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:17:10.199 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 05:17:10 np0005634532 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.17169 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Mar  1 05:17:10 np0005634532 virtqemud[256058]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Mar  1 05:17:10 np0005634532 systemd[1]: Starting Time & Date Service...
Mar  1 05:17:10 np0005634532 systemd[1]: Started Time & Date Service.
Mar  1 05:17:10 np0005634532 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.17178 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Mar  1 05:17:10 np0005634532 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.26689 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Mar  1 05:17:10 np0005634532 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.26504 -' entity='client.admin' cmd=[{"prefix": "osd perf", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Mar  1 05:17:11 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0)
Mar  1 05:17:11 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2578015030' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Mar  1 05:17:11 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1041: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Mar  1 05:17:11 np0005634532 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.26695 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Mar  1 05:17:11 np0005634532 nova_compute[257049]: 2026-03-01 10:17:11.347 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:17:11 np0005634532 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.26510 -' entity='client.admin' cmd=[{"prefix": "osd pool autoscale-status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Mar  1 05:17:11 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:17:11 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Mar  1 05:17:11 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:17:11 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Mar  1 05:17:11 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:17:11 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Mar  1 05:17:11 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:17:11 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Mar  1 05:17:11 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:17:11 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Mar  1 05:17:11 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:17:11 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Mar  1 05:17:11 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:17:11 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Mar  1 05:17:11 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:17:11 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Mar  1 05:17:11 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:17:11 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Mar  1 05:17:11 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:17:11 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Mar  1 05:17:11 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:17:11 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Mar  1 05:17:11 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:17:11 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Mar  1 05:17:11 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:17:11 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:17:11 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:17:11.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:17:11 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "time-sync-status", "format": "json-pretty"} v 0)
Mar  1 05:17:11 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3177686299' entity='client.admin' cmd=[{"prefix": "time-sync-status", "format": "json-pretty"}]: dispatch
Mar  1 05:17:12 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:17:12 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:17:12 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:17:12 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:17:12.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:17:12 np0005634532 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.26528 -' entity='client.admin' cmd=[{"prefix": "pg dump", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Mar  1 05:17:12 np0005634532 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.26534 -' entity='client.admin' cmd=[{"prefix": "pg stat", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Mar  1 05:17:13 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1042: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Mar  1 05:17:13 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:17:13 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:17:13 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:17:13.526 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:17:13 np0005634532 nova_compute[257049]: 2026-03-01 10:17:13.867 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:17:13 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "time-sync-status", "format": "json-pretty"} v 0)
Mar  1 05:17:13 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='client.? 192.168.122.101:0/213946132' entity='client.admin' cmd=[{"prefix": "time-sync-status", "format": "json-pretty"}]: dispatch
Mar  1 05:17:14 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:17:13 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Mar  1 05:17:14 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:17:13 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Mar  1 05:17:14 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:17:13 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Mar  1 05:17:14 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:17:14 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Mar  1 05:17:14 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:17:14 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.002000049s ======
Mar  1 05:17:14 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:17:14.203 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000049s
Mar  1 05:17:15 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1043: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Mar  1 05:17:15 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:17:15 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 05:17:15 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:17:15.529 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 05:17:16 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:17:16 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:17:16 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:17:16.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:17:16 np0005634532 nova_compute[257049]: 2026-03-01 10:17:16.350 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:17:17 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: ::ffff:192.168.122.100 - - [01/Mar/2026:10:17:17] "GET /metrics HTTP/1.1" 200 48456 "" "Prometheus/2.51.0"
Mar  1 05:17:17 np0005634532 ceph-mgr[76134]: [prometheus INFO cherrypy.access.140604152982112] ::ffff:192.168.122.100 - - [01/Mar/2026:10:17:17] "GET /metrics HTTP/1.1" 200 48456 "" "Prometheus/2.51.0"
Mar  1 05:17:17 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:17:17 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1044: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Mar  1 05:17:17 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:17:17.283Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Mar  1 05:17:17 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:17:17 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:17:17 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:17:17.533 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:17:17 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Mar  1 05:17:17 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Mar  1 05:17:17 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[106394]: logger=infra.usagestats t=2026-03-01T10:17:17.58987358Z level=info msg="Usage stats are ready to report"
Mar  1 05:17:17 np0005634532 ceph-mgr[76134]: [balancer INFO root] Optimize plan auto_2026-03-01_10:17:17
Mar  1 05:17:17 np0005634532 ceph-mgr[76134]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Mar  1 05:17:17 np0005634532 ceph-mgr[76134]: [balancer INFO root] do_upmap
Mar  1 05:17:17 np0005634532 ceph-mgr[76134]: [balancer INFO root] pools ['default.rgw.control', 'volumes', 'default.rgw.log', 'vms', 'images', '.nfs', 'backups', '.mgr', '.rgw.root', 'default.rgw.meta', 'cephfs.cephfs.meta', 'cephfs.cephfs.data']
Mar  1 05:17:17 np0005634532 ceph-mgr[76134]: [balancer INFO root] prepared 0/10 upmap changes
Mar  1 05:17:17 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Mar  1 05:17:17 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Mar  1 05:17:17 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Mar  1 05:17:17 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Mar  1 05:17:17 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Mar  1 05:17:17 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Mar  1 05:17:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] _maybe_adjust
Mar  1 05:17:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:17:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Mar  1 05:17:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:17:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Mar  1 05:17:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:17:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Mar  1 05:17:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:17:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Mar  1 05:17:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:17:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Mar  1 05:17:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:17:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Mar  1 05:17:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:17:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Mar  1 05:17:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:17:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Mar  1 05:17:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:17:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Mar  1 05:17:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:17:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Mar  1 05:17:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:17:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Mar  1 05:17:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:17:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Mar  1 05:17:18 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:17:18 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:17:18 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:17:18.208 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:17:18 np0005634532 nova_compute[257049]: 2026-03-01 10:17:18.870 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:17:18 np0005634532 podman[285369]: 2026-03-01 10:17:18.9152035 +0000 UTC m=+0.095632487 container health_status eba5e7c5bf1cb0694498c3b5d0f9b901dec36f2482ec40aef98fed1e15d9f18d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:bf1f44b6bfa655654f556ae3adca2582892e72550d916a5b0a8dbacb0e210bbe, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '08b9467e1b7e95537191fb2fa6825d176e65d7d6be048e9a0925db064969bbde-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:bf1f44b6bfa655654f556ae3adca2582892e72550d916a5b0a8dbacb0e210bbe', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260223)
Mar  1 05:17:19 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:17:18 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Mar  1 05:17:19 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:17:18 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Mar  1 05:17:19 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:17:18 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Mar  1 05:17:19 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:17:19 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Mar  1 05:17:19 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1045: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Mar  1 05:17:19 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:17:19 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 05:17:19 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:17:19.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 05:17:19 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Mar  1 05:17:19 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Mar  1 05:17:19 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: vms, start_after=
Mar  1 05:17:19 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: vms, start_after=
Mar  1 05:17:19 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: volumes, start_after=
Mar  1 05:17:19 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: volumes, start_after=
Mar  1 05:17:19 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: backups, start_after=
Mar  1 05:17:19 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: backups, start_after=
Mar  1 05:17:19 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: images, start_after=
Mar  1 05:17:19 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: images, start_after=
Mar  1 05:17:20 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:17:20 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:17:20 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:17:20.210 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:17:21 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1046: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Mar  1 05:17:21 np0005634532 nova_compute[257049]: 2026-03-01 10:17:21.354 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:17:21 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:17:21 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 05:17:21 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:17:21.539 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 05:17:22 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:17:22 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:17:22 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:17:22 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:17:22.212 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:17:22 np0005634532 podman[285400]: 2026-03-01 10:17:22.347650702 +0000 UTC m=+0.041590872 container health_status 1df4dec302d8b40bce93714986cfc892519e8a31436399020d0034ae17ac64d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a5baae9e21dffd42c66f546b0b62f95c6af9f409d05e01e7483de42d5dd2a382, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '08b9467e1b7e95537191fb2fa6825d176e65d7d6be048e9a0925db064969bbde-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a5baae9e21dffd42c66f546b0b62f95c6af9f409d05e01e7483de42d5dd2a382', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20260223, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Mar  1 05:17:23 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1047: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Mar  1 05:17:23 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:17:23 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000024s ======
Mar  1 05:17:23 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:17:23.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Mar  1 05:17:23 np0005634532 nova_compute[257049]: 2026-03-01 10:17:23.873 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:17:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:17:23.891 167541 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Mar  1 05:17:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:17:23.891 167541 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Mar  1 05:17:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:17:23.892 167541 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Mar  1 05:17:24 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:17:23 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Mar  1 05:17:24 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:17:23 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Mar  1 05:17:24 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:17:23 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Mar  1 05:17:24 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:17:24 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Mar  1 05:17:24 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:17:24 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 05:17:24 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:17:24.214 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 05:17:25 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1048: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Mar  1 05:17:25 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:17:25 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000024s ======
Mar  1 05:17:25 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:17:25.543 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Mar  1 05:17:26 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:17:26 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:17:26 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:17:26.217 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:17:26 np0005634532 nova_compute[257049]: 2026-03-01 10:17:26.356 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:17:27 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: ::ffff:192.168.122.100 - - [01/Mar/2026:10:17:27] "GET /metrics HTTP/1.1" 200 48457 "" "Prometheus/2.51.0"
Mar  1 05:17:27 np0005634532 ceph-mgr[76134]: [prometheus INFO cherrypy.access.140604152982112] ::ffff:192.168.122.100 - - [01/Mar/2026:10:17:27] "GET /metrics HTTP/1.1" 200 48457 "" "Prometheus/2.51.0"
Mar  1 05:17:27 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:17:27 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1049: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Mar  1 05:17:27 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:17:27.284Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Mar  1 05:17:27 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:17:27 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:17:27 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:17:27.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:17:28 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:17:28 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:17:28 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:17:28.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:17:28 np0005634532 nova_compute[257049]: 2026-03-01 10:17:28.876 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:17:29 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:17:28 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Mar  1 05:17:29 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:17:28 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Mar  1 05:17:29 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:17:28 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Mar  1 05:17:29 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:17:29 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Mar  1 05:17:29 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1050: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Mar  1 05:17:29 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:17:29 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:17:29 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:17:29.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:17:30 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:17:30 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:17:30 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:17:30.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:17:31 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1051: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Mar  1 05:17:31 np0005634532 nova_compute[257049]: 2026-03-01 10:17:31.359 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:17:31 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:17:31 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000024s ======
Mar  1 05:17:31 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:17:31.554 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Mar  1 05:17:32 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:17:32 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:17:32 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:17:32 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:17:32.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:17:32 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Mar  1 05:17:32 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Mar  1 05:17:33 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1052: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Mar  1 05:17:33 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:17:33 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:17:33 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:17:33.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:17:33 np0005634532 nova_compute[257049]: 2026-03-01 10:17:33.880 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:17:34 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:17:33 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Mar  1 05:17:34 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:17:33 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Mar  1 05:17:34 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:17:33 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Mar  1 05:17:34 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:17:34 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Mar  1 05:17:34 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:17:34 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:17:34 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:17:34.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:17:35 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1053: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Mar  1 05:17:35 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:17:35 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:17:35 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:17:35.560 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:17:36 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:17:36 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:17:36 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:17:36.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:17:36 np0005634532 nova_compute[257049]: 2026-03-01 10:17:36.362 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:17:37 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: ::ffff:192.168.122.100 - - [01/Mar/2026:10:17:37] "GET /metrics HTTP/1.1" 200 48457 "" "Prometheus/2.51.0"
Mar  1 05:17:37 np0005634532 ceph-mgr[76134]: [prometheus INFO cherrypy.access.140604152982112] ::ffff:192.168.122.100 - - [01/Mar/2026:10:17:37] "GET /metrics HTTP/1.1" 200 48457 "" "Prometheus/2.51.0"
Mar  1 05:17:37 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:17:37 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1054: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Mar  1 05:17:37 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:17:37.289Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Mar  1 05:17:37 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:17:37.290Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Mar  1 05:17:37 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:17:37 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 05:17:37 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:17:37.562 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 05:17:38 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:17:38 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:17:38 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:17:38.230 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:17:38 np0005634532 nova_compute[257049]: 2026-03-01 10:17:38.882 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:17:39 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:17:38 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Mar  1 05:17:39 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:17:38 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Mar  1 05:17:39 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:17:38 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Mar  1 05:17:39 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:17:39 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Mar  1 05:17:39 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1055: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Mar  1 05:17:39 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:17:39 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 05:17:39 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:17:39.565 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 05:17:40 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:17:40 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:17:40 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:17:40.232 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:17:40 np0005634532 systemd[1]: systemd-timedated.service: Deactivated successfully.
Mar  1 05:17:40 np0005634532 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Mar  1 05:17:41 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1056: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Mar  1 05:17:41 np0005634532 nova_compute[257049]: 2026-03-01 10:17:41.367 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:17:41 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:17:41 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:17:41 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:17:41.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:17:42 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:17:42 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:17:42 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:17:42 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:17:42.234 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:17:43 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1057: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Mar  1 05:17:43 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:17:43 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:17:43 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:17:43.571 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:17:43 np0005634532 nova_compute[257049]: 2026-03-01 10:17:43.924 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:17:44 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:17:43 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Mar  1 05:17:44 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:17:43 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Mar  1 05:17:44 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:17:43 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Mar  1 05:17:44 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:17:44 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Mar  1 05:17:44 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:17:44 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:17:44 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:17:44.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:17:44 np0005634532 nova_compute[257049]: 2026-03-01 10:17:44.976 257053 DEBUG oslo_service.periodic_task [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Mar  1 05:17:45 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1058: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Mar  1 05:17:45 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:17:45 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000024s ======
Mar  1 05:17:45 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:17:45.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Mar  1 05:17:45 np0005634532 nova_compute[257049]: 2026-03-01 10:17:45.977 257053 DEBUG oslo_service.periodic_task [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Mar  1 05:17:45 np0005634532 nova_compute[257049]: 2026-03-01 10:17:45.978 257053 DEBUG oslo_service.periodic_task [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Mar  1 05:17:46 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:17:46 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:17:46 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:17:46.239 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:17:46 np0005634532 nova_compute[257049]: 2026-03-01 10:17:46.370 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:17:46 np0005634532 nova_compute[257049]: 2026-03-01 10:17:46.977 257053 DEBUG oslo_service.periodic_task [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Mar  1 05:17:46 np0005634532 nova_compute[257049]: 2026-03-01 10:17:46.977 257053 DEBUG nova.compute.manager [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Mar  1 05:17:47 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: ::ffff:192.168.122.100 - - [01/Mar/2026:10:17:47] "GET /metrics HTTP/1.1" 200 48457 "" "Prometheus/2.51.0"
Mar  1 05:17:47 np0005634532 ceph-mgr[76134]: [prometheus INFO cherrypy.access.140604152982112] ::ffff:192.168.122.100 - - [01/Mar/2026:10:17:47] "GET /metrics HTTP/1.1" 200 48457 "" "Prometheus/2.51.0"
Mar  1 05:17:47 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:17:47 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1059: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Mar  1 05:17:47 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:17:47.291Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Mar  1 05:17:47 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:17:47.291Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Mar  1 05:17:47 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:17:47.292Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Mar  1 05:17:47 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Mar  1 05:17:47 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Mar  1 05:17:47 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:17:47 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:17:47 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:17:47.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:17:47 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Mar  1 05:17:47 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Mar  1 05:17:47 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Mar  1 05:17:47 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Mar  1 05:17:47 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Mar  1 05:17:47 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Mar  1 05:17:48 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:17:48 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 05:17:48 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:17:48.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 05:17:48 np0005634532 nova_compute[257049]: 2026-03-01 10:17:48.926 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:17:48 np0005634532 nova_compute[257049]: 2026-03-01 10:17:48.975 257053 DEBUG oslo_service.periodic_task [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Mar  1 05:17:49 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:17:48 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Mar  1 05:17:49 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:17:48 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Mar  1 05:17:49 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:17:48 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Mar  1 05:17:49 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:17:49 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Mar  1 05:17:49 np0005634532 nova_compute[257049]: 2026-03-01 10:17:49.058 257053 DEBUG oslo_concurrency.lockutils [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Mar  1 05:17:49 np0005634532 nova_compute[257049]: 2026-03-01 10:17:49.059 257053 DEBUG oslo_concurrency.lockutils [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Mar  1 05:17:49 np0005634532 nova_compute[257049]: 2026-03-01 10:17:49.059 257053 DEBUG oslo_concurrency.lockutils [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Mar  1 05:17:49 np0005634532 nova_compute[257049]: 2026-03-01 10:17:49.060 257053 DEBUG nova.compute.resource_tracker [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Mar  1 05:17:49 np0005634532 nova_compute[257049]: 2026-03-01 10:17:49.060 257053 DEBUG oslo_concurrency.processutils [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Mar  1 05:17:49 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1060: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Mar  1 05:17:49 np0005634532 podman[285495]: 2026-03-01 10:17:49.400597519 +0000 UTC m=+0.087224050 container health_status eba5e7c5bf1cb0694498c3b5d0f9b901dec36f2482ec40aef98fed1e15d9f18d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:bf1f44b6bfa655654f556ae3adca2582892e72550d916a5b0a8dbacb0e210bbe, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260223, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '08b9467e1b7e95537191fb2fa6825d176e65d7d6be048e9a0925db064969bbde-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:bf1f44b6bfa655654f556ae3adca2582892e72550d916a5b0a8dbacb0e210bbe', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_controller, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_controller)
Mar  1 05:17:49 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Mar  1 05:17:49 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2317884311' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Mar  1 05:17:49 np0005634532 nova_compute[257049]: 2026-03-01 10:17:49.510 257053 DEBUG oslo_concurrency.processutils [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Mar  1 05:17:49 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:17:49 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:17:49 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:17:49.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:17:49 np0005634532 nova_compute[257049]: 2026-03-01 10:17:49.637 257053 WARNING nova.virt.libvirt.driver [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Mar  1 05:17:49 np0005634532 nova_compute[257049]: 2026-03-01 10:17:49.638 257053 DEBUG nova.compute.resource_tracker [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4299MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Mar  1 05:17:49 np0005634532 nova_compute[257049]: 2026-03-01 10:17:49.638 257053 DEBUG oslo_concurrency.lockutils [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Mar  1 05:17:49 np0005634532 nova_compute[257049]: 2026-03-01 10:17:49.639 257053 DEBUG oslo_concurrency.lockutils [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Mar  1 05:17:49 np0005634532 nova_compute[257049]: 2026-03-01 10:17:49.702 257053 DEBUG nova.compute.resource_tracker [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Mar  1 05:17:49 np0005634532 nova_compute[257049]: 2026-03-01 10:17:49.703 257053 DEBUG nova.compute.resource_tracker [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Mar  1 05:17:49 np0005634532 nova_compute[257049]: 2026-03-01 10:17:49.724 257053 DEBUG oslo_concurrency.processutils [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Mar  1 05:17:50 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Mar  1 05:17:50 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2593566751' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Mar  1 05:17:50 np0005634532 nova_compute[257049]: 2026-03-01 10:17:50.136 257053 DEBUG oslo_concurrency.processutils [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.412s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Mar  1 05:17:50 np0005634532 nova_compute[257049]: 2026-03-01 10:17:50.144 257053 DEBUG nova.compute.provider_tree [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Inventory has not changed in ProviderTree for provider: 018d246d-1e01-4168-9128-598c5501111b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Mar  1 05:17:50 np0005634532 nova_compute[257049]: 2026-03-01 10:17:50.159 257053 DEBUG nova.scheduler.client.report [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Inventory has not changed for provider 018d246d-1e01-4168-9128-598c5501111b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Mar  1 05:17:50 np0005634532 nova_compute[257049]: 2026-03-01 10:17:50.160 257053 DEBUG nova.compute.resource_tracker [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Mar  1 05:17:50 np0005634532 nova_compute[257049]: 2026-03-01 10:17:50.161 257053 DEBUG oslo_concurrency.lockutils [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.522s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Mar  1 05:17:50 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:17:50 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:17:50 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:17:50.243 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:17:51 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1061: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Mar  1 05:17:51 np0005634532 nova_compute[257049]: 2026-03-01 10:17:51.374 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:17:51 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:17:51 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 05:17:51 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:17:51.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 05:17:52 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:17:52 np0005634532 nova_compute[257049]: 2026-03-01 10:17:52.158 257053 DEBUG oslo_service.periodic_task [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Mar  1 05:17:52 np0005634532 nova_compute[257049]: 2026-03-01 10:17:52.159 257053 DEBUG oslo_service.periodic_task [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Mar  1 05:17:52 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:17:52 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:17:52 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:17:52.245 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:17:52 np0005634532 nova_compute[257049]: 2026-03-01 10:17:52.976 257053 DEBUG oslo_service.periodic_task [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Mar  1 05:17:52 np0005634532 nova_compute[257049]: 2026-03-01 10:17:52.977 257053 DEBUG nova.compute.manager [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Mar  1 05:17:52 np0005634532 nova_compute[257049]: 2026-03-01 10:17:52.977 257053 DEBUG nova.compute.manager [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Mar  1 05:17:53 np0005634532 nova_compute[257049]: 2026-03-01 10:17:53.103 257053 DEBUG nova.compute.manager [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Mar  1 05:17:53 np0005634532 nova_compute[257049]: 2026-03-01 10:17:53.106 257053 DEBUG oslo_service.periodic_task [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Mar  1 05:17:53 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1062: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Mar  1 05:17:53 np0005634532 podman[285550]: 2026-03-01 10:17:53.358876189 +0000 UTC m=+0.052508839 container health_status 1df4dec302d8b40bce93714986cfc892519e8a31436399020d0034ae17ac64d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a5baae9e21dffd42c66f546b0b62f95c6af9f409d05e01e7483de42d5dd2a382, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '08b9467e1b7e95537191fb2fa6825d176e65d7d6be048e9a0925db064969bbde-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a5baae9e21dffd42c66f546b0b62f95c6af9f409d05e01e7483de42d5dd2a382', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2)
Mar  1 05:17:53 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:17:53 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:17:53 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:17:53.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:17:53 np0005634532 nova_compute[257049]: 2026-03-01 10:17:53.954 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:17:54 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:17:53 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Mar  1 05:17:54 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:17:53 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Mar  1 05:17:54 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:17:53 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Mar  1 05:17:54 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:17:54 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Mar  1 05:17:54 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:17:54 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:17:54 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:17:54.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:17:55 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1063: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Mar  1 05:17:55 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:17:55 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:17:55 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:17:55.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:17:56 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:17:56 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:17:56 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:17:56.249 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:17:56 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0)
Mar  1 05:17:56 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Mar  1 05:17:56 np0005634532 nova_compute[257049]: 2026-03-01 10:17:56.427 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:17:56 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Mar  1 05:17:57 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: ::ffff:192.168.122.100 - - [01/Mar/2026:10:17:57] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Mar  1 05:17:57 np0005634532 ceph-mgr[76134]: [prometheus INFO cherrypy.access.140604152982112] ::ffff:192.168.122.100 - - [01/Mar/2026:10:17:57] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Mar  1 05:17:57 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:17:57 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1064: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Mar  1 05:17:57 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:17:57.293Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Mar  1 05:17:57 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:17:57 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:17:57 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:17:57.594 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:17:58 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Mar  1 05:17:58 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2077001697' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Mar  1 05:17:58 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Mar  1 05:17:58 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2077001697' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Mar  1 05:17:58 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:17:58 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:17:58 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:17:58.251 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:17:58 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Mar  1 05:17:58 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 05:17:58 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Mar  1 05:17:58 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 05:17:58 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Mar  1 05:17:58 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 05:17:58 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Mar  1 05:17:58 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 05:17:58 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 05:17:58 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 05:17:58 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 05:17:58 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 05:17:58 np0005634532 nova_compute[257049]: 2026-03-01 10:17:58.989 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:17:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:17:58 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Mar  1 05:17:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:17:58 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Mar  1 05:17:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:17:58 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Mar  1 05:17:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:17:59 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Mar  1 05:17:59 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"} v 0)
Mar  1 05:17:59 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Mar  1 05:17:59 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"} v 0)
Mar  1 05:17:59 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Mar  1 05:17:59 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Mar  1 05:17:59 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Mar  1 05:17:59 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Mar  1 05:17:59 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Mar  1 05:17:59 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1065: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 772 B/s rd, 0 op/s
Mar  1 05:17:59 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Mar  1 05:17:59 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 05:17:59 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Mar  1 05:17:59 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 05:17:59 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Mar  1 05:17:59 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Mar  1 05:17:59 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Mar  1 05:17:59 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Mar  1 05:17:59 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Mar  1 05:17:59 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Mar  1 05:17:59 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:17:59 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:17:59 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:17:59.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:17:59 np0005634532 ceph-mon[75825]: log_channel(cluster) log [WRN] : Health check failed: 2 failed cephadm daemon(s) (CEPHADM_FAILED_DAEMON)
Mar  1 05:17:59 np0005634532 podman[285772]: 2026-03-01 10:17:59.673561453 +0000 UTC m=+0.047954518 container create f50219bc3d751c87ea82890832d6754d7c6d1c58871a780d290f4777ae5058e8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_noether, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Mar  1 05:17:59 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "config rm", "who": "osd/host:compute-2", "name": "osd_memory_target"}]: dispatch
Mar  1 05:17:59 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "config rm", "who": "osd/host:compute-1", "name": "osd_memory_target"}]: dispatch
Mar  1 05:17:59 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Mar  1 05:17:59 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 05:17:59 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 05:17:59 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Mar  1 05:17:59 np0005634532 systemd[1]: Started libpod-conmon-f50219bc3d751c87ea82890832d6754d7c6d1c58871a780d290f4777ae5058e8.scope.
Mar  1 05:17:59 np0005634532 podman[285772]: 2026-03-01 10:17:59.653439539 +0000 UTC m=+0.027832614 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Mar  1 05:17:59 np0005634532 systemd[1]: Started libcrun container.
Mar  1 05:17:59 np0005634532 podman[285772]: 2026-03-01 10:17:59.7672284 +0000 UTC m=+0.141621445 container init f50219bc3d751c87ea82890832d6754d7c6d1c58871a780d290f4777ae5058e8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_noether, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Mar  1 05:17:59 np0005634532 podman[285772]: 2026-03-01 10:17:59.780287141 +0000 UTC m=+0.154680186 container start f50219bc3d751c87ea82890832d6754d7c6d1c58871a780d290f4777ae5058e8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_noether, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Mar  1 05:17:59 np0005634532 podman[285772]: 2026-03-01 10:17:59.783977701 +0000 UTC m=+0.158370776 container attach f50219bc3d751c87ea82890832d6754d7c6d1c58871a780d290f4777ae5058e8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_noether, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Mar  1 05:17:59 np0005634532 vibrant_noether[285788]: 167 167
Mar  1 05:17:59 np0005634532 systemd[1]: libpod-f50219bc3d751c87ea82890832d6754d7c6d1c58871a780d290f4777ae5058e8.scope: Deactivated successfully.
Mar  1 05:17:59 np0005634532 podman[285772]: 2026-03-01 10:17:59.786165125 +0000 UTC m=+0.160558170 container died f50219bc3d751c87ea82890832d6754d7c6d1c58871a780d290f4777ae5058e8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_noether, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Mar  1 05:17:59 np0005634532 systemd[1]: var-lib-containers-storage-overlay-a213705c3c2310d7d9d8db434530d7ed0458ebe13d707a1e290896512299d866-merged.mount: Deactivated successfully.
Mar  1 05:17:59 np0005634532 podman[285772]: 2026-03-01 10:17:59.824802203 +0000 UTC m=+0.199195278 container remove f50219bc3d751c87ea82890832d6754d7c6d1c58871a780d290f4777ae5058e8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vibrant_noether, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Mar  1 05:17:59 np0005634532 systemd[1]: libpod-conmon-f50219bc3d751c87ea82890832d6754d7c6d1c58871a780d290f4777ae5058e8.scope: Deactivated successfully.
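The block above (container f50219bc…, name vibrant_noether) is one complete short-lived container run: image pull, init, start, attach, died, remove, all inside ~200 ms. This is the footprint of cephadm launching the quay.io/ceph/ceph image to run a quick probe and discarding the container as soon as it exits; the container's only output, `167 167`, is consistent with a probe printing the ceph user's uid/gid (167 on RHEL-family images), though the log does not show the exact command. A minimal sketch for pulling these lifecycle events out of a syslog extract like this one (the regex and function names are my own, not any podman API):

```python
import re
from collections import defaultdict

# Matches podman journal lines such as:
#   podman[285772]: 2026-03-01 10:17:59.76 +0000 UTC m=+0.14 container init <64-hex-id> (image=..., name=vibrant_noether, ...)
EVENT_RE = re.compile(
    r"podman\[\d+\]: (?P<ts>\S+ \S+) \S+ UTC m=\+\S+ "
    r"container (?P<event>\w+) (?P<cid>[0-9a-f]{64}) \(image=[^,]+, name=(?P<name>[^,)]+)"
)

def lifecycles(log_lines):
    """Group container lifecycle events by container id, in logged order."""
    events = defaultdict(list)
    for line in log_lines:
        m = EVENT_RE.search(line)
        if m:
            events[m.group("cid")].append((m.group("ts"), m.group("event"), m.group("name")))
    return events

# usage:
#   for cid, evs in lifecycles(open("messages")).items():
#       print(cid[:12], [e[1] for e in evs])   # e.g. ['init', 'start', 'attach', 'died', 'remove']
```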
Mar  1 05:17:59 np0005634532 podman[285811]: 2026-03-01 10:17:59.93762184 +0000 UTC m=+0.035896551 container create f0d8b4bcdbae5237f60870108e58630539782fdfebe5faf9f476a9c8964c7aeb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_mccarthy, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1)
Mar  1 05:17:59 np0005634532 systemd[1]: Started libpod-conmon-f0d8b4bcdbae5237f60870108e58630539782fdfebe5faf9f476a9c8964c7aeb.scope.
Mar  1 05:17:59 np0005634532 systemd[1]: Started libcrun container.
Mar  1 05:17:59 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e357325a58844893b2d6c53d959961d7126dff9ec89d00dd95b17217d0cd93b2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Mar  1 05:17:59 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e357325a58844893b2d6c53d959961d7126dff9ec89d00dd95b17217d0cd93b2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Mar  1 05:17:59 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e357325a58844893b2d6c53d959961d7126dff9ec89d00dd95b17217d0cd93b2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Mar  1 05:17:59 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e357325a58844893b2d6c53d959961d7126dff9ec89d00dd95b17217d0cd93b2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Mar  1 05:17:59 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e357325a58844893b2d6c53d959961d7126dff9ec89d00dd95b17217d0cd93b2/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Mar  1 05:17:59 np0005634532 podman[285811]: 2026-03-01 10:17:59.995938871 +0000 UTC m=+0.094213542 container init f0d8b4bcdbae5237f60870108e58630539782fdfebe5faf9f476a9c8964c7aeb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_mccarthy, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Mar  1 05:18:00 np0005634532 podman[285811]: 2026-03-01 10:18:00.00160139 +0000 UTC m=+0.099876061 container start f0d8b4bcdbae5237f60870108e58630539782fdfebe5faf9f476a9c8964c7aeb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_mccarthy, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Mar  1 05:18:00 np0005634532 podman[285811]: 2026-03-01 10:17:59.920636774 +0000 UTC m=+0.018911465 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Mar  1 05:18:00 np0005634532 podman[285811]: 2026-03-01 10:18:00.025622959 +0000 UTC m=+0.123897650 container attach f0d8b4bcdbae5237f60870108e58630539782fdfebe5faf9f476a9c8964c7aeb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_mccarthy, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Mar  1 05:18:00 np0005634532 gallant_mccarthy[285827]: --> passed data devices: 0 physical, 1 LVM
Mar  1 05:18:00 np0005634532 gallant_mccarthy[285827]: --> All data devices are unavailable
Mar  1 05:18:00 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:18:00 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:18:00 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:18:00.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
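These radosgw triplets (starting new request / req done / beast access line) recur roughly every two seconds for the rest of the capture, alternating between clients 192.168.122.100 and 192.168.122.102. Anonymous `HEAD / HTTP/1.0` requests answered 200 with near-zero latency are the classic signature of an external load-balancer health probe rather than real S3 traffic; that is an inference from the cadence, not something the log states. A small parser for the beast access-log format, with field names of my own choosing:

```python
import re

# beast access lines look like:
#   beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:18:00.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
BEAST_RE = re.compile(
    r'beast: \S+: (?P<client>\S+) - (?P<user>\S+) '
    r'\[(?P<ts>[^\]]+)\] "(?P<request>[^"]+)" (?P<status>\d+) (?P<bytes>\d+)'
    r'.*latency=(?P<latency>[\d.]+)s'
)

def parse_beast(line):
    m = BEAST_RE.search(line)
    return m.groupdict() if m else None

print(parse_beast('beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous '
                  '[01/Mar/2026:10:18:00.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - '
                  'latency=0.000000000s'))
```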
Mar  1 05:18:00 np0005634532 systemd[1]: libpod-f0d8b4bcdbae5237f60870108e58630539782fdfebe5faf9f476a9c8964c7aeb.scope: Deactivated successfully.
Mar  1 05:18:00 np0005634532 podman[285811]: 2026-03-01 10:18:00.273198612 +0000 UTC m=+0.371473283 container died f0d8b4bcdbae5237f60870108e58630539782fdfebe5faf9f476a9c8964c7aeb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_mccarthy, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Mar  1 05:18:00 np0005634532 systemd[1]: var-lib-containers-storage-overlay-e357325a58844893b2d6c53d959961d7126dff9ec89d00dd95b17217d0cd93b2-merged.mount: Deactivated successfully.
Mar  1 05:18:00 np0005634532 podman[285811]: 2026-03-01 10:18:00.31304838 +0000 UTC m=+0.411323051 container remove f0d8b4bcdbae5237f60870108e58630539782fdfebe5faf9f476a9c8964c7aeb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=gallant_mccarthy, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Mar  1 05:18:00 np0005634532 systemd[1]: libpod-conmon-f0d8b4bcdbae5237f60870108e58630539782fdfebe5faf9f476a9c8964c7aeb.scope: Deactivated successfully.
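The gallant_mccarthy output above (`passed data devices: 0 physical, 1 LVM` / `All data devices are unavailable`) is ceph-volume reporting that the one LVM data device it was handed is already consumed; OSD 0 lives on /dev/ceph_vg0/ceph_lv0, as the listing further down confirms, so there is nothing new to deploy. A hedged sketch of reproducing that availability view with `ceph-volume inventory`; the JSON field names `path`, `available` and `rejected_reasons` are what recent ceph-volume releases emit, so verify against your version:

```python
import json
import subprocess

# Run on a node (or inside a cephadm shell) that has ceph-volume installed.
out = subprocess.run(
    ["ceph-volume", "inventory", "--format", "json"],
    check=True, capture_output=True, text=True,
).stdout

for dev in json.loads(out):
    state = "available" if dev.get("available") else "unavailable"
    print(dev.get("path"), state, "; ".join(dev.get("rejected_reasons", [])))
```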
Mar  1 05:18:00 np0005634532 podman[285946]: 2026-03-01 10:18:00.746963434 +0000 UTC m=+0.032456427 container create fdcfeb7d609a8d36de8227a0d895799933a9fc3d8fe84cace1e253a87c33f497 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_austin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, org.label-schema.schema-version=1.0)
Mar  1 05:18:00 np0005634532 ceph-mon[75825]: Health check failed: 2 failed cephadm daemon(s) (CEPHADM_FAILED_DAEMON)
Mar  1 05:18:00 np0005634532 systemd[1]: Started libpod-conmon-fdcfeb7d609a8d36de8227a0d895799933a9fc3d8fe84cace1e253a87c33f497.scope.
Mar  1 05:18:00 np0005634532 systemd[1]: Started libcrun container.
Mar  1 05:18:00 np0005634532 podman[285946]: 2026-03-01 10:18:00.797141395 +0000 UTC m=+0.082634418 container init fdcfeb7d609a8d36de8227a0d895799933a9fc3d8fe84cace1e253a87c33f497 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_austin, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Mar  1 05:18:00 np0005634532 podman[285946]: 2026-03-01 10:18:00.802951168 +0000 UTC m=+0.088444161 container start fdcfeb7d609a8d36de8227a0d895799933a9fc3d8fe84cace1e253a87c33f497 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_austin, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Mar  1 05:18:00 np0005634532 sweet_austin[285962]: 167 167
Mar  1 05:18:00 np0005634532 systemd[1]: libpod-fdcfeb7d609a8d36de8227a0d895799933a9fc3d8fe84cace1e253a87c33f497.scope: Deactivated successfully.
Mar  1 05:18:00 np0005634532 podman[285946]: 2026-03-01 10:18:00.808416282 +0000 UTC m=+0.093909285 container attach fdcfeb7d609a8d36de8227a0d895799933a9fc3d8fe84cace1e253a87c33f497 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_austin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Mar  1 05:18:00 np0005634532 podman[285946]: 2026-03-01 10:18:00.809506598 +0000 UTC m=+0.094999591 container died fdcfeb7d609a8d36de8227a0d895799933a9fc3d8fe84cace1e253a87c33f497 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_austin, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Mar  1 05:18:00 np0005634532 podman[285946]: 2026-03-01 10:18:00.733988186 +0000 UTC m=+0.019481199 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Mar  1 05:18:00 np0005634532 systemd[1]: var-lib-containers-storage-overlay-a5cbed7f6a26f35d954d3137beabaf049a1aa28b43963311fb304eec54f5338b-merged.mount: Deactivated successfully.
Mar  1 05:18:00 np0005634532 podman[285946]: 2026-03-01 10:18:00.843954553 +0000 UTC m=+0.129447546 container remove fdcfeb7d609a8d36de8227a0d895799933a9fc3d8fe84cace1e253a87c33f497 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_austin, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325)
Mar  1 05:18:00 np0005634532 systemd[1]: libpod-conmon-fdcfeb7d609a8d36de8227a0d895799933a9fc3d8fe84cace1e253a87c33f497.scope: Deactivated successfully.
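The `Health check failed: 2 failed cephadm daemon(s) (CEPHADM_FAILED_DAEMON)` entry a few lines up is the mgr's cephadm module flagging daemons whose systemd units are not in a running state; the check clears on its own once the daemons recover or are redeployed. One way to see which daemons tripped it, assuming the `checks -> CEPHADM_FAILED_DAEMON -> summary/detail` JSON layout used by current Ceph releases:

```python
import json
import subprocess

out = subprocess.run(
    ["ceph", "health", "detail", "--format", "json"],
    check=True, capture_output=True, text=True,
).stdout

checks = json.loads(out).get("checks", {})
failed = checks.get("CEPHADM_FAILED_DAEMON")
if failed:
    print(failed["summary"]["message"])
    for item in failed.get("detail", []):
        print(" -", item["message"])
```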
Mar  1 05:18:00 np0005634532 podman[285986]: 2026-03-01 10:18:00.959408496 +0000 UTC m=+0.034886687 container create 9caab90c0bfad843b4d4f51646d7b431f93dcf9a16eaeaae90c4173053221f30 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_saha, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1)
Mar  1 05:18:00 np0005634532 systemd[1]: Started libpod-conmon-9caab90c0bfad843b4d4f51646d7b431f93dcf9a16eaeaae90c4173053221f30.scope.
Mar  1 05:18:01 np0005634532 systemd[1]: Started libcrun container.
Mar  1 05:18:01 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cda8a9fdb0d9a8cc1d31ae11179b3196e77e388f2aa9f0dc1f31de09e2020543/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Mar  1 05:18:01 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cda8a9fdb0d9a8cc1d31ae11179b3196e77e388f2aa9f0dc1f31de09e2020543/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Mar  1 05:18:01 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cda8a9fdb0d9a8cc1d31ae11179b3196e77e388f2aa9f0dc1f31de09e2020543/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Mar  1 05:18:01 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cda8a9fdb0d9a8cc1d31ae11179b3196e77e388f2aa9f0dc1f31de09e2020543/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Mar  1 05:18:01 np0005634532 podman[285986]: 2026-03-01 10:18:01.039586712 +0000 UTC m=+0.115064923 container init 9caab90c0bfad843b4d4f51646d7b431f93dcf9a16eaeaae90c4173053221f30 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_saha, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Mar  1 05:18:01 np0005634532 podman[285986]: 2026-03-01 10:18:00.94288155 +0000 UTC m=+0.018359751 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Mar  1 05:18:01 np0005634532 podman[285986]: 2026-03-01 10:18:01.04397908 +0000 UTC m=+0.119457291 container start 9caab90c0bfad843b4d4f51646d7b431f93dcf9a16eaeaae90c4173053221f30 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_saha, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Mar  1 05:18:01 np0005634532 podman[285986]: 2026-03-01 10:18:01.047188629 +0000 UTC m=+0.122666840 container attach 9caab90c0bfad843b4d4f51646d7b431f93dcf9a16eaeaae90c4173053221f30 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_saha, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Mar  1 05:18:01 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1066: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 514 B/s rd, 0 op/s
Mar  1 05:18:01 np0005634532 awesome_saha[286002]: {
Mar  1 05:18:01 np0005634532 awesome_saha[286002]:    "0": [
Mar  1 05:18:01 np0005634532 awesome_saha[286002]:        {
Mar  1 05:18:01 np0005634532 awesome_saha[286002]:            "devices": [
Mar  1 05:18:01 np0005634532 awesome_saha[286002]:                "/dev/loop3"
Mar  1 05:18:01 np0005634532 awesome_saha[286002]:            ],
Mar  1 05:18:01 np0005634532 awesome_saha[286002]:            "lv_name": "ceph_lv0",
Mar  1 05:18:01 np0005634532 awesome_saha[286002]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Mar  1 05:18:01 np0005634532 awesome_saha[286002]:            "lv_size": "21470642176",
Mar  1 05:18:01 np0005634532 awesome_saha[286002]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=0RN1y3-WDwf-vmRn-3Uec-9cdU-WGcX-8Z6LEG,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=437b1e74-f995-5d64-af1d-257ce01d77ab,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=e5da778e-73b7-4ea1-8a91-750fe3f6aa68,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Mar  1 05:18:01 np0005634532 awesome_saha[286002]:            "lv_uuid": "0RN1y3-WDwf-vmRn-3Uec-9cdU-WGcX-8Z6LEG",
Mar  1 05:18:01 np0005634532 awesome_saha[286002]:            "name": "ceph_lv0",
Mar  1 05:18:01 np0005634532 awesome_saha[286002]:            "path": "/dev/ceph_vg0/ceph_lv0",
Mar  1 05:18:01 np0005634532 awesome_saha[286002]:            "tags": {
Mar  1 05:18:01 np0005634532 awesome_saha[286002]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Mar  1 05:18:01 np0005634532 awesome_saha[286002]:                "ceph.block_uuid": "0RN1y3-WDwf-vmRn-3Uec-9cdU-WGcX-8Z6LEG",
Mar  1 05:18:01 np0005634532 awesome_saha[286002]:                "ceph.cephx_lockbox_secret": "",
Mar  1 05:18:01 np0005634532 awesome_saha[286002]:                "ceph.cluster_fsid": "437b1e74-f995-5d64-af1d-257ce01d77ab",
Mar  1 05:18:01 np0005634532 awesome_saha[286002]:                "ceph.cluster_name": "ceph",
Mar  1 05:18:01 np0005634532 awesome_saha[286002]:                "ceph.crush_device_class": "",
Mar  1 05:18:01 np0005634532 awesome_saha[286002]:                "ceph.encrypted": "0",
Mar  1 05:18:01 np0005634532 awesome_saha[286002]:                "ceph.osd_fsid": "e5da778e-73b7-4ea1-8a91-750fe3f6aa68",
Mar  1 05:18:01 np0005634532 awesome_saha[286002]:                "ceph.osd_id": "0",
Mar  1 05:18:01 np0005634532 awesome_saha[286002]:                "ceph.osdspec_affinity": "default_drive_group",
Mar  1 05:18:01 np0005634532 awesome_saha[286002]:                "ceph.type": "block",
Mar  1 05:18:01 np0005634532 awesome_saha[286002]:                "ceph.vdo": "0",
Mar  1 05:18:01 np0005634532 awesome_saha[286002]:                "ceph.with_tpm": "0"
Mar  1 05:18:01 np0005634532 awesome_saha[286002]:            },
Mar  1 05:18:01 np0005634532 awesome_saha[286002]:            "type": "block",
Mar  1 05:18:01 np0005634532 awesome_saha[286002]:            "vg_name": "ceph_vg0"
Mar  1 05:18:01 np0005634532 awesome_saha[286002]:        }
Mar  1 05:18:01 np0005634532 awesome_saha[286002]:    ]
Mar  1 05:18:01 np0005634532 awesome_saha[286002]: }
Mar  1 05:18:01 np0005634532 systemd[1]: libpod-9caab90c0bfad843b4d4f51646d7b431f93dcf9a16eaeaae90c4173053221f30.scope: Deactivated successfully.
Mar  1 05:18:01 np0005634532 podman[285986]: 2026-03-01 10:18:01.350249653 +0000 UTC m=+0.425727844 container died 9caab90c0bfad843b4d4f51646d7b431f93dcf9a16eaeaae90c4173053221f30 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_saha, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, CEPH_REF=squid)
Mar  1 05:18:01 np0005634532 systemd[1]: var-lib-containers-storage-overlay-cda8a9fdb0d9a8cc1d31ae11179b3196e77e388f2aa9f0dc1f31de09e2020543-merged.mount: Deactivated successfully.
Mar  1 05:18:01 np0005634532 podman[285986]: 2026-03-01 10:18:01.392037978 +0000 UTC m=+0.467516169 container remove 9caab90c0bfad843b4d4f51646d7b431f93dcf9a16eaeaae90c4173053221f30 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_saha, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Mar  1 05:18:01 np0005634532 systemd[1]: libpod-conmon-9caab90c0bfad843b4d4f51646d7b431f93dcf9a16eaeaae90c4173053221f30.scope: Deactivated successfully.
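The JSON document printed by the awesome_saha container above is shaped like `ceph-volume lvm list --format json` output: a map from OSD id to the logical volumes backing it, with the metadata duplicated as a flat `lv_tags` string and a parsed `tags` map (`lv_size` is a byte count as a string; "21470642176" is ≈ 20 GiB). A small consumer, using only keys visible in the log itself; the input filename is hypothetical, standing in for the JSON block captured above:

```python
import json

with open("lvm_list.json") as f:   # the JSON block above, saved to a file
    listing = json.load(f)

for osd_id, lvs in listing.items():
    for lv in lvs:
        tags = lv["tags"]
        print(f"osd.{osd_id}: {lv['lv_path']} "
              f"(type={lv['type']}, "
              f"cluster_fsid={tags['ceph.cluster_fsid']}, "
              f"osd_fsid={tags['ceph.osd_fsid']}, "
              f"size={int(lv['lv_size']) / 2**30:.1f} GiB)")
```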
Mar  1 05:18:01 np0005634532 nova_compute[257049]: 2026-03-01 10:18:01.461 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:18:01 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:18:01 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:18:01 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:18:01.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:18:01 np0005634532 podman[286120]: 2026-03-01 10:18:01.894182766 +0000 UTC m=+0.029064134 container create 1a8222331c7b41b2bc5154ad539edb6bb879c3b87a0eac0d053777fecc1da4d0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_kapitsa, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Mar  1 05:18:01 np0005634532 systemd[1]: Started libpod-conmon-1a8222331c7b41b2bc5154ad539edb6bb879c3b87a0eac0d053777fecc1da4d0.scope.
Mar  1 05:18:01 np0005634532 systemd[1]: Started libcrun container.
Mar  1 05:18:01 np0005634532 podman[286120]: 2026-03-01 10:18:01.940916792 +0000 UTC m=+0.075798180 container init 1a8222331c7b41b2bc5154ad539edb6bb879c3b87a0eac0d053777fecc1da4d0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_kapitsa, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Mar  1 05:18:01 np0005634532 podman[286120]: 2026-03-01 10:18:01.944780617 +0000 UTC m=+0.079661985 container start 1a8222331c7b41b2bc5154ad539edb6bb879c3b87a0eac0d053777fecc1da4d0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_kapitsa, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Mar  1 05:18:01 np0005634532 peaceful_kapitsa[286136]: 167 167
Mar  1 05:18:01 np0005634532 systemd[1]: libpod-1a8222331c7b41b2bc5154ad539edb6bb879c3b87a0eac0d053777fecc1da4d0.scope: Deactivated successfully.
Mar  1 05:18:01 np0005634532 podman[286120]: 2026-03-01 10:18:01.949216216 +0000 UTC m=+0.084097614 container attach 1a8222331c7b41b2bc5154ad539edb6bb879c3b87a0eac0d053777fecc1da4d0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_kapitsa, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Mar  1 05:18:01 np0005634532 podman[286120]: 2026-03-01 10:18:01.949546834 +0000 UTC m=+0.084428202 container died 1a8222331c7b41b2bc5154ad539edb6bb879c3b87a0eac0d053777fecc1da4d0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_kapitsa, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9)
Mar  1 05:18:01 np0005634532 systemd[1]: var-lib-containers-storage-overlay-726a59955bb2df8a16d85fc2d846872d7c0c13fd6214c537ff8a92be331ec771-merged.mount: Deactivated successfully.
Mar  1 05:18:01 np0005634532 podman[286120]: 2026-03-01 10:18:01.881774791 +0000 UTC m=+0.016656189 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Mar  1 05:18:01 np0005634532 podman[286120]: 2026-03-01 10:18:01.985036644 +0000 UTC m=+0.119918022 container remove 1a8222331c7b41b2bc5154ad539edb6bb879c3b87a0eac0d053777fecc1da4d0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=peaceful_kapitsa, io.buildah.version=1.40.1, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Mar  1 05:18:01 np0005634532 systemd[1]: libpod-conmon-1a8222331c7b41b2bc5154ad539edb6bb879c3b87a0eac0d053777fecc1da4d0.scope: Deactivated successfully.
Mar  1 05:18:02 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
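The recurring `_set_new_cache_sizes` line is the monitor's cache autotuner re-splitting its memory budget: the three allocations it prints account for essentially the whole target (343932928 + 348127232 + 318767104 = 1010827264 bytes, about 99.1% of the 1020054731-byte cache_size, which is itself ≈ 0.95 GiB), the remainder being rounding slack. Checked in a couple of lines:

```python
cache_size = 1020054731
inc_alloc, full_alloc, kv_alloc = 343932928, 348127232, 318767104

total = inc_alloc + full_alloc + kv_alloc
print(f"{total} bytes allocated, {total / cache_size:.1%} of the "
      f"{cache_size / 2**30:.2f} GiB cache target")
```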
Mar  1 05:18:02 np0005634532 podman[286158]: 2026-03-01 10:18:02.111783164 +0000 UTC m=+0.038350052 container create 665a3dfb50ddc0311a0ee7f87afab1e924b95e7312b71b7d84b5335d9e108b09 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_saha, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.40.1)
Mar  1 05:18:02 np0005634532 systemd[1]: Started libpod-conmon-665a3dfb50ddc0311a0ee7f87afab1e924b95e7312b71b7d84b5335d9e108b09.scope.
Mar  1 05:18:02 np0005634532 systemd[1]: Started libcrun container.
Mar  1 05:18:02 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/093dcfe1cbc6efa82ff630df6c4a629c95078e6763d1479c33c7cc69ae2a9b6b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Mar  1 05:18:02 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/093dcfe1cbc6efa82ff630df6c4a629c95078e6763d1479c33c7cc69ae2a9b6b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Mar  1 05:18:02 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/093dcfe1cbc6efa82ff630df6c4a629c95078e6763d1479c33c7cc69ae2a9b6b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Mar  1 05:18:02 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/093dcfe1cbc6efa82ff630df6c4a629c95078e6763d1479c33c7cc69ae2a9b6b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Mar  1 05:18:02 np0005634532 podman[286158]: 2026-03-01 10:18:02.095400272 +0000 UTC m=+0.021967190 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Mar  1 05:18:02 np0005634532 podman[286158]: 2026-03-01 10:18:02.20293317 +0000 UTC m=+0.129500148 container init 665a3dfb50ddc0311a0ee7f87afab1e924b95e7312b71b7d84b5335d9e108b09 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_saha, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Mar  1 05:18:02 np0005634532 podman[286158]: 2026-03-01 10:18:02.209905031 +0000 UTC m=+0.136471919 container start 665a3dfb50ddc0311a0ee7f87afab1e924b95e7312b71b7d84b5335d9e108b09 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_saha, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Mar  1 05:18:02 np0005634532 podman[286158]: 2026-03-01 10:18:02.215417056 +0000 UTC m=+0.141984034 container attach 665a3dfb50ddc0311a0ee7f87afab1e924b95e7312b71b7d84b5335d9e108b09 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_saha, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Mar  1 05:18:02 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:18:02 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000024s ======
Mar  1 05:18:02 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:18:02.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Mar  1 05:18:02 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/prometheus/health_history}] v 0)
Mar  1 05:18:02 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 05:18:02 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Mar  1 05:18:02 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Mar  1 05:18:02 np0005634532 lvm[286251]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Mar  1 05:18:02 np0005634532 lvm[286251]: VG ceph_vg0 finished
Mar  1 05:18:02 np0005634532 romantic_saha[286175]: {}
Mar  1 05:18:02 np0005634532 systemd[1]: libpod-665a3dfb50ddc0311a0ee7f87afab1e924b95e7312b71b7d84b5335d9e108b09.scope: Deactivated successfully.
Mar  1 05:18:02 np0005634532 podman[286158]: 2026-03-01 10:18:02.869885911 +0000 UTC m=+0.796452809 container died 665a3dfb50ddc0311a0ee7f87afab1e924b95e7312b71b7d84b5335d9e108b09 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_saha, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Mar  1 05:18:02 np0005634532 systemd[1]: var-lib-containers-storage-overlay-093dcfe1cbc6efa82ff630df6c4a629c95078e6763d1479c33c7cc69ae2a9b6b-merged.mount: Deactivated successfully.
Mar  1 05:18:02 np0005634532 podman[286158]: 2026-03-01 10:18:02.933395928 +0000 UTC m=+0.859962826 container remove 665a3dfb50ddc0311a0ee7f87afab1e924b95e7312b71b7d84b5335d9e108b09 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=romantic_saha, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Mar  1 05:18:02 np0005634532 systemd[1]: libpod-conmon-665a3dfb50ddc0311a0ee7f87afab1e924b95e7312b71b7d84b5335d9e108b09.scope: Deactivated successfully.
Mar  1 05:18:02 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Mar  1 05:18:03 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 05:18:03 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Mar  1 05:18:03 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
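The pair of `config-key set` commands above (`mgr/cephadm/host.compute-0.devices.0` and `mgr/cephadm/host.compute-0`) is cephadm persisting the device inventory it just gathered with the probe containers into the monitors' config-key store, where it survives mgr failover. The cached inventory can be read back directly; the key name below is taken verbatim from the log:

```python
import subprocess

key = "mgr/cephadm/host.compute-0.devices.0"
out = subprocess.run(
    ["ceph", "config-key", "get", key],
    check=True, capture_output=True, text=True,
).stdout
print(out)   # JSON blob describing compute-0's devices, as cached by cephadm
```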
Mar  1 05:18:03 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1067: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 514 B/s rd, 0 op/s
Mar  1 05:18:03 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 05:18:03 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 05:18:03 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 05:18:03 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:18:03 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:18:03 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:18:03.602 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:18:04 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:18:04 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Mar  1 05:18:04 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:18:04 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Mar  1 05:18:04 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:18:04 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Mar  1 05:18:04 np0005634532 nova_compute[257049]: 2026-03-01 10:18:04.031 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:18:04 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:18:04 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
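The four ganesha.nfsd lines above form one grace cycle: the server enters a 90-second grace period, reloads client recovery info from the RADOS backend, finds no clients with reclaimable state (`clid count(0)`), and asks whether the rest of the cluster is still enforcing grace. The same cycle repeats at 10:18:08 and 10:18:13 below, i.e. every four to five seconds, which suggests the grace lift is being retried rather than completing; that is an inference from the timestamps, not an explicit log statement. A quick way to measure the cadence from the raw log:

```python
import re
from datetime import datetime

# ganesha timestamps its own messages as DD/MM/YYYY HH:MM:SS
GRACE_RE = re.compile(r"(\d{2}/\d{2}/\d{4} \d{2}:\d{2}:\d{2}) .*NFS Server Now IN GRACE")

def grace_intervals(log_lines):
    """Seconds between successive 'IN GRACE' announcements."""
    times = [datetime.strptime(m.group(1), "%d/%m/%Y %H:%M:%S")
             for m in (GRACE_RE.search(line) for line in log_lines) if m]
    return [(b - a).total_seconds() for a, b in zip(times, times[1:])]

# usage: print(grace_intervals(open("messages")))   # e.g. [4.0, 5.0, ...]
```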
Mar  1 05:18:04 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:18:04 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:18:04 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:18:04.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:18:05 np0005634532 systemd[1]: session-56.scope: Deactivated successfully.
Mar  1 05:18:05 np0005634532 systemd[1]: session-56.scope: Consumed 2min 49.192s CPU time, 975.2M memory peak, read 485.0M from disk, written 79.3M to disk.
Mar  1 05:18:05 np0005634532 systemd-logind[832]: Session 56 logged out. Waiting for processes to exit.
Mar  1 05:18:05 np0005634532 systemd-logind[832]: Removed session 56.
Mar  1 05:18:05 np0005634532 systemd-logind[832]: New session 57 of user zuul.
Mar  1 05:18:05 np0005634532 systemd[1]: Started Session 57 of User zuul.
Mar  1 05:18:05 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1068: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 772 B/s rd, 0 op/s
Mar  1 05:18:05 np0005634532 systemd[1]: session-57.scope: Deactivated successfully.
Mar  1 05:18:05 np0005634532 systemd-logind[832]: Session 57 logged out. Waiting for processes to exit.
Mar  1 05:18:05 np0005634532 systemd-logind[832]: Removed session 57.
Mar  1 05:18:05 np0005634532 systemd-logind[832]: New session 58 of user zuul.
Mar  1 05:18:05 np0005634532 systemd[1]: Started Session 58 of User zuul.
Mar  1 05:18:05 np0005634532 systemd[1]: session-58.scope: Deactivated successfully.
Mar  1 05:18:05 np0005634532 systemd-logind[832]: Session 58 logged out. Waiting for processes to exit.
Mar  1 05:18:05 np0005634532 systemd-logind[832]: Removed session 58.
Mar  1 05:18:05 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:18:05 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000024s ======
Mar  1 05:18:05 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:18:05.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Mar  1 05:18:06 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:18:06 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:18:06 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:18:06.260 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:18:06 np0005634532 nova_compute[257049]: 2026-03-01 10:18:06.509 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:18:07 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: ::ffff:192.168.122.100 - - [01/Mar/2026:10:18:07] "GET /metrics HTTP/1.1" 200 48457 "" "Prometheus/2.51.0"
Mar  1 05:18:07 np0005634532 ceph-mgr[76134]: [prometheus INFO cherrypy.access.140604152982112] ::ffff:192.168.122.100 - - [01/Mar/2026:10:18:07] "GET /metrics HTTP/1.1" 200 48457 "" "Prometheus/2.51.0"
Mar  1 05:18:07 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:18:07 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1069: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 514 B/s rd, 0 op/s
Mar  1 05:18:07 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:18:07.294Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
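The Alertmanager dispatch error above says both dashboard webhook receivers are unreachable: compute-2 times out at the TCP level (`i/o timeout`) and compute-1 exhausts the overall notify deadline (`context deadline exceeded`). Note also that the receiver URLs use plain `http://` against port 8443, commonly a TLS port; if the dashboard serves HTTPS there, notifications will keep failing until the scheme matches, though the log alone cannot confirm that. Before digging into Alertmanager config, a stdlib-only reachability probe (URL copied from the log; the empty POST body and 5-second timeout are my own choices):

```python
import urllib.request

URL = "http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver"

try:
    # Alertmanager POSTs alert batches; an empty JSON array is enough to test reachability.
    req = urllib.request.Request(URL, data=b"[]", method="POST",
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req, timeout=5) as resp:
        print("reachable, HTTP", resp.status)
except OSError as exc:   # covers connection refused, timeouts, DNS failures
    print("unreachable:", exc)
```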
Mar  1 05:18:07 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:18:07 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:18:07 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:18:07.610 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:18:08 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:18:08 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 05:18:08 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:18:08.262 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 05:18:09 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:18:08 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Mar  1 05:18:09 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:18:08 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Mar  1 05:18:09 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:18:08 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Mar  1 05:18:09 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:18:09 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Mar  1 05:18:09 np0005634532 nova_compute[257049]: 2026-03-01 10:18:09.034 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:18:09 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1070: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 771 B/s rd, 0 op/s
Mar  1 05:18:09 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:18:09 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:18:09 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:18:09.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:18:10 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:18:10 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000024s ======
Mar  1 05:18:10 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:18:10.264 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Mar  1 05:18:11 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1071: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Mar  1 05:18:11 np0005634532 nova_compute[257049]: 2026-03-01 10:18:11.554 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:18:11 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:18:11 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 05:18:11 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:18:11.616 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 05:18:12 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:18:12 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:18:12 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:18:12 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:18:12.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:18:13 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1072: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Mar  1 05:18:13 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:18:13 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 05:18:13 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:18:13.619 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 05:18:14 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:18:13 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Mar  1 05:18:14 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:18:13 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Mar  1 05:18:14 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:18:13 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Mar  1 05:18:14 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:18:14 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Mar  1 05:18:14 np0005634532 nova_compute[257049]: 2026-03-01 10:18:14.037 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:18:14 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:18:14 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 05:18:14 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:18:14.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 05:18:15 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1073: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Mar  1 05:18:15 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:18:15 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:18:15 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:18:15.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:18:16 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:18:16 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:18:16 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:18:16.270 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:18:16 np0005634532 nova_compute[257049]: 2026-03-01 10:18:16.890 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:18:17 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: ::ffff:192.168.122.100 - - [01/Mar/2026:10:18:17] "GET /metrics HTTP/1.1" 200 48457 "" "Prometheus/2.51.0"
Mar  1 05:18:17 np0005634532 ceph-mgr[76134]: [prometheus INFO cherrypy.access.140604152982112] ::ffff:192.168.122.100 - - [01/Mar/2026:10:18:17] "GET /metrics HTTP/1.1" 200 48457 "" "Prometheus/2.51.0"
Mar  1 05:18:17 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:18:17 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1074: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Mar  1 05:18:17 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:18:17.295Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Mar  1 05:18:17 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:18:17.295Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Mar  1 05:18:17 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:18:17.295Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
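Alertmanager on compute-0 cannot deliver the ceph-dashboard webhook to either peer: the POSTs to compute-1 and compute-2 on port 8443 fail at the TCP dial stage and the retries are eventually cancelled, while the dashboard API line later in this capture (10:18:48, /api/prometheus_receiver answered 200 in 0.002s on the local node) shows the receiver itself works where it is reachable. A stdlib-only reachability probe for one of the failing endpoints, with the URL copied verbatim from the error:

    import socket
    import urllib.request

    HOST, PORT = 'compute-1.ctlplane.example.com', 8443
    URL = f'http://{HOST}:{PORT}/api/prometheus_receiver'

    try:   # mirror the failing 'dial tcp' step first
        socket.create_connection((HOST, PORT), timeout=5).close()
        print('TCP connect OK')
    except OSError as exc:
        print(f'TCP connect failed: {exc}')

    try:   # then the POST Alertmanager would issue (empty JSON body, for illustration)
        req = urllib.request.Request(URL, data=b'{}',
                                     headers={'Content-Type': 'application/json'})
        with urllib.request.urlopen(req, timeout=5) as resp:
            print('HTTP', resp.status)
    except OSError as exc:
        print(f'HTTP request failed: {exc}')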
Mar  1 05:18:17 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Mar  1 05:18:17 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Mar  1 05:18:17 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Mar  1 05:18:17 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Mar  1 05:18:17 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Mar  1 05:18:17 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Mar  1 05:18:17 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Mar  1 05:18:17 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Mar  1 05:18:17 np0005634532 ceph-mgr[76134]: [balancer INFO root] Optimize plan auto_2026-03-01_10:18:17
Mar  1 05:18:17 np0005634532 ceph-mgr[76134]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Mar  1 05:18:18 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:18:18 np0005634532 ceph-mgr[76134]: [balancer INFO root] do_upmap
Mar  1 05:18:18 np0005634532 ceph-mgr[76134]: [balancer INFO root] pools ['.mgr', 'default.rgw.meta', 'cephfs.cephfs.meta', 'volumes', 'default.rgw.control', '.rgw.root', 'images', 'vms', 'backups', 'cephfs.cephfs.data', '.nfs', 'default.rgw.log']
Mar  1 05:18:18 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:18:18 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:18:18.138 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:18:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] _maybe_adjust
Mar  1 05:18:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:18:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Mar  1 05:18:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:18:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Mar  1 05:18:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:18:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Mar  1 05:18:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:18:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Mar  1 05:18:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:18:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Mar  1 05:18:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:18:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Mar  1 05:18:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:18:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Mar  1 05:18:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:18:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Mar  1 05:18:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:18:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Mar  1 05:18:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:18:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Mar  1 05:18:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:18:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Mar  1 05:18:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:18:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Mar  1 05:18:18 np0005634532 ceph-mgr[76134]: [balancer INFO root] prepared 0/10 upmap changes
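The pg_autoscaler arithmetic above is internally consistent: each raw pg target equals the pool's share of raw space times its bias times 300, which matches the default mon_target_pg_per_osd = 100 on a 3-OSD cluster (an inference from the numbers; the OSD count is not stated in this excerpt). The quantized 16/32 figures come from the autoscaler's per-pool floors, since every raw target here is far below 1, and 'prepared 0/10 upmap changes' means the balancer found nothing worth remapping. A worked check against the cephfs.cephfs.meta line:

    ratio = 5.087256625643029e-07   # 'using ... of space' for cephfs.cephfs.meta
    bias = 4.0
    pg_per_osd, osds = 100, 3       # assumed defaults consistent with the factor of 300

    pg_target = ratio * bias * pg_per_osd * osds
    print(pg_target)                # 0.0006104707950771635, exactly as logged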
Mar  1 05:18:18 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:18:18 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:18:18 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:18:18.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:18:19 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:18:19 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Mar  1 05:18:19 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:18:19 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Mar  1 05:18:19 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:18:19 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Mar  1 05:18:19 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:18:19 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Mar  1 05:18:19 np0005634532 nova_compute[257049]: 2026-03-01 10:18:19.041 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:18:19 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1075: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Mar  1 05:18:19 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Mar  1 05:18:19 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: vms, start_after=
Mar  1 05:18:19 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Mar  1 05:18:19 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: vms, start_after=
Mar  1 05:18:19 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: volumes, start_after=
Mar  1 05:18:19 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: backups, start_after=
Mar  1 05:18:19 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: images, start_after=
Mar  1 05:18:19 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: volumes, start_after=
Mar  1 05:18:19 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: backups, start_after=
Mar  1 05:18:19 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: images, start_after=
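MirrorSnapshotScheduleHandler and TrashPurgeScheduleHandler in the mgr rbd_support module periodically reload per-pool schedules; the bare start_after= for vms, volumes, backups and images indicates no mirror-snapshot or trash-purge schedules are configured. For illustration, the corresponding rbd CLI queries wrapped in Python (assumes admin client access to this cluster):

    import subprocess

    for cmd in (['rbd', 'mirror', 'snapshot', 'schedule', 'ls', '--pool', 'vms', '--recursive'],
                ['rbd', 'trash', 'purge', 'schedule', 'ls', '--pool', 'vms']):
        out = subprocess.run(cmd, capture_output=True, text=True)
        print(' '.join(cmd), '->', out.stdout.strip() or out.stderr.strip() or '(none)')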
Mar  1 05:18:20 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:18:20 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000024s ======
Mar  1 05:18:20 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:18:20.140 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Mar  1 05:18:20 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:18:20 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:18:20 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:18:20.274 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:18:20 np0005634532 podman[286392]: 2026-03-01 10:18:20.382627089 +0000 UTC m=+0.075444082 container health_status eba5e7c5bf1cb0694498c3b5d0f9b901dec36f2482ec40aef98fed1e15d9f18d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:bf1f44b6bfa655654f556ae3adca2582892e72550d916a5b0a8dbacb0e210bbe, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '08b9467e1b7e95537191fb2fa6825d176e65d7d6be048e9a0925db064969bbde-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:bf1f44b6bfa655654f556ae3adca2582892e72550d916a5b0a8dbacb0e210bbe', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.43.0, managed_by=edpm_ansible)
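This podman event records a scheduled health check of the ovn_controller container: the configured test is the /openstack/healthcheck script bind-mounted from /var/lib/openstack/healthchecks/ovn_controller, and health_status=healthy with health_failing_streak=0 means the run passed with no recent consecutive failures. The same check can be triggered on demand; exit status 0 is healthy:

    import subprocess

    # Run the container's configured healthcheck once via podman.
    rc = subprocess.run(['podman', 'healthcheck', 'run', 'ovn_controller']).returncode
    print('healthy' if rc == 0 else f'unhealthy (rc={rc})')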
Mar  1 05:18:21 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1076: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Mar  1 05:18:21 np0005634532 nova_compute[257049]: 2026-03-01 10:18:21.887 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:18:22 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:18:22 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:18:22 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:18:22 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:18:22.143 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:18:22 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:18:22 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:18:22 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:18:22.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:18:23 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1077: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Mar  1 05:18:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:18:23.892 167541 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Mar  1 05:18:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:18:23.893 167541 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Mar  1 05:18:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:18:23.893 167541 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
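The Acquiring/acquired/released triple is oslo.concurrency's standard DEBUG trace around a named lock, here guarding ProcessMonitor._check_child_processes; waited 0.000s and held 0.000s show the lock is uncontended. Roughly the pattern these lines correspond to (a sketch, not neutron's actual code):

    from oslo_concurrency import lockutils

    # With oslo debug logging enabled, each call emits the same
    # Acquiring / acquired / released lines seen above.
    @lockutils.synchronized('_check_child_processes')
    def _check_child_processes():
        pass  # inspect monitored child processes while holding the lock

    _check_child_processes()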
Mar  1 05:18:24 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:18:24 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Mar  1 05:18:24 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:18:24 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Mar  1 05:18:24 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:18:24 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Mar  1 05:18:24 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:18:24 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Mar  1 05:18:24 np0005634532 nova_compute[257049]: 2026-03-01 10:18:24.088 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:18:24 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:18:24 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:18:24 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:18:24.146 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:18:24 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:18:24 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:18:24 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:18:24.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:18:24 np0005634532 podman[286422]: 2026-03-01 10:18:24.376847588 +0000 UTC m=+0.065666772 container health_status 1df4dec302d8b40bce93714986cfc892519e8a31436399020d0034ae17ac64d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a5baae9e21dffd42c66f546b0b62f95c6af9f409d05e01e7483de42d5dd2a382, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '08b9467e1b7e95537191fb2fa6825d176e65d7d6be048e9a0925db064969bbde-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a5baae9e21dffd42c66f546b0b62f95c6af9f409d05e01e7483de42d5dd2a382', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.build-date=20260223)
Mar  1 05:18:25 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1078: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Mar  1 05:18:26 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:18:26 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:18:26 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:18:26.148 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:18:26 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:18:26 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:18:26 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:18:26.280 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:18:26 np0005634532 nova_compute[257049]: 2026-03-01 10:18:26.889 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:18:27 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: ::ffff:192.168.122.100 - - [01/Mar/2026:10:18:27] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Mar  1 05:18:27 np0005634532 ceph-mgr[76134]: [prometheus INFO cherrypy.access.140604152982112] ::ffff:192.168.122.100 - - [01/Mar/2026:10:18:27] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Mar  1 05:18:27 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:18:27 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1079: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Mar  1 05:18:27 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:18:27.296Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Mar  1 05:18:27 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:18:27.296Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Mar  1 05:18:27 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:18:27.296Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Mar  1 05:18:28 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:18:28 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:18:28 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:18:28.150 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:18:28 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:18:28 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:18:28 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:18:28.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:18:29 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:18:28 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Mar  1 05:18:29 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:18:28 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Mar  1 05:18:29 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:18:28 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Mar  1 05:18:29 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:18:29 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Mar  1 05:18:29 np0005634532 nova_compute[257049]: 2026-03-01 10:18:29.092 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:18:29 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1080: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Mar  1 05:18:30 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:18:30 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:18:30 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:18:30.154 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:18:30 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:18:30 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:18:30 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:18:30.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:18:31 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1081: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Mar  1 05:18:31 np0005634532 nova_compute[257049]: 2026-03-01 10:18:31.890 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:18:32 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:18:32 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:18:32 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:18:32 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:18:32.157 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:18:32 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:18:32 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:18:32 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:18:32.285 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:18:32 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Mar  1 05:18:32 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
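The mon audit lines at 10:18:17, 10:18:32 and 10:18:47 show the active mgr polling 'osd blocklist ls' on a 15-second cadence; this is routine housekeeping, not evidence of blocklisted clients. The equivalent query wrapped in Python (assumes a reachable cluster and a valid keyring):

    import json
    import subprocess

    out = subprocess.run(['ceph', 'osd', 'blocklist', 'ls', '--format', 'json'],
                         capture_output=True, text=True, check=True)
    entries = json.loads(out.stdout or '[]')
    print(f'{len(entries)} blocklist entries')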
Mar  1 05:18:33 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1082: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Mar  1 05:18:34 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:18:33 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Mar  1 05:18:34 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:18:33 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Mar  1 05:18:34 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:18:33 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Mar  1 05:18:34 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:18:34 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Mar  1 05:18:34 np0005634532 nova_compute[257049]: 2026-03-01 10:18:34.136 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:18:34 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:18:34 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:18:34 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:18:34.160 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:18:34 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:18:34 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:18:34 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:18:34.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:18:35 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1083: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Mar  1 05:18:36 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:18:36 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:18:36 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:18:36.163 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:18:36 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:18:36 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:18:36 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:18:36.289 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:18:36 np0005634532 nova_compute[257049]: 2026-03-01 10:18:36.892 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:18:37 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: ::ffff:192.168.122.100 - - [01/Mar/2026:10:18:37] "GET /metrics HTTP/1.1" 200 48460 "" "Prometheus/2.51.0"
Mar  1 05:18:37 np0005634532 ceph-mgr[76134]: [prometheus INFO cherrypy.access.140604152982112] ::ffff:192.168.122.100 - - [01/Mar/2026:10:18:37] "GET /metrics HTTP/1.1" 200 48460 "" "Prometheus/2.51.0"
Mar  1 05:18:37 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:18:37 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1084: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Mar  1 05:18:37 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:18:37.296Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Mar  1 05:18:37 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:18:37.296Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Mar  1 05:18:37 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:18:37.297Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Mar  1 05:18:38 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:18:38 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:18:38 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:18:38.165 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:18:38 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:18:38 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:18:38 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:18:38.291 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:18:39 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:18:38 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Mar  1 05:18:39 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:18:38 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Mar  1 05:18:39 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:18:38 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Mar  1 05:18:39 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:18:39 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Mar  1 05:18:39 np0005634532 nova_compute[257049]: 2026-03-01 10:18:39.186 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:18:39 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1085: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Mar  1 05:18:40 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:18:40 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:18:40 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:18:40.168 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:18:40 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:18:40 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:18:40 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:18:40.293 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:18:41 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1086: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Mar  1 05:18:41 np0005634532 nova_compute[257049]: 2026-03-01 10:18:41.892 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:18:42 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:18:42 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:18:42 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 05:18:42 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:18:42.170 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 05:18:42 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:18:42 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:18:42 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:18:42.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:18:43 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1087: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Mar  1 05:18:44 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:18:43 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Mar  1 05:18:44 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:18:43 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Mar  1 05:18:44 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:18:43 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Mar  1 05:18:44 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:18:44 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Mar  1 05:18:44 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:18:44 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:18:44 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:18:44.174 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:18:44 np0005634532 nova_compute[257049]: 2026-03-01 10:18:44.189 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:18:44 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:18:44 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 05:18:44 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:18:44.296 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 05:18:44 np0005634532 nova_compute[257049]: 2026-03-01 10:18:44.976 257053 DEBUG oslo_service.periodic_task [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Mar  1 05:18:45 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1088: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Mar  1 05:18:45 np0005634532 nova_compute[257049]: 2026-03-01 10:18:45.977 257053 DEBUG oslo_service.periodic_task [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Mar  1 05:18:45 np0005634532 nova_compute[257049]: 2026-03-01 10:18:45.977 257053 DEBUG oslo_service.periodic_task [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
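These 'Running periodic task' lines come from oslo.service's periodic_task framework: ComputeManager methods decorated as periodic tasks are collected and invoked on a shared timer under one request id (req-34802ce0-7a12-4622-b1c5-fc2ff780bc32), and a task can decline to run, as _reclaim_queued_deletes does below when reclaim_instance_interval <= 0. The mechanism in miniature, using the real oslo_service API but a made-up Manager class:

    from oslo_config import cfg
    from oslo_service import periodic_task

    class Manager(periodic_task.PeriodicTasks):
        """Decorated methods are registered and run on a shared timer."""

        @periodic_task.periodic_task(spacing=60)   # run at most every 60 s
        def _poll_something(self, context):
            print('polled')   # nova logs 'Running periodic task ...' around each call

    mgr = Manager(cfg.CONF)
    mgr.run_periodic_tasks(context=None)           # one scheduler tick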
Mar  1 05:18:46 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:18:46 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:18:46 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:18:46.177 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:18:46 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:18:46 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:18:46 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:18:46.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:18:46 np0005634532 nova_compute[257049]: 2026-03-01 10:18:46.894 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:18:47 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: ::ffff:192.168.122.100 - - [01/Mar/2026:10:18:47] "GET /metrics HTTP/1.1" 200 48460 "" "Prometheus/2.51.0"
Mar  1 05:18:47 np0005634532 ceph-mgr[76134]: [prometheus INFO cherrypy.access.140604152982112] ::ffff:192.168.122.100 - - [01/Mar/2026:10:18:47] "GET /metrics HTTP/1.1" 200 48460 "" "Prometheus/2.51.0"
Mar  1 05:18:47 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:18:47 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1089: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Mar  1 05:18:47 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:18:47.297Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Mar  1 05:18:47 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Mar  1 05:18:47 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Mar  1 05:18:47 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Mar  1 05:18:47 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Mar  1 05:18:47 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Mar  1 05:18:47 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Mar  1 05:18:47 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Mar  1 05:18:47 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Mar  1 05:18:47 np0005634532 nova_compute[257049]: 2026-03-01 10:18:47.976 257053 DEBUG oslo_service.periodic_task [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Mar  1 05:18:47 np0005634532 nova_compute[257049]: 2026-03-01 10:18:47.977 257053 DEBUG nova.compute.manager [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Mar  1 05:18:48 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:18:48 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 05:18:48 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:18:48.180 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 05:18:48 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:18:48 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:18:48 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:18:48.300 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:18:48 np0005634532 ceph-mgr[76134]: [dashboard INFO request] [192.168.122.100:41768] [POST] [200] [0.002s] [4.0B] [fe75d660-3a34-4949-9c14-8ca0ea6881fc] /api/prometheus_receiver
Mar  1 05:18:49 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:18:48 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Mar  1 05:18:49 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:18:48 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Mar  1 05:18:49 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:18:48 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Mar  1 05:18:49 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:18:49 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Mar  1 05:18:49 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1090: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Mar  1 05:18:49 np0005634532 nova_compute[257049]: 2026-03-01 10:18:49.235 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:18:49 np0005634532 nova_compute[257049]: 2026-03-01 10:18:49.976 257053 DEBUG oslo_service.periodic_task [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Mar  1 05:18:50 np0005634532 nova_compute[257049]: 2026-03-01 10:18:50.028 257053 DEBUG oslo_concurrency.lockutils [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Mar  1 05:18:50 np0005634532 nova_compute[257049]: 2026-03-01 10:18:50.029 257053 DEBUG oslo_concurrency.lockutils [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Mar  1 05:18:50 np0005634532 nova_compute[257049]: 2026-03-01 10:18:50.029 257053 DEBUG oslo_concurrency.lockutils [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Mar  1 05:18:50 np0005634532 nova_compute[257049]: 2026-03-01 10:18:50.029 257053 DEBUG nova.compute.resource_tracker [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Mar  1 05:18:50 np0005634532 nova_compute[257049]: 2026-03-01 10:18:50.030 257053 DEBUG oslo_concurrency.processutils [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Mar  1 05:18:50 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:18:50 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:18:50 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:18:50.184 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:18:50 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:18:50 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:18:50 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:18:50.302 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:18:50 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Mar  1 05:18:50 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2461020979' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Mar  1 05:18:50 np0005634532 nova_compute[257049]: 2026-03-01 10:18:50.469 257053 DEBUG oslo_concurrency.processutils [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.439s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Mar  1 05:18:50 np0005634532 nova_compute[257049]: 2026-03-01 10:18:50.619 257053 WARNING nova.virt.libvirt.driver [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Mar  1 05:18:50 np0005634532 nova_compute[257049]: 2026-03-01 10:18:50.620 257053 DEBUG nova.compute.resource_tracker [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4484MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Mar  1 05:18:50 np0005634532 nova_compute[257049]: 2026-03-01 10:18:50.621 257053 DEBUG oslo_concurrency.lockutils [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Mar  1 05:18:50 np0005634532 nova_compute[257049]: 2026-03-01 10:18:50.621 257053 DEBUG oslo_concurrency.lockutils [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Mar  1 05:18:50 np0005634532 nova_compute[257049]: 2026-03-01 10:18:50.788 257053 DEBUG nova.compute.resource_tracker [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Mar  1 05:18:50 np0005634532 nova_compute[257049]: 2026-03-01 10:18:50.789 257053 DEBUG nova.compute.resource_tracker [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Mar  1 05:18:50 np0005634532 nova_compute[257049]: 2026-03-01 10:18:50.807 257053 DEBUG oslo_concurrency.processutils [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Mar  1 05:18:51 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1091: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Mar  1 05:18:51 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Mar  1 05:18:51 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/605789336' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Mar  1 05:18:51 np0005634532 nova_compute[257049]: 2026-03-01 10:18:51.271 257053 DEBUG oslo_concurrency.processutils [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Mar  1 05:18:51 np0005634532 nova_compute[257049]: 2026-03-01 10:18:51.278 257053 DEBUG nova.compute.provider_tree [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Inventory has not changed in ProviderTree for provider: 018d246d-1e01-4168-9128-598c5501111b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Mar  1 05:18:51 np0005634532 nova_compute[257049]: 2026-03-01 10:18:51.301 257053 DEBUG nova.scheduler.client.report [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Inventory has not changed for provider 018d246d-1e01-4168-9128-598c5501111b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
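For each resource class in the inventory above, the capacity placement can actually allocate is (total - reserved) * allocation_ratio. Worked out for the logged values:

    # Effective schedulable capacity from the inventory logged above:
    inv = {'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
           'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
           'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9}}
    for rc, i in inv.items():
        print(rc, (i['total'] - i['reserved']) * i['allocation_ratio'])
    # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 52.2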
Mar  1 05:18:51 np0005634532 nova_compute[257049]: 2026-03-01 10:18:51.303 257053 DEBUG nova.compute.resource_tracker [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Mar  1 05:18:51 np0005634532 nova_compute[257049]: 2026-03-01 10:18:51.303 257053 DEBUG oslo_concurrency.lockutils [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.682s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Mar  1 05:18:51 np0005634532 podman[286538]: 2026-03-01 10:18:51.377674579 +0000 UTC m=+0.059835679 container health_status eba5e7c5bf1cb0694498c3b5d0f9b901dec36f2482ec40aef98fed1e15d9f18d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:bf1f44b6bfa655654f556ae3adca2582892e72550d916a5b0a8dbacb0e210bbe, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '08b9467e1b7e95537191fb2fa6825d176e65d7d6be048e9a0925db064969bbde-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:bf1f44b6bfa655654f556ae3adca2582892e72550d916a5b0a8dbacb0e210bbe', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
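The health_status=healthy event above is podman executing the container's configured check ('test': '/openstack/healthcheck' from config_data). The same check can be invoked on demand; a sketch using the container name from the log:

    # Run the ovn_controller healthcheck once, as podman's timer does.
    import subprocess

    rc = subprocess.call(['podman', 'healthcheck', 'run', 'ovn_controller'])
    print('healthy' if rc == 0 else 'unhealthy (exit %d)' % rc)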
Mar  1 05:18:51 np0005634532 nova_compute[257049]: 2026-03-01 10:18:51.896 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:18:52 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:18:52 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:18:52 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:18:52 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:18:52.186 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:18:52 np0005634532 nova_compute[257049]: 2026-03-01 10:18:52.303 257053 DEBUG oslo_service.periodic_task [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Mar  1 05:18:52 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:18:52 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:18:52 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:18:52.304 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:18:52 np0005634532 nova_compute[257049]: 2026-03-01 10:18:52.971 257053 DEBUG oslo_service.periodic_task [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Mar  1 05:18:53 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1092: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Mar  1 05:18:54 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:18:53 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Mar  1 05:18:54 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:18:53 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Mar  1 05:18:54 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:18:53 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Mar  1 05:18:54 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:18:54 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Mar  1 05:18:54 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:18:54 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 05:18:54 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:18:54.188 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 05:18:54 np0005634532 nova_compute[257049]: 2026-03-01 10:18:54.269 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:18:54 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:18:54 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000024s ======
Mar  1 05:18:54 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:18:54.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Mar  1 05:18:54 np0005634532 nova_compute[257049]: 2026-03-01 10:18:54.976 257053 DEBUG oslo_service.periodic_task [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Mar  1 05:18:54 np0005634532 nova_compute[257049]: 2026-03-01 10:18:54.977 257053 DEBUG nova.compute.manager [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Mar  1 05:18:54 np0005634532 nova_compute[257049]: 2026-03-01 10:18:54.977 257053 DEBUG nova.compute.manager [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Mar  1 05:18:54 np0005634532 nova_compute[257049]: 2026-03-01 10:18:54.997 257053 DEBUG nova.compute.manager [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Mar  1 05:18:54 np0005634532 nova_compute[257049]: 2026-03-01 10:18:54.997 257053 DEBUG oslo_service.periodic_task [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Mar  1 05:18:55 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1093: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Mar  1 05:18:55 np0005634532 podman[286569]: 2026-03-01 10:18:55.35065247 +0000 UTC m=+0.045232480 container health_status 1df4dec302d8b40bce93714986cfc892519e8a31436399020d0034ae17ac64d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a5baae9e21dffd42c66f546b0b62f95c6af9f409d05e01e7483de42d5dd2a382, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '08b9467e1b7e95537191fb2fa6825d176e65d7d6be048e9a0925db064969bbde-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a5baae9e21dffd42c66f546b0b62f95c6af9f409d05e01e7483de42d5dd2a382', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Mar  1 05:18:56 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:18:56 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:18:56 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:18:56.191 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:18:56 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:18:56 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:18:56 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:18:56.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:18:56 np0005634532 nova_compute[257049]: 2026-03-01 10:18:56.899 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:18:56 np0005634532 nova_compute[257049]: 2026-03-01 10:18:56.993 257053 DEBUG oslo_service.periodic_task [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Mar  1 05:18:57 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: ::ffff:192.168.122.100 - - [01/Mar/2026:10:18:57] "GET /metrics HTTP/1.1" 200 48456 "" "Prometheus/2.51.0"
Mar  1 05:18:57 np0005634532 ceph-mgr[76134]: [prometheus INFO cherrypy.access.140604152982112] ::ffff:192.168.122.100 - - [01/Mar/2026:10:18:57] "GET /metrics HTTP/1.1" 200 48456 "" "Prometheus/2.51.0"
Mar  1 05:18:57 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:18:57 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1094: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Mar  1 05:18:57 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:18:57.298Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Mar  1 05:18:58 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:18:58 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:18:58 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:18:58.193 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:18:58 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Mar  1 05:18:58 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1452720977' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Mar  1 05:18:58 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Mar  1 05:18:58 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1452720977' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
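The df / "osd pool get-quota" pair above, dispatched from 192.168.122.10 as client.openstack against the volumes pool, is the shape of a block-storage capacity poll: plain JSON mon commands. A sketch of issuing the same two commands through the python-rados binding (conf path and client name mirror the audit lines; this is illustrative, not the caller's code):

    # Dispatch the same two mon commands the audit channel logs above.
    import json
    import rados

    with rados.Rados(conffile='/etc/ceph/ceph.conf',
                     name='client.openstack') as cluster:
        for cmd in ({'prefix': 'df', 'format': 'json'},
                    {'prefix': 'osd pool get-quota', 'pool': 'volumes',
                     'format': 'json'}):
            ret, out, errs = cluster.mon_command(json.dumps(cmd), b'')
            print(cmd['prefix'], '->', ret, len(out), 'bytes')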
Mar  1 05:18:58 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:18:58 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:18:58 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:18:58.308 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:18:58 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:18:58.845Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Mar  1 05:18:58 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:18:58.845Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
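Both dashboard webhook receivers (compute-1 and compute-2 on port 8443) are unreachable, so every alert notification fails after two attempts. A quick reachability check for the exact endpoints named in the error (a sketch; expect the same timeout the dispatcher reports while the receivers stay down):

    # Probe the webhook receivers the dispatcher reports as timing out.
    import urllib.request

    for host in ('compute-1.ctlplane.example.com',
                 'compute-2.ctlplane.example.com'):
        url = 'http://%s:8443/api/prometheus_receiver' % host
        try:
            urllib.request.urlopen(url, data=b'{}', timeout=5)
            print(url, 'reachable')
        except Exception as exc:  # timeout / connection refused, as logged
            print(url, '->', exc)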
Mar  1 05:18:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:18:58 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Mar  1 05:18:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:18:58 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Mar  1 05:18:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:18:58 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Mar  1 05:18:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:18:59 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Mar  1 05:18:59 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1095: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Mar  1 05:18:59 np0005634532 nova_compute[257049]: 2026-03-01 10:18:59.271 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:19:00 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:19:00 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000024s ======
Mar  1 05:19:00 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:19:00.195 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Mar  1 05:19:00 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:19:00 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:19:00 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:19:00.309 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:19:01 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1096: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Mar  1 05:19:01 np0005634532 nova_compute[257049]: 2026-03-01 10:19:01.900 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:19:02 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:19:02 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:19:02 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:19:02 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:19:02.198 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:19:02 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:19:02 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:19:02 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:19:02.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:19:02 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Mar  1 05:19:02 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Mar  1 05:19:03 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1097: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Mar  1 05:19:03 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Mar  1 05:19:03 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Mar  1 05:19:03 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Mar  1 05:19:03 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Mar  1 05:19:03 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Mar  1 05:19:03 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1098: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 858 B/s rd, 0 op/s
Mar  1 05:19:03 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 05:19:03 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Mar  1 05:19:03 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 05:19:03 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Mar  1 05:19:03 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Mar  1 05:19:03 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Mar  1 05:19:03 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Mar  1 05:19:04 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Mar  1 05:19:04 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Mar  1 05:19:04 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:19:03 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Mar  1 05:19:04 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:19:03 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Mar  1 05:19:04 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:19:04 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Mar  1 05:19:04 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:19:04 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Mar  1 05:19:04 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:19:04 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:19:04 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:19:04.201 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:19:04 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:19:04 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:19:04 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:19:04.313 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:19:04 np0005634532 nova_compute[257049]: 2026-03-01 10:19:04.322 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:19:04 np0005634532 podman[286799]: 2026-03-01 10:19:04.503798153 +0000 UTC m=+0.034955548 container create 534c10409318f8d0eafae7d4648648b2032ea576984d076a1801e7caf8341227 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_edison, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Mar  1 05:19:04 np0005634532 systemd[1]: Started libpod-conmon-534c10409318f8d0eafae7d4648648b2032ea576984d076a1801e7caf8341227.scope.
Mar  1 05:19:04 np0005634532 systemd[1]: Started libcrun container.
Mar  1 05:19:04 np0005634532 podman[286799]: 2026-03-01 10:19:04.571282969 +0000 UTC m=+0.102440414 container init 534c10409318f8d0eafae7d4648648b2032ea576984d076a1801e7caf8341227 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_edison, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Mar  1 05:19:04 np0005634532 podman[286799]: 2026-03-01 10:19:04.575936993 +0000 UTC m=+0.107094378 container start 534c10409318f8d0eafae7d4648648b2032ea576984d076a1801e7caf8341227 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_edison, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Mar  1 05:19:04 np0005634532 podman[286799]: 2026-03-01 10:19:04.578894095 +0000 UTC m=+0.110051540 container attach 534c10409318f8d0eafae7d4648648b2032ea576984d076a1801e7caf8341227 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_edison, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Mar  1 05:19:04 np0005634532 upbeat_edison[286816]: 167 167
Mar  1 05:19:04 np0005634532 systemd[1]: libpod-534c10409318f8d0eafae7d4648648b2032ea576984d076a1801e7caf8341227.scope: Deactivated successfully.
Mar  1 05:19:04 np0005634532 podman[286799]: 2026-03-01 10:19:04.580594897 +0000 UTC m=+0.111752292 container died 534c10409318f8d0eafae7d4648648b2032ea576984d076a1801e7caf8341227 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_edison, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Mar  1 05:19:04 np0005634532 podman[286799]: 2026-03-01 10:19:04.490282612 +0000 UTC m=+0.021440027 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Mar  1 05:19:04 np0005634532 systemd[1]: var-lib-containers-storage-overlay-9e8de770219fb058fd84ceceb20d6d0795016ae957f58ceb128ed8e7f2c0b0d0-merged.mount: Deactivated successfully.
Mar  1 05:19:04 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Mar  1 05:19:04 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 05:19:04 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 05:19:04 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Mar  1 05:19:04 np0005634532 podman[286799]: 2026-03-01 10:19:04.627563399 +0000 UTC m=+0.158720794 container remove 534c10409318f8d0eafae7d4648648b2032ea576984d076a1801e7caf8341227 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_edison, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Mar  1 05:19:04 np0005634532 systemd[1]: libpod-conmon-534c10409318f8d0eafae7d4648648b2032ea576984d076a1801e7caf8341227.scope: Deactivated successfully.
Mar  1 05:19:04 np0005634532 podman[286840]: 2026-03-01 10:19:04.750080885 +0000 UTC m=+0.037200394 container create f5d8efe28a47eec2e4e9e113b951c1ae79fe80fe4956b8f1837cb4a44f668438 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_dubinsky, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Mar  1 05:19:04 np0005634532 systemd[1]: Started libpod-conmon-f5d8efe28a47eec2e4e9e113b951c1ae79fe80fe4956b8f1837cb4a44f668438.scope.
Mar  1 05:19:04 np0005634532 systemd[1]: Started libcrun container.
Mar  1 05:19:04 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a6c20b0b4ad249d9948ebce2bff3363bce5a7d883f896ccfd03709904e9c10d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Mar  1 05:19:04 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a6c20b0b4ad249d9948ebce2bff3363bce5a7d883f896ccfd03709904e9c10d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Mar  1 05:19:04 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a6c20b0b4ad249d9948ebce2bff3363bce5a7d883f896ccfd03709904e9c10d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Mar  1 05:19:04 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a6c20b0b4ad249d9948ebce2bff3363bce5a7d883f896ccfd03709904e9c10d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Mar  1 05:19:04 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a6c20b0b4ad249d9948ebce2bff3363bce5a7d883f896ccfd03709904e9c10d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Mar  1 05:19:04 np0005634532 podman[286840]: 2026-03-01 10:19:04.736584974 +0000 UTC m=+0.023704503 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Mar  1 05:19:04 np0005634532 podman[286840]: 2026-03-01 10:19:04.843606439 +0000 UTC m=+0.130725948 container init f5d8efe28a47eec2e4e9e113b951c1ae79fe80fe4956b8f1837cb4a44f668438 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_dubinsky, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Mar  1 05:19:04 np0005634532 podman[286840]: 2026-03-01 10:19:04.854095556 +0000 UTC m=+0.141215075 container start f5d8efe28a47eec2e4e9e113b951c1ae79fe80fe4956b8f1837cb4a44f668438 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_dubinsky, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Mar  1 05:19:04 np0005634532 podman[286840]: 2026-03-01 10:19:04.857258184 +0000 UTC m=+0.144377853 container attach f5d8efe28a47eec2e4e9e113b951c1ae79fe80fe4956b8f1837cb4a44f668438 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_dubinsky, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Mar  1 05:19:05 np0005634532 vigorous_dubinsky[286857]: --> passed data devices: 0 physical, 1 LVM
Mar  1 05:19:05 np0005634532 vigorous_dubinsky[286857]: --> All data devices are unavailable
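The short-lived vigorous_dubinsky container above is cephadm running ceph-volume device discovery; "0 physical, 1 LVM" with all data devices unavailable means there is nothing new to provision into OSDs. The same inventory can be requested directly (a sketch, assuming ceph-volume is available on the host or inside a cephadm shell):

    # Ask ceph-volume for the device inventory the cephadm container gathered.
    import json
    import subprocess

    out = subprocess.check_output(
        ['ceph-volume', 'inventory', '--format', 'json'])
    for dev in json.loads(out):
        print(dev['path'], 'available:', dev['available'],
              dev.get('rejected_reasons'))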
Mar  1 05:19:05 np0005634532 systemd[1]: libpod-f5d8efe28a47eec2e4e9e113b951c1ae79fe80fe4956b8f1837cb4a44f668438.scope: Deactivated successfully.
Mar  1 05:19:05 np0005634532 podman[286840]: 2026-03-01 10:19:05.18037025 +0000 UTC m=+0.467489839 container died f5d8efe28a47eec2e4e9e113b951c1ae79fe80fe4956b8f1837cb4a44f668438 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_dubinsky, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS)
Mar  1 05:19:05 np0005634532 systemd[1]: var-lib-containers-storage-overlay-8a6c20b0b4ad249d9948ebce2bff3363bce5a7d883f896ccfd03709904e9c10d-merged.mount: Deactivated successfully.
Mar  1 05:19:05 np0005634532 podman[286840]: 2026-03-01 10:19:05.227978638 +0000 UTC m=+0.515098147 container remove f5d8efe28a47eec2e4e9e113b951c1ae79fe80fe4956b8f1837cb4a44f668438 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=vigorous_dubinsky, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Mar  1 05:19:05 np0005634532 systemd[1]: libpod-conmon-f5d8efe28a47eec2e4e9e113b951c1ae79fe80fe4956b8f1837cb4a44f668438.scope: Deactivated successfully.
Mar  1 05:19:05 np0005634532 podman[286973]: 2026-03-01 10:19:05.776555865 +0000 UTC m=+0.050925570 container create 773393f441b453f752dcd393f0f193591a15efaec73a9af4b3a92a95f890da12 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_merkle, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325)
Mar  1 05:19:05 np0005634532 systemd[1]: Started libpod-conmon-773393f441b453f752dcd393f0f193591a15efaec73a9af4b3a92a95f890da12.scope.
Mar  1 05:19:05 np0005634532 systemd[1]: Started libcrun container.
Mar  1 05:19:05 np0005634532 podman[286973]: 2026-03-01 10:19:05.838651128 +0000 UTC m=+0.113020853 container init 773393f441b453f752dcd393f0f193591a15efaec73a9af4b3a92a95f890da12 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_merkle, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1)
Mar  1 05:19:05 np0005634532 podman[286973]: 2026-03-01 10:19:05.844365359 +0000 UTC m=+0.118735074 container start 773393f441b453f752dcd393f0f193591a15efaec73a9af4b3a92a95f890da12 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_merkle, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Mar  1 05:19:05 np0005634532 podman[286973]: 2026-03-01 10:19:05.847579347 +0000 UTC m=+0.121949112 container attach 773393f441b453f752dcd393f0f193591a15efaec73a9af4b3a92a95f890da12 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_merkle, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Mar  1 05:19:05 np0005634532 sharp_merkle[286989]: 167 167
Mar  1 05:19:05 np0005634532 systemd[1]: libpod-773393f441b453f752dcd393f0f193591a15efaec73a9af4b3a92a95f890da12.scope: Deactivated successfully.
Mar  1 05:19:05 np0005634532 conmon[286989]: conmon 773393f441b453f752dc <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-773393f441b453f752dcd393f0f193591a15efaec73a9af4b3a92a95f890da12.scope/container/memory.events
Mar  1 05:19:05 np0005634532 podman[286973]: 2026-03-01 10:19:05.851597326 +0000 UTC m=+0.125967061 container died 773393f441b453f752dcd393f0f193591a15efaec73a9af4b3a92a95f890da12 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_merkle, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default)
Mar  1 05:19:05 np0005634532 podman[286973]: 2026-03-01 10:19:05.760137152 +0000 UTC m=+0.034506877 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Mar  1 05:19:05 np0005634532 systemd[1]: var-lib-containers-storage-overlay-3a58e4ea517a8ba685eb9d6f4a954702269fc59eae142f5fffa4c7e564e2311d-merged.mount: Deactivated successfully.
Mar  1 05:19:05 np0005634532 podman[286973]: 2026-03-01 10:19:05.896109488 +0000 UTC m=+0.170479213 container remove 773393f441b453f752dcd393f0f193591a15efaec73a9af4b3a92a95f890da12 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_merkle, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Mar  1 05:19:05 np0005634532 systemd[1]: libpod-conmon-773393f441b453f752dcd393f0f193591a15efaec73a9af4b3a92a95f890da12.scope: Deactivated successfully.
Mar  1 05:19:05 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1099: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 572 B/s rd, 0 op/s
Mar  1 05:19:06 np0005634532 podman[287014]: 2026-03-01 10:19:06.06006364 +0000 UTC m=+0.061855989 container create 8d4cca7d7cc54b5c0460ac4bc1efc3084a2a814c5349d5f235ab22f5e3d07d7c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_galileo, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Mar  1 05:19:06 np0005634532 systemd[1]: Started libpod-conmon-8d4cca7d7cc54b5c0460ac4bc1efc3084a2a814c5349d5f235ab22f5e3d07d7c.scope.
Mar  1 05:19:06 np0005634532 podman[287014]: 2026-03-01 10:19:06.031680024 +0000 UTC m=+0.033472463 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Mar  1 05:19:06 np0005634532 systemd[1]: Started libcrun container.
Mar  1 05:19:06 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6137b85f622b01aa003d9849ec9fca31ce787bc2dc31421e8fd2d9fb95bfdf47/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Mar  1 05:19:06 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6137b85f622b01aa003d9849ec9fca31ce787bc2dc31421e8fd2d9fb95bfdf47/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Mar  1 05:19:06 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6137b85f622b01aa003d9849ec9fca31ce787bc2dc31421e8fd2d9fb95bfdf47/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Mar  1 05:19:06 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6137b85f622b01aa003d9849ec9fca31ce787bc2dc31421e8fd2d9fb95bfdf47/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Mar  1 05:19:06 np0005634532 podman[287014]: 2026-03-01 10:19:06.156099855 +0000 UTC m=+0.157892234 container init 8d4cca7d7cc54b5c0460ac4bc1efc3084a2a814c5349d5f235ab22f5e3d07d7c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_galileo, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2)
Mar  1 05:19:06 np0005634532 podman[287014]: 2026-03-01 10:19:06.163310242 +0000 UTC m=+0.165102621 container start 8d4cca7d7cc54b5c0460ac4bc1efc3084a2a814c5349d5f235ab22f5e3d07d7c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_galileo, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid)
Mar  1 05:19:06 np0005634532 podman[287014]: 2026-03-01 10:19:06.166749486 +0000 UTC m=+0.168541835 container attach 8d4cca7d7cc54b5c0460ac4bc1efc3084a2a814c5349d5f235ab22f5e3d07d7c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_galileo, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.license=GPLv2)
Mar  1 05:19:06 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:19:06 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:19:06 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:19:06.204 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:19:06 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:19:06 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:19:06 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:19:06.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:19:06 np0005634532 sharp_galileo[287031]: {
Mar  1 05:19:06 np0005634532 sharp_galileo[287031]:    "0": [
Mar  1 05:19:06 np0005634532 sharp_galileo[287031]:        {
Mar  1 05:19:06 np0005634532 sharp_galileo[287031]:            "devices": [
Mar  1 05:19:06 np0005634532 sharp_galileo[287031]:                "/dev/loop3"
Mar  1 05:19:06 np0005634532 sharp_galileo[287031]:            ],
Mar  1 05:19:06 np0005634532 sharp_galileo[287031]:            "lv_name": "ceph_lv0",
Mar  1 05:19:06 np0005634532 sharp_galileo[287031]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Mar  1 05:19:06 np0005634532 sharp_galileo[287031]:            "lv_size": "21470642176",
Mar  1 05:19:06 np0005634532 sharp_galileo[287031]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=0RN1y3-WDwf-vmRn-3Uec-9cdU-WGcX-8Z6LEG,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=437b1e74-f995-5d64-af1d-257ce01d77ab,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=e5da778e-73b7-4ea1-8a91-750fe3f6aa68,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Mar  1 05:19:06 np0005634532 sharp_galileo[287031]:            "lv_uuid": "0RN1y3-WDwf-vmRn-3Uec-9cdU-WGcX-8Z6LEG",
Mar  1 05:19:06 np0005634532 sharp_galileo[287031]:            "name": "ceph_lv0",
Mar  1 05:19:06 np0005634532 sharp_galileo[287031]:            "path": "/dev/ceph_vg0/ceph_lv0",
Mar  1 05:19:06 np0005634532 sharp_galileo[287031]:            "tags": {
Mar  1 05:19:06 np0005634532 sharp_galileo[287031]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Mar  1 05:19:06 np0005634532 sharp_galileo[287031]:                "ceph.block_uuid": "0RN1y3-WDwf-vmRn-3Uec-9cdU-WGcX-8Z6LEG",
Mar  1 05:19:06 np0005634532 sharp_galileo[287031]:                "ceph.cephx_lockbox_secret": "",
Mar  1 05:19:06 np0005634532 sharp_galileo[287031]:                "ceph.cluster_fsid": "437b1e74-f995-5d64-af1d-257ce01d77ab",
Mar  1 05:19:06 np0005634532 sharp_galileo[287031]:                "ceph.cluster_name": "ceph",
Mar  1 05:19:06 np0005634532 sharp_galileo[287031]:                "ceph.crush_device_class": "",
Mar  1 05:19:06 np0005634532 sharp_galileo[287031]:                "ceph.encrypted": "0",
Mar  1 05:19:06 np0005634532 sharp_galileo[287031]:                "ceph.osd_fsid": "e5da778e-73b7-4ea1-8a91-750fe3f6aa68",
Mar  1 05:19:06 np0005634532 sharp_galileo[287031]:                "ceph.osd_id": "0",
Mar  1 05:19:06 np0005634532 sharp_galileo[287031]:                "ceph.osdspec_affinity": "default_drive_group",
Mar  1 05:19:06 np0005634532 sharp_galileo[287031]:                "ceph.type": "block",
Mar  1 05:19:06 np0005634532 sharp_galileo[287031]:                "ceph.vdo": "0",
Mar  1 05:19:06 np0005634532 sharp_galileo[287031]:                "ceph.with_tpm": "0"
Mar  1 05:19:06 np0005634532 sharp_galileo[287031]:            },
Mar  1 05:19:06 np0005634532 sharp_galileo[287031]:            "type": "block",
Mar  1 05:19:06 np0005634532 sharp_galileo[287031]:            "vg_name": "ceph_vg0"
Mar  1 05:19:06 np0005634532 sharp_galileo[287031]:        }
Mar  1 05:19:06 np0005634532 sharp_galileo[287031]:    ]
Mar  1 05:19:06 np0005634532 sharp_galileo[287031]: }
Mar  1 05:19:06 np0005634532 systemd[1]: libpod-8d4cca7d7cc54b5c0460ac4bc1efc3084a2a814c5349d5f235ab22f5e3d07d7c.scope: Deactivated successfully.
Mar  1 05:19:06 np0005634532 podman[287014]: 2026-03-01 10:19:06.471887051 +0000 UTC m=+0.473679430 container died 8d4cca7d7cc54b5c0460ac4bc1efc3084a2a814c5349d5f235ab22f5e3d07d7c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_galileo, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True)
Mar  1 05:19:06 np0005634532 systemd[1]: var-lib-containers-storage-overlay-6137b85f622b01aa003d9849ec9fca31ce787bc2dc31421e8fd2d9fb95bfdf47-merged.mount: Deactivated successfully.
Mar  1 05:19:06 np0005634532 podman[287014]: 2026-03-01 10:19:06.520223127 +0000 UTC m=+0.522015496 container remove 8d4cca7d7cc54b5c0460ac4bc1efc3084a2a814c5349d5f235ab22f5e3d07d7c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sharp_galileo, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325)
Mar  1 05:19:06 np0005634532 systemd[1]: libpod-conmon-8d4cca7d7cc54b5c0460ac4bc1efc3084a2a814c5349d5f235ab22f5e3d07d7c.scope: Deactivated successfully.
Mar  1 05:19:06 np0005634532 nova_compute[257049]: 2026-03-01 10:19:06.902 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:19:07 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: ::ffff:192.168.122.100 - - [01/Mar/2026:10:19:07] "GET /metrics HTTP/1.1" 200 48459 "" "Prometheus/2.51.0"
Mar  1 05:19:07 np0005634532 ceph-mgr[76134]: [prometheus INFO cherrypy.access.140604152982112] ::ffff:192.168.122.100 - - [01/Mar/2026:10:19:07] "GET /metrics HTTP/1.1" 200 48459 "" "Prometheus/2.51.0"
Mar  1 05:19:07 np0005634532 podman[287145]: 2026-03-01 10:19:07.062698754 +0000 UTC m=+0.038130396 container create dacd6e6588a50117120ff695763f7cf26aa14ae1403b32ebe2b70c16a61eaf71 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_tu, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Mar  1 05:19:07 np0005634532 systemd[1]: Started libpod-conmon-dacd6e6588a50117120ff695763f7cf26aa14ae1403b32ebe2b70c16a61eaf71.scope.
Mar  1 05:19:07 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:19:07 np0005634532 systemd[1]: Started libcrun container.
Mar  1 05:19:07 np0005634532 podman[287145]: 2026-03-01 10:19:07.13060102 +0000 UTC m=+0.106032672 container init dacd6e6588a50117120ff695763f7cf26aa14ae1403b32ebe2b70c16a61eaf71 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_tu, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Mar  1 05:19:07 np0005634532 podman[287145]: 2026-03-01 10:19:07.135557432 +0000 UTC m=+0.110989074 container start dacd6e6588a50117120ff695763f7cf26aa14ae1403b32ebe2b70c16a61eaf71 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_tu, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Mar  1 05:19:07 np0005634532 focused_tu[287161]: 167 167
Mar  1 05:19:07 np0005634532 podman[287145]: 2026-03-01 10:19:07.138824492 +0000 UTC m=+0.114256164 container attach dacd6e6588a50117120ff695763f7cf26aa14ae1403b32ebe2b70c16a61eaf71 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_tu, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0)
Mar  1 05:19:07 np0005634532 systemd[1]: libpod-dacd6e6588a50117120ff695763f7cf26aa14ae1403b32ebe2b70c16a61eaf71.scope: Deactivated successfully.
Mar  1 05:19:07 np0005634532 podman[287145]: 2026-03-01 10:19:07.140168945 +0000 UTC m=+0.115600597 container died dacd6e6588a50117120ff695763f7cf26aa14ae1403b32ebe2b70c16a61eaf71 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_tu, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Mar  1 05:19:07 np0005634532 podman[287145]: 2026-03-01 10:19:07.047546343 +0000 UTC m=+0.022977995 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Mar  1 05:19:07 np0005634532 systemd[1]: var-lib-containers-storage-overlay-e46ebfaf200ec7a2742309c90efc3f8b7ef0c25debda4517f8bf789d23e07212-merged.mount: Deactivated successfully.
Mar  1 05:19:07 np0005634532 podman[287145]: 2026-03-01 10:19:07.176426634 +0000 UTC m=+0.151858296 container remove dacd6e6588a50117120ff695763f7cf26aa14ae1403b32ebe2b70c16a61eaf71 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_tu, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Mar  1 05:19:07 np0005634532 systemd[1]: libpod-conmon-dacd6e6588a50117120ff695763f7cf26aa14ae1403b32ebe2b70c16a61eaf71.scope: Deactivated successfully.
Mar  1 05:19:07 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:19:07.299Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Mar  1 05:19:07 np0005634532 podman[287186]: 2026-03-01 10:19:07.313737993 +0000 UTC m=+0.041854938 container create a229143a74636142f4a037a2de6132f695ae6b02240ac6399e221bcfdac363f8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_mcnulty, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Mar  1 05:19:07 np0005634532 systemd[1]: Started libpod-conmon-a229143a74636142f4a037a2de6132f695ae6b02240ac6399e221bcfdac363f8.scope.
Mar  1 05:19:07 np0005634532 systemd[1]: Started libcrun container.
Mar  1 05:19:07 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/705ca565d88d9dca30fc7ca01faabadc3e37ab61aa4fd022598a8489573d4776/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Mar  1 05:19:07 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/705ca565d88d9dca30fc7ca01faabadc3e37ab61aa4fd022598a8489573d4776/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Mar  1 05:19:07 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/705ca565d88d9dca30fc7ca01faabadc3e37ab61aa4fd022598a8489573d4776/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Mar  1 05:19:07 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/705ca565d88d9dca30fc7ca01faabadc3e37ab61aa4fd022598a8489573d4776/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Mar  1 05:19:07 np0005634532 podman[287186]: 2026-03-01 10:19:07.293972618 +0000 UTC m=+0.022089553 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Mar  1 05:19:07 np0005634532 podman[287186]: 2026-03-01 10:19:07.389780848 +0000 UTC m=+0.117897833 container init a229143a74636142f4a037a2de6132f695ae6b02240ac6399e221bcfdac363f8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_mcnulty, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Mar  1 05:19:07 np0005634532 podman[287186]: 2026-03-01 10:19:07.396483433 +0000 UTC m=+0.124600328 container start a229143a74636142f4a037a2de6132f695ae6b02240ac6399e221bcfdac363f8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_mcnulty, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325)
Mar  1 05:19:07 np0005634532 podman[287186]: 2026-03-01 10:19:07.399781403 +0000 UTC m=+0.127898308 container attach a229143a74636142f4a037a2de6132f695ae6b02240ac6399e221bcfdac363f8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_mcnulty, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Mar  1 05:19:07 np0005634532 lvm[287278]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Mar  1 05:19:07 np0005634532 lvm[287278]: VG ceph_vg0 finished
Mar  1 05:19:07 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1100: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 572 B/s rd, 0 op/s
Mar  1 05:19:08 np0005634532 dreamy_mcnulty[287203]: {}
Mar  1 05:19:08 np0005634532 systemd[1]: libpod-a229143a74636142f4a037a2de6132f695ae6b02240ac6399e221bcfdac363f8.scope: Deactivated successfully.
Mar  1 05:19:08 np0005634532 podman[287186]: 2026-03-01 10:19:08.02294428 +0000 UTC m=+0.751061215 container died a229143a74636142f4a037a2de6132f695ae6b02240ac6399e221bcfdac363f8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_mcnulty, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Mar  1 05:19:08 np0005634532 systemd[1]: var-lib-containers-storage-overlay-705ca565d88d9dca30fc7ca01faabadc3e37ab61aa4fd022598a8489573d4776-merged.mount: Deactivated successfully.
Mar  1 05:19:08 np0005634532 podman[287186]: 2026-03-01 10:19:08.070958878 +0000 UTC m=+0.799075773 container remove a229143a74636142f4a037a2de6132f695ae6b02240ac6399e221bcfdac363f8 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_mcnulty, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Mar  1 05:19:08 np0005634532 systemd[1]: libpod-conmon-a229143a74636142f4a037a2de6132f695ae6b02240ac6399e221bcfdac363f8.scope: Deactivated successfully.
Mar  1 05:19:08 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Mar  1 05:19:08 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 05:19:08 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Mar  1 05:19:08 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 05:19:08 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:19:08 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 05:19:08 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:19:08.206 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 05:19:08 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:19:08 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 05:19:08 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:19:08.316 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 05:19:08 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:19:08.846Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Mar  1 05:19:09 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:19:08 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Mar  1 05:19:09 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:19:08 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Mar  1 05:19:09 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:19:08 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Mar  1 05:19:09 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:19:09 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Mar  1 05:19:09 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 05:19:09 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 05:19:09 np0005634532 nova_compute[257049]: 2026-03-01 10:19:09.325 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:19:09 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1101: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 572 B/s rd, 0 op/s
Mar  1 05:19:10 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:19:10 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:19:10 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:19:10.209 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:19:10 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:19:10 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:19:10 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:19:10.319 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:19:11 np0005634532 ceph-mon[75825]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #66. Immutable memtables: 0.
Mar  1 05:19:11 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-10:19:11.191756) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Mar  1 05:19:11 np0005634532 ceph-mon[75825]: rocksdb: [db/flush_job.cc:856] [default] [JOB 35] Flushing memtable with next log file: 66
Mar  1 05:19:11 np0005634532 ceph-mon[75825]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772360351191791, "job": 35, "event": "flush_started", "num_memtables": 1, "num_entries": 2494, "num_deletes": 251, "total_data_size": 4444602, "memory_usage": 4513456, "flush_reason": "Manual Compaction"}
Mar  1 05:19:11 np0005634532 ceph-mon[75825]: rocksdb: [db/flush_job.cc:885] [default] [JOB 35] Level-0 flush table #67: started
Mar  1 05:19:11 np0005634532 ceph-mon[75825]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772360351206352, "cf_name": "default", "job": 35, "event": "table_file_creation", "file_number": 67, "file_size": 4332810, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 29488, "largest_seqno": 31981, "table_properties": {"data_size": 4320802, "index_size": 7542, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3397, "raw_key_size": 30000, "raw_average_key_size": 22, "raw_value_size": 4295157, "raw_average_value_size": 3191, "num_data_blocks": 321, "num_entries": 1346, "num_filter_entries": 1346, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772360149, "oldest_key_time": 1772360149, "file_creation_time": 1772360351, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d85a3bc5-3dc5-432f-9fab-fa926ce32d3d", "db_session_id": "FJWJGIYC2V5ZEQGX709M", "orig_file_number": 67, "seqno_to_time_mapping": "N/A"}}
Mar  1 05:19:11 np0005634532 ceph-mon[75825]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 35] Flush lasted 14639 microseconds, and 6168 cpu microseconds.
Mar  1 05:19:11 np0005634532 ceph-mon[75825]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Mar  1 05:19:11 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-10:19:11.206391) [db/flush_job.cc:967] [default] [JOB 35] Level-0 flush table #67: 4332810 bytes OK
Mar  1 05:19:11 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-10:19:11.206408) [db/memtable_list.cc:519] [default] Level-0 commit table #67 started
Mar  1 05:19:11 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-10:19:11.208430) [db/memtable_list.cc:722] [default] Level-0 commit table #67: memtable #1 done
Mar  1 05:19:11 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-10:19:11.208442) EVENT_LOG_v1 {"time_micros": 1772360351208438, "job": 35, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Mar  1 05:19:11 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-10:19:11.208457) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Mar  1 05:19:11 np0005634532 ceph-mon[75825]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 35] Try to delete WAL files size 4433397, prev total WAL file size 4433661, number of live WAL files 2.
Mar  1 05:19:11 np0005634532 ceph-mon[75825]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000063.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Mar  1 05:19:11 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-10:19:11.209174) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032353130' seq:72057594037927935, type:22 .. '7061786F730032373632' seq:0, type:0; will stop at (end)
Mar  1 05:19:11 np0005634532 ceph-mon[75825]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 36] Compacting 1@0 + 1@6 files to L6, score -1.00
Mar  1 05:19:11 np0005634532 ceph-mon[75825]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 35 Base level 0, inputs: [67(4231KB)], [65(11MB)]
Mar  1 05:19:11 np0005634532 ceph-mon[75825]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772360351209259, "job": 36, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [67], "files_L6": [65], "score": -1, "input_data_size": 16761073, "oldest_snapshot_seqno": -1}
Mar  1 05:19:11 np0005634532 ceph-mon[75825]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 36] Generated table #68: 6577 keys, 14668322 bytes, temperature: kUnknown
Mar  1 05:19:11 np0005634532 ceph-mon[75825]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772360351284517, "cf_name": "default", "job": 36, "event": "table_file_creation", "file_number": 68, "file_size": 14668322, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 14625063, "index_size": 25739, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16453, "raw_key_size": 169059, "raw_average_key_size": 25, "raw_value_size": 14507533, "raw_average_value_size": 2205, "num_data_blocks": 1035, "num_entries": 6577, "num_filter_entries": 6577, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772358052, "oldest_key_time": 0, "file_creation_time": 1772360351, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d85a3bc5-3dc5-432f-9fab-fa926ce32d3d", "db_session_id": "FJWJGIYC2V5ZEQGX709M", "orig_file_number": 68, "seqno_to_time_mapping": "N/A"}}
Mar  1 05:19:11 np0005634532 ceph-mon[75825]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Mar  1 05:19:11 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-10:19:11.284717) [db/compaction/compaction_job.cc:1663] [default] [JOB 36] Compacted 1@0 + 1@6 files to L6 => 14668322 bytes
Mar  1 05:19:11 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-10:19:11.286148) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 222.6 rd, 194.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(4.1, 11.9 +0.0 blob) out(14.0 +0.0 blob), read-write-amplify(7.3) write-amplify(3.4) OK, records in: 7098, records dropped: 521 output_compression: NoCompression
Mar  1 05:19:11 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-10:19:11.286165) EVENT_LOG_v1 {"time_micros": 1772360351286158, "job": 36, "event": "compaction_finished", "compaction_time_micros": 75299, "compaction_time_cpu_micros": 41277, "output_level": 6, "num_output_files": 1, "total_output_size": 14668322, "num_input_records": 7098, "num_output_records": 6577, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Mar  1 05:19:11 np0005634532 ceph-mon[75825]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000067.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Mar  1 05:19:11 np0005634532 ceph-mon[75825]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772360351286695, "job": 36, "event": "table_file_deletion", "file_number": 67}
Mar  1 05:19:11 np0005634532 ceph-mon[75825]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000065.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Mar  1 05:19:11 np0005634532 ceph-mon[75825]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772360351287803, "job": 36, "event": "table_file_deletion", "file_number": 65}
Mar  1 05:19:11 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-10:19:11.209061) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Mar  1 05:19:11 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-10:19:11.287863) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Mar  1 05:19:11 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-10:19:11.287868) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Mar  1 05:19:11 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-10:19:11.287870) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Mar  1 05:19:11 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-10:19:11.287872) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Mar  1 05:19:11 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-10:19:11.287874) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Mar  1 05:19:11 np0005634532 nova_compute[257049]: 2026-03-01 10:19:11.904 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:19:11 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1102: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 572 B/s rd, 0 op/s
Mar  1 05:19:12 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:19:12 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:19:12 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000024s ======
Mar  1 05:19:12 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:19:12.211 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Mar  1 05:19:12 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:19:12 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:19:12 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:19:12.321 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:19:13 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1103: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 858 B/s rd, 0 op/s
Mar  1 05:19:14 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:19:13 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Mar  1 05:19:14 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:19:13 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Mar  1 05:19:14 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:19:13 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Mar  1 05:19:14 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:19:14 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Mar  1 05:19:14 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:19:14 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:19:14 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:19:14.216 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:19:14 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:19:14 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:19:14 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:19:14.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:19:14 np0005634532 nova_compute[257049]: 2026-03-01 10:19:14.328 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:19:15 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1104: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Mar  1 05:19:16 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:19:16 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:19:16 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:19:16.219 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:19:16 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:19:16 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 05:19:16 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:19:16.325 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 05:19:16 np0005634532 nova_compute[257049]: 2026-03-01 10:19:16.906 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:19:17 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: ::ffff:192.168.122.100 - - [01/Mar/2026:10:19:17] "GET /metrics HTTP/1.1" 200 48459 "" "Prometheus/2.51.0"
Mar  1 05:19:17 np0005634532 ceph-mgr[76134]: [prometheus INFO cherrypy.access.140604152982112] ::ffff:192.168.122.100 - - [01/Mar/2026:10:19:17] "GET /metrics HTTP/1.1" 200 48459 "" "Prometheus/2.51.0"
Mar  1 05:19:17 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:19:17 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:19:17.300Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Mar  1 05:19:17 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Mar  1 05:19:17 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Mar  1 05:19:17 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Mar  1 05:19:17 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Mar  1 05:19:17 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Mar  1 05:19:17 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Mar  1 05:19:17 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Mar  1 05:19:17 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Mar  1 05:19:17 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1105: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Mar  1 05:19:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] _maybe_adjust
Mar  1 05:19:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:19:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Mar  1 05:19:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:19:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Mar  1 05:19:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:19:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Mar  1 05:19:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:19:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Mar  1 05:19:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:19:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Mar  1 05:19:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:19:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Mar  1 05:19:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:19:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Mar  1 05:19:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:19:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Mar  1 05:19:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:19:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Mar  1 05:19:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:19:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Mar  1 05:19:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:19:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Mar  1 05:19:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:19:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Mar  1 05:19:18 np0005634532 ceph-mgr[76134]: [balancer INFO root] Optimize plan auto_2026-03-01_10:19:18
Mar  1 05:19:18 np0005634532 ceph-mgr[76134]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Mar  1 05:19:18 np0005634532 ceph-mgr[76134]: [balancer INFO root] do_upmap
Mar  1 05:19:18 np0005634532 ceph-mgr[76134]: [balancer INFO root] pools ['default.rgw.log', 'volumes', 'default.rgw.control', 'cephfs.cephfs.data', 'images', 'vms', 'default.rgw.meta', '.nfs', '.mgr', '.rgw.root', 'backups', 'cephfs.cephfs.meta']
Mar  1 05:19:18 np0005634532 ceph-mgr[76134]: [balancer INFO root] prepared 0/10 upmap changes
Mar  1 05:19:18 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:19:18 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:19:18 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:19:18.222 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:19:18 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:19:18 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:19:18 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:19:18.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:19:18 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:19:18.849Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Mar  1 05:19:19 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:19:18 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Mar  1 05:19:19 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:19:18 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Mar  1 05:19:19 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:19:18 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Mar  1 05:19:19 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:19:19 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Mar  1 05:19:19 np0005634532 nova_compute[257049]: 2026-03-01 10:19:19.367 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:19:19 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Mar  1 05:19:19 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: vms, start_after=
Mar  1 05:19:19 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Mar  1 05:19:19 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: vms, start_after=
Mar  1 05:19:19 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: volumes, start_after=
Mar  1 05:19:19 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: backups, start_after=
Mar  1 05:19:19 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: volumes, start_after=
Mar  1 05:19:19 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: images, start_after=
Mar  1 05:19:19 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: backups, start_after=
Mar  1 05:19:19 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: images, start_after=
Mar  1 05:19:19 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1106: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Mar  1 05:19:20 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:19:20 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:19:20 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:19:20.224 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:19:20 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:19:20 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:19:20 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:19:20.328 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:19:21 np0005634532 nova_compute[257049]: 2026-03-01 10:19:21.908 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:19:21 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1107: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Mar  1 05:19:22 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:19:22 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:19:22 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:19:22 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:19:22.226 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:19:22 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:19:22 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000024s ======
Mar  1 05:19:22 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:19:22.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Mar  1 05:19:22 np0005634532 podman[287360]: 2026-03-01 10:19:22.412713549 +0000 UTC m=+0.102662359 container health_status eba5e7c5bf1cb0694498c3b5d0f9b901dec36f2482ec40aef98fed1e15d9f18d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:bf1f44b6bfa655654f556ae3adca2582892e72550d916a5b0a8dbacb0e210bbe, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '08b9467e1b7e95537191fb2fa6825d176e65d7d6be048e9a0925db064969bbde-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:bf1f44b6bfa655654f556ae3adca2582892e72550d916a5b0a8dbacb0e210bbe', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223)
Mar  1 05:19:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:19:23.893 167541 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Mar  1 05:19:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:19:23.894 167541 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Mar  1 05:19:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:19:23.894 167541 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Mar  1 05:19:23 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1108: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Mar  1 05:19:24 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:19:23 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Mar  1 05:19:24 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:19:23 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Mar  1 05:19:24 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:19:23 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Mar  1 05:19:24 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:19:24 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Mar  1 05:19:24 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:19:24 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:19:24 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:19:24.228 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:19:24 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:19:24 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:19:24 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:19:24.331 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:19:24 np0005634532 nova_compute[257049]: 2026-03-01 10:19:24.371 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:19:25 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1109: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Mar  1 05:19:26 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:19:26 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:19:26 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:19:26.231 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:19:26 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:19:26 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:19:26 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:19:26.333 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:19:26 np0005634532 podman[287391]: 2026-03-01 10:19:26.358235006 +0000 UTC m=+0.055113363 container health_status 1df4dec302d8b40bce93714986cfc892519e8a31436399020d0034ae17ac64d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a5baae9e21dffd42c66f546b0b62f95c6af9f409d05e01e7483de42d5dd2a382, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260223, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '08b9467e1b7e95537191fb2fa6825d176e65d7d6be048e9a0925db064969bbde-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a5baae9e21dffd42c66f546b0b62f95c6af9f409d05e01e7483de42d5dd2a382', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Mar  1 05:19:26 np0005634532 nova_compute[257049]: 2026-03-01 10:19:26.909 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:19:27 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: ::ffff:192.168.122.100 - - [01/Mar/2026:10:19:27] "GET /metrics HTTP/1.1" 200 48458 "" "Prometheus/2.51.0"
Mar  1 05:19:27 np0005634532 ceph-mgr[76134]: [prometheus INFO cherrypy.access.140604152982112] ::ffff:192.168.122.100 - - [01/Mar/2026:10:19:27] "GET /metrics HTTP/1.1" 200 48458 "" "Prometheus/2.51.0"
Mar  1 05:19:27 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:19:27 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:19:27.301Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Mar  1 05:19:27 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:19:27.301Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Mar  1 05:19:27 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1110: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Mar  1 05:19:28 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:19:28 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000024s ======
Mar  1 05:19:28 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:19:28.233 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Mar  1 05:19:28 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:19:28 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:19:28 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:19:28.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:19:28 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:19:28.851Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Mar  1 05:19:29 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:19:28 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Mar  1 05:19:29 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:19:28 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Mar  1 05:19:29 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:19:28 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Mar  1 05:19:29 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:19:29 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Mar  1 05:19:29 np0005634532 nova_compute[257049]: 2026-03-01 10:19:29.374 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:19:29 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1111: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Mar  1 05:19:30 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:19:30 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:19:30 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:19:30.236 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:19:30 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:19:30 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:19:30 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:19:30.336 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:19:31 np0005634532 nova_compute[257049]: 2026-03-01 10:19:31.910 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:19:31 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1112: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Mar  1 05:19:32 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:19:32 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:19:32 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:19:32 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:19:32.238 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:19:32 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:19:32 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:19:32 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:19:32.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:19:32 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Mar  1 05:19:32 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Mar  1 05:19:33 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1113: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Mar  1 05:19:34 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:19:33 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Mar  1 05:19:34 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:19:33 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Mar  1 05:19:34 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:19:33 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Mar  1 05:19:34 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:19:34 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Mar  1 05:19:34 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:19:34 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:19:34 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:19:34.241 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:19:34 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:19:34 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:19:34 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:19:34.340 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:19:34 np0005634532 nova_compute[257049]: 2026-03-01 10:19:34.394 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:19:35 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1114: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Mar  1 05:19:36 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:19:36 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000024s ======
Mar  1 05:19:36 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:19:36.244 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Mar  1 05:19:36 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:19:36 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:19:36 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:19:36.342 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:19:36 np0005634532 nova_compute[257049]: 2026-03-01 10:19:36.913 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:19:37 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: ::ffff:192.168.122.100 - - [01/Mar/2026:10:19:37] "GET /metrics HTTP/1.1" 200 48457 "" "Prometheus/2.51.0"
Mar  1 05:19:37 np0005634532 ceph-mgr[76134]: [prometheus INFO cherrypy.access.140604152982112] ::ffff:192.168.122.100 - - [01/Mar/2026:10:19:37] "GET /metrics HTTP/1.1" 200 48457 "" "Prometheus/2.51.0"
Mar  1 05:19:37 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:19:37 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:19:37.302Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Mar  1 05:19:37 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1115: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Mar  1 05:19:38 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:19:38 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:19:38 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:19:38.247 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:19:38 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:19:38 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:19:38 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:19:38.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:19:38 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:19:38.852Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Mar  1 05:19:38 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:19:38.852Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Mar  1 05:19:38 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:19:38.852Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Mar  1 05:19:39 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:19:38 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Mar  1 05:19:39 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:19:38 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Mar  1 05:19:39 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:19:38 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Mar  1 05:19:39 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:19:39 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Mar  1 05:19:39 np0005634532 nova_compute[257049]: 2026-03-01 10:19:39.429 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:19:39 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1116: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Mar  1 05:19:40 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:19:40 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:19:40 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:19:40.250 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:19:40 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:19:40 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.002000049s ======
Mar  1 05:19:40 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:19:40.346 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.002000049s
Mar  1 05:19:41 np0005634532 nova_compute[257049]: 2026-03-01 10:19:41.914 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:19:41 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1117: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Mar  1 05:19:42 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:19:42 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:19:42 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:19:42 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:19:42.253 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:19:42 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:19:42 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:19:42 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:19:42.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:19:43 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1118: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Mar  1 05:19:44 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:19:43 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Mar  1 05:19:44 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:19:43 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Mar  1 05:19:44 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:19:43 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Mar  1 05:19:44 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:19:44 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Mar  1 05:19:44 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:19:44 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:19:44 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:19:44.256 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:19:44 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:19:44 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:19:44 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:19:44.350 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:19:44 np0005634532 nova_compute[257049]: 2026-03-01 10:19:44.431 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:19:44 np0005634532 nova_compute[257049]: 2026-03-01 10:19:44.976 257053 DEBUG oslo_service.periodic_task [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Mar  1 05:19:45 np0005634532 nova_compute[257049]: 2026-03-01 10:19:45.977 257053 DEBUG oslo_service.periodic_task [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Mar  1 05:19:45 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1119: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Mar  1 05:19:46 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:19:46 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:19:46 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:19:46.258 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:19:46 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:19:46 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:19:46 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:19:46.352 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:19:46 np0005634532 nova_compute[257049]: 2026-03-01 10:19:46.915 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:19:47 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: ::ffff:192.168.122.100 - - [01/Mar/2026:10:19:47] "GET /metrics HTTP/1.1" 200 48457 "" "Prometheus/2.51.0"
Mar  1 05:19:47 np0005634532 ceph-mgr[76134]: [prometheus INFO cherrypy.access.140604152982112] ::ffff:192.168.122.100 - - [01/Mar/2026:10:19:47] "GET /metrics HTTP/1.1" 200 48457 "" "Prometheus/2.51.0"
Mar  1 05:19:47 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:19:47 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:19:47.304Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Mar  1 05:19:47 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:19:47.304Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Mar  1 05:19:47 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:19:47.304Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Mar  1 05:19:47 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Mar  1 05:19:47 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Mar  1 05:19:47 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Mar  1 05:19:47 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Mar  1 05:19:47 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Mar  1 05:19:47 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Mar  1 05:19:47 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Mar  1 05:19:47 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Mar  1 05:19:47 np0005634532 nova_compute[257049]: 2026-03-01 10:19:47.977 257053 DEBUG oslo_service.periodic_task [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Mar  1 05:19:47 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1120: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Mar  1 05:19:48 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:19:48 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:19:48 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:19:48.261 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:19:48 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:19:48 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:19:48 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:19:48.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:19:48 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:19:48.853Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Mar  1 05:19:48 np0005634532 nova_compute[257049]: 2026-03-01 10:19:48.976 257053 DEBUG oslo_service.periodic_task [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Mar  1 05:19:48 np0005634532 nova_compute[257049]: 2026-03-01 10:19:48.977 257053 DEBUG nova.compute.manager [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Mar  1 05:19:49 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:19:48 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Mar  1 05:19:49 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:19:48 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Mar  1 05:19:49 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:19:48 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Mar  1 05:19:49 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:19:49 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Mar  1 05:19:49 np0005634532 nova_compute[257049]: 2026-03-01 10:19:49.480 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:19:49 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1121: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Mar  1 05:19:50 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:19:50 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:19:50 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:19:50.263 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:19:50 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:19:50 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000024s ======
Mar  1 05:19:50 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:19:50.355 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Mar  1 05:19:50 np0005634532 nova_compute[257049]: 2026-03-01 10:19:50.976 257053 DEBUG oslo_service.periodic_task [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Mar  1 05:19:51 np0005634532 nova_compute[257049]: 2026-03-01 10:19:50.998 257053 DEBUG oslo_concurrency.lockutils [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Mar  1 05:19:51 np0005634532 nova_compute[257049]: 2026-03-01 10:19:50.999 257053 DEBUG oslo_concurrency.lockutils [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Mar  1 05:19:51 np0005634532 nova_compute[257049]: 2026-03-01 10:19:51.000 257053 DEBUG oslo_concurrency.lockutils [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Mar  1 05:19:51 np0005634532 nova_compute[257049]: 2026-03-01 10:19:51.000 257053 DEBUG nova.compute.resource_tracker [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Mar  1 05:19:51 np0005634532 nova_compute[257049]: 2026-03-01 10:19:51.000 257053 DEBUG oslo_concurrency.processutils [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Mar  1 05:19:51 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Mar  1 05:19:51 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2976945860' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Mar  1 05:19:51 np0005634532 nova_compute[257049]: 2026-03-01 10:19:51.422 257053 DEBUG oslo_concurrency.processutils [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.422s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Mar  1 05:19:51 np0005634532 nova_compute[257049]: 2026-03-01 10:19:51.613 257053 WARNING nova.virt.libvirt.driver [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Mar  1 05:19:51 np0005634532 nova_compute[257049]: 2026-03-01 10:19:51.616 257053 DEBUG nova.compute.resource_tracker [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4521MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Mar  1 05:19:51 np0005634532 nova_compute[257049]: 2026-03-01 10:19:51.617 257053 DEBUG oslo_concurrency.lockutils [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Mar  1 05:19:51 np0005634532 nova_compute[257049]: 2026-03-01 10:19:51.617 257053 DEBUG oslo_concurrency.lockutils [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Mar  1 05:19:51 np0005634532 nova_compute[257049]: 2026-03-01 10:19:51.918 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:19:51 np0005634532 nova_compute[257049]: 2026-03-01 10:19:51.929 257053 DEBUG nova.compute.resource_tracker [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Mar  1 05:19:51 np0005634532 nova_compute[257049]: 2026-03-01 10:19:51.929 257053 DEBUG nova.compute.resource_tracker [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Mar  1 05:19:51 np0005634532 nova_compute[257049]: 2026-03-01 10:19:51.945 257053 DEBUG oslo_concurrency.processutils [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Mar  1 05:19:51 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1122: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Mar  1 05:19:52 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:19:52 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:19:52 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:19:52 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:19:52.266 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:19:52 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:19:52 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:19:52 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:19:52.357 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:19:52 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Mar  1 05:19:52 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3449771996' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Mar  1 05:19:52 np0005634532 nova_compute[257049]: 2026-03-01 10:19:52.392 257053 DEBUG oslo_concurrency.processutils [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Mar  1 05:19:52 np0005634532 nova_compute[257049]: 2026-03-01 10:19:52.399 257053 DEBUG nova.compute.provider_tree [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Inventory has not changed in ProviderTree for provider: 018d246d-1e01-4168-9128-598c5501111b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Mar  1 05:19:52 np0005634532 nova_compute[257049]: 2026-03-01 10:19:52.418 257053 DEBUG nova.scheduler.client.report [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Inventory has not changed for provider 018d246d-1e01-4168-9128-598c5501111b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Mar  1 05:19:52 np0005634532 nova_compute[257049]: 2026-03-01 10:19:52.419 257053 DEBUG nova.compute.resource_tracker [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Mar  1 05:19:52 np0005634532 nova_compute[257049]: 2026-03-01 10:19:52.419 257053 DEBUG oslo_concurrency.lockutils [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.802s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Mar  1 05:19:53 np0005634532 podman[287509]: 2026-03-01 10:19:53.416924696 +0000 UTC m=+0.100321612 container health_status eba5e7c5bf1cb0694498c3b5d0f9b901dec36f2482ec40aef98fed1e15d9f18d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:bf1f44b6bfa655654f556ae3adca2582892e72550d916a5b0a8dbacb0e210bbe, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '08b9467e1b7e95537191fb2fa6825d176e65d7d6be048e9a0925db064969bbde-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:bf1f44b6bfa655654f556ae3adca2582892e72550d916a5b0a8dbacb0e210bbe', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.43.0)
Mar  1 05:19:53 np0005634532 nova_compute[257049]: 2026-03-01 10:19:53.420 257053 DEBUG oslo_service.periodic_task [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Mar  1 05:19:53 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1123: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Mar  1 05:19:54 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:19:53 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Mar  1 05:19:54 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:19:53 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Mar  1 05:19:54 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:19:53 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Mar  1 05:19:54 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:19:54 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Mar  1 05:19:54 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:19:54 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:19:54 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:19:54.268 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:19:54 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:19:54 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:19:54 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:19:54.359 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:19:54 np0005634532 nova_compute[257049]: 2026-03-01 10:19:54.482 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:19:54 np0005634532 nova_compute[257049]: 2026-03-01 10:19:54.972 257053 DEBUG oslo_service.periodic_task [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Mar  1 05:19:55 np0005634532 nova_compute[257049]: 2026-03-01 10:19:55.976 257053 DEBUG oslo_service.periodic_task [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Mar  1 05:19:55 np0005634532 nova_compute[257049]: 2026-03-01 10:19:55.977 257053 DEBUG nova.compute.manager [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Mar  1 05:19:55 np0005634532 nova_compute[257049]: 2026-03-01 10:19:55.977 257053 DEBUG nova.compute.manager [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Mar  1 05:19:55 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1124: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Mar  1 05:19:56 np0005634532 nova_compute[257049]: 2026-03-01 10:19:56.001 257053 DEBUG nova.compute.manager [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Mar  1 05:19:56 np0005634532 nova_compute[257049]: 2026-03-01 10:19:56.001 257053 DEBUG oslo_service.periodic_task [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Mar  1 05:19:56 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:19:56 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:19:56 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:19:56.271 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:19:56 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:19:56 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000024s ======
Mar  1 05:19:56 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:19:56.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Mar  1 05:19:56 np0005634532 nova_compute[257049]: 2026-03-01 10:19:56.920 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:19:57 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: ::ffff:192.168.122.100 - - [01/Mar/2026:10:19:57] "GET /metrics HTTP/1.1" 200 48456 "" "Prometheus/2.51.0"
Mar  1 05:19:57 np0005634532 ceph-mgr[76134]: [prometheus INFO cherrypy.access.140604152982112] ::ffff:192.168.122.100 - - [01/Mar/2026:10:19:57] "GET /metrics HTTP/1.1" 200 48456 "" "Prometheus/2.51.0"
Mar  1 05:19:57 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:19:57 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:19:57.305Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Mar  1 05:19:57 np0005634532 podman[287539]: 2026-03-01 10:19:57.371303159 +0000 UTC m=+0.060950496 container health_status 1df4dec302d8b40bce93714986cfc892519e8a31436399020d0034ae17ac64d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a5baae9e21dffd42c66f546b0b62f95c6af9f409d05e01e7483de42d5dd2a382, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '08b9467e1b7e95537191fb2fa6825d176e65d7d6be048e9a0925db064969bbde-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a5baae9e21dffd42c66f546b0b62f95c6af9f409d05e01e7483de42d5dd2a382', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible)
Mar  1 05:19:57 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1125: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Mar  1 05:19:58 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:19:58 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000024s ======
Mar  1 05:19:58 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:19:58.273 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Mar  1 05:19:58 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Mar  1 05:19:58 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/930812514' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Mar  1 05:19:58 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Mar  1 05:19:58 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/930812514' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Mar  1 05:19:58 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:19:58 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 05:19:58 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:19:58.362 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 05:19:58 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:19:58.855Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Mar  1 05:19:58 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:19:58.855Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Mar  1 05:19:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:19:58 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Mar  1 05:19:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:19:58 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Mar  1 05:19:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:19:58 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Mar  1 05:19:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:19:59 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Mar  1 05:19:59 np0005634532 nova_compute[257049]: 2026-03-01 10:19:59.483 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:19:59 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1126: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Mar  1 05:20:00 np0005634532 ceph-mon[75825]: log_channel(cluster) log [WRN] : Health detail: HEALTH_WARN 2 failed cephadm daemon(s)
Mar  1 05:20:00 np0005634532 ceph-mon[75825]: log_channel(cluster) log [WRN] : [WRN] CEPHADM_FAILED_DAEMON: 2 failed cephadm daemon(s)
Mar  1 05:20:00 np0005634532 ceph-mon[75825]: log_channel(cluster) log [WRN] :     daemon nfs.cephfs.0.0.compute-1.sniivf on compute-1 is in error state
Mar  1 05:20:00 np0005634532 ceph-mon[75825]: log_channel(cluster) log [WRN] :     daemon nfs.cephfs.1.0.compute-2.dqiiuk on compute-2 is in error state
Mar  1 05:20:00 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:20:00 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:20:00 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:20:00.276 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:20:00 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:20:00 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:20:00 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:20:00.364 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:20:00 np0005634532 ceph-mon[75825]: Health detail: HEALTH_WARN 2 failed cephadm daemon(s)
Mar  1 05:20:00 np0005634532 ceph-mon[75825]: [WRN] CEPHADM_FAILED_DAEMON: 2 failed cephadm daemon(s)
Mar  1 05:20:00 np0005634532 ceph-mon[75825]:    daemon nfs.cephfs.0.0.compute-1.sniivf on compute-1 is in error state
Mar  1 05:20:00 np0005634532 ceph-mon[75825]:    daemon nfs.cephfs.1.0.compute-2.dqiiuk on compute-2 is in error state
Mar  1 05:20:01 np0005634532 nova_compute[257049]: 2026-03-01 10:20:01.922 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:20:01 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1127: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Mar  1 05:20:02 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:20:02 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:20:02 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:20:02 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:20:02.278 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:20:02 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:20:02 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:20:02 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:20:02.366 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:20:02 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Mar  1 05:20:02 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Mar  1 05:20:04 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1128: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Mar  1 05:20:04 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:20:03 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Mar  1 05:20:04 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:20:03 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Mar  1 05:20:04 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:20:03 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Mar  1 05:20:04 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:20:04 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Mar  1 05:20:04 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:20:04 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:20:04 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:20:04.282 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:20:04 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:20:04 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:20:04 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:20:04.368 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:20:04 np0005634532 nova_compute[257049]: 2026-03-01 10:20:04.486 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:20:06 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1129: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Mar  1 05:20:06 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:20:06 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:20:06 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:20:06.284 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:20:06 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:20:06 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 05:20:06 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:20:06.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 05:20:06 np0005634532 nova_compute[257049]: 2026-03-01 10:20:06.925 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:20:07 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: ::ffff:192.168.122.100 - - [01/Mar/2026:10:20:07] "GET /metrics HTTP/1.1" 200 48456 "" "Prometheus/2.51.0"
Mar  1 05:20:07 np0005634532 ceph-mgr[76134]: [prometheus INFO cherrypy.access.140604152982112] ::ffff:192.168.122.100 - - [01/Mar/2026:10:20:07] "GET /metrics HTTP/1.1" 200 48456 "" "Prometheus/2.51.0"
Mar  1 05:20:07 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:20:07 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:20:07.305Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Mar  1 05:20:08 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1130: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Mar  1 05:20:08 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:20:08 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000024s ======
Mar  1 05:20:08 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:20:08.287 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Mar  1 05:20:08 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:20:08 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:20:08 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:20:08.371 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:20:08 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:20:08.856Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Mar  1 05:20:09 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:20:08 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Mar  1 05:20:09 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:20:08 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Mar  1 05:20:09 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:20:08 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Mar  1 05:20:09 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:20:09 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Mar  1 05:20:09 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Mar  1 05:20:09 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Mar  1 05:20:09 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Mar  1 05:20:09 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Mar  1 05:20:09 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1131: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 828 B/s rd, 0 op/s
Mar  1 05:20:09 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1132: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 673 B/s rd, 0 op/s
Mar  1 05:20:09 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Mar  1 05:20:09 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 05:20:09 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Mar  1 05:20:09 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 05:20:09 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Mar  1 05:20:09 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Mar  1 05:20:09 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Mar  1 05:20:09 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Mar  1 05:20:09 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Mar  1 05:20:09 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Mar  1 05:20:09 np0005634532 nova_compute[257049]: 2026-03-01 10:20:09.530 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:20:09 np0005634532 podman[287766]: 2026-03-01 10:20:09.74041248 +0000 UTC m=+0.044085072 container create eef3b8f73b0bd570d53aa9cee7fdb6f3c272480d96031786165fca9c974ec7c9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_lehmann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325)
Mar  1 05:20:09 np0005634532 systemd[1]: Started libpod-conmon-eef3b8f73b0bd570d53aa9cee7fdb6f3c272480d96031786165fca9c974ec7c9.scope.
Mar  1 05:20:09 np0005634532 systemd[1]: Started libcrun container.
Mar  1 05:20:09 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Mar  1 05:20:09 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 05:20:09 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 05:20:09 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Mar  1 05:20:09 np0005634532 podman[287766]: 2026-03-01 10:20:09.721724052 +0000 UTC m=+0.025396664 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Mar  1 05:20:09 np0005634532 podman[287766]: 2026-03-01 10:20:09.825712533 +0000 UTC m=+0.129385205 container init eef3b8f73b0bd570d53aa9cee7fdb6f3c272480d96031786165fca9c974ec7c9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_lehmann, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Mar  1 05:20:09 np0005634532 podman[287766]: 2026-03-01 10:20:09.834217172 +0000 UTC m=+0.137889774 container start eef3b8f73b0bd570d53aa9cee7fdb6f3c272480d96031786165fca9c974ec7c9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_lehmann, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Mar  1 05:20:09 np0005634532 podman[287766]: 2026-03-01 10:20:09.837905832 +0000 UTC m=+0.141578434 container attach eef3b8f73b0bd570d53aa9cee7fdb6f3c272480d96031786165fca9c974ec7c9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_lehmann, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Mar  1 05:20:09 np0005634532 intelligent_lehmann[287782]: 167 167
Mar  1 05:20:09 np0005634532 systemd[1]: libpod-eef3b8f73b0bd570d53aa9cee7fdb6f3c272480d96031786165fca9c974ec7c9.scope: Deactivated successfully.
Mar  1 05:20:09 np0005634532 conmon[287782]: conmon eef3b8f73b0bd570d53a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-eef3b8f73b0bd570d53aa9cee7fdb6f3c272480d96031786165fca9c974ec7c9.scope/container/memory.events
Mar  1 05:20:09 np0005634532 podman[287766]: 2026-03-01 10:20:09.843762766 +0000 UTC m=+0.147435348 container died eef3b8f73b0bd570d53aa9cee7fdb6f3c272480d96031786165fca9c974ec7c9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_lehmann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Mar  1 05:20:09 np0005634532 systemd[1]: var-lib-containers-storage-overlay-42fdabd38fe249a348cd3fd08173b974198f77e4bfabb27ff78446ca8c34b65e-merged.mount: Deactivated successfully.
Mar  1 05:20:09 np0005634532 podman[287766]: 2026-03-01 10:20:09.882234249 +0000 UTC m=+0.185906831 container remove eef3b8f73b0bd570d53aa9cee7fdb6f3c272480d96031786165fca9c974ec7c9 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_lehmann, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325)
Mar  1 05:20:09 np0005634532 systemd[1]: libpod-conmon-eef3b8f73b0bd570d53aa9cee7fdb6f3c272480d96031786165fca9c974ec7c9.scope: Deactivated successfully.
Mar  1 05:20:10 np0005634532 podman[287806]: 2026-03-01 10:20:10.01718432 +0000 UTC m=+0.051532005 container create 1c5ec430cdd42ce1b5b65ded33cb77cf1f647713853fa0a05742ffe17237d8cc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_snyder, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Mar  1 05:20:10 np0005634532 systemd[1]: Started libpod-conmon-1c5ec430cdd42ce1b5b65ded33cb77cf1f647713853fa0a05742ffe17237d8cc.scope.
Mar  1 05:20:10 np0005634532 systemd[1]: Started libcrun container.
Mar  1 05:20:10 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/77faf7faa40ddc2b79b4b0e5e6425098af0d758b63bbb65549af95907bf130f1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Mar  1 05:20:10 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/77faf7faa40ddc2b79b4b0e5e6425098af0d758b63bbb65549af95907bf130f1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Mar  1 05:20:10 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/77faf7faa40ddc2b79b4b0e5e6425098af0d758b63bbb65549af95907bf130f1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Mar  1 05:20:10 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/77faf7faa40ddc2b79b4b0e5e6425098af0d758b63bbb65549af95907bf130f1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Mar  1 05:20:10 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/77faf7faa40ddc2b79b4b0e5e6425098af0d758b63bbb65549af95907bf130f1/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Mar  1 05:20:10 np0005634532 podman[287806]: 2026-03-01 10:20:09.997900107 +0000 UTC m=+0.032247822 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Mar  1 05:20:10 np0005634532 podman[287806]: 2026-03-01 10:20:10.109630568 +0000 UTC m=+0.143978263 container init 1c5ec430cdd42ce1b5b65ded33cb77cf1f647713853fa0a05742ffe17237d8cc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_snyder, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Mar  1 05:20:10 np0005634532 podman[287806]: 2026-03-01 10:20:10.118863244 +0000 UTC m=+0.153210959 container start 1c5ec430cdd42ce1b5b65ded33cb77cf1f647713853fa0a05742ffe17237d8cc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_snyder, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325)
Mar  1 05:20:10 np0005634532 podman[287806]: 2026-03-01 10:20:10.123211981 +0000 UTC m=+0.157559696 container attach 1c5ec430cdd42ce1b5b65ded33cb77cf1f647713853fa0a05742ffe17237d8cc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_snyder, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Mar  1 05:20:10 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:20:10 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:20:10 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:20:10.290 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:20:10 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:20:10 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:20:10 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:20:10.373 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:20:10 np0005634532 ecstatic_snyder[287822]: --> passed data devices: 0 physical, 1 LVM
Mar  1 05:20:10 np0005634532 ecstatic_snyder[287822]: --> All data devices are unavailable
Mar  1 05:20:10 np0005634532 systemd[1]: libpod-1c5ec430cdd42ce1b5b65ded33cb77cf1f647713853fa0a05742ffe17237d8cc.scope: Deactivated successfully.
Mar  1 05:20:10 np0005634532 podman[287806]: 2026-03-01 10:20:10.449882124 +0000 UTC m=+0.484229869 container died 1c5ec430cdd42ce1b5b65ded33cb77cf1f647713853fa0a05742ffe17237d8cc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_snyder, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Mar  1 05:20:10 np0005634532 systemd[1]: var-lib-containers-storage-overlay-77faf7faa40ddc2b79b4b0e5e6425098af0d758b63bbb65549af95907bf130f1-merged.mount: Deactivated successfully.
Mar  1 05:20:10 np0005634532 podman[287806]: 2026-03-01 10:20:10.527611661 +0000 UTC m=+0.561959336 container remove 1c5ec430cdd42ce1b5b65ded33cb77cf1f647713853fa0a05742ffe17237d8cc (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ecstatic_snyder, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Mar  1 05:20:10 np0005634532 systemd[1]: libpod-conmon-1c5ec430cdd42ce1b5b65ded33cb77cf1f647713853fa0a05742ffe17237d8cc.scope: Deactivated successfully.
Mar  1 05:20:11 np0005634532 podman[287940]: 2026-03-01 10:20:11.020514552 +0000 UTC m=+0.041355496 container create 0dd8e900b61b3ca6b5afd3ed987ade1d60de7e40530dcd2031fa1af11f76e04a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_archimedes, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9)
Mar  1 05:20:11 np0005634532 systemd[1]: Started libpod-conmon-0dd8e900b61b3ca6b5afd3ed987ade1d60de7e40530dcd2031fa1af11f76e04a.scope.
Mar  1 05:20:11 np0005634532 systemd[1]: Started libcrun container.
Mar  1 05:20:11 np0005634532 podman[287940]: 2026-03-01 10:20:11.091165815 +0000 UTC m=+0.112006769 container init 0dd8e900b61b3ca6b5afd3ed987ade1d60de7e40530dcd2031fa1af11f76e04a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_archimedes, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Mar  1 05:20:11 np0005634532 podman[287940]: 2026-03-01 10:20:10.998471401 +0000 UTC m=+0.019312355 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Mar  1 05:20:11 np0005634532 podman[287940]: 2026-03-01 10:20:11.096892715 +0000 UTC m=+0.117733639 container start 0dd8e900b61b3ca6b5afd3ed987ade1d60de7e40530dcd2031fa1af11f76e04a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_archimedes, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Mar  1 05:20:11 np0005634532 podman[287940]: 2026-03-01 10:20:11.100451993 +0000 UTC m=+0.121292947 container attach 0dd8e900b61b3ca6b5afd3ed987ade1d60de7e40530dcd2031fa1af11f76e04a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_archimedes, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Mar  1 05:20:11 np0005634532 optimistic_archimedes[287956]: 167 167
Mar  1 05:20:11 np0005634532 systemd[1]: libpod-0dd8e900b61b3ca6b5afd3ed987ade1d60de7e40530dcd2031fa1af11f76e04a.scope: Deactivated successfully.
Mar  1 05:20:11 np0005634532 podman[287940]: 2026-03-01 10:20:11.102944424 +0000 UTC m=+0.123785348 container died 0dd8e900b61b3ca6b5afd3ed987ade1d60de7e40530dcd2031fa1af11f76e04a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_archimedes, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Mar  1 05:20:11 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1133: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 673 B/s rd, 0 op/s
Mar  1 05:20:11 np0005634532 systemd[1]: var-lib-containers-storage-overlay-fa0623beb572d47f4be87998bb22e91abfda371c1ec941e190dc63894145d081-merged.mount: Deactivated successfully.
Mar  1 05:20:11 np0005634532 podman[287940]: 2026-03-01 10:20:11.144285968 +0000 UTC m=+0.165126882 container remove 0dd8e900b61b3ca6b5afd3ed987ade1d60de7e40530dcd2031fa1af11f76e04a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=optimistic_archimedes, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Mar  1 05:20:11 np0005634532 systemd[1]: libpod-conmon-0dd8e900b61b3ca6b5afd3ed987ade1d60de7e40530dcd2031fa1af11f76e04a.scope: Deactivated successfully.
Mar  1 05:20:11 np0005634532 podman[287980]: 2026-03-01 10:20:11.312613417 +0000 UTC m=+0.054209041 container create 8ab174046efeca9e4bc46ef31de921e5606e736cb83bdba44ed995f8f62604ea (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_liskov, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Mar  1 05:20:11 np0005634532 systemd[1]: Started libpod-conmon-8ab174046efeca9e4bc46ef31de921e5606e736cb83bdba44ed995f8f62604ea.scope.
Mar  1 05:20:11 np0005634532 systemd[1]: Started libcrun container.
Mar  1 05:20:11 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09e63a02d8d0744b70161fd050abb0406afcf331f9128efab825b63a9cb19771/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Mar  1 05:20:11 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09e63a02d8d0744b70161fd050abb0406afcf331f9128efab825b63a9cb19771/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Mar  1 05:20:11 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09e63a02d8d0744b70161fd050abb0406afcf331f9128efab825b63a9cb19771/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Mar  1 05:20:11 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09e63a02d8d0744b70161fd050abb0406afcf331f9128efab825b63a9cb19771/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Mar  1 05:20:11 np0005634532 podman[287980]: 2026-03-01 10:20:11.290622828 +0000 UTC m=+0.032218542 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Mar  1 05:20:11 np0005634532 podman[287980]: 2026-03-01 10:20:11.387642428 +0000 UTC m=+0.129238072 container init 8ab174046efeca9e4bc46ef31de921e5606e736cb83bdba44ed995f8f62604ea (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_liskov, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Mar  1 05:20:11 np0005634532 podman[287980]: 2026-03-01 10:20:11.392472366 +0000 UTC m=+0.134067990 container start 8ab174046efeca9e4bc46ef31de921e5606e736cb83bdba44ed995f8f62604ea (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_liskov, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Mar  1 05:20:11 np0005634532 podman[287980]: 2026-03-01 10:20:11.395797258 +0000 UTC m=+0.137392902 container attach 8ab174046efeca9e4bc46ef31de921e5606e736cb83bdba44ed995f8f62604ea (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_liskov, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2)
Mar  1 05:20:11 np0005634532 goofy_liskov[287996]: {
Mar  1 05:20:11 np0005634532 goofy_liskov[287996]:    "0": [
Mar  1 05:20:11 np0005634532 goofy_liskov[287996]:        {
Mar  1 05:20:11 np0005634532 goofy_liskov[287996]:            "devices": [
Mar  1 05:20:11 np0005634532 goofy_liskov[287996]:                "/dev/loop3"
Mar  1 05:20:11 np0005634532 goofy_liskov[287996]:            ],
Mar  1 05:20:11 np0005634532 goofy_liskov[287996]:            "lv_name": "ceph_lv0",
Mar  1 05:20:11 np0005634532 goofy_liskov[287996]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Mar  1 05:20:11 np0005634532 goofy_liskov[287996]:            "lv_size": "21470642176",
Mar  1 05:20:11 np0005634532 goofy_liskov[287996]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=0RN1y3-WDwf-vmRn-3Uec-9cdU-WGcX-8Z6LEG,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=437b1e74-f995-5d64-af1d-257ce01d77ab,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=e5da778e-73b7-4ea1-8a91-750fe3f6aa68,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Mar  1 05:20:11 np0005634532 goofy_liskov[287996]:            "lv_uuid": "0RN1y3-WDwf-vmRn-3Uec-9cdU-WGcX-8Z6LEG",
Mar  1 05:20:11 np0005634532 goofy_liskov[287996]:            "name": "ceph_lv0",
Mar  1 05:20:11 np0005634532 goofy_liskov[287996]:            "path": "/dev/ceph_vg0/ceph_lv0",
Mar  1 05:20:11 np0005634532 goofy_liskov[287996]:            "tags": {
Mar  1 05:20:11 np0005634532 goofy_liskov[287996]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Mar  1 05:20:11 np0005634532 goofy_liskov[287996]:                "ceph.block_uuid": "0RN1y3-WDwf-vmRn-3Uec-9cdU-WGcX-8Z6LEG",
Mar  1 05:20:11 np0005634532 goofy_liskov[287996]:                "ceph.cephx_lockbox_secret": "",
Mar  1 05:20:11 np0005634532 goofy_liskov[287996]:                "ceph.cluster_fsid": "437b1e74-f995-5d64-af1d-257ce01d77ab",
Mar  1 05:20:11 np0005634532 goofy_liskov[287996]:                "ceph.cluster_name": "ceph",
Mar  1 05:20:11 np0005634532 goofy_liskov[287996]:                "ceph.crush_device_class": "",
Mar  1 05:20:11 np0005634532 goofy_liskov[287996]:                "ceph.encrypted": "0",
Mar  1 05:20:11 np0005634532 goofy_liskov[287996]:                "ceph.osd_fsid": "e5da778e-73b7-4ea1-8a91-750fe3f6aa68",
Mar  1 05:20:11 np0005634532 goofy_liskov[287996]:                "ceph.osd_id": "0",
Mar  1 05:20:11 np0005634532 goofy_liskov[287996]:                "ceph.osdspec_affinity": "default_drive_group",
Mar  1 05:20:11 np0005634532 goofy_liskov[287996]:                "ceph.type": "block",
Mar  1 05:20:11 np0005634532 goofy_liskov[287996]:                "ceph.vdo": "0",
Mar  1 05:20:11 np0005634532 goofy_liskov[287996]:                "ceph.with_tpm": "0"
Mar  1 05:20:11 np0005634532 goofy_liskov[287996]:            },
Mar  1 05:20:11 np0005634532 goofy_liskov[287996]:            "type": "block",
Mar  1 05:20:11 np0005634532 goofy_liskov[287996]:            "vg_name": "ceph_vg0"
Mar  1 05:20:11 np0005634532 goofy_liskov[287996]:        }
Mar  1 05:20:11 np0005634532 goofy_liskov[287996]:    ]
Mar  1 05:20:11 np0005634532 goofy_liskov[287996]: }
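
The JSON blob above is a ceph-volume LVM listing for OSD 0 on this host: one logical volume, ceph_vg0/ceph_lv0 (lv_size 21470642176 bytes, ~20 GiB), backed by /dev/loop3, with the OSD identity carried in the LV tags. A minimal Python sketch for pulling the OSD-to-device mapping out of a captured copy of this output (the filename is hypothetical):

    import json

    # Hypothetical capture of the container stdout shown above.
    with open("ceph_volume_lvm_list.json") as f:
        inventory = json.load(f)

    for osd_id, lvs in inventory.items():
        for lv in lvs:
            tags = lv["tags"]
            print(f"osd.{osd_id}: {lv['lv_path']} on {','.join(lv['devices'])} "
                  f"(osd_fsid={tags['ceph.osd_fsid']}, type={tags['ceph.type']})")
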
Mar  1 05:20:11 np0005634532 systemd[1]: libpod-8ab174046efeca9e4bc46ef31de921e5606e736cb83bdba44ed995f8f62604ea.scope: Deactivated successfully.
Mar  1 05:20:11 np0005634532 podman[287980]: 2026-03-01 10:20:11.654488734 +0000 UTC m=+0.396084398 container died 8ab174046efeca9e4bc46ef31de921e5606e736cb83bdba44ed995f8f62604ea (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_liskov, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Mar  1 05:20:11 np0005634532 systemd[1]: var-lib-containers-storage-overlay-09e63a02d8d0744b70161fd050abb0406afcf331f9128efab825b63a9cb19771-merged.mount: Deactivated successfully.
Mar  1 05:20:11 np0005634532 podman[287980]: 2026-03-01 10:20:11.709341389 +0000 UTC m=+0.450937023 container remove 8ab174046efeca9e4bc46ef31de921e5606e736cb83bdba44ed995f8f62604ea (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=goofy_liskov, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Mar  1 05:20:11 np0005634532 systemd[1]: libpod-conmon-8ab174046efeca9e4bc46ef31de921e5606e736cb83bdba44ed995f8f62604ea.scope: Deactivated successfully.
Mar  1 05:20:11 np0005634532 nova_compute[257049]: 2026-03-01 10:20:11.978 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:20:12 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:20:12 np0005634532 podman[288115]: 2026-03-01 10:20:12.285115043 +0000 UTC m=+0.042557725 container create 891f9b6e672c2e6be6afd21c8a5207931a8770ff54bd0cf024032b0271fcd62d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_dirac, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Mar  1 05:20:12 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:20:12 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:20:12 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:20:12.292 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:20:12 np0005634532 systemd[1]: Started libpod-conmon-891f9b6e672c2e6be6afd21c8a5207931a8770ff54bd0cf024032b0271fcd62d.scope.
Mar  1 05:20:12 np0005634532 systemd[1]: Started libcrun container.
Mar  1 05:20:12 np0005634532 podman[288115]: 2026-03-01 10:20:12.267827299 +0000 UTC m=+0.025269961 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Mar  1 05:20:12 np0005634532 podman[288115]: 2026-03-01 10:20:12.363914296 +0000 UTC m=+0.121356958 container init 891f9b6e672c2e6be6afd21c8a5207931a8770ff54bd0cf024032b0271fcd62d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_dirac, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Mar  1 05:20:12 np0005634532 podman[288115]: 2026-03-01 10:20:12.36935082 +0000 UTC m=+0.126793492 container start 891f9b6e672c2e6be6afd21c8a5207931a8770ff54bd0cf024032b0271fcd62d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_dirac, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2, io.buildah.version=1.40.1)
Mar  1 05:20:12 np0005634532 systemd[1]: libpod-891f9b6e672c2e6be6afd21c8a5207931a8770ff54bd0cf024032b0271fcd62d.scope: Deactivated successfully.
Mar  1 05:20:12 np0005634532 intelligent_dirac[288131]: 167 167
Mar  1 05:20:12 np0005634532 conmon[288131]: conmon 891f9b6e672c2e6be6af <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-891f9b6e672c2e6be6afd21c8a5207931a8770ff54bd0cf024032b0271fcd62d.scope/container/memory.events
Mar  1 05:20:12 np0005634532 podman[288115]: 2026-03-01 10:20:12.375901671 +0000 UTC m=+0.133344363 container attach 891f9b6e672c2e6be6afd21c8a5207931a8770ff54bd0cf024032b0271fcd62d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_dirac, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Mar  1 05:20:12 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:20:12 np0005634532 podman[288115]: 2026-03-01 10:20:12.376233139 +0000 UTC m=+0.133675801 container died 891f9b6e672c2e6be6afd21c8a5207931a8770ff54bd0cf024032b0271fcd62d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_dirac, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS)
Mar  1 05:20:12 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 05:20:12 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:20:12.374 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
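
The paired anonymous "HEAD / HTTP/1.0" requests from 192.168.122.100 and 192.168.122.102 recur on a ~2 s cadence throughout this window and always return 200 with no user, bucket, or body, which reads as load-balancer health probes rather than S3 traffic (an inference from the pattern, not something the log states). A sketch reproducing one probe by hand; host and port are assumptions (7480 is the upstream beast default, so check rgw_frontends for this cluster):

    import http.client

    # Issue the same anonymous HEAD / probe that beast logs above.
    # Host and port are assumptions for this deployment.
    conn = http.client.HTTPConnection("192.168.122.100", 7480, timeout=5)
    conn.request("HEAD", "/")
    print(conn.getresponse().status)  # a healthy RGW answers 200
    conn.close()
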
Mar  1 05:20:12 np0005634532 systemd[1]: var-lib-containers-storage-overlay-7e7a33aa6d541fe88230e5d070da5297f585404b95be6941ee038fe7ed6ff172-merged.mount: Deactivated successfully.
Mar  1 05:20:12 np0005634532 podman[288115]: 2026-03-01 10:20:12.41666173 +0000 UTC m=+0.174104432 container remove 891f9b6e672c2e6be6afd21c8a5207931a8770ff54bd0cf024032b0271fcd62d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=intelligent_dirac, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Mar  1 05:20:12 np0005634532 systemd[1]: libpod-conmon-891f9b6e672c2e6be6afd21c8a5207931a8770ff54bd0cf024032b0271fcd62d.scope: Deactivated successfully.
Mar  1 05:20:12 np0005634532 podman[288155]: 2026-03-01 10:20:12.602653993 +0000 UTC m=+0.057524062 container create fe9b74ceb4b94fcc48f36b09f187988220b0181ad638e88aa311c277fbe07bcd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_kare, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0)
Mar  1 05:20:12 np0005634532 systemd[1]: Started libpod-conmon-fe9b74ceb4b94fcc48f36b09f187988220b0181ad638e88aa311c277fbe07bcd.scope.
Mar  1 05:20:12 np0005634532 systemd[1]: Started libcrun container.
Mar  1 05:20:12 np0005634532 podman[288155]: 2026-03-01 10:20:12.577945247 +0000 UTC m=+0.032815386 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Mar  1 05:20:12 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ffbb60f2c760af1d86aefb9e1fd8988f7ca0a9113d6a4e4478aa632d5f95b90e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Mar  1 05:20:12 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ffbb60f2c760af1d86aefb9e1fd8988f7ca0a9113d6a4e4478aa632d5f95b90e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Mar  1 05:20:12 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ffbb60f2c760af1d86aefb9e1fd8988f7ca0a9113d6a4e4478aa632d5f95b90e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Mar  1 05:20:12 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ffbb60f2c760af1d86aefb9e1fd8988f7ca0a9113d6a4e4478aa632d5f95b90e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Mar  1 05:20:12 np0005634532 podman[288155]: 2026-03-01 10:20:12.692379814 +0000 UTC m=+0.147249923 container init fe9b74ceb4b94fcc48f36b09f187988220b0181ad638e88aa311c277fbe07bcd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_kare, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Mar  1 05:20:12 np0005634532 podman[288155]: 2026-03-01 10:20:12.702149414 +0000 UTC m=+0.157019483 container start fe9b74ceb4b94fcc48f36b09f187988220b0181ad638e88aa311c277fbe07bcd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_kare, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Mar  1 05:20:12 np0005634532 podman[288155]: 2026-03-01 10:20:12.705732082 +0000 UTC m=+0.160602211 container attach fe9b74ceb4b94fcc48f36b09f187988220b0181ad638e88aa311c277fbe07bcd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_kare, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Mar  1 05:20:13 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1134: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 336 B/s rd, 0 op/s
Mar  1 05:20:13 np0005634532 lvm[288245]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Mar  1 05:20:13 np0005634532 lvm[288245]: VG ceph_vg0 finished
Mar  1 05:20:13 np0005634532 agitated_kare[288171]: {}
Mar  1 05:20:13 np0005634532 systemd[1]: libpod-fe9b74ceb4b94fcc48f36b09f187988220b0181ad638e88aa311c277fbe07bcd.scope: Deactivated successfully.
Mar  1 05:20:13 np0005634532 podman[288155]: 2026-03-01 10:20:13.383787225 +0000 UTC m=+0.838657304 container died fe9b74ceb4b94fcc48f36b09f187988220b0181ad638e88aa311c277fbe07bcd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_kare, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Mar  1 05:20:13 np0005634532 systemd[1]: var-lib-containers-storage-overlay-ffbb60f2c760af1d86aefb9e1fd8988f7ca0a9113d6a4e4478aa632d5f95b90e-merged.mount: Deactivated successfully.
Mar  1 05:20:13 np0005634532 podman[288155]: 2026-03-01 10:20:13.431758002 +0000 UTC m=+0.886628061 container remove fe9b74ceb4b94fcc48f36b09f187988220b0181ad638e88aa311c277fbe07bcd (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=agitated_kare, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Mar  1 05:20:13 np0005634532 systemd[1]: libpod-conmon-fe9b74ceb4b94fcc48f36b09f187988220b0181ad638e88aa311c277fbe07bcd.scope: Deactivated successfully.
Mar  1 05:20:13 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Mar  1 05:20:13 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 05:20:13 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Mar  1 05:20:13 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 05:20:14 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:20:13 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Mar  1 05:20:14 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:20:14 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Mar  1 05:20:14 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:20:14 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Mar  1 05:20:14 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:20:14 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
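
This four-line ganesha.nfsd sequence repeats roughly every five seconds below: the server enters a 90 s grace period, reloads client recovery info from the RADOS backend (zero clients), checks whether grace can lift, and gets a nonzero status (-45) from rados_cluster_grace_enforcing. The shared grace database it consults can be dumped with the ganesha-rados-grace tool; the pool and namespace here are assumptions (cephadm NFS clusters conventionally keep it in the .nfs pool under the NFS cluster's name, "cephfs" in these unit names):

    import subprocess

    # Dump the shared NFS-Ganesha grace/recovery database referenced above.
    # Pool ".nfs" and namespace "cephfs" are assumptions for this deployment.
    subprocess.run(
        ["ganesha-rados-grace", "--pool", ".nfs", "--ns", "cephfs", "dump"],
        check=False,
    )
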
Mar  1 05:20:14 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:20:14 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:20:14 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:20:14.295 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:20:14 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:20:14 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:20:14 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:20:14.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:20:14 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 05:20:14 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 05:20:14 np0005634532 nova_compute[257049]: 2026-03-01 10:20:14.533 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:20:15 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1135: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 673 B/s rd, 0 op/s
Mar  1 05:20:16 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:20:16 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:20:16 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:20:16.298 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:20:16 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:20:16 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:20:16 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:20:16.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:20:16 np0005634532 nova_compute[257049]: 2026-03-01 10:20:16.980 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:20:17 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: ::ffff:192.168.122.100 - - [01/Mar/2026:10:20:17] "GET /metrics HTTP/1.1" 200 48456 "" "Prometheus/2.51.0"
Mar  1 05:20:17 np0005634532 ceph-mgr[76134]: [prometheus INFO cherrypy.access.140604152982112] ::ffff:192.168.122.100 - - [01/Mar/2026:10:20:17] "GET /metrics HTTP/1.1" 200 48456 "" "Prometheus/2.51.0"
Mar  1 05:20:17 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1136: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 673 B/s rd, 0 op/s
Mar  1 05:20:17 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:20:17 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:20:17.306Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Mar  1 05:20:17 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Mar  1 05:20:17 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
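
The mon_command above is the mgr's periodic poll of the OSD blocklist (it repeats at 10:20:32 below). The same query can be issued by hand from any node with an admin keyring:

    import json, subprocess

    # The exact command the audit log shows the mgr dispatching.
    out = subprocess.run(
        ["ceph", "osd", "blocklist", "ls", "--format", "json"],
        capture_output=True, text=True, check=True,
    )
    print(json.loads(out.stdout or "[]"))
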
Mar  1 05:20:17 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Mar  1 05:20:17 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Mar  1 05:20:17 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Mar  1 05:20:17 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Mar  1 05:20:17 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Mar  1 05:20:17 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Mar  1 05:20:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] _maybe_adjust
Mar  1 05:20:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:20:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Mar  1 05:20:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:20:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Mar  1 05:20:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:20:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Mar  1 05:20:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:20:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Mar  1 05:20:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:20:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Mar  1 05:20:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:20:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Mar  1 05:20:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:20:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Mar  1 05:20:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:20:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Mar  1 05:20:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:20:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Mar  1 05:20:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:20:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Mar  1 05:20:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:20:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Mar  1 05:20:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:20:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
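
The pg_autoscaler targets above are reproducible from the logged capacity ratios: pg target ~= capacity_ratio * bias * (OSD count * target PGs per OSD). With this cluster's three ~20 GiB OSDs and the default mon_target_pg_per_osd of 100, the multiplier is 300, which matches every line (an inferred model, not the module's literal code). The result is then quantized to a power of two, and a pool is only resized when the quantized target differs from the current pg_num by a large enough factor (3x by default), which is why 'cephfs.cephfs.meta' logs "quantized to 16 (current 32)" without proposing a change. A check:

    # Reconstruction of the pg_autoscaler arithmetic logged above
    # (assumed model that matches the numbers; constants are assumptions).
    OSDS = 3                 # three ~20 GiB loop-device OSDs (60 GiB total)
    TARGET_PG_PER_OSD = 100  # Ceph default mon_target_pg_per_osd

    def pg_target(capacity_ratio, bias):
        return capacity_ratio * bias * OSDS * TARGET_PG_PER_OSD

    print(pg_target(7.185749983720779e-06, 1.0))  # ~0.0021557, '.mgr' -> 1
    print(pg_target(5.087256625643029e-07, 4.0))  # ~0.00061047, 'cephfs.cephfs.meta'
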
Mar  1 05:20:18 np0005634532 ceph-mgr[76134]: [balancer INFO root] Optimize plan auto_2026-03-01_10:20:18
Mar  1 05:20:18 np0005634532 ceph-mgr[76134]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Mar  1 05:20:18 np0005634532 ceph-mgr[76134]: [balancer INFO root] do_upmap
Mar  1 05:20:18 np0005634532 ceph-mgr[76134]: [balancer INFO root] pools ['default.rgw.meta', 'cephfs.cephfs.data', 'default.rgw.control', 'vms', '.mgr', '.nfs', 'backups', 'volumes', 'cephfs.cephfs.meta', 'images', '.rgw.root', 'default.rgw.log']
Mar  1 05:20:18 np0005634532 ceph-mgr[76134]: [balancer INFO root] prepared 0/10 upmap changes
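
The balancer pass prepares 0 of a possible 10 upmap changes because the 353 PGs are already evenly placed across the pools listed. Its state can be confirmed out of band; a sketch:

    import json, subprocess

    # Confirm the mode and last optimization result the mgr logged above.
    out = subprocess.run(
        ["ceph", "balancer", "status", "--format", "json"],
        capture_output=True, text=True, check=True,
    )
    status = json.loads(out.stdout)
    print(status.get("mode"), status.get("optimize_result"))
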
Mar  1 05:20:18 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:20:18 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:20:18 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:20:18.301 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:20:18 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:20:18 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:20:18 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:20:18.380 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:20:18 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:20:18.856Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Mar  1 05:20:19 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:20:19 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Mar  1 05:20:19 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:20:19 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Mar  1 05:20:19 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:20:19 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Mar  1 05:20:19 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:20:19 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Mar  1 05:20:19 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1137: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 614 B/s rd, 0 op/s
Mar  1 05:20:19 np0005634532 nova_compute[257049]: 2026-03-01 10:20:19.584 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:20:19 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Mar  1 05:20:19 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: vms, start_after=
Mar  1 05:20:19 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Mar  1 05:20:19 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: vms, start_after=
Mar  1 05:20:19 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: volumes, start_after=
Mar  1 05:20:19 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: backups, start_after=
Mar  1 05:20:19 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: volumes, start_after=
Mar  1 05:20:19 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: images, start_after=
Mar  1 05:20:19 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: backups, start_after=
Mar  1 05:20:19 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: images, start_after=
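
The rbd_support module is reloading per-pool mirror-snapshot and trash-purge schedules for the RBD pools (vms, volumes, backups, images); start_after= is the pagination cursor for each reload. The configured schedules can be listed directly; a sketch:

    import subprocess

    # Enumerate the schedules the MirrorSnapshotScheduleHandler and
    # TrashPurgeScheduleHandler reload above.
    for args in (
        ["rbd", "mirror", "snapshot", "schedule", "ls", "--recursive"],
        ["rbd", "trash", "purge", "schedule", "ls", "--recursive"],
    ):
        subprocess.run(args, check=False)
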
Mar  1 05:20:20 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:20:20 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:20:20 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:20:20.305 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:20:20 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:20:20 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 05:20:20 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:20:20.382 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 05:20:21 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1138: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Mar  1 05:20:21 np0005634532 nova_compute[257049]: 2026-03-01 10:20:21.982 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:20:22 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:20:22 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:20:22 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:20:22 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:20:22.307 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:20:22 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:20:22 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000024s ======
Mar  1 05:20:22 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:20:22.384 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Mar  1 05:20:23 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1139: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Mar  1 05:20:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:20:23.894 167541 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Mar  1 05:20:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:20:23.895 167541 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Mar  1 05:20:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:20:23.895 167541 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Mar  1 05:20:24 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:20:23 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Mar  1 05:20:24 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:20:23 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Mar  1 05:20:24 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:20:23 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Mar  1 05:20:24 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:20:24 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Mar  1 05:20:24 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:20:24 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000024s ======
Mar  1 05:20:24 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:20:24.311 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Mar  1 05:20:24 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:20:24 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:20:24 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:20:24.386 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:20:24 np0005634532 podman[288323]: 2026-03-01 10:20:24.40395876 +0000 UTC m=+0.090651955 container health_status eba5e7c5bf1cb0694498c3b5d0f9b901dec36f2482ec40aef98fed1e15d9f18d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:bf1f44b6bfa655654f556ae3adca2582892e72550d916a5b0a8dbacb0e210bbe, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '08b9467e1b7e95537191fb2fa6825d176e65d7d6be048e9a0925db064969bbde-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:bf1f44b6bfa655654f556ae3adca2582892e72550d916a5b0a8dbacb0e210bbe', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ovn_controller, io.buildah.version=1.43.0, org.label-schema.build-date=20260223)
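
The health_status=healthy event above is podman's timer-driven run of the healthcheck declared in config_data: the 'test' command /openstack/healthcheck, executed against the bind-mounted /var/lib/openstack/healthchecks/ovn_controller directory. The same check can be run on demand; a sketch:

    import subprocess

    # Trigger the container healthcheck shown in the config_data above.
    r = subprocess.run(["podman", "healthcheck", "run", "ovn_controller"])
    print("healthy" if r.returncode == 0 else f"unhealthy (rc={r.returncode})")
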
Mar  1 05:20:24 np0005634532 nova_compute[257049]: 2026-03-01 10:20:24.586 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:20:25 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1140: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Mar  1 05:20:26 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:20:26 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 05:20:26 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:20:26.315 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 05:20:26 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:20:26 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:20:26 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:20:26.388 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:20:26 np0005634532 nova_compute[257049]: 2026-03-01 10:20:26.985 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:20:27 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: ::ffff:192.168.122.100 - - [01/Mar/2026:10:20:27] "GET /metrics HTTP/1.1" 200 48459 "" "Prometheus/2.51.0"
Mar  1 05:20:27 np0005634532 ceph-mgr[76134]: [prometheus INFO cherrypy.access.140604152982112] ::ffff:192.168.122.100 - - [01/Mar/2026:10:20:27] "GET /metrics HTTP/1.1" 200 48459 "" "Prometheus/2.51.0"
Mar  1 05:20:27 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:20:27 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1141: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Mar  1 05:20:27 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:20:27.308Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Mar  1 05:20:28 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:20:28 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 05:20:28 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:20:28.317 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 05:20:28 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:20:28 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:20:28 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:20:28.390 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:20:28 np0005634532 podman[288353]: 2026-03-01 10:20:28.412206225 +0000 UTC m=+0.097272188 container health_status 1df4dec302d8b40bce93714986cfc892519e8a31436399020d0034ae17ac64d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a5baae9e21dffd42c66f546b0b62f95c6af9f409d05e01e7483de42d5dd2a382, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '08b9467e1b7e95537191fb2fa6825d176e65d7d6be048e9a0925db064969bbde-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a5baae9e21dffd42c66f546b0b62f95c6af9f409d05e01e7483de42d5dd2a382', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Mar  1 05:20:28 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:20:28.857Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Mar  1 05:20:28 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:20:28.858Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Mar  1 05:20:28 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:20:28.858Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
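
These alertmanager entries show both ceph-dashboard webhook receivers failing the same way: the TCP dial to compute-1 (192.168.122.101:8443) and compute-2 (192.168.122.102:8443) times out, and retries are then canceled at the notification deadline, so the pattern points at network reachability or a dashboard not listening on those hosts rather than at this node. A quick probe independent of alertmanager:

    import socket

    # Reproduce the failing dials from the dispatcher errors above.
    for host in ("192.168.122.101", "192.168.122.102"):
        try:
            socket.create_connection((host, 8443), timeout=5).close()
            print(host, "port 8443 reachable")
        except OSError as exc:
            print(host, "port 8443 unreachable:", exc)
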
Mar  1 05:20:29 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:20:28 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Mar  1 05:20:29 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:20:28 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Mar  1 05:20:29 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:20:28 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Mar  1 05:20:29 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:20:29 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Mar  1 05:20:29 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1142: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Mar  1 05:20:29 np0005634532 nova_compute[257049]: 2026-03-01 10:20:29.587 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:20:30 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:20:30 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:20:30 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:20:30.320 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:20:30 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:20:30 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:20:30 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:20:30.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:20:31 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1143: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Mar  1 05:20:32 np0005634532 nova_compute[257049]: 2026-03-01 10:20:32.030 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:20:32 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:20:32 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:20:32 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:20:32 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:20:32.323 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:20:32 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:20:32 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:20:32 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:20:32.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:20:32 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Mar  1 05:20:32 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Mar  1 05:20:33 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1144: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Mar  1 05:20:34 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:20:33 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Mar  1 05:20:34 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:20:33 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Mar  1 05:20:34 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:20:33 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Mar  1 05:20:34 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:20:34 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Mar  1 05:20:34 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:20:34 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:20:34 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:20:34.326 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:20:34 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:20:34 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:20:34 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:20:34.395 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:20:34 np0005634532 nova_compute[257049]: 2026-03-01 10:20:34.591 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:20:35 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1145: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Mar  1 05:20:36 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:20:36 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:20:36 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:20:36.329 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:20:36 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:20:36 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:20:36 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:20:36.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:20:37 np0005634532 nova_compute[257049]: 2026-03-01 10:20:37.034 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:20:37 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: ::ffff:192.168.122.100 - - [01/Mar/2026:10:20:37] "GET /metrics HTTP/1.1" 200 48453 "" "Prometheus/2.51.0"
Mar  1 05:20:37 np0005634532 ceph-mgr[76134]: [prometheus INFO cherrypy.access.140604152982112] ::ffff:192.168.122.100 - - [01/Mar/2026:10:20:37] "GET /metrics HTTP/1.1" 200 48453 "" "Prometheus/2.51.0"
Mar  1 05:20:37 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:20:37 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1146: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Mar  1 05:20:37 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:20:37.310Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Mar  1 05:20:38 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:20:38 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:20:38 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:20:38.332 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:20:38 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:20:38 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:20:38 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:20:38.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:20:38 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:20:38.859Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Mar  1 05:20:39 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:20:38 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Mar  1 05:20:39 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:20:38 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Mar  1 05:20:39 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:20:38 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Mar  1 05:20:39 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:20:39 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Mar  1 05:20:39 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1147: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Mar  1 05:20:39 np0005634532 nova_compute[257049]: 2026-03-01 10:20:39.606 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:20:40 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:20:40 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000024s ======
Mar  1 05:20:40 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:20:40.335 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Mar  1 05:20:40 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:20:40 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 05:20:40 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:20:40.400 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 05:20:41 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1148: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Mar  1 05:20:42 np0005634532 nova_compute[257049]: 2026-03-01 10:20:42.057 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:20:42 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:20:42 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:20:42 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:20:42 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:20:42.338 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:20:42 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:20:42 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:20:42 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:20:42.402 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:20:43 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1149: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Mar  1 05:20:44 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:20:43 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Mar  1 05:20:44 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:20:43 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Mar  1 05:20:44 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:20:43 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Mar  1 05:20:44 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:20:44 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Mar  1 05:20:44 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:20:44 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:20:44 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:20:44.341 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:20:44 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:20:44 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:20:44 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:20:44.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:20:44 np0005634532 nova_compute[257049]: 2026-03-01 10:20:44.632 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:20:45 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1150: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Mar  1 05:20:46 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:20:46 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 05:20:46 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:20:46.344 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 05:20:46 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:20:46 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:20:46 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:20:46.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:20:46 np0005634532 nova_compute[257049]: 2026-03-01 10:20:46.976 257053 DEBUG oslo_service.periodic_task [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Mar  1 05:20:46 np0005634532 nova_compute[257049]: 2026-03-01 10:20:46.977 257053 DEBUG oslo_service.periodic_task [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Mar  1 05:20:46 np0005634532 nova_compute[257049]: 2026-03-01 10:20:46.977 257053 DEBUG oslo_service.periodic_task [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Mar  1 05:20:47 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: ::ffff:192.168.122.100 - - [01/Mar/2026:10:20:47] "GET /metrics HTTP/1.1" 200 48453 "" "Prometheus/2.51.0"
Mar  1 05:20:47 np0005634532 ceph-mgr[76134]: [prometheus INFO cherrypy.access.140604152982112] ::ffff:192.168.122.100 - - [01/Mar/2026:10:20:47] "GET /metrics HTTP/1.1" 200 48453 "" "Prometheus/2.51.0"
Mar  1 05:20:47 np0005634532 nova_compute[257049]: 2026-03-01 10:20:47.060 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:20:47 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:20:47 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1151: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Mar  1 05:20:47 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:20:47.311Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Mar  1 05:20:47 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Mar  1 05:20:47 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Mar  1 05:20:47 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Mar  1 05:20:47 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Mar  1 05:20:47 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Mar  1 05:20:47 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Mar  1 05:20:47 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Mar  1 05:20:47 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Mar  1 05:20:48 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:20:48 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000024s ======
Mar  1 05:20:48 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:20:48.348 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Mar  1 05:20:48 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:20:48 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:20:48 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:20:48.406 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:20:48 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:20:48.862Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Mar  1 05:20:49 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:20:48 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Mar  1 05:20:49 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:20:48 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Mar  1 05:20:49 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:20:48 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Mar  1 05:20:49 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:20:49 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Mar  1 05:20:49 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1152: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Mar  1 05:20:49 np0005634532 nova_compute[257049]: 2026-03-01 10:20:49.675 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:20:50 np0005634532 nova_compute[257049]: 2026-03-01 10:20:50.009 257053 DEBUG oslo_service.periodic_task [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Mar  1 05:20:50 np0005634532 nova_compute[257049]: 2026-03-01 10:20:50.010 257053 DEBUG oslo_service.periodic_task [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Mar  1 05:20:50 np0005634532 nova_compute[257049]: 2026-03-01 10:20:50.010 257053 DEBUG nova.compute.manager [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Mar  1 05:20:50 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:20:50 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:20:50 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:20:50.351 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:20:50 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:20:50 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 05:20:50 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:20:50.409 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 05:20:51 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1153: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Mar  1 05:20:51 np0005634532 nova_compute[257049]: 2026-03-01 10:20:51.977 257053 DEBUG oslo_service.periodic_task [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Mar  1 05:20:52 np0005634532 nova_compute[257049]: 2026-03-01 10:20:52.000 257053 DEBUG oslo_concurrency.lockutils [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Mar  1 05:20:52 np0005634532 nova_compute[257049]: 2026-03-01 10:20:52.001 257053 DEBUG oslo_concurrency.lockutils [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Mar  1 05:20:52 np0005634532 nova_compute[257049]: 2026-03-01 10:20:52.001 257053 DEBUG oslo_concurrency.lockutils [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Mar  1 05:20:52 np0005634532 nova_compute[257049]: 2026-03-01 10:20:52.002 257053 DEBUG nova.compute.resource_tracker [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Mar  1 05:20:52 np0005634532 nova_compute[257049]: 2026-03-01 10:20:52.002 257053 DEBUG oslo_concurrency.processutils [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Mar  1 05:20:52 np0005634532 nova_compute[257049]: 2026-03-01 10:20:52.063 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:20:52 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:20:52 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:20:52 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:20:52 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:20:52.354 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:20:52 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:20:52 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:20:52 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:20:52.411 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:20:52 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Mar  1 05:20:52 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3644982184' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Mar  1 05:20:52 np0005634532 nova_compute[257049]: 2026-03-01 10:20:52.530 257053 DEBUG oslo_concurrency.processutils [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.528s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Mar  1 05:20:52 np0005634532 nova_compute[257049]: 2026-03-01 10:20:52.664 257053 WARNING nova.virt.libvirt.driver [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Mar  1 05:20:52 np0005634532 nova_compute[257049]: 2026-03-01 10:20:52.665 257053 DEBUG nova.compute.resource_tracker [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4533MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Mar  1 05:20:52 np0005634532 nova_compute[257049]: 2026-03-01 10:20:52.665 257053 DEBUG oslo_concurrency.lockutils [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Mar  1 05:20:52 np0005634532 nova_compute[257049]: 2026-03-01 10:20:52.665 257053 DEBUG oslo_concurrency.lockutils [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Mar  1 05:20:52 np0005634532 nova_compute[257049]: 2026-03-01 10:20:52.787 257053 DEBUG nova.compute.resource_tracker [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Mar  1 05:20:52 np0005634532 nova_compute[257049]: 2026-03-01 10:20:52.787 257053 DEBUG nova.compute.resource_tracker [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Mar  1 05:20:52 np0005634532 nova_compute[257049]: 2026-03-01 10:20:52.868 257053 DEBUG oslo_concurrency.processutils [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Mar  1 05:20:53 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1154: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Mar  1 05:20:53 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Mar  1 05:20:53 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2163216082' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Mar  1 05:20:53 np0005634532 nova_compute[257049]: 2026-03-01 10:20:53.292 257053 DEBUG oslo_concurrency.processutils [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.424s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Mar  1 05:20:53 np0005634532 nova_compute[257049]: 2026-03-01 10:20:53.300 257053 DEBUG nova.compute.provider_tree [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Inventory has not changed in ProviderTree for provider: 018d246d-1e01-4168-9128-598c5501111b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Mar  1 05:20:53 np0005634532 nova_compute[257049]: 2026-03-01 10:20:53.323 257053 DEBUG nova.scheduler.client.report [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Inventory has not changed for provider 018d246d-1e01-4168-9128-598c5501111b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Mar  1 05:20:53 np0005634532 nova_compute[257049]: 2026-03-01 10:20:53.324 257053 DEBUG nova.compute.resource_tracker [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Mar  1 05:20:53 np0005634532 nova_compute[257049]: 2026-03-01 10:20:53.325 257053 DEBUG oslo_concurrency.lockutils [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.659s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Mar  1 05:20:54 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:20:53 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Mar  1 05:20:54 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:20:53 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Mar  1 05:20:54 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:20:53 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Mar  1 05:20:54 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:20:54 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Mar  1 05:20:54 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:20:54 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:20:54 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:20:54.358 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:20:54 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:20:54 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:20:54 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:20:54.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:20:54 np0005634532 nova_compute[257049]: 2026-03-01 10:20:54.724 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:20:54 np0005634532 nova_compute[257049]: 2026-03-01 10:20:54.977 257053 DEBUG oslo_service.periodic_task [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Mar  1 05:20:54 np0005634532 nova_compute[257049]: 2026-03-01 10:20:54.977 257053 DEBUG oslo_service.periodic_task [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Mar  1 05:20:54 np0005634532 nova_compute[257049]: 2026-03-01 10:20:54.978 257053 DEBUG nova.compute.manager [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Mar  1 05:20:54 np0005634532 nova_compute[257049]: 2026-03-01 10:20:54.994 257053 DEBUG nova.compute.manager [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Mar  1 05:20:55 np0005634532 ceph-mon[75825]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Mar  1 05:20:55 np0005634532 ceph-mon[75825]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 2400.0 total, 600.0 interval#012Cumulative writes: 7360 writes, 32K keys, 7360 commit groups, 1.0 writes per commit group, ingest: 0.06 GB, 0.02 MB/s#012Cumulative WAL: 7360 writes, 7360 syncs, 1.00 writes per sync, written: 0.06 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1592 writes, 7119 keys, 1592 commit groups, 1.0 writes per commit group, ingest: 11.89 MB, 0.02 MB/s#012Interval WAL: 1592 writes, 1592 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0    135.5      0.39              0.11        18    0.021       0      0       0.0       0.0#012  L6      1/0   13.99 MB   0.0      0.3     0.1      0.2       0.2      0.0       0.0   4.3    212.3    181.4      1.24              0.47        17    0.073     93K   9475       0.0       0.0#012 Sum      1/0   13.99 MB   0.0      0.3     0.1      0.2       0.3      0.1       0.0   5.3    161.9    170.5      1.63              0.58        35    0.047     93K   9475       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   5.8    220.6    226.8      0.30              0.15         8    0.037     26K   2577       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.3     0.1      0.2       0.2      0.0       0.0   0.0    212.3    181.4      1.24              0.47        17    0.073     93K   9475       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0    136.4      0.38              0.11        17    0.023       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     18.2      0.00              0.00         1    0.003       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 2400.0 total, 600.0 interval#012Flush(GB): cumulative 0.051, interval 0.011#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.27 GB write, 0.12 MB/s write, 0.26 GB read, 0.11 MB/s read, 1.6 seconds#012Interval compaction: 0.07 GB write, 0.11 MB/s write, 0.06 GB read, 0.11 MB/s read, 0.3 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x563d94b81350#2 capacity: 304.00 MB usage: 23.45 MB table_size: 0 occupancy: 18446744073709551615 collections: 5 last_copies: 0 last_secs: 0.000212 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(1473,22.71 MB,7.47187%) FilterBlock(36,278.30 KB,0.0893994%) IndexBlock(36,479.98 KB,0.154189%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Mar  1 05:20:55 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1155: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Mar  1 05:20:55 np0005634532 podman[288469]: 2026-03-01 10:20:55.370757017 +0000 UTC m=+0.064073993 container health_status eba5e7c5bf1cb0694498c3b5d0f9b901dec36f2482ec40aef98fed1e15d9f18d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:bf1f44b6bfa655654f556ae3adca2582892e72550d916a5b0a8dbacb0e210bbe, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.43.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '08b9467e1b7e95537191fb2fa6825d176e65d7d6be048e9a0925db064969bbde-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:bf1f44b6bfa655654f556ae3adca2582892e72550d916a5b0a8dbacb0e210bbe', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20260223, tcib_managed=true)
Mar  1 05:20:56 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:20:56 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:20:56 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:20:56.361 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:20:56 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:20:56 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:20:56 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:20:56.415 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:20:56 np0005634532 nova_compute[257049]: 2026-03-01 10:20:56.978 257053 DEBUG oslo_service.periodic_task [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Mar  1 05:20:56 np0005634532 nova_compute[257049]: 2026-03-01 10:20:56.978 257053 DEBUG oslo_service.periodic_task [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Mar  1 05:20:56 np0005634532 nova_compute[257049]: 2026-03-01 10:20:56.979 257053 DEBUG nova.compute.manager [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Mar  1 05:20:57 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: ::ffff:192.168.122.100 - - [01/Mar/2026:10:20:57] "GET /metrics HTTP/1.1" 200 48457 "" "Prometheus/2.51.0"
Mar  1 05:20:57 np0005634532 ceph-mgr[76134]: [prometheus INFO cherrypy.access.140604152982112] ::ffff:192.168.122.100 - - [01/Mar/2026:10:20:57] "GET /metrics HTTP/1.1" 200 48457 "" "Prometheus/2.51.0"
Mar  1 05:20:57 np0005634532 nova_compute[257049]: 2026-03-01 10:20:57.065 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:20:57 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:20:57 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1156: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Mar  1 05:20:57 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:20:57.312Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Mar  1 05:20:57 np0005634532 nova_compute[257049]: 2026-03-01 10:20:57.994 257053 DEBUG oslo_service.periodic_task [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Mar  1 05:20:57 np0005634532 nova_compute[257049]: 2026-03-01 10:20:57.994 257053 DEBUG nova.compute.manager [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Mar  1 05:20:57 np0005634532 nova_compute[257049]: 2026-03-01 10:20:57.995 257053 DEBUG nova.compute.manager [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Mar  1 05:20:58 np0005634532 nova_compute[257049]: 2026-03-01 10:20:58.008 257053 DEBUG nova.compute.manager [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Mar  1 05:20:58 np0005634532 nova_compute[257049]: 2026-03-01 10:20:58.009 257053 DEBUG oslo_service.periodic_task [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Mar  1 05:20:58 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Mar  1 05:20:58 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4095078651' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Mar  1 05:20:58 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Mar  1 05:20:58 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4095078651' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Mar  1 05:20:58 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:20:58 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:20:58 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:20:58.363 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:20:58 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:20:58 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:20:58 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:20:58.418 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:20:58 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:20:58.863Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Mar  1 05:20:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:20:59 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Mar  1 05:20:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:20:59 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Mar  1 05:20:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:20:59 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Mar  1 05:20:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:20:59 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Mar  1 05:20:59 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1157: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Mar  1 05:20:59 np0005634532 podman[288502]: 2026-03-01 10:20:59.367707255 +0000 UTC m=+0.054179430 container health_status 1df4dec302d8b40bce93714986cfc892519e8a31436399020d0034ae17ac64d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a5baae9e21dffd42c66f546b0b62f95c6af9f409d05e01e7483de42d5dd2a382, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20260223, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '08b9467e1b7e95537191fb2fa6825d176e65d7d6be048e9a0925db064969bbde-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a5baae9e21dffd42c66f546b0b62f95c6af9f409d05e01e7483de42d5dd2a382', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, io.buildah.version=1.43.0)
Mar  1 05:20:59 np0005634532 nova_compute[257049]: 2026-03-01 10:20:59.728 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:21:00 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:21:00 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 05:21:00 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:21:00.367 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 05:21:00 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:21:00 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:21:00 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:21:00.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:21:00 np0005634532 nova_compute[257049]: 2026-03-01 10:21:00.987 257053 DEBUG oslo_service.periodic_task [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Mar  1 05:21:01 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1158: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Mar  1 05:21:02 np0005634532 nova_compute[257049]: 2026-03-01 10:21:02.070 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:21:02 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:21:02 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:21:02 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:21:02 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:21:02.370 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:21:02 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:21:02 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000024s ======
Mar  1 05:21:02 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:21:02.421 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Mar  1 05:21:02 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Mar  1 05:21:02 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Mar  1 05:21:03 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1159: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Mar  1 05:21:04 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:21:04 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Mar  1 05:21:04 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:21:04 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Mar  1 05:21:04 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:21:04 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Mar  1 05:21:04 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:21:04 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Mar  1 05:21:04 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:21:04 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:21:04 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:21:04.372 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:21:04 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:21:04 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:21:04 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:21:04.423 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:21:04 np0005634532 nova_compute[257049]: 2026-03-01 10:21:04.778 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:21:05 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1160: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Mar  1 05:21:06 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:21:06 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:21:06 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:21:06.376 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:21:06 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:21:06 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:21:06 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:21:06.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:21:07 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: ::ffff:192.168.122.100 - - [01/Mar/2026:10:21:07] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Mar  1 05:21:07 np0005634532 ceph-mgr[76134]: [prometheus INFO cherrypy.access.140604152982112] ::ffff:192.168.122.100 - - [01/Mar/2026:10:21:07] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Mar  1 05:21:07 np0005634532 nova_compute[257049]: 2026-03-01 10:21:07.070 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:21:07 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:21:07 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1161: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Mar  1 05:21:07 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:21:07.313Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Mar  1 05:21:08 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:21:08 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:21:08 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:21:08.378 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:21:08 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:21:08 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:21:08 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:21:08.427 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:21:08 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:21:08.865Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Mar  1 05:21:08 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:21:08.865Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Mar  1 05:21:09 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:21:08 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Mar  1 05:21:09 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:21:08 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Mar  1 05:21:09 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:21:08 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Mar  1 05:21:09 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:21:09 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Mar  1 05:21:09 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1162: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Mar  1 05:21:09 np0005634532 nova_compute[257049]: 2026-03-01 10:21:09.780 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:21:10 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:21:10 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:21:10 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:21:10.381 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:21:10 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:21:10 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:21:10 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:21:10.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:21:11 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1163: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Mar  1 05:21:12 np0005634532 nova_compute[257049]: 2026-03-01 10:21:12.073 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:21:12 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:21:12 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:21:12 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 05:21:12 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:21:12.383 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 05:21:12 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:21:12 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:21:12 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:21:12.431 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:21:13 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1164: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Mar  1 05:21:14 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:21:13 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Mar  1 05:21:14 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:21:13 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Mar  1 05:21:14 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:21:13 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Mar  1 05:21:14 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:21:14 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Mar  1 05:21:14 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:21:14 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:21:14 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:21:14.385 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:21:14 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:21:14 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:21:14 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:21:14.433 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:21:14 np0005634532 podman[288687]: 2026-03-01 10:21:14.598706068 +0000 UTC m=+0.085887908 container exec 6664049ace048b4adddae1365cde7c16773d892ae197c1350f58a5e5b1183392 (image=quay.io/ceph/ceph:v19, name=ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mon-compute-0, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Mar  1 05:21:14 np0005634532 podman[288687]: 2026-03-01 10:21:14.691366721 +0000 UTC m=+0.178548521 container exec_died 6664049ace048b4adddae1365cde7c16773d892ae197c1350f58a5e5b1183392 (image=quay.io/ceph/ceph:v19, name=ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mon-compute-0, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Mar  1 05:21:14 np0005634532 nova_compute[257049]: 2026-03-01 10:21:14.783 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:21:15 np0005634532 podman[288819]: 2026-03-01 10:21:15.098849796 +0000 UTC m=+0.044145084 container exec 1fcbd8a251307aedf47cfda71fbf2d9d43c45db96f7135811488f98478336155 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Mar  1 05:21:15 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1165: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Mar  1 05:21:15 np0005634532 podman[288844]: 2026-03-01 10:21:15.160109679 +0000 UTC m=+0.048023009 container exec_died 1fcbd8a251307aedf47cfda71fbf2d9d43c45db96f7135811488f98478336155 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Mar  1 05:21:15 np0005634532 podman[288819]: 2026-03-01 10:21:15.16464363 +0000 UTC m=+0.109938898 container exec_died 1fcbd8a251307aedf47cfda71fbf2d9d43c45db96f7135811488f98478336155 (image=quay.io/prometheus/node-exporter:v1.7.0, name=ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-node-exporter-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Mar  1 05:21:15 np0005634532 podman[288893]: 2026-03-01 10:21:15.337852439 +0000 UTC m=+0.042874952 container exec 2f9b37e0130c0cc03064ea231c9242d7bbbbeba52fa751385a16fea7b57e54bf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Mar  1 05:21:15 np0005634532 podman[288914]: 2026-03-01 10:21:15.397139934 +0000 UTC m=+0.046462481 container exec_died 2f9b37e0130c0cc03064ea231c9242d7bbbbeba52fa751385a16fea7b57e54bf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1)
Mar  1 05:21:15 np0005634532 podman[288893]: 2026-03-01 10:21:15.402183498 +0000 UTC m=+0.107205981 container exec_died 2f9b37e0130c0cc03064ea231c9242d7bbbbeba52fa751385a16fea7b57e54bf (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325)
Mar  1 05:21:15 np0005634532 podman[288960]: 2026-03-01 10:21:15.55270818 +0000 UTC m=+0.047918886 container exec ae199754250e4490488a372482082794890684d286b81ae5dbba7ee71da86544 (image=quay.io/ceph/haproxy:2.3, name=ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-haproxy-nfs-cephfs-compute-0-wdbjdw)
Mar  1 05:21:15 np0005634532 podman[288960]: 2026-03-01 10:21:15.563346651 +0000 UTC m=+0.058557337 container exec_died ae199754250e4490488a372482082794890684d286b81ae5dbba7ee71da86544 (image=quay.io/ceph/haproxy:2.3, name=ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-haproxy-nfs-cephfs-compute-0-wdbjdw)
Mar  1 05:21:15 np0005634532 podman[289027]: 2026-03-01 10:21:15.74960097 +0000 UTC m=+0.053617666 container exec 05be6442ca5291b879ba04bc347755b274cffc38684767a6255c033b9adb2149 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-keepalived-nfs-cephfs-compute-0-qbujzh, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9, release=1793, build-date=2023-02-22T09:23:20, io.openshift.expose-services=, architecture=x86_64, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, io.buildah.version=1.28.2, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=keepalived for Ceph, version=2.2.4, summary=Provides keepalived on RHEL 9 for Ceph., maintainer=Guillaume Abrioux <gabrioux@redhat.com>, io.openshift.tags=Ceph keepalived, vendor=Red Hat, Inc., io.k8s.display-name=Keepalived on RHEL 9, name=keepalived, vcs-type=git, com.redhat.component=keepalived-container)
Mar  1 05:21:15 np0005634532 podman[289027]: 2026-03-01 10:21:15.761579364 +0000 UTC m=+0.065596000 container exec_died 05be6442ca5291b879ba04bc347755b274cffc38684767a6255c033b9adb2149 (image=quay.io/ceph/keepalived:2.2.4, name=ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-keepalived-nfs-cephfs-compute-0-qbujzh, build-date=2023-02-22T09:23:20, maintainer=Guillaume Abrioux <gabrioux@redhat.com>, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=Ceph keepalived, architecture=x86_64, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9-minimal/images/9.1.0-1793, release=1793, io.openshift.expose-services=, vcs-type=git, version=2.2.4, com.redhat.component=keepalived-container, io.buildah.version=1.28.2, io.k8s.display-name=Keepalived on RHEL 9, summary=Provides keepalived on RHEL 9 for Ceph., vendor=Red Hat, Inc., name=keepalived, description=keepalived for Ceph, vcs-ref=befaf1f5ec7b874aef2651ee1384d51828504eb9)
Mar  1 05:21:15 np0005634532 podman[289093]: 2026-03-01 10:21:15.955831509 +0000 UTC m=+0.050372637 container exec 42a0f9f88cfb3d053b26a487d34447f1453d64f75d2848aac14e4c543e6921a8 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Mar  1 05:21:15 np0005634532 podman[289093]: 2026-03-01 10:21:15.977121211 +0000 UTC m=+0.071662339 container exec_died 42a0f9f88cfb3d053b26a487d34447f1453d64f75d2848aac14e4c543e6921a8 (image=quay.io/prometheus/alertmanager:v0.25.0, name=ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Mar  1 05:21:16 np0005634532 podman[289165]: 2026-03-01 10:21:16.187180544 +0000 UTC m=+0.050466729 container exec 40811b6273ff1144683076a6db09789183012c587046fdcbb700953f5b5ef2ca (image=quay.io/ceph/grafana:10.4.0, name=ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Mar  1 05:21:16 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:21:16 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:21:16 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:21:16.389 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:21:16 np0005634532 podman[289165]: 2026-03-01 10:21:16.392740397 +0000 UTC m=+0.256026552 container exec_died 40811b6273ff1144683076a6db09789183012c587046fdcbb700953f5b5ef2ca (image=quay.io/ceph/grafana:10.4.0, name=ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0, maintainer=Grafana Labs <hello@grafana.com>)
Mar  1 05:21:16 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:21:16 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000024s ======
Mar  1 05:21:16 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:21:16.434 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Mar  1 05:21:16 np0005634532 podman[289279]: 2026-03-01 10:21:16.668292266 +0000 UTC m=+0.047764842 container exec 225e705b62c2d5344d04ef96534eb44bff62165a23fc4cc7bc75fa8452e91ac1 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Mar  1 05:21:16 np0005634532 podman[289279]: 2026-03-01 10:21:16.719360779 +0000 UTC m=+0.098833355 container exec_died 225e705b62c2d5344d04ef96534eb44bff62165a23fc4cc7bc75fa8452e91ac1 (image=quay.io/prometheus/prometheus:v2.51.0, name=ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-prometheus-compute-0, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Mar  1 05:21:16 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Mar  1 05:21:16 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 05:21:16 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Mar  1 05:21:16 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 05:21:17 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: ::ffff:192.168.122.100 - - [01/Mar/2026:10:21:17] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Mar  1 05:21:17 np0005634532 ceph-mgr[76134]: [prometheus INFO cherrypy.access.140604152982112] ::ffff:192.168.122.100 - - [01/Mar/2026:10:21:17] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Mar  1 05:21:17 np0005634532 nova_compute[257049]: 2026-03-01 10:21:17.073 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:21:17 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:21:17 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1166: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Mar  1 05:21:17 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Mar  1 05:21:17 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Mar  1 05:21:17 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Mar  1 05:21:17 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Mar  1 05:21:17 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1167: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 607 B/s rd, 0 op/s
Mar  1 05:21:17 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Mar  1 05:21:17 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 05:21:17 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Mar  1 05:21:17 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 05:21:17 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Mar  1 05:21:17 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Mar  1 05:21:17 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Mar  1 05:21:17 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Mar  1 05:21:17 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Mar  1 05:21:17 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Mar  1 05:21:17 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:21:17.314Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Mar  1 05:21:17 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Mar  1 05:21:17 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Mar  1 05:21:17 np0005634532 podman[289496]: 2026-03-01 10:21:17.703725037 +0000 UTC m=+0.039117561 container create 2b9b8d922f9ff5eb003940ca0e5a7fd4b6816697daf543e437a2db7bfd2884e6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_bose, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Mar  1 05:21:17 np0005634532 systemd[1]: Started libpod-conmon-2b9b8d922f9ff5eb003940ca0e5a7fd4b6816697daf543e437a2db7bfd2884e6.scope.
Mar  1 05:21:17 np0005634532 systemd[1]: Started libcrun container.
Mar  1 05:21:17 np0005634532 podman[289496]: 2026-03-01 10:21:17.762751135 +0000 UTC m=+0.098143659 container init 2b9b8d922f9ff5eb003940ca0e5a7fd4b6816697daf543e437a2db7bfd2884e6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_bose, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Mar  1 05:21:17 np0005634532 podman[289496]: 2026-03-01 10:21:17.767282356 +0000 UTC m=+0.102674880 container start 2b9b8d922f9ff5eb003940ca0e5a7fd4b6816697daf543e437a2db7bfd2884e6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_bose, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Mar  1 05:21:17 np0005634532 wonderful_bose[289512]: 167 167
Mar  1 05:21:17 np0005634532 systemd[1]: libpod-2b9b8d922f9ff5eb003940ca0e5a7fd4b6816697daf543e437a2db7bfd2884e6.scope: Deactivated successfully.
Mar  1 05:21:17 np0005634532 podman[289496]: 2026-03-01 10:21:17.771300214 +0000 UTC m=+0.106692738 container attach 2b9b8d922f9ff5eb003940ca0e5a7fd4b6816697daf543e437a2db7bfd2884e6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_bose, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, ceph=True)
Mar  1 05:21:17 np0005634532 conmon[289512]: conmon 2b9b8d922f9ff5eb0039 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-2b9b8d922f9ff5eb003940ca0e5a7fd4b6816697daf543e437a2db7bfd2884e6.scope/container/memory.events
Mar  1 05:21:17 np0005634532 podman[289496]: 2026-03-01 10:21:17.771777266 +0000 UTC m=+0.107169790 container died 2b9b8d922f9ff5eb003940ca0e5a7fd4b6816697daf543e437a2db7bfd2884e6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_bose, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Mar  1 05:21:17 np0005634532 podman[289496]: 2026-03-01 10:21:17.685953321 +0000 UTC m=+0.021345935 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Mar  1 05:21:17 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 05:21:17 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 05:21:17 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Mar  1 05:21:17 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 05:21:17 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 05:21:17 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Mar  1 05:21:17 np0005634532 systemd[1]: var-lib-containers-storage-overlay-20238e70c711ddb76d34fa2610f0f12788e5aff3f32e20c67e722353c13f3aa8-merged.mount: Deactivated successfully.
Mar  1 05:21:17 np0005634532 podman[289496]: 2026-03-01 10:21:17.810950327 +0000 UTC m=+0.146342851 container remove 2b9b8d922f9ff5eb003940ca0e5a7fd4b6816697daf543e437a2db7bfd2884e6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=wonderful_bose, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Mar  1 05:21:17 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Mar  1 05:21:17 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Mar  1 05:21:17 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Mar  1 05:21:17 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Mar  1 05:21:17 np0005634532 systemd[1]: libpod-conmon-2b9b8d922f9ff5eb003940ca0e5a7fd4b6816697daf543e437a2db7bfd2884e6.scope: Deactivated successfully.
Mar  1 05:21:17 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Mar  1 05:21:17 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Mar  1 05:21:17 np0005634532 podman[289535]: 2026-03-01 10:21:17.92155216 +0000 UTC m=+0.031167625 container create e8ba7b32565eb3796e076dd0a0639601a24ebc0a5eb9ce26f5b59eb1342295ed (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_cray, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Mar  1 05:21:17 np0005634532 systemd[1]: Started libpod-conmon-e8ba7b32565eb3796e076dd0a0639601a24ebc0a5eb9ce26f5b59eb1342295ed.scope.
Mar  1 05:21:17 np0005634532 systemd[1]: Started libcrun container.
Mar  1 05:21:17 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff839cb7f69ece02ed6418131430e1cf0952d1b947d4307a4c41ba5fe22fe8f2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Mar  1 05:21:17 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff839cb7f69ece02ed6418131430e1cf0952d1b947d4307a4c41ba5fe22fe8f2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Mar  1 05:21:17 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff839cb7f69ece02ed6418131430e1cf0952d1b947d4307a4c41ba5fe22fe8f2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Mar  1 05:21:17 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff839cb7f69ece02ed6418131430e1cf0952d1b947d4307a4c41ba5fe22fe8f2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Mar  1 05:21:17 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff839cb7f69ece02ed6418131430e1cf0952d1b947d4307a4c41ba5fe22fe8f2/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Mar  1 05:21:17 np0005634532 podman[289535]: 2026-03-01 10:21:17.981159362 +0000 UTC m=+0.090774847 container init e8ba7b32565eb3796e076dd0a0639601a24ebc0a5eb9ce26f5b59eb1342295ed (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_cray, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Mar  1 05:21:17 np0005634532 podman[289535]: 2026-03-01 10:21:17.985626712 +0000 UTC m=+0.095242177 container start e8ba7b32565eb3796e076dd0a0639601a24ebc0a5eb9ce26f5b59eb1342295ed (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_cray, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250325)
Mar  1 05:21:17 np0005634532 podman[289535]: 2026-03-01 10:21:17.988772669 +0000 UTC m=+0.098388154 container attach e8ba7b32565eb3796e076dd0a0639601a24ebc0a5eb9ce26f5b59eb1342295ed (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_cray, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image)
Mar  1 05:21:18 np0005634532 podman[289535]: 2026-03-01 10:21:17.907744622 +0000 UTC m=+0.017360077 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Mar  1 05:21:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] _maybe_adjust
Mar  1 05:21:18 np0005634532 ceph-mgr[76134]: [balancer INFO root] Optimize plan auto_2026-03-01_10:21:18
Mar  1 05:21:18 np0005634532 ceph-mgr[76134]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Mar  1 05:21:18 np0005634532 ceph-mgr[76134]: [balancer INFO root] do_upmap
Mar  1 05:21:18 np0005634532 ceph-mgr[76134]: [balancer INFO root] pools ['volumes', 'default.rgw.control', '.rgw.root', '.nfs', '.mgr', 'default.rgw.meta', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'vms', 'images', 'backups', 'default.rgw.log']
Mar  1 05:21:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:21:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Mar  1 05:21:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:21:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Mar  1 05:21:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:21:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Mar  1 05:21:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:21:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Mar  1 05:21:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:21:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Mar  1 05:21:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:21:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Mar  1 05:21:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:21:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Mar  1 05:21:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:21:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Mar  1 05:21:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:21:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Mar  1 05:21:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:21:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Mar  1 05:21:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:21:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Mar  1 05:21:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:21:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Mar  1 05:21:18 np0005634532 ceph-mgr[76134]: [balancer INFO root] prepared 0/10 upmap changes
Mar  1 05:21:18 np0005634532 funny_cray[289552]: --> passed data devices: 0 physical, 1 LVM
Mar  1 05:21:18 np0005634532 funny_cray[289552]: --> All data devices are unavailable
Mar  1 05:21:18 np0005634532 systemd[1]: libpod-e8ba7b32565eb3796e076dd0a0639601a24ebc0a5eb9ce26f5b59eb1342295ed.scope: Deactivated successfully.
Mar  1 05:21:18 np0005634532 podman[289535]: 2026-03-01 10:21:18.294811867 +0000 UTC m=+0.404427332 container died e8ba7b32565eb3796e076dd0a0639601a24ebc0a5eb9ce26f5b59eb1342295ed (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_cray, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=squid)
Mar  1 05:21:18 np0005634532 systemd[1]: var-lib-containers-storage-overlay-ff839cb7f69ece02ed6418131430e1cf0952d1b947d4307a4c41ba5fe22fe8f2-merged.mount: Deactivated successfully.
Mar  1 05:21:18 np0005634532 podman[289535]: 2026-03-01 10:21:18.332883261 +0000 UTC m=+0.442498726 container remove e8ba7b32565eb3796e076dd0a0639601a24ebc0a5eb9ce26f5b59eb1342295ed (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=funny_cray, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True)
Mar  1 05:21:18 np0005634532 systemd[1]: libpod-conmon-e8ba7b32565eb3796e076dd0a0639601a24ebc0a5eb9ce26f5b59eb1342295ed.scope: Deactivated successfully.
Mar  1 05:21:18 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:21:18 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:21:18 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:21:18.392 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:21:18 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:21:18 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:21:18 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:21:18.435 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:21:18 np0005634532 podman[289671]: 2026-03-01 10:21:18.823379302 +0000 UTC m=+0.050178202 container create d0c9e99e3038c0cc739c4cca02b800e909092536d8ef9bf10470d7e964cf012d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_wilson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9)
Mar  1 05:21:18 np0005634532 systemd[1]: Started libpod-conmon-d0c9e99e3038c0cc739c4cca02b800e909092536d8ef9bf10470d7e964cf012d.scope.
Mar  1 05:21:18 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:21:18.866Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Mar  1 05:21:18 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:21:18.868Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Mar  1 05:21:18 np0005634532 systemd[1]: Started libcrun container.
Mar  1 05:21:18 np0005634532 podman[289671]: 2026-03-01 10:21:18.805086123 +0000 UTC m=+0.031885013 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Mar  1 05:21:18 np0005634532 podman[289671]: 2026-03-01 10:21:18.906320477 +0000 UTC m=+0.133119367 container init d0c9e99e3038c0cc739c4cca02b800e909092536d8ef9bf10470d7e964cf012d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_wilson, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Mar  1 05:21:18 np0005634532 podman[289671]: 2026-03-01 10:21:18.910584341 +0000 UTC m=+0.137383211 container start d0c9e99e3038c0cc739c4cca02b800e909092536d8ef9bf10470d7e964cf012d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_wilson, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Mar  1 05:21:18 np0005634532 podman[289671]: 2026-03-01 10:21:18.913655487 +0000 UTC m=+0.140454357 container attach d0c9e99e3038c0cc739c4cca02b800e909092536d8ef9bf10470d7e964cf012d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_wilson, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Mar  1 05:21:18 np0005634532 systemd[1]: libpod-d0c9e99e3038c0cc739c4cca02b800e909092536d8ef9bf10470d7e964cf012d.scope: Deactivated successfully.
Mar  1 05:21:18 np0005634532 distracted_wilson[289687]: 167 167
Mar  1 05:21:18 np0005634532 conmon[289687]: conmon d0c9e99e3038c0cc739c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d0c9e99e3038c0cc739c4cca02b800e909092536d8ef9bf10470d7e964cf012d.scope/container/memory.events
Mar  1 05:21:18 np0005634532 podman[289671]: 2026-03-01 10:21:18.915862291 +0000 UTC m=+0.142661161 container died d0c9e99e3038c0cc739c4cca02b800e909092536d8ef9bf10470d7e964cf012d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_wilson, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Mar  1 05:21:18 np0005634532 systemd[1]: var-lib-containers-storage-overlay-f6fef86aa140e8ba32d60f017ada668e4368d3304f5a9e3b100b016ded5fd06b-merged.mount: Deactivated successfully.
Mar  1 05:21:18 np0005634532 podman[289671]: 2026-03-01 10:21:18.947685841 +0000 UTC m=+0.174484701 container remove d0c9e99e3038c0cc739c4cca02b800e909092536d8ef9bf10470d7e964cf012d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_wilson, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Mar  1 05:21:18 np0005634532 systemd[1]: libpod-conmon-d0c9e99e3038c0cc739c4cca02b800e909092536d8ef9bf10470d7e964cf012d.scope: Deactivated successfully.
Mar  1 05:21:19 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:21:18 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Mar  1 05:21:19 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:21:19 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Mar  1 05:21:19 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:21:19 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Mar  1 05:21:19 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:21:19 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
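The grace-lift check above keeps returning ret=-45, a negative errno from ganesha's rados_cluster backend. A minimal sketch for decoding such codes into symbolic names, assuming Linux errno numbering (diagnostic only, not part of ganesha):

    import errno
    import os

    ret = -45  # value reported by rados_cluster_grace_enforcing above
    # errno.errorcode maps the positive number to its symbolic name;
    # os.strerror gives the human-readable description.
    print(errno.errorcode.get(-ret, "unknown"), "-", os.strerror(-ret))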
Mar  1 05:21:19 np0005634532 podman[289710]: 2026-03-01 10:21:19.068924745 +0000 UTC m=+0.030985241 container create cff063d854fdd4ac731b11612f3850aadddd3da28b83fca6c8b167d85ccb6b74 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_hellman, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Mar  1 05:21:19 np0005634532 systemd[1]: Started libpod-conmon-cff063d854fdd4ac731b11612f3850aadddd3da28b83fca6c8b167d85ccb6b74.scope.
Mar  1 05:21:19 np0005634532 systemd[1]: Started libcrun container.
Mar  1 05:21:19 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6bb49378c4cb5b1759b9df448476fa6d7e94f525355d96bc9721048400a88012/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Mar  1 05:21:19 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6bb49378c4cb5b1759b9df448476fa6d7e94f525355d96bc9721048400a88012/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Mar  1 05:21:19 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6bb49378c4cb5b1759b9df448476fa6d7e94f525355d96bc9721048400a88012/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Mar  1 05:21:19 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6bb49378c4cb5b1759b9df448476fa6d7e94f525355d96bc9721048400a88012/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Mar  1 05:21:19 np0005634532 podman[289710]: 2026-03-01 10:21:19.128379774 +0000 UTC m=+0.090440270 container init cff063d854fdd4ac731b11612f3850aadddd3da28b83fca6c8b167d85ccb6b74 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_hellman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Mar  1 05:21:19 np0005634532 podman[289710]: 2026-03-01 10:21:19.134648378 +0000 UTC m=+0.096708874 container start cff063d854fdd4ac731b11612f3850aadddd3da28b83fca6c8b167d85ccb6b74 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_hellman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/)
Mar  1 05:21:19 np0005634532 podman[289710]: 2026-03-01 10:21:19.137849966 +0000 UTC m=+0.099910462 container attach cff063d854fdd4ac731b11612f3850aadddd3da28b83fca6c8b167d85ccb6b74 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_hellman, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Mar  1 05:21:19 np0005634532 podman[289710]: 2026-03-01 10:21:19.055747602 +0000 UTC m=+0.017808118 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Mar  1 05:21:19 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1168: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 607 B/s rd, 0 op/s
Mar  1 05:21:19 np0005634532 dreamy_hellman[289727]: {
Mar  1 05:21:19 np0005634532 dreamy_hellman[289727]:    "0": [
Mar  1 05:21:19 np0005634532 dreamy_hellman[289727]:        {
Mar  1 05:21:19 np0005634532 dreamy_hellman[289727]:            "devices": [
Mar  1 05:21:19 np0005634532 dreamy_hellman[289727]:                "/dev/loop3"
Mar  1 05:21:19 np0005634532 dreamy_hellman[289727]:            ],
Mar  1 05:21:19 np0005634532 dreamy_hellman[289727]:            "lv_name": "ceph_lv0",
Mar  1 05:21:19 np0005634532 dreamy_hellman[289727]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Mar  1 05:21:19 np0005634532 dreamy_hellman[289727]:            "lv_size": "21470642176",
Mar  1 05:21:19 np0005634532 dreamy_hellman[289727]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=0RN1y3-WDwf-vmRn-3Uec-9cdU-WGcX-8Z6LEG,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=437b1e74-f995-5d64-af1d-257ce01d77ab,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=e5da778e-73b7-4ea1-8a91-750fe3f6aa68,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Mar  1 05:21:19 np0005634532 dreamy_hellman[289727]:            "lv_uuid": "0RN1y3-WDwf-vmRn-3Uec-9cdU-WGcX-8Z6LEG",
Mar  1 05:21:19 np0005634532 dreamy_hellman[289727]:            "name": "ceph_lv0",
Mar  1 05:21:19 np0005634532 dreamy_hellman[289727]:            "path": "/dev/ceph_vg0/ceph_lv0",
Mar  1 05:21:19 np0005634532 dreamy_hellman[289727]:            "tags": {
Mar  1 05:21:19 np0005634532 dreamy_hellman[289727]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Mar  1 05:21:19 np0005634532 dreamy_hellman[289727]:                "ceph.block_uuid": "0RN1y3-WDwf-vmRn-3Uec-9cdU-WGcX-8Z6LEG",
Mar  1 05:21:19 np0005634532 dreamy_hellman[289727]:                "ceph.cephx_lockbox_secret": "",
Mar  1 05:21:19 np0005634532 dreamy_hellman[289727]:                "ceph.cluster_fsid": "437b1e74-f995-5d64-af1d-257ce01d77ab",
Mar  1 05:21:19 np0005634532 dreamy_hellman[289727]:                "ceph.cluster_name": "ceph",
Mar  1 05:21:19 np0005634532 dreamy_hellman[289727]:                "ceph.crush_device_class": "",
Mar  1 05:21:19 np0005634532 dreamy_hellman[289727]:                "ceph.encrypted": "0",
Mar  1 05:21:19 np0005634532 dreamy_hellman[289727]:                "ceph.osd_fsid": "e5da778e-73b7-4ea1-8a91-750fe3f6aa68",
Mar  1 05:21:19 np0005634532 dreamy_hellman[289727]:                "ceph.osd_id": "0",
Mar  1 05:21:19 np0005634532 dreamy_hellman[289727]:                "ceph.osdspec_affinity": "default_drive_group",
Mar  1 05:21:19 np0005634532 dreamy_hellman[289727]:                "ceph.type": "block",
Mar  1 05:21:19 np0005634532 dreamy_hellman[289727]:                "ceph.vdo": "0",
Mar  1 05:21:19 np0005634532 dreamy_hellman[289727]:                "ceph.with_tpm": "0"
Mar  1 05:21:19 np0005634532 dreamy_hellman[289727]:            },
Mar  1 05:21:19 np0005634532 dreamy_hellman[289727]:            "type": "block",
Mar  1 05:21:19 np0005634532 dreamy_hellman[289727]:            "vg_name": "ceph_vg0"
Mar  1 05:21:19 np0005634532 dreamy_hellman[289727]:        }
Mar  1 05:21:19 np0005634532 dreamy_hellman[289727]:    ]
Mar  1 05:21:19 np0005634532 dreamy_hellman[289727]: }
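The multi-line JSON printed by dreamy_hellman above appears to be ceph-volume LVM output describing OSD 0 on /dev/ceph_vg0/ceph_lv0. A sketch for reassembling and parsing such a payload out of the journal, assuming every fragment carries the same "dreamy_hellman[PID]:" prefix; the filename "messages" is a placeholder:

    import json
    import re

    # Matches the syslog prefix up to and including "dreamy_hellman[PID]: ".
    PREFIX = re.compile(r'^.*?dreamy_hellman\[\d+\]: ?')

    def parse_payload(lines):
        # Keep only the container's stdout fragments, strip the prefix,
        # and rejoin them into one JSON document.
        body = "\n".join(PREFIX.sub("", ln) for ln in lines if PREFIX.match(ln))
        return json.loads(body)

    # report = parse_payload(open("messages").read().splitlines())
    # for osd_id, lvs in report.items():
    #     for lv in lvs:
    #         print(osd_id, lv["lv_path"], lv["tags"]["ceph.osd_fsid"])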
Mar  1 05:21:19 np0005634532 systemd[1]: libpod-cff063d854fdd4ac731b11612f3850aadddd3da28b83fca6c8b167d85ccb6b74.scope: Deactivated successfully.
Mar  1 05:21:19 np0005634532 podman[289710]: 2026-03-01 10:21:19.394239176 +0000 UTC m=+0.356299682 container died cff063d854fdd4ac731b11612f3850aadddd3da28b83fca6c8b167d85ccb6b74 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_hellman, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True)
Mar  1 05:21:19 np0005634532 systemd[1]: var-lib-containers-storage-overlay-6bb49378c4cb5b1759b9df448476fa6d7e94f525355d96bc9721048400a88012-merged.mount: Deactivated successfully.
Mar  1 05:21:19 np0005634532 podman[289710]: 2026-03-01 10:21:19.42945734 +0000 UTC m=+0.391517846 container remove cff063d854fdd4ac731b11612f3850aadddd3da28b83fca6c8b167d85ccb6b74 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=dreamy_hellman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Mar  1 05:21:19 np0005634532 systemd[1]: libpod-conmon-cff063d854fdd4ac731b11612f3850aadddd3da28b83fca6c8b167d85ccb6b74.scope: Deactivated successfully.
Mar  1 05:21:19 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Mar  1 05:21:19 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Mar  1 05:21:19 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: vms, start_after=
Mar  1 05:21:19 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: vms, start_after=
Mar  1 05:21:19 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: volumes, start_after=
Mar  1 05:21:19 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: volumes, start_after=
Mar  1 05:21:19 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: backups, start_after=
Mar  1 05:21:19 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: backups, start_after=
Mar  1 05:21:19 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: images, start_after=
Mar  1 05:21:19 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: images, start_after=
Mar  1 05:21:19 np0005634532 nova_compute[257049]: 2026-03-01 10:21:19.827 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:21:19 np0005634532 podman[289836]: 2026-03-01 10:21:19.909374982 +0000 UTC m=+0.037026981 container create d9a7caf0160203ac676f17faca89c45a73579cf2f681b8f513af54ccd0e467d0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_nobel, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Mar  1 05:21:19 np0005634532 systemd[1]: Started libpod-conmon-d9a7caf0160203ac676f17faca89c45a73579cf2f681b8f513af54ccd0e467d0.scope.
Mar  1 05:21:19 np0005634532 podman[289836]: 2026-03-01 10:21:19.891074672 +0000 UTC m=+0.018726721 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Mar  1 05:21:19 np0005634532 systemd[1]: Started libcrun container.
Mar  1 05:21:20 np0005634532 podman[289836]: 2026-03-01 10:21:20.007015338 +0000 UTC m=+0.134667347 container init d9a7caf0160203ac676f17faca89c45a73579cf2f681b8f513af54ccd0e467d0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_nobel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Mar  1 05:21:20 np0005634532 podman[289836]: 2026-03-01 10:21:20.014895461 +0000 UTC m=+0.142547470 container start d9a7caf0160203ac676f17faca89c45a73579cf2f681b8f513af54ccd0e467d0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_nobel, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Mar  1 05:21:20 np0005634532 podman[289836]: 2026-03-01 10:21:20.017575767 +0000 UTC m=+0.145227776 container attach d9a7caf0160203ac676f17faca89c45a73579cf2f681b8f513af54ccd0e467d0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_nobel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Mar  1 05:21:20 np0005634532 angry_nobel[289877]: 167 167
Mar  1 05:21:20 np0005634532 systemd[1]: libpod-d9a7caf0160203ac676f17faca89c45a73579cf2f681b8f513af54ccd0e467d0.scope: Deactivated successfully.
Mar  1 05:21:20 np0005634532 podman[289836]: 2026-03-01 10:21:20.020253993 +0000 UTC m=+0.147906032 container died d9a7caf0160203ac676f17faca89c45a73579cf2f681b8f513af54ccd0e467d0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_nobel, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Mar  1 05:21:20 np0005634532 systemd[1]: var-lib-containers-storage-overlay-597f22474a9006748f1a631d0057975cbf98a2faabd532c9faf792e24d0b3842-merged.mount: Deactivated successfully.
Mar  1 05:21:20 np0005634532 podman[289836]: 2026-03-01 10:21:20.054051992 +0000 UTC m=+0.181703991 container remove d9a7caf0160203ac676f17faca89c45a73579cf2f681b8f513af54ccd0e467d0 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_nobel, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/)
Mar  1 05:21:20 np0005634532 systemd[1]: libpod-conmon-d9a7caf0160203ac676f17faca89c45a73579cf2f681b8f513af54ccd0e467d0.scope: Deactivated successfully.
Mar  1 05:21:20 np0005634532 podman[289901]: 2026-03-01 10:21:20.204553454 +0000 UTC m=+0.069277581 container create 5478f50ce7121e5b9cdd2943154c5bd9fc88b7a03f2bf844d5b6d1b1f6edd901 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_johnson, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS)
Mar  1 05:21:20 np0005634532 systemd[1]: Started libpod-conmon-5478f50ce7121e5b9cdd2943154c5bd9fc88b7a03f2bf844d5b6d1b1f6edd901.scope.
Mar  1 05:21:20 np0005634532 podman[289901]: 2026-03-01 10:21:20.155756227 +0000 UTC m=+0.020480454 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Mar  1 05:21:20 np0005634532 systemd[1]: Started libcrun container.
Mar  1 05:21:20 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/313e4cae67e3ffd8bbe8542e072a40046c2928d16bc9d85ddb2c6a9e9ffb8140/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Mar  1 05:21:20 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/313e4cae67e3ffd8bbe8542e072a40046c2928d16bc9d85ddb2c6a9e9ffb8140/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Mar  1 05:21:20 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/313e4cae67e3ffd8bbe8542e072a40046c2928d16bc9d85ddb2c6a9e9ffb8140/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Mar  1 05:21:20 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/313e4cae67e3ffd8bbe8542e072a40046c2928d16bc9d85ddb2c6a9e9ffb8140/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
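The kernel's "timestamps until 2038 (0x7fffffff)" remount notices refer to the signed 32-bit epoch ceiling. A one-liner confirms the cutoff date the messages allude to:

    from datetime import datetime, timezone

    # 0x7fffffff seconds after the Unix epoch is the classic Y2038 limit.
    print(datetime.fromtimestamp(0x7FFFFFFF, tz=timezone.utc))
    # -> 2038-01-19 03:14:07+00:00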
Mar  1 05:21:20 np0005634532 podman[289901]: 2026-03-01 10:21:20.301942433 +0000 UTC m=+0.166666590 container init 5478f50ce7121e5b9cdd2943154c5bd9fc88b7a03f2bf844d5b6d1b1f6edd901 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_johnson, io.buildah.version=1.40.1, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Mar  1 05:21:20 np0005634532 podman[289901]: 2026-03-01 10:21:20.312145943 +0000 UTC m=+0.176870100 container start 5478f50ce7121e5b9cdd2943154c5bd9fc88b7a03f2bf844d5b6d1b1f6edd901 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_johnson, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Mar  1 05:21:20 np0005634532 podman[289901]: 2026-03-01 10:21:20.316258014 +0000 UTC m=+0.180982171 container attach 5478f50ce7121e5b9cdd2943154c5bd9fc88b7a03f2bf844d5b6d1b1f6edd901 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_johnson, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Mar  1 05:21:20 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:21:20 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:21:20 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:21:20.394 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:21:20 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:21:20 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:21:20 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:21:20.437 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
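The radosgw "beast" lines above are access-log entries (client IP, user, timestamp, request line, status, latency). A sketch that tallies them by client and status; the regex is inferred from the format seen in this log, not an official radosgw parser, and "messages" is a placeholder filename:

    import re
    from collections import Counter

    BEAST = re.compile(
        r'beast: \S+: (?P<ip>\S+) - (?P<user>\S+) \[(?P<ts>[^\]]+)\] '
        r'"(?P<req>[^"]+)" (?P<status>\d+) .* latency=(?P<lat>[\d.]+)s')

    hits = Counter()
    for line in open("messages"):
        m = BEAST.search(line)
        if m:
            hits[(m["ip"], m["status"])] += 1

    # e.g. {("192.168.122.102", "200"): 2, ("192.168.122.100", "200"): 2}
    print(hits)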
Mar  1 05:21:20 np0005634532 lvm[289993]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Mar  1 05:21:20 np0005634532 lvm[289993]: VG ceph_vg0 finished
Mar  1 05:21:20 np0005634532 jolly_johnson[289918]: {}
Mar  1 05:21:21 np0005634532 systemd[1]: libpod-5478f50ce7121e5b9cdd2943154c5bd9fc88b7a03f2bf844d5b6d1b1f6edd901.scope: Deactivated successfully.
Mar  1 05:21:21 np0005634532 podman[289901]: 2026-03-01 10:21:21.010297339 +0000 UTC m=+0.875021456 container died 5478f50ce7121e5b9cdd2943154c5bd9fc88b7a03f2bf844d5b6d1b1f6edd901 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_johnson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=squid)
Mar  1 05:21:21 np0005634532 systemd[1]: var-lib-containers-storage-overlay-313e4cae67e3ffd8bbe8542e072a40046c2928d16bc9d85ddb2c6a9e9ffb8140-merged.mount: Deactivated successfully.
Mar  1 05:21:21 np0005634532 podman[289901]: 2026-03-01 10:21:21.044677863 +0000 UTC m=+0.909401990 container remove 5478f50ce7121e5b9cdd2943154c5bd9fc88b7a03f2bf844d5b6d1b1f6edd901 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=jolly_johnson, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1)
Mar  1 05:21:21 np0005634532 systemd[1]: libpod-conmon-5478f50ce7121e5b9cdd2943154c5bd9fc88b7a03f2bf844d5b6d1b1f6edd901.scope: Deactivated successfully.
Mar  1 05:21:21 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Mar  1 05:21:21 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 05:21:21 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Mar  1 05:21:21 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 05:21:21 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1169: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 607 B/s rd, 0 op/s
Mar  1 05:21:22 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 05:21:22 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 05:21:22 np0005634532 nova_compute[257049]: 2026-03-01 10:21:22.103 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:21:22 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:21:22 np0005634532 ceph-mon[75825]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #69. Immutable memtables: 0.
Mar  1 05:21:22 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-10:21:22.154663) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Mar  1 05:21:22 np0005634532 ceph-mon[75825]: rocksdb: [db/flush_job.cc:856] [default] [JOB 37] Flushing memtable with next log file: 69
Mar  1 05:21:22 np0005634532 ceph-mon[75825]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772360482154727, "job": 37, "event": "flush_started", "num_memtables": 1, "num_entries": 1403, "num_deletes": 250, "total_data_size": 2589651, "memory_usage": 2634376, "flush_reason": "Manual Compaction"}
Mar  1 05:21:22 np0005634532 ceph-mon[75825]: rocksdb: [db/flush_job.cc:885] [default] [JOB 37] Level-0 flush table #70: started
Mar  1 05:21:22 np0005634532 ceph-mon[75825]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772360482170077, "cf_name": "default", "job": 37, "event": "table_file_creation", "file_number": 70, "file_size": 2510366, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 31983, "largest_seqno": 33384, "table_properties": {"data_size": 2503867, "index_size": 3634, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1797, "raw_key_size": 12896, "raw_average_key_size": 18, "raw_value_size": 2490800, "raw_average_value_size": 3599, "num_data_blocks": 160, "num_entries": 692, "num_filter_entries": 692, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772360351, "oldest_key_time": 1772360351, "file_creation_time": 1772360482, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d85a3bc5-3dc5-432f-9fab-fa926ce32d3d", "db_session_id": "FJWJGIYC2V5ZEQGX709M", "orig_file_number": 70, "seqno_to_time_mapping": "N/A"}}
Mar  1 05:21:22 np0005634532 ceph-mon[75825]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 37] Flush lasted 15467 microseconds, and 8665 cpu microseconds.
Mar  1 05:21:22 np0005634532 ceph-mon[75825]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Mar  1 05:21:22 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-10:21:22.170133) [db/flush_job.cc:967] [default] [JOB 37] Level-0 flush table #70: 2510366 bytes OK
Mar  1 05:21:22 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-10:21:22.170157) [db/memtable_list.cc:519] [default] Level-0 commit table #70 started
Mar  1 05:21:22 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-10:21:22.171435) [db/memtable_list.cc:722] [default] Level-0 commit table #70: memtable #1 done
Mar  1 05:21:22 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-10:21:22.171447) EVENT_LOG_v1 {"time_micros": 1772360482171443, "job": 37, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Mar  1 05:21:22 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-10:21:22.171462) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Mar  1 05:21:22 np0005634532 ceph-mon[75825]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 37] Try to delete WAL files size 2583624, prev total WAL file size 2583624, number of live WAL files 2.
Mar  1 05:21:22 np0005634532 ceph-mon[75825]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000066.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Mar  1 05:21:22 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-10:21:22.172042) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6B7600323537' seq:72057594037927935, type:22 .. '6B7600353038' seq:0, type:0; will stop at (end)
Mar  1 05:21:22 np0005634532 ceph-mon[75825]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 38] Compacting 1@0 + 1@6 files to L6, score -1.00
Mar  1 05:21:22 np0005634532 ceph-mon[75825]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 37 Base level 0, inputs: [70(2451KB)], [68(13MB)]
Mar  1 05:21:22 np0005634532 ceph-mon[75825]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772360482172068, "job": 38, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [70], "files_L6": [68], "score": -1, "input_data_size": 17178688, "oldest_snapshot_seqno": -1}
Mar  1 05:21:22 np0005634532 ceph-mon[75825]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 38] Generated table #71: 6755 keys, 15796470 bytes, temperature: kUnknown
Mar  1 05:21:22 np0005634532 ceph-mon[75825]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772360482240699, "cf_name": "default", "job": 38, "event": "table_file_creation", "file_number": 71, "file_size": 15796470, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 15750834, "index_size": 27668, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16901, "raw_key_size": 174482, "raw_average_key_size": 25, "raw_value_size": 15628706, "raw_average_value_size": 2313, "num_data_blocks": 1105, "num_entries": 6755, "num_filter_entries": 6755, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772358052, "oldest_key_time": 0, "file_creation_time": 1772360482, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d85a3bc5-3dc5-432f-9fab-fa926ce32d3d", "db_session_id": "FJWJGIYC2V5ZEQGX709M", "orig_file_number": 71, "seqno_to_time_mapping": "N/A"}}
Mar  1 05:21:22 np0005634532 ceph-mon[75825]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Mar  1 05:21:22 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-10:21:22.240849) [db/compaction/compaction_job.cc:1663] [default] [JOB 38] Compacted 1@0 + 1@6 files to L6 => 15796470 bytes
Mar  1 05:21:22 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-10:21:22.242453) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 250.1 rd, 230.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.4, 14.0 +0.0 blob) out(15.1 +0.0 blob), read-write-amplify(13.1) write-amplify(6.3) OK, records in: 7269, records dropped: 514 output_compression: NoCompression
Mar  1 05:21:22 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-10:21:22.242468) EVENT_LOG_v1 {"time_micros": 1772360482242461, "job": 38, "event": "compaction_finished", "compaction_time_micros": 68678, "compaction_time_cpu_micros": 20458, "output_level": 6, "num_output_files": 1, "total_output_size": 15796470, "num_input_records": 7269, "num_output_records": 6755, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Mar  1 05:21:22 np0005634532 ceph-mon[75825]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000070.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Mar  1 05:21:22 np0005634532 ceph-mon[75825]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772360482242682, "job": 38, "event": "table_file_deletion", "file_number": 70}
Mar  1 05:21:22 np0005634532 ceph-mon[75825]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000068.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Mar  1 05:21:22 np0005634532 ceph-mon[75825]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772360482243649, "job": 38, "event": "table_file_deletion", "file_number": 68}
Mar  1 05:21:22 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-10:21:22.171959) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Mar  1 05:21:22 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-10:21:22.243739) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Mar  1 05:21:22 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-10:21:22.243745) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Mar  1 05:21:22 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-10:21:22.243747) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Mar  1 05:21:22 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-10:21:22.243749) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Mar  1 05:21:22 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-10:21:22.243752) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
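The ceph-mon rocksdb lines above embed machine-readable JSON after the EVENT_LOG_v1 marker (flush_started, table_file_creation, compaction_finished, table_file_deletion). A sketch for extracting and summarizing those events, assuming the payload always follows the literal marker and runs to the end of the line:

    import json

    MARK = "EVENT_LOG_v1 "

    def rocksdb_events(lines):
        # Yield each embedded JSON payload as a dict.
        for ln in lines:
            idx = ln.find(MARK)
            if idx != -1:
                yield json.loads(ln[idx + len(MARK):])

    # for ev in rocksdb_events(open("messages")):
    #     if ev.get("event") == "compaction_finished":
    #         print(ev["job"], ev["compaction_time_micros"], ev["total_output_size"])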
Mar  1 05:21:22 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:21:22 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:21:22 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:21:22.397 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:21:22 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:21:22 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000024s ======
Mar  1 05:21:22 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:21:22.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Mar  1 05:21:23 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1170: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 607 B/s rd, 0 op/s
Mar  1 05:21:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:21:23.896 167541 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Mar  1 05:21:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:21:23.896 167541 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Mar  1 05:21:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:21:23.896 167541 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Mar  1 05:21:24 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:21:23 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Mar  1 05:21:24 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:21:23 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Mar  1 05:21:24 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:21:23 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Mar  1 05:21:24 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:21:24 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Mar  1 05:21:24 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:21:24 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:21:24 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:21:24.399 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:21:24 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:21:24 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:21:24 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:21:24.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:21:24 np0005634532 nova_compute[257049]: 2026-03-01 10:21:24.830 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:21:25 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1171: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 607 B/s rd, 0 op/s
Mar  1 05:21:26 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:21:26 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:21:26 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:21:26.403 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:21:26 np0005634532 podman[290039]: 2026-03-01 10:21:26.415159356 +0000 UTC m=+0.095189076 container health_status eba5e7c5bf1cb0694498c3b5d0f9b901dec36f2482ec40aef98fed1e15d9f18d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:bf1f44b6bfa655654f556ae3adca2582892e72550d916a5b0a8dbacb0e210bbe, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '08b9467e1b7e95537191fb2fa6825d176e65d7d6be048e9a0925db064969bbde-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:bf1f44b6bfa655654f556ae3adca2582892e72550d916a5b0a8dbacb0e210bbe', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Mar  1 05:21:26 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:21:26 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:21:26 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:21:26.442 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:21:27 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: ::ffff:192.168.122.100 - - [01/Mar/2026:10:21:27] "GET /metrics HTTP/1.1" 200 48456 "" "Prometheus/2.51.0"
Mar  1 05:21:27 np0005634532 ceph-mgr[76134]: [prometheus INFO cherrypy.access.140604152982112] ::ffff:192.168.122.100 - - [01/Mar/2026:10:21:27] "GET /metrics HTTP/1.1" 200 48456 "" "Prometheus/2.51.0"
Mar  1 05:21:27 np0005634532 nova_compute[257049]: 2026-03-01 10:21:27.103 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:21:27 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:21:27 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1172: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 607 B/s rd, 0 op/s
Mar  1 05:21:27 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:21:27.314Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Mar  1 05:21:27 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:21:27.314Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Mar  1 05:21:27 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:21:27.314Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
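Alertmanager above cannot reach the ceph-dashboard webhook receivers on compute-1 and compute-2 at port 8443 (dial tcp ... i/o timeout). A plain TCP probe against the same endpoints is a quick reachability check; host names and port are taken from the log, and this is a diagnostic aid, not the alertmanager retry logic:

    import socket

    for host in ("compute-1.ctlplane.example.com",
                 "compute-2.ctlplane.example.com"):
        try:
            # Only tests that the TCP port accepts a connection.
            with socket.create_connection((host, 8443), timeout=5):
                print(host, "reachable")
        except OSError as exc:
            print(host, "unreachable:", exc)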
Mar  1 05:21:28 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:21:28 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:21:28 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:21:28.405 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:21:28 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:21:28 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:21:28 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:21:28.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:21:28 np0005634532 nova_compute[257049]: 2026-03-01 10:21:28.519 257053 DEBUG oslo_service.periodic_task [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Mar  1 05:21:28 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:21:28.869Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Mar  1 05:21:29 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:21:28 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Mar  1 05:21:29 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:21:28 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Mar  1 05:21:29 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:21:28 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Mar  1 05:21:29 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:21:29 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Mar  1 05:21:29 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1173: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Mar  1 05:21:29 np0005634532 nova_compute[257049]: 2026-03-01 10:21:29.834 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:21:30 np0005634532 podman[290070]: 2026-03-01 10:21:30.351610301 +0000 UTC m=+0.046881646 container health_status 1df4dec302d8b40bce93714986cfc892519e8a31436399020d0034ae17ac64d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a5baae9e21dffd42c66f546b0b62f95c6af9f409d05e01e7483de42d5dd2a382, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '08b9467e1b7e95537191fb2fa6825d176e65d7d6be048e9a0925db064969bbde-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a5baae9e21dffd42c66f546b0b62f95c6af9f409d05e01e7483de42d5dd2a382', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, io.buildah.version=1.43.0, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Mar  1 05:21:30 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:21:30 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:21:30 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:21:30.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:21:30 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:21:30 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:21:30 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:21:30.445 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
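The paired anonymous `HEAD / HTTP/1.0` requests from 192.168.122.102 and 192.168.122.100 recur every two seconds for the rest of this capture, a cadence that looks like load-balancer health checking rather than client traffic (an inference from the pattern, not something the log states). A small sketch for pulling fields out of these `beast` access lines; the regex is fitted to the exact format shown here, not a general RGW log parser:

```python
# Minimal sketch: parse a radosgw "beast" access line into its fields.
import re

BEAST_RE = re.compile(
    r'beast: \S+: (?P<ip>\S+) - (?P<user>\S+) '
    r'\[(?P<ts>[^\]]+)\] "(?P<req>[^"]+)" (?P<status>\d+) (?P<bytes>\d+)'
    r' .*latency=(?P<latency>[\d.]+)s'
)

line = ('beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous '
        '[01/Mar/2026:10:21:30.407 +0000] "HEAD / HTTP/1.0" 200 0 - - - '
        'latency=0.000000000s')

m = BEAST_RE.search(line)
if m:
    # -> 192.168.122.102 HEAD / HTTP/1.0 200 0.000000000
    print(m.group("ip"), m.group("req"), m.group("status"), m.group("latency"))
```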
Mar  1 05:21:31 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1174: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Mar  1 05:21:32 np0005634532 nova_compute[257049]: 2026-03-01 10:21:32.105 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:21:32 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:21:32 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-crash-compute-0[81339]: ERROR:ceph-crash:Error scraping /var/lib/ceph/crash: [Errno 13] Permission denied: '/var/lib/ceph/crash'
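The ceph-crash scrape failure above is an EACCES on /var/lib/ceph/crash, which usually points at an ownership or mode mismatch between that directory and the UID the crash service runs as (an assumption to verify against this cephadm deployment, not a conclusion the log supports on its own). A minimal sketch that reproduces the failing access and reports what it finds:

```python
# Minimal sketch: reproduce the listing that ceph-crash attempts above and,
# on EACCES, report the directory's owner and mode for comparison with the
# UID the crash container runs as (run this inside the same container).
import os
import stat

path = "/var/lib/ceph/crash"  # path taken from the log line
try:
    entries = os.listdir(path)
    print(f"readable, {len(entries)} entries")
except PermissionError as exc:
    st = os.stat(path)  # stat needs only execute on the parent, so it works
    print(f"{exc}: owner uid={st.st_uid} gid={st.st_gid} "
          f"mode={stat.filemode(st.st_mode)}")
```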
Mar  1 05:21:32 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:21:32 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:21:32 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:21:32.410 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:21:32 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:21:32 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:21:32 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:21:32.447 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:21:32 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Mar  1 05:21:32 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Mar  1 05:21:33 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1175: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Mar  1 05:21:34 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:21:33 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Mar  1 05:21:34 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:21:33 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Mar  1 05:21:34 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:21:33 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Mar  1 05:21:34 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:21:34 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Mar  1 05:21:34 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:21:34 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:21:34 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:21:34.413 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:21:34 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:21:34 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:21:34 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:21:34.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:21:34 np0005634532 nova_compute[257049]: 2026-03-01 10:21:34.839 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:21:35 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1176: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Mar  1 05:21:36 np0005634532 nova_compute[257049]: 2026-03-01 10:21:36.001 257053 DEBUG oslo_concurrency.processutils [None req-8ff44d0d-966f-4554-beb4-936d5793a879 d62057d608c848079ac65623b37b10ab 4d09211c005246538db05e74184b7e61 - - default default] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Mar  1 05:21:36 np0005634532 nova_compute[257049]: 2026-03-01 10:21:36.028 257053 DEBUG oslo_concurrency.processutils [None req-8ff44d0d-966f-4554-beb4-936d5793a879 d62057d608c848079ac65623b37b10ab 4d09211c005246538db05e74184b7e61 - - default default] CMD "env LANG=C uptime" returned: 0 in 0.027s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Mar  1 05:21:36 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:21:36 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:21:36 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:21:36.416 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:21:36 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:21:36 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:21:36 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:21:36.450 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:21:37 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: ::ffff:192.168.122.100 - - [01/Mar/2026:10:21:37] "GET /metrics HTTP/1.1" 200 48457 "" "Prometheus/2.51.0"
Mar  1 05:21:37 np0005634532 ceph-mgr[76134]: [prometheus INFO cherrypy.access.140604152982112] ::ffff:192.168.122.100 - - [01/Mar/2026:10:21:37] "GET /metrics HTTP/1.1" 200 48457 "" "Prometheus/2.51.0"
Mar  1 05:21:37 np0005634532 nova_compute[257049]: 2026-03-01 10:21:37.107 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:21:37 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:21:37 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1177: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Mar  1 05:21:37 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:21:37.315Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Mar  1 05:21:38 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:21:38 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:21:38 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:21:38.419 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:21:38 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:21:38 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 05:21:38 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:21:38.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 05:21:38 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:21:38.870Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
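Both webhook receivers Alertmanager is retrying, compute-1 and compute-2 on port 8443, fail at the TCP level per the `dial tcp ... i/o timeout` errors above, so this is reachability rather than an HTTP-layer problem. A quick probe of the same host/port pairs from the log (the 3-second timeout is an arbitrary choice):

```python
# Minimal sketch: TCP-probe the two prometheus_receiver endpoints that
# Alertmanager cannot reach in the log above.
import socket

for host in ("compute-1.ctlplane.example.com",
             "compute-2.ctlplane.example.com"):
    try:
        with socket.create_connection((host, 8443), timeout=3):
            print(f"{host}:8443 reachable")
    except OSError as exc:  # covers timeouts, refusals, DNS failures
        print(f"{host}:8443 unreachable: {exc}")
```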
Mar  1 05:21:39 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:21:38 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Mar  1 05:21:39 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:21:38 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Mar  1 05:21:39 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:21:38 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Mar  1 05:21:39 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:21:39 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Mar  1 05:21:39 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1178: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Mar  1 05:21:39 np0005634532 nova_compute[257049]: 2026-03-01 10:21:39.843 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:21:40 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:21:40 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:21:40 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:21:40.422 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:21:40 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:21:40 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:21:40 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:21:40.454 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:21:41 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1179: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Mar  1 05:21:42 np0005634532 nova_compute[257049]: 2026-03-01 10:21:42.152 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:21:42 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:21:42 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:21:42 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:21:42 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:21:42.425 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:21:42 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:21:42 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:21:42 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:21:42.456 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:21:42 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:21:42.685 167541 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=13, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ba:77:84', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': 'd2:e0:96:ea:56:89'}, ipsec=False) old=SB_Global(nb_cfg=12) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Mar  1 05:21:42 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:21:42.687 167541 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Mar  1 05:21:42 np0005634532 nova_compute[257049]: 2026-03-01 10:21:42.687 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
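The agent matched the SB_Global bump to nb_cfg=13 at 10:21:42.687 and announced a ten-second hold-off; the corresponding Chassis_Private write (`neutron:ovn-metadata-sb-cfg: 13`) appears further down in this capture at 10:21:52.690. A sketch confirming the interval from the two logged timestamps:

```python
# Minimal sketch: verify the ten-second delay between the SB_Global event
# and the Chassis_Private update using the timestamps from this log.
from datetime import datetime

FMT = "%Y-%m-%d %H:%M:%S.%f"
matched = datetime.strptime("2026-03-01 10:21:42.687", FMT)
written = datetime.strptime("2026-03-01 10:21:52.690", FMT)
print((written - matched).total_seconds())  # ~10.003 s, matching the log
```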
Mar  1 05:21:43 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1180: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Mar  1 05:21:44 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:21:43 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Mar  1 05:21:44 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:21:43 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Mar  1 05:21:44 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:21:43 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Mar  1 05:21:44 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:21:44 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Mar  1 05:21:44 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:21:44 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:21:44 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:21:44.428 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:21:44 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:21:44 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:21:44 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:21:44.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:21:44 np0005634532 nova_compute[257049]: 2026-03-01 10:21:44.847 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:21:45 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1181: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Mar  1 05:21:46 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:21:46 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000024s ======
Mar  1 05:21:46 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:21:46.429 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Mar  1 05:21:46 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:21:46 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:21:46 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:21:46.460 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:21:47 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: ::ffff:192.168.122.100 - - [01/Mar/2026:10:21:47] "GET /metrics HTTP/1.1" 200 48457 "" "Prometheus/2.51.0"
Mar  1 05:21:47 np0005634532 ceph-mgr[76134]: [prometheus INFO cherrypy.access.140604152982112] ::ffff:192.168.122.100 - - [01/Mar/2026:10:21:47] "GET /metrics HTTP/1.1" 200 48457 "" "Prometheus/2.51.0"
Mar  1 05:21:47 np0005634532 nova_compute[257049]: 2026-03-01 10:21:47.122 257053 DEBUG oslo_service.periodic_task [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Mar  1 05:21:47 np0005634532 nova_compute[257049]: 2026-03-01 10:21:47.123 257053 DEBUG oslo_service.periodic_task [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Mar  1 05:21:47 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:21:47 np0005634532 nova_compute[257049]: 2026-03-01 10:21:47.187 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:21:47 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1182: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Mar  1 05:21:47 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:21:47.317Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Mar  1 05:21:47 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:21:47.317Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Mar  1 05:21:47 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:21:47.317Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Mar  1 05:21:47 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Mar  1 05:21:47 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Mar  1 05:21:47 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Mar  1 05:21:47 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Mar  1 05:21:47 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Mar  1 05:21:47 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Mar  1 05:21:47 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Mar  1 05:21:47 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Mar  1 05:21:48 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:21:48 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:21:48 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:21:48.432 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:21:48 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:21:48 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:21:48 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:21:48.462 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:21:48 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:21:48.870Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Mar  1 05:21:49 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:21:48 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Mar  1 05:21:49 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:21:48 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Mar  1 05:21:49 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:21:48 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Mar  1 05:21:49 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:21:48 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Mar  1 05:21:49 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1183: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Mar  1 05:21:49 np0005634532 nova_compute[257049]: 2026-03-01 10:21:49.851 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:21:50 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:21:50 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:21:50 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:21:50.435 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:21:50 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:21:50 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:21:50 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:21:50.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:21:50 np0005634532 nova_compute[257049]: 2026-03-01 10:21:50.976 257053 DEBUG oslo_service.periodic_task [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Mar  1 05:21:50 np0005634532 nova_compute[257049]: 2026-03-01 10:21:50.976 257053 DEBUG nova.compute.manager [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Mar  1 05:21:51 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1184: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Mar  1 05:21:51 np0005634532 nova_compute[257049]: 2026-03-01 10:21:51.977 257053 DEBUG oslo_service.periodic_task [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Mar  1 05:21:52 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:21:52 np0005634532 nova_compute[257049]: 2026-03-01 10:21:52.189 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:21:52 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:21:52 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:21:52 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:21:52.438 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:21:52 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:21:52 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:21:52 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:21:52.466 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:21:52 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:21:52.690 167541 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=90b7dc66-b984-4d8b-9541-ddde79c5f544, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '13'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Mar  1 05:21:52 np0005634532 nova_compute[257049]: 2026-03-01 10:21:52.976 257053 DEBUG oslo_service.periodic_task [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Mar  1 05:21:52 np0005634532 nova_compute[257049]: 2026-03-01 10:21:52.997 257053 DEBUG oslo_concurrency.lockutils [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Mar  1 05:21:52 np0005634532 nova_compute[257049]: 2026-03-01 10:21:52.997 257053 DEBUG oslo_concurrency.lockutils [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Mar  1 05:21:52 np0005634532 nova_compute[257049]: 2026-03-01 10:21:52.997 257053 DEBUG oslo_concurrency.lockutils [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Mar  1 05:21:52 np0005634532 nova_compute[257049]: 2026-03-01 10:21:52.997 257053 DEBUG nova.compute.resource_tracker [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Mar  1 05:21:52 np0005634532 nova_compute[257049]: 2026-03-01 10:21:52.998 257053 DEBUG oslo_concurrency.processutils [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Mar  1 05:21:53 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1185: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Mar  1 05:21:53 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Mar  1 05:21:53 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/914343394' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Mar  1 05:21:53 np0005634532 nova_compute[257049]: 2026-03-01 10:21:53.458 257053 DEBUG oslo_concurrency.processutils [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
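The resource audit above shells out to `ceph df --format=json` (about 0.45 s per call here, and it runs twice per audit pass). A minimal sketch of the same call and of reading back the cluster totals; the `stats` keys reflect recent Ceph JSON output and should be checked against the deployed version's schema:

```python
# Minimal sketch: run the same `ceph df` command the periodic task logs
# above and extract cluster-wide capacity from the JSON reply.
import json
import subprocess

cmd = ["ceph", "df", "--format=json",
       "--id", "openstack", "--conf", "/etc/ceph/ceph.conf"]
out = subprocess.run(cmd, capture_output=True, text=True, check=True).stdout
stats = json.loads(out)["stats"]  # key name assumed from recent Ceph releases
total_gib = stats["total_bytes"] / 1024**3
avail_gib = stats["total_avail_bytes"] / 1024**3
print(f"avail={avail_gib:.1f} GiB of {total_gib:.1f} GiB")
```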
Mar  1 05:21:53 np0005634532 nova_compute[257049]: 2026-03-01 10:21:53.598 257053 WARNING nova.virt.libvirt.driver [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Mar  1 05:21:53 np0005634532 nova_compute[257049]: 2026-03-01 10:21:53.600 257053 DEBUG nova.compute.resource_tracker [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4507MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Mar  1 05:21:53 np0005634532 nova_compute[257049]: 2026-03-01 10:21:53.600 257053 DEBUG oslo_concurrency.lockutils [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Mar  1 05:21:53 np0005634532 nova_compute[257049]: 2026-03-01 10:21:53.600 257053 DEBUG oslo_concurrency.lockutils [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Mar  1 05:21:53 np0005634532 nova_compute[257049]: 2026-03-01 10:21:53.680 257053 DEBUG nova.compute.resource_tracker [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Mar  1 05:21:53 np0005634532 nova_compute[257049]: 2026-03-01 10:21:53.681 257053 DEBUG nova.compute.resource_tracker [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Mar  1 05:21:53 np0005634532 nova_compute[257049]: 2026-03-01 10:21:53.757 257053 DEBUG nova.scheduler.client.report [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Refreshing inventories for resource provider 018d246d-1e01-4168-9128-598c5501111b _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Mar  1 05:21:53 np0005634532 nova_compute[257049]: 2026-03-01 10:21:53.776 257053 DEBUG nova.scheduler.client.report [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Updating ProviderTree inventory for provider 018d246d-1e01-4168-9128-598c5501111b from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Mar  1 05:21:53 np0005634532 nova_compute[257049]: 2026-03-01 10:21:53.776 257053 DEBUG nova.compute.provider_tree [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Updating inventory in ProviderTree for provider 018d246d-1e01-4168-9128-598c5501111b with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
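The inventory pushed to Placement above sets schedulable capacity per resource class as (total - reserved) * allocation_ratio. Worked through with the logged numbers:

```python
# Minimal sketch: effective Placement capacity from the inventory in the log.
inventory = {
    "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
    "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
    "DISK_GB":   {"total": 59,   "reserved": 1,   "allocation_ratio": 0.9},
}
for rc, inv in inventory.items():
    effective = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
    print(f"{rc}: {effective:g}")  # VCPU: 32, MEMORY_MB: 7167, DISK_GB: 52.2
```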
Mar  1 05:21:53 np0005634532 nova_compute[257049]: 2026-03-01 10:21:53.793 257053 DEBUG nova.scheduler.client.report [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Refreshing aggregate associations for resource provider 018d246d-1e01-4168-9128-598c5501111b, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Mar  1 05:21:53 np0005634532 nova_compute[257049]: 2026-03-01 10:21:53.814 257053 DEBUG nova.scheduler.client.report [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Refreshing trait associations for resource provider 018d246d-1e01-4168-9128-598c5501111b, traits: COMPUTE_SECURITY_TPM_1_2,COMPUTE_ACCELERATORS,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_F16C,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_SSE42,HW_CPU_X86_SVM,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_BMI2,HW_CPU_X86_BMI,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_SSSE3,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE2,HW_CPU_X86_CLMUL,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NODE,HW_CPU_X86_SSE,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_FMA3,HW_CPU_X86_AMD_SVM,COMPUTE_RESCUE_BFV,HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_ABM,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_TRUSTED_CERTS,COMPUTE_STORAGE_BUS_SATA,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_AESNI,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_MMX,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_AVX2,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_USB,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SHA,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_SSE41,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_SSE4A,COMPUTE_SECURITY_TPM_2_0,COMPUTE_STORAGE_BUS_IDE,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_PCNET _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Mar  1 05:21:53 np0005634532 nova_compute[257049]: 2026-03-01 10:21:53.840 257053 DEBUG oslo_concurrency.processutils [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Mar  1 05:21:54 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:21:53 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Mar  1 05:21:54 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:21:53 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Mar  1 05:21:54 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:21:53 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Mar  1 05:21:54 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:21:54 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Mar  1 05:21:54 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Mar  1 05:21:54 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2247190380' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Mar  1 05:21:54 np0005634532 nova_compute[257049]: 2026-03-01 10:21:54.289 257053 DEBUG oslo_concurrency.processutils [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Mar  1 05:21:54 np0005634532 nova_compute[257049]: 2026-03-01 10:21:54.294 257053 DEBUG nova.compute.provider_tree [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Inventory has not changed in ProviderTree for provider: 018d246d-1e01-4168-9128-598c5501111b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Mar  1 05:21:54 np0005634532 nova_compute[257049]: 2026-03-01 10:21:54.307 257053 DEBUG nova.scheduler.client.report [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Inventory has not changed for provider 018d246d-1e01-4168-9128-598c5501111b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Mar  1 05:21:54 np0005634532 nova_compute[257049]: 2026-03-01 10:21:54.308 257053 DEBUG nova.compute.resource_tracker [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Mar  1 05:21:54 np0005634532 nova_compute[257049]: 2026-03-01 10:21:54.309 257053 DEBUG oslo_concurrency.lockutils [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.708s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Mar  1 05:21:54 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:21:54 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:21:54 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:21:54.440 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:21:54 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:21:54 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:21:54 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:21:54.468 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:21:54 np0005634532 nova_compute[257049]: 2026-03-01 10:21:54.855 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:21:55 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1186: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Mar  1 05:21:56 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:21:56 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:21:56 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:21:56.443 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:21:56 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:21:56 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 05:21:56 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:21:56.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 05:21:57 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: ::ffff:192.168.122.100 - - [01/Mar/2026:10:21:57] "GET /metrics HTTP/1.1" 200 48459 "" "Prometheus/2.51.0"
Mar  1 05:21:57 np0005634532 ceph-mgr[76134]: [prometheus INFO cherrypy.access.140604152982112] ::ffff:192.168.122.100 - - [01/Mar/2026:10:21:57] "GET /metrics HTTP/1.1" 200 48459 "" "Prometheus/2.51.0"
Mar  1 05:21:57 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:21:57 np0005634532 nova_compute[257049]: 2026-03-01 10:21:57.191 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:21:57 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1187: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Mar  1 05:21:57 np0005634532 nova_compute[257049]: 2026-03-01 10:21:57.309 257053 DEBUG oslo_service.periodic_task [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Mar  1 05:21:57 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:21:57.318Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Mar  1 05:21:57 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:21:57.318Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Mar  1 05:21:57 np0005634532 podman[290187]: 2026-03-01 10:21:57.379098333 +0000 UTC m=+0.072642381 container health_status eba5e7c5bf1cb0694498c3b5d0f9b901dec36f2482ec40aef98fed1e15d9f18d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:bf1f44b6bfa655654f556ae3adca2582892e72550d916a5b0a8dbacb0e210bbe, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '08b9467e1b7e95537191fb2fa6825d176e65d7d6be048e9a0925db064969bbde-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:bf1f44b6bfa655654f556ae3adca2582892e72550d916a5b0a8dbacb0e210bbe', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_controller, managed_by=edpm_ansible, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true)
Mar  1 05:21:57 np0005634532 nova_compute[257049]: 2026-03-01 10:21:57.977 257053 DEBUG oslo_service.periodic_task [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Mar  1 05:21:58 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Mar  1 05:21:58 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2152545258' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Mar  1 05:21:58 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Mar  1 05:21:58 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2152545258' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Mar  1 05:21:58 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:21:58 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:21:58 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:21:58.446 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:21:58 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:21:58 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:21:58 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:21:58.472 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:21:58 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:21:58.871Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Mar  1 05:21:58 np0005634532 nova_compute[257049]: 2026-03-01 10:21:58.972 257053 DEBUG oslo_service.periodic_task [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Mar  1 05:21:58 np0005634532 nova_compute[257049]: 2026-03-01 10:21:58.976 257053 DEBUG oslo_service.periodic_task [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Mar  1 05:21:58 np0005634532 nova_compute[257049]: 2026-03-01 10:21:58.976 257053 DEBUG nova.compute.manager [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Mar  1 05:21:58 np0005634532 nova_compute[257049]: 2026-03-01 10:21:58.976 257053 DEBUG nova.compute.manager [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Mar  1 05:21:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:21:59 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Mar  1 05:21:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:21:59 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Mar  1 05:21:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:21:59 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Mar  1 05:21:59 np0005634532 nova_compute[257049]: 2026-03-01 10:21:59.000 257053 DEBUG nova.compute.manager [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Mar  1 05:21:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:21:59 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Mar  1 05:21:59 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1188: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Mar  1 05:21:59 np0005634532 nova_compute[257049]: 2026-03-01 10:21:59.861 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:22:00 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:22:00 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:22:00 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:22:00.449 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:22:00 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:22:00 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:22:00 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:22:00.474 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:22:01 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1189: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Mar  1 05:22:01 np0005634532 podman[290243]: 2026-03-01 10:22:01.359121572 +0000 UTC m=+0.051486009 container health_status 1df4dec302d8b40bce93714986cfc892519e8a31436399020d0034ae17ac64d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a5baae9e21dffd42c66f546b0b62f95c6af9f409d05e01e7483de42d5dd2a382, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '08b9467e1b7e95537191fb2fa6825d176e65d7d6be048e9a0925db064969bbde-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a5baae9e21dffd42c66f546b0b62f95c6af9f409d05e01e7483de42d5dd2a382', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, managed_by=edpm_ansible)
Mar  1 05:22:02 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:22:02 np0005634532 nova_compute[257049]: 2026-03-01 10:22:02.193 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:22:02 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:22:02 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:22:02 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:22:02.452 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:22:02 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:22:02 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:22:02 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:22:02.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:22:02 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Mar  1 05:22:02 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Mar  1 05:22:03 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1190: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Mar  1 05:22:04 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:22:03 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Mar  1 05:22:04 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:22:03 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Mar  1 05:22:04 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:22:03 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Mar  1 05:22:04 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:22:04 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Mar  1 05:22:04 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:22:04 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:22:04 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:22:04.455 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:22:04 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:22:04 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:22:04 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:22:04.478 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:22:04 np0005634532 nova_compute[257049]: 2026-03-01 10:22:04.863 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:22:05 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1191: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Mar  1 05:22:06 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:22:06 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:22:06 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:22:06.458 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:22:06 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:22:06 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 05:22:06 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:22:06.479 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 05:22:07 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: ::ffff:192.168.122.100 - - [01/Mar/2026:10:22:07] "GET /metrics HTTP/1.1" 200 48454 "" "Prometheus/2.51.0"
Mar  1 05:22:07 np0005634532 ceph-mgr[76134]: [prometheus INFO cherrypy.access.140604152982112] ::ffff:192.168.122.100 - - [01/Mar/2026:10:22:07] "GET /metrics HTTP/1.1" 200 48454 "" "Prometheus/2.51.0"
Mar  1 05:22:07 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:22:07 np0005634532 nova_compute[257049]: 2026-03-01 10:22:07.239 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:22:07 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1192: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Mar  1 05:22:07 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:22:07.319Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Mar  1 05:22:08 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:22:08 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:22:08 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:22:08.461 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:22:08 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:22:08 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:22:08 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:22:08.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:22:08 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:22:08.872Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
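Both dispatcher errors above are the Alertmanager ceph-dashboard webhook failing to POST to /api/prometheus_receiver on compute-1 and compute-2 (context deadline exceeded, plus a dial timeout to 192.168.122.101:8443), i.e. the peer endpoints are unreachable rather than rejecting the payload. A throwaway stand-in listener for checking reachability of that path and port; this is a debugging sketch, emphatically not the ceph-dashboard receiver implementation.

from http.server import BaseHTTPRequestHandler, HTTPServer

class Receiver(BaseHTTPRequestHandler):
    """Accept Alertmanager webhook POSTs and dump the payload."""
    def do_POST(self):
        if self.path != '/api/prometheus_receiver':
            self.send_error(404)
            return
        length = int(self.headers.get('Content-Length', 0))
        print(self.rfile.read(length).decode('utf-8', 'replace'))
        self.send_response(200)
        self.end_headers()

if __name__ == '__main__':
    # Same port as in the failing URLs above.
    HTTPServer(('0.0.0.0', 8443), Receiver).serve_forever()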
Mar  1 05:22:09 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:22:08 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Mar  1 05:22:09 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:22:08 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Mar  1 05:22:09 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:22:08 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Mar  1 05:22:09 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:22:09 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Mar  1 05:22:09 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1193: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Mar  1 05:22:09 np0005634532 nova_compute[257049]: 2026-03-01 10:22:09.901 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:22:10 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:22:10 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:22:10 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:22:10.464 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:22:10 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:22:10 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:22:10 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:22:10.482 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:22:11 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1194: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Mar  1 05:22:12 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:22:12 np0005634532 nova_compute[257049]: 2026-03-01 10:22:12.271 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:22:12 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:22:12 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:22:12 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:22:12.467 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:22:12 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:22:12 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:22:12 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:22:12.484 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:22:13 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1195: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Mar  1 05:22:14 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:22:13 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Mar  1 05:22:14 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:22:13 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Mar  1 05:22:14 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:22:13 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Mar  1 05:22:14 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:22:14 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Mar  1 05:22:14 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:22:14 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:22:14 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:22:14.470 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:22:14 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:22:14 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:22:14 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:22:14.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:22:14 np0005634532 nova_compute[257049]: 2026-03-01 10:22:14.904 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:22:15 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1196: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Mar  1 05:22:16 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:22:16 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000024s ======
Mar  1 05:22:16 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:22:16.473 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Mar  1 05:22:16 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:22:16 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:22:16 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:22:16.488 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:22:17 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: ::ffff:192.168.122.100 - - [01/Mar/2026:10:22:17] "GET /metrics HTTP/1.1" 200 48454 "" "Prometheus/2.51.0"
Mar  1 05:22:17 np0005634532 ceph-mgr[76134]: [prometheus INFO cherrypy.access.140604152982112] ::ffff:192.168.122.100 - - [01/Mar/2026:10:22:17] "GET /metrics HTTP/1.1" 200 48454 "" "Prometheus/2.51.0"
Mar  1 05:22:17 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:22:17 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1197: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Mar  1 05:22:17 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:22:17.320Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Mar  1 05:22:17 np0005634532 nova_compute[257049]: 2026-03-01 10:22:17.334 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:22:17 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Mar  1 05:22:17 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Mar  1 05:22:17 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Mar  1 05:22:17 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Mar  1 05:22:17 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Mar  1 05:22:17 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Mar  1 05:22:17 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Mar  1 05:22:17 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Mar  1 05:22:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] _maybe_adjust
Mar  1 05:22:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:22:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Mar  1 05:22:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:22:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Mar  1 05:22:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:22:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Mar  1 05:22:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:22:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Mar  1 05:22:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:22:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Mar  1 05:22:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:22:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Mar  1 05:22:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:22:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Mar  1 05:22:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:22:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Mar  1 05:22:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:22:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Mar  1 05:22:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:22:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Mar  1 05:22:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:22:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Mar  1 05:22:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:22:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
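In the autoscaler output above, every logged pg target is exactly usage_ratio * bias * 300 (for '.mgr': 7.185749983720779e-06 * 300 = 0.0021557249951162337); the factor 300 is plausibly mon_target_pg_per_osd (default 100) times three OSDs, which fits the 60 GiB cluster seen in the pgmap lines. A sketch reproducing that arithmetic follows; the power-of-two rounding and the per-pool pg_num_min floor are assumptions chosen to match the "quantized to" values, not code taken from the module.

def pg_target(usage_ratio, bias, pgs_per_osd=100, n_osds=3, pg_num_min=32):
    """Reproduce the pg_autoscaler numbers logged above (assumed model)."""
    raw = usage_ratio * bias * pgs_per_osd * n_osds   # matches the logged 'pg target'
    pow2 = 1
    while pow2 * 2 <= max(raw, 1):                    # round down to a power of two
        pow2 *= 2
    return max(pow2, pg_num_min)                      # floor at the pool's minimum

print(pg_target(7.185749983720779e-06, 1.0, pg_num_min=1))   # -> 1  ('.mgr')
print(pg_target(5.087256625643029e-07, 4.0, pg_num_min=16))  # -> 16 ('cephfs.cephfs.meta')
print(pg_target(0.000665858301588852, 1.0))                  # -> 32 ('images')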
Mar  1 05:22:18 np0005634532 ceph-mgr[76134]: [balancer INFO root] Optimize plan auto_2026-03-01_10:22:18
Mar  1 05:22:18 np0005634532 ceph-mgr[76134]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Mar  1 05:22:18 np0005634532 ceph-mgr[76134]: [balancer INFO root] do_upmap
Mar  1 05:22:18 np0005634532 ceph-mgr[76134]: [balancer INFO root] pools ['volumes', '.nfs', '.rgw.root', 'default.rgw.meta', 'cephfs.cephfs.meta', 'default.rgw.control', 'default.rgw.log', 'cephfs.cephfs.data', 'vms', 'backups', 'images', '.mgr']
Mar  1 05:22:18 np0005634532 ceph-mgr[76134]: [balancer INFO root] prepared 0/10 upmap changes
Mar  1 05:22:18 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:22:18 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 05:22:18 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:22:18.476 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 05:22:18 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:22:18 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:22:18 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:22:18.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:22:18 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:22:18.873Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Mar  1 05:22:19 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:22:18 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Mar  1 05:22:19 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:22:18 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Mar  1 05:22:19 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:22:18 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Mar  1 05:22:19 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:22:19 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Mar  1 05:22:19 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1198: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Mar  1 05:22:19 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Mar  1 05:22:19 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: vms, start_after=
Mar  1 05:22:19 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Mar  1 05:22:19 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: vms, start_after=
Mar  1 05:22:19 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: volumes, start_after=
Mar  1 05:22:19 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: volumes, start_after=
Mar  1 05:22:19 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: backups, start_after=
Mar  1 05:22:19 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: backups, start_after=
Mar  1 05:22:19 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: images, start_after=
Mar  1 05:22:19 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: images, start_after=
Mar  1 05:22:19 np0005634532 nova_compute[257049]: 2026-03-01 10:22:19.944 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:22:20 np0005634532 ceph-osd[84309]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Mar  1 05:22:20 np0005634532 ceph-osd[84309]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 2400.1 total, 600.0 interval
Cumulative writes: 12K writes, 44K keys, 12K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s
Cumulative WAL: 12K writes, 3383 syncs, 3.59 writes per sync, written: 0.03 GB, 0.01 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 1197 writes, 3187 keys, 1197 commit groups, 1.0 writes per commit group, ingest: 2.79 MB, 0.00 MB/s
Interval WAL: 1197 writes, 546 syncs, 2.19 writes per sync, written: 0.00 GB, 0.00 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent
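As a quick arithmetic check on the dump above, the rounded "12K writes" and the "3.59 writes per sync" in the Cumulative WAL line agree once un-rounded:

# 3.59 writes/sync over 3383 syncs implies ~12145 writes,
# which rocksdb displays as the rounded "12K writes".
print(round(3.59 * 3383))  # -> 12145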
Mar  1 05:22:20 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:22:20 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:22:20 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:22:20.481 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:22:20 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:22:20 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 05:22:20 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:22:20.490 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 05:22:21 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1199: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Mar  1 05:22:22 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Mar  1 05:22:22 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Mar  1 05:22:22 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Mar  1 05:22:22 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Mar  1 05:22:22 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1200: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 573 B/s rd, 0 op/s
Mar  1 05:22:22 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Mar  1 05:22:22 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 05:22:22 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Mar  1 05:22:22 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 05:22:22 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Mar  1 05:22:22 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Mar  1 05:22:22 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Mar  1 05:22:22 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Mar  1 05:22:22 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Mar  1 05:22:22 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Mar  1 05:22:22 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:22:22 np0005634532 nova_compute[257049]: 2026-03-01 10:22:22.337 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:22:22 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:22:22 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:22:22 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:22:22.483 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:22:22 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:22:22 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 05:22:22 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:22:22.491 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 05:22:22 np0005634532 podman[290483]: 2026-03-01 10:22:22.547169219 +0000 UTC m=+0.076908956 container create f99dcaa4e9b453067fa43b19f8d37dad997187665a4951659db59b4284b29aea (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_black, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0)
Mar  1 05:22:22 np0005634532 podman[290483]: 2026-03-01 10:22:22.490245107 +0000 UTC m=+0.019984874 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Mar  1 05:22:22 np0005634532 systemd[1]: Started libpod-conmon-f99dcaa4e9b453067fa43b19f8d37dad997187665a4951659db59b4284b29aea.scope.
Mar  1 05:22:22 np0005634532 systemd[1]: Started libcrun container.
Mar  1 05:22:22 np0005634532 podman[290483]: 2026-03-01 10:22:22.632613674 +0000 UTC m=+0.162353461 container init f99dcaa4e9b453067fa43b19f8d37dad997187665a4951659db59b4284b29aea (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_black, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Mar  1 05:22:22 np0005634532 podman[290483]: 2026-03-01 10:22:22.637742851 +0000 UTC m=+0.167482588 container start f99dcaa4e9b453067fa43b19f8d37dad997187665a4951659db59b4284b29aea (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_black, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_REF=squid)
Mar  1 05:22:22 np0005634532 podman[290483]: 2026-03-01 10:22:22.640655992 +0000 UTC m=+0.170395739 container attach f99dcaa4e9b453067fa43b19f8d37dad997187665a4951659db59b4284b29aea (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_black, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Mar  1 05:22:22 np0005634532 stoic_black[290499]: 167 167
Mar  1 05:22:22 np0005634532 systemd[1]: libpod-f99dcaa4e9b453067fa43b19f8d37dad997187665a4951659db59b4284b29aea.scope: Deactivated successfully.
Mar  1 05:22:22 np0005634532 podman[290483]: 2026-03-01 10:22:22.642751954 +0000 UTC m=+0.172491701 container died f99dcaa4e9b453067fa43b19f8d37dad997187665a4951659db59b4284b29aea (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_black, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Mar  1 05:22:22 np0005634532 systemd[1]: var-lib-containers-storage-overlay-3b6b13d4ac6feb2688b3d5018b530d2147ece35c48915a99c5cde201fbfede51-merged.mount: Deactivated successfully.
Mar  1 05:22:22 np0005634532 podman[290483]: 2026-03-01 10:22:22.681463727 +0000 UTC m=+0.211203464 container remove f99dcaa4e9b453067fa43b19f8d37dad997187665a4951659db59b4284b29aea (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=stoic_black, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1)
Mar  1 05:22:22 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Mar  1 05:22:22 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 05:22:22 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 05:22:22 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Mar  1 05:22:22 np0005634532 systemd[1]: libpod-conmon-f99dcaa4e9b453067fa43b19f8d37dad997187665a4951659db59b4284b29aea.scope: Deactivated successfully.
Mar  1 05:22:22 np0005634532 podman[290521]: 2026-03-01 10:22:22.802985911 +0000 UTC m=+0.036188152 container create 250e3ee7f5d567054af10b4f15b51f992b676390bd30ab635895c36e64d6d94c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_hodgkin, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Mar  1 05:22:22 np0005634532 systemd[1]: Started libpod-conmon-250e3ee7f5d567054af10b4f15b51f992b676390bd30ab635895c36e64d6d94c.scope.
Mar  1 05:22:22 np0005634532 systemd[1]: Started libcrun container.
Mar  1 05:22:22 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab2cd2b97dcfe05520e69c4cf1653c4a38936b2ce308b71f423d76cbb3dca3b6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Mar  1 05:22:22 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab2cd2b97dcfe05520e69c4cf1653c4a38936b2ce308b71f423d76cbb3dca3b6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Mar  1 05:22:22 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab2cd2b97dcfe05520e69c4cf1653c4a38936b2ce308b71f423d76cbb3dca3b6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Mar  1 05:22:22 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab2cd2b97dcfe05520e69c4cf1653c4a38936b2ce308b71f423d76cbb3dca3b6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Mar  1 05:22:22 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab2cd2b97dcfe05520e69c4cf1653c4a38936b2ce308b71f423d76cbb3dca3b6/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Mar  1 05:22:22 np0005634532 podman[290521]: 2026-03-01 10:22:22.876770369 +0000 UTC m=+0.109972600 container init 250e3ee7f5d567054af10b4f15b51f992b676390bd30ab635895c36e64d6d94c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_hodgkin, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Mar  1 05:22:22 np0005634532 podman[290521]: 2026-03-01 10:22:22.787434838 +0000 UTC m=+0.020637119 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Mar  1 05:22:22 np0005634532 podman[290521]: 2026-03-01 10:22:22.885363161 +0000 UTC m=+0.118565442 container start 250e3ee7f5d567054af10b4f15b51f992b676390bd30ab635895c36e64d6d94c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_hodgkin, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Mar  1 05:22:22 np0005634532 podman[290521]: 2026-03-01 10:22:22.88898573 +0000 UTC m=+0.122187981 container attach 250e3ee7f5d567054af10b4f15b51f992b676390bd30ab635895c36e64d6d94c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_hodgkin, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Mar  1 05:22:23 np0005634532 zealous_hodgkin[290537]: --> passed data devices: 0 physical, 1 LVM
Mar  1 05:22:23 np0005634532 zealous_hodgkin[290537]: --> All data devices are unavailable
Mar  1 05:22:23 np0005634532 systemd[1]: libpod-250e3ee7f5d567054af10b4f15b51f992b676390bd30ab635895c36e64d6d94c.scope: Deactivated successfully.
Mar  1 05:22:23 np0005634532 conmon[290537]: conmon 250e3ee7f5d567054af1 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-250e3ee7f5d567054af10b4f15b51f992b676390bd30ab635895c36e64d6d94c.scope/container/memory.events
Mar  1 05:22:23 np0005634532 podman[290552]: 2026-03-01 10:22:23.194311211 +0000 UTC m=+0.020955307 container died 250e3ee7f5d567054af10b4f15b51f992b676390bd30ab635895c36e64d6d94c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_hodgkin, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Mar  1 05:22:23 np0005634532 systemd[1]: var-lib-containers-storage-overlay-ab2cd2b97dcfe05520e69c4cf1653c4a38936b2ce308b71f423d76cbb3dca3b6-merged.mount: Deactivated successfully.
Mar  1 05:22:23 np0005634532 podman[290552]: 2026-03-01 10:22:23.233225159 +0000 UTC m=+0.059869245 container remove 250e3ee7f5d567054af10b4f15b51f992b676390bd30ab635895c36e64d6d94c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zealous_hodgkin, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Mar  1 05:22:23 np0005634532 systemd[1]: libpod-conmon-250e3ee7f5d567054af10b4f15b51f992b676390bd30ab635895c36e64d6d94c.scope: Deactivated successfully.
Mar  1 05:22:23 np0005634532 podman[290658]: 2026-03-01 10:22:23.75486701 +0000 UTC m=+0.054560645 container create fb51b9a73c0d0e49e97ebe5307b099d27801ba54772e7c6e6d8e2b103bc8d1aa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_gould, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Mar  1 05:22:23 np0005634532 systemd[1]: Started libpod-conmon-fb51b9a73c0d0e49e97ebe5307b099d27801ba54772e7c6e6d8e2b103bc8d1aa.scope.
Mar  1 05:22:23 np0005634532 systemd[1]: Started libcrun container.
Mar  1 05:22:23 np0005634532 podman[290658]: 2026-03-01 10:22:23.804109583 +0000 UTC m=+0.103803228 container init fb51b9a73c0d0e49e97ebe5307b099d27801ba54772e7c6e6d8e2b103bc8d1aa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_gould, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Mar  1 05:22:23 np0005634532 podman[290658]: 2026-03-01 10:22:23.808768268 +0000 UTC m=+0.108461913 container start fb51b9a73c0d0e49e97ebe5307b099d27801ba54772e7c6e6d8e2b103bc8d1aa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_gould, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Mar  1 05:22:23 np0005634532 angry_gould[290674]: 167 167
Mar  1 05:22:23 np0005634532 systemd[1]: libpod-fb51b9a73c0d0e49e97ebe5307b099d27801ba54772e7c6e6d8e2b103bc8d1aa.scope: Deactivated successfully.
Mar  1 05:22:23 np0005634532 podman[290658]: 2026-03-01 10:22:23.812863929 +0000 UTC m=+0.112557574 container attach fb51b9a73c0d0e49e97ebe5307b099d27801ba54772e7c6e6d8e2b103bc8d1aa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_gould, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Mar  1 05:22:23 np0005634532 podman[290658]: 2026-03-01 10:22:23.813864834 +0000 UTC m=+0.113558489 container died fb51b9a73c0d0e49e97ebe5307b099d27801ba54772e7c6e6d8e2b103bc8d1aa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_gould, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, OSD_FLAVOR=default)
Mar  1 05:22:23 np0005634532 podman[290658]: 2026-03-01 10:22:23.724271407 +0000 UTC m=+0.023965122 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Mar  1 05:22:23 np0005634532 systemd[1]: var-lib-containers-storage-overlay-7ce9be007e43ac37c47710a5496e080fc9cab1fe8e82b932797e64292ddffb12-merged.mount: Deactivated successfully.
Mar  1 05:22:23 np0005634532 podman[290658]: 2026-03-01 10:22:23.851192723 +0000 UTC m=+0.150886368 container remove fb51b9a73c0d0e49e97ebe5307b099d27801ba54772e7c6e6d8e2b103bc8d1aa (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=angry_gould, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image)
Mar  1 05:22:23 np0005634532 systemd[1]: libpod-conmon-fb51b9a73c0d0e49e97ebe5307b099d27801ba54772e7c6e6d8e2b103bc8d1aa.scope: Deactivated successfully.
Mar  1 05:22:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:22:23.897 167541 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Mar  1 05:22:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:22:23.897 167541 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Mar  1 05:22:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:22:23.898 167541 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Mar  1 05:22:23 np0005634532 podman[290701]: 2026-03-01 10:22:23.984482307 +0000 UTC m=+0.042176930 container create 53ac545127a50dbc09420f3bae3d4acbe485ec9b68fff880872325ea3482a059 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_archimedes, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Mar  1 05:22:24 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:22:23 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Mar  1 05:22:24 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:22:23 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Mar  1 05:22:24 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:22:23 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Mar  1 05:22:24 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:22:24 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Mar  1 05:22:24 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1201: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 860 B/s rd, 0 op/s
Mar  1 05:22:24 np0005634532 systemd[1]: Started libpod-conmon-53ac545127a50dbc09420f3bae3d4acbe485ec9b68fff880872325ea3482a059.scope.
Mar  1 05:22:24 np0005634532 systemd[1]: Started libcrun container.
Mar  1 05:22:24 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0830365b0221a177d7712f4e3f474d50fec85a1df658324c31e1cd7d3f0fe91/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Mar  1 05:22:24 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0830365b0221a177d7712f4e3f474d50fec85a1df658324c31e1cd7d3f0fe91/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Mar  1 05:22:24 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0830365b0221a177d7712f4e3f474d50fec85a1df658324c31e1cd7d3f0fe91/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Mar  1 05:22:24 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0830365b0221a177d7712f4e3f474d50fec85a1df658324c31e1cd7d3f0fe91/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Mar  1 05:22:24 np0005634532 podman[290701]: 2026-03-01 10:22:23.96836354 +0000 UTC m=+0.026058183 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Mar  1 05:22:24 np0005634532 podman[290701]: 2026-03-01 10:22:24.066087597 +0000 UTC m=+0.123782240 container init 53ac545127a50dbc09420f3bae3d4acbe485ec9b68fff880872325ea3482a059 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_archimedes, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Mar  1 05:22:24 np0005634532 podman[290701]: 2026-03-01 10:22:24.07350452 +0000 UTC m=+0.131199143 container start 53ac545127a50dbc09420f3bae3d4acbe485ec9b68fff880872325ea3482a059 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_archimedes, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Mar  1 05:22:24 np0005634532 podman[290701]: 2026-03-01 10:22:24.083140118 +0000 UTC m=+0.140834761 container attach 53ac545127a50dbc09420f3bae3d4acbe485ec9b68fff880872325ea3482a059 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_archimedes, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Mar  1 05:22:24 np0005634532 focused_archimedes[290717]: {
Mar  1 05:22:24 np0005634532 focused_archimedes[290717]:    "0": [
Mar  1 05:22:24 np0005634532 focused_archimedes[290717]:        {
Mar  1 05:22:24 np0005634532 focused_archimedes[290717]:            "devices": [
Mar  1 05:22:24 np0005634532 focused_archimedes[290717]:                "/dev/loop3"
Mar  1 05:22:24 np0005634532 focused_archimedes[290717]:            ],
Mar  1 05:22:24 np0005634532 focused_archimedes[290717]:            "lv_name": "ceph_lv0",
Mar  1 05:22:24 np0005634532 focused_archimedes[290717]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Mar  1 05:22:24 np0005634532 focused_archimedes[290717]:            "lv_size": "21470642176",
Mar  1 05:22:24 np0005634532 focused_archimedes[290717]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=0RN1y3-WDwf-vmRn-3Uec-9cdU-WGcX-8Z6LEG,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=437b1e74-f995-5d64-af1d-257ce01d77ab,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=e5da778e-73b7-4ea1-8a91-750fe3f6aa68,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Mar  1 05:22:24 np0005634532 focused_archimedes[290717]:            "lv_uuid": "0RN1y3-WDwf-vmRn-3Uec-9cdU-WGcX-8Z6LEG",
Mar  1 05:22:24 np0005634532 focused_archimedes[290717]:            "name": "ceph_lv0",
Mar  1 05:22:24 np0005634532 focused_archimedes[290717]:            "path": "/dev/ceph_vg0/ceph_lv0",
Mar  1 05:22:24 np0005634532 focused_archimedes[290717]:            "tags": {
Mar  1 05:22:24 np0005634532 focused_archimedes[290717]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Mar  1 05:22:24 np0005634532 focused_archimedes[290717]:                "ceph.block_uuid": "0RN1y3-WDwf-vmRn-3Uec-9cdU-WGcX-8Z6LEG",
Mar  1 05:22:24 np0005634532 focused_archimedes[290717]:                "ceph.cephx_lockbox_secret": "",
Mar  1 05:22:24 np0005634532 focused_archimedes[290717]:                "ceph.cluster_fsid": "437b1e74-f995-5d64-af1d-257ce01d77ab",
Mar  1 05:22:24 np0005634532 focused_archimedes[290717]:                "ceph.cluster_name": "ceph",
Mar  1 05:22:24 np0005634532 focused_archimedes[290717]:                "ceph.crush_device_class": "",
Mar  1 05:22:24 np0005634532 focused_archimedes[290717]:                "ceph.encrypted": "0",
Mar  1 05:22:24 np0005634532 focused_archimedes[290717]:                "ceph.osd_fsid": "e5da778e-73b7-4ea1-8a91-750fe3f6aa68",
Mar  1 05:22:24 np0005634532 focused_archimedes[290717]:                "ceph.osd_id": "0",
Mar  1 05:22:24 np0005634532 focused_archimedes[290717]:                "ceph.osdspec_affinity": "default_drive_group",
Mar  1 05:22:24 np0005634532 focused_archimedes[290717]:                "ceph.type": "block",
Mar  1 05:22:24 np0005634532 focused_archimedes[290717]:                "ceph.vdo": "0",
Mar  1 05:22:24 np0005634532 focused_archimedes[290717]:                "ceph.with_tpm": "0"
Mar  1 05:22:24 np0005634532 focused_archimedes[290717]:            },
Mar  1 05:22:24 np0005634532 focused_archimedes[290717]:            "type": "block",
Mar  1 05:22:24 np0005634532 focused_archimedes[290717]:            "vg_name": "ceph_vg0"
Mar  1 05:22:24 np0005634532 focused_archimedes[290717]:        }
Mar  1 05:22:24 np0005634532 focused_archimedes[290717]:    ]
Mar  1 05:22:24 np0005634532 focused_archimedes[290717]: }
Mar  1 05:22:24 np0005634532 systemd[1]: libpod-53ac545127a50dbc09420f3bae3d4acbe485ec9b68fff880872325ea3482a059.scope: Deactivated successfully.
Mar  1 05:22:24 np0005634532 podman[290701]: 2026-03-01 10:22:24.338512009 +0000 UTC m=+0.396206642 container died 53ac545127a50dbc09420f3bae3d4acbe485ec9b68fff880872325ea3482a059 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_archimedes, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid)
Mar  1 05:22:24 np0005634532 systemd[1]: var-lib-containers-storage-overlay-d0830365b0221a177d7712f4e3f474d50fec85a1df658324c31e1cd7d3f0fe91-merged.mount: Deactivated successfully.
Mar  1 05:22:24 np0005634532 podman[290701]: 2026-03-01 10:22:24.383035726 +0000 UTC m=+0.440730349 container remove 53ac545127a50dbc09420f3bae3d4acbe485ec9b68fff880872325ea3482a059 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=focused_archimedes, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Mar  1 05:22:24 np0005634532 systemd[1]: libpod-conmon-53ac545127a50dbc09420f3bae3d4acbe485ec9b68fff880872325ea3482a059.scope: Deactivated successfully.
Mar  1 05:22:24 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:22:24 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:22:24 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:22:24.486 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:22:24 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:22:24 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:22:24 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:22:24.493 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:22:24 np0005634532 podman[290829]: 2026-03-01 10:22:24.827981427 +0000 UTC m=+0.037014543 container create 989b79d85892a4cb0c4859ce7dd41f3ed725fe1f5461ce270d5ea6dc75715487 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_murdock, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Mar  1 05:22:24 np0005634532 systemd[1]: Started libpod-conmon-989b79d85892a4cb0c4859ce7dd41f3ed725fe1f5461ce270d5ea6dc75715487.scope.
Mar  1 05:22:24 np0005634532 systemd[1]: Started libcrun container.
Mar  1 05:22:24 np0005634532 podman[290829]: 2026-03-01 10:22:24.812706651 +0000 UTC m=+0.021739827 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Mar  1 05:22:24 np0005634532 podman[290829]: 2026-03-01 10:22:24.911398682 +0000 UTC m=+0.120431888 container init 989b79d85892a4cb0c4859ce7dd41f3ed725fe1f5461ce270d5ea6dc75715487 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_murdock, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1)
Mar  1 05:22:24 np0005634532 podman[290829]: 2026-03-01 10:22:24.917071072 +0000 UTC m=+0.126104188 container start 989b79d85892a4cb0c4859ce7dd41f3ed725fe1f5461ce270d5ea6dc75715487 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_murdock, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Mar  1 05:22:24 np0005634532 podman[290829]: 2026-03-01 10:22:24.920447375 +0000 UTC m=+0.129480491 container attach 989b79d85892a4cb0c4859ce7dd41f3ed725fe1f5461ce270d5ea6dc75715487 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_murdock, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325)
Mar  1 05:22:24 np0005634532 adoring_murdock[290846]: 167 167
Mar  1 05:22:24 np0005634532 systemd[1]: libpod-989b79d85892a4cb0c4859ce7dd41f3ed725fe1f5461ce270d5ea6dc75715487.scope: Deactivated successfully.
Mar  1 05:22:24 np0005634532 podman[290829]: 2026-03-01 10:22:24.921706276 +0000 UTC m=+0.130739432 container died 989b79d85892a4cb0c4859ce7dd41f3ed725fe1f5461ce270d5ea6dc75715487 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_murdock, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325)
Mar  1 05:22:24 np0005634532 nova_compute[257049]: 2026-03-01 10:22:24.948 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:22:24 np0005634532 systemd[1]: var-lib-containers-storage-overlay-13f5761536ca5a27ec08ba7fcf899842d8e1ba9f1cea87ba42afd4f8bf301d65-merged.mount: Deactivated successfully.
Mar  1 05:22:24 np0005634532 podman[290829]: 2026-03-01 10:22:24.967273259 +0000 UTC m=+0.176306385 container remove 989b79d85892a4cb0c4859ce7dd41f3ed725fe1f5461ce270d5ea6dc75715487 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=adoring_murdock, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Mar  1 05:22:24 np0005634532 systemd[1]: libpod-conmon-989b79d85892a4cb0c4859ce7dd41f3ed725fe1f5461ce270d5ea6dc75715487.scope: Deactivated successfully.
Mar  1 05:22:25 np0005634532 podman[290871]: 2026-03-01 10:22:25.111208645 +0000 UTC m=+0.046552868 container create 3123a01abe3a68df05d3ff88a6be4d9fd74f83ce36a6ae5a6e22628d1626fa65 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_pare, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.vendor=CentOS, ceph=True)
Mar  1 05:22:25 np0005634532 systemd[1]: Started libpod-conmon-3123a01abe3a68df05d3ff88a6be4d9fd74f83ce36a6ae5a6e22628d1626fa65.scope.
Mar  1 05:22:25 np0005634532 systemd[1]: Started libcrun container.
Mar  1 05:22:25 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b7ae534d32ccb17235e332ad2104015d806ee8f4f844f2c08ef2b28cbdc0dbf5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Mar  1 05:22:25 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b7ae534d32ccb17235e332ad2104015d806ee8f4f844f2c08ef2b28cbdc0dbf5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Mar  1 05:22:25 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b7ae534d32ccb17235e332ad2104015d806ee8f4f844f2c08ef2b28cbdc0dbf5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Mar  1 05:22:25 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b7ae534d32ccb17235e332ad2104015d806ee8f4f844f2c08ef2b28cbdc0dbf5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Mar  1 05:22:25 np0005634532 podman[290871]: 2026-03-01 10:22:25.088117446 +0000 UTC m=+0.023461479 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Mar  1 05:22:25 np0005634532 podman[290871]: 2026-03-01 10:22:25.213201107 +0000 UTC m=+0.148545100 container init 3123a01abe3a68df05d3ff88a6be4d9fd74f83ce36a6ae5a6e22628d1626fa65 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_pare, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Mar  1 05:22:25 np0005634532 podman[290871]: 2026-03-01 10:22:25.218472237 +0000 UTC m=+0.153816230 container start 3123a01abe3a68df05d3ff88a6be4d9fd74f83ce36a6ae5a6e22628d1626fa65 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_pare, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Mar  1 05:22:25 np0005634532 podman[290871]: 2026-03-01 10:22:25.228696999 +0000 UTC m=+0.164041032 container attach 3123a01abe3a68df05d3ff88a6be4d9fd74f83ce36a6ae5a6e22628d1626fa65 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_pare, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Mar  1 05:22:25 np0005634532 lvm[290962]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Mar  1 05:22:25 np0005634532 lvm[290962]: VG ceph_vg0 finished
Mar  1 05:22:25 np0005634532 nervous_pare[290887]: {}
Mar  1 05:22:25 np0005634532 systemd[1]: libpod-3123a01abe3a68df05d3ff88a6be4d9fd74f83ce36a6ae5a6e22628d1626fa65.scope: Deactivated successfully.
Mar  1 05:22:25 np0005634532 podman[290871]: 2026-03-01 10:22:25.869387433 +0000 UTC m=+0.804731466 container died 3123a01abe3a68df05d3ff88a6be4d9fd74f83ce36a6ae5a6e22628d1626fa65 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_pare, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Mar  1 05:22:25 np0005634532 systemd[1]: var-lib-containers-storage-overlay-b7ae534d32ccb17235e332ad2104015d806ee8f4f844f2c08ef2b28cbdc0dbf5-merged.mount: Deactivated successfully.
Mar  1 05:22:25 np0005634532 podman[290871]: 2026-03-01 10:22:25.906269192 +0000 UTC m=+0.841613185 container remove 3123a01abe3a68df05d3ff88a6be4d9fd74f83ce36a6ae5a6e22628d1626fa65 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nervous_pare, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Mar  1 05:22:25 np0005634532 systemd[1]: libpod-conmon-3123a01abe3a68df05d3ff88a6be4d9fd74f83ce36a6ae5a6e22628d1626fa65.scope: Deactivated successfully.
Mar  1 05:22:25 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Mar  1 05:22:25 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 05:22:25 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Mar  1 05:22:25 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 05:22:26 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1202: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 573 B/s rd, 0 op/s
Mar  1 05:22:26 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:22:26 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:22:26 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:22:26.489 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:22:26 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:22:26 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 05:22:26 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:22:26.494 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 05:22:26 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 05:22:26 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 05:22:27 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: ::ffff:192.168.122.100 - - [01/Mar/2026:10:22:27] "GET /metrics HTTP/1.1" 200 48456 "" "Prometheus/2.51.0"
Mar  1 05:22:27 np0005634532 ceph-mgr[76134]: [prometheus INFO cherrypy.access.140604152982112] ::ffff:192.168.122.100 - - [01/Mar/2026:10:22:27] "GET /metrics HTTP/1.1" 200 48456 "" "Prometheus/2.51.0"
Mar  1 05:22:27 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:22:27 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:22:27.321Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Mar  1 05:22:27 np0005634532 nova_compute[257049]: 2026-03-01 10:22:27.338 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:22:28 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1203: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 573 B/s rd, 0 op/s
Mar  1 05:22:28 np0005634532 podman[291007]: 2026-03-01 10:22:28.422668764 +0000 UTC m=+0.101719147 container health_status eba5e7c5bf1cb0694498c3b5d0f9b901dec36f2482ec40aef98fed1e15d9f18d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:bf1f44b6bfa655654f556ae3adca2582892e72550d916a5b0a8dbacb0e210bbe, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '08b9467e1b7e95537191fb2fa6825d176e65d7d6be048e9a0925db064969bbde-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:bf1f44b6bfa655654f556ae3adca2582892e72550d916a5b0a8dbacb0e210bbe', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_controller)
Mar  1 05:22:28 np0005634532 rsyslogd[1019]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Mar  1 05:22:28 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:22:28 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:22:28 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:22:28.492 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:22:28 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:22:28 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:22:28 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:22:28.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:22:28 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:22:28.874Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Mar  1 05:22:28 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:22:28.874Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Mar  1 05:22:29 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:22:28 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Mar  1 05:22:29 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:22:28 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Mar  1 05:22:29 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:22:28 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Mar  1 05:22:29 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:22:29 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Mar  1 05:22:29 np0005634532 nova_compute[257049]: 2026-03-01 10:22:29.951 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:22:30 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1204: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 573 B/s rd, 0 op/s
Mar  1 05:22:30 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:22:30 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 05:22:30 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:22:30.495 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 05:22:30 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:22:30 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 05:22:30 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:22:30.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 05:22:32 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1205: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 573 B/s rd, 0 op/s
Mar  1 05:22:32 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:22:32 np0005634532 nova_compute[257049]: 2026-03-01 10:22:32.340 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:22:32 np0005634532 podman[291041]: 2026-03-01 10:22:32.392030771 +0000 UTC m=+0.081556721 container health_status 1df4dec302d8b40bce93714986cfc892519e8a31436399020d0034ae17ac64d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a5baae9e21dffd42c66f546b0b62f95c6af9f409d05e01e7483de42d5dd2a382, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '08b9467e1b7e95537191fb2fa6825d176e65d7d6be048e9a0925db064969bbde-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a5baae9e21dffd42c66f546b0b62f95c6af9f409d05e01e7483de42d5dd2a382', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.43.0)
Mar  1 05:22:32 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:22:32 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:22:32 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:22:32.497 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:22:32 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:22:32 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 05:22:32 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:22:32.498 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 05:22:32 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Mar  1 05:22:32 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Mar  1 05:22:34 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:22:33 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Mar  1 05:22:34 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:22:33 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Mar  1 05:22:34 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:22:33 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Mar  1 05:22:34 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:22:34 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Mar  1 05:22:34 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1206: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Mar  1 05:22:34 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:22:34 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:22:34 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:22:34.500 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:22:34 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:22:34 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:22:34 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:22:34.503 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:22:34 np0005634532 nova_compute[257049]: 2026-03-01 10:22:34.953 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:22:36 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1207: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Mar  1 05:22:36 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:22:36 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:22:36 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:22:36.502 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:22:36 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:22:36 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:22:36 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:22:36.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:22:37 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: ::ffff:192.168.122.100 - - [01/Mar/2026:10:22:37] "GET /metrics HTTP/1.1" 200 48456 "" "Prometheus/2.51.0"
Mar  1 05:22:37 np0005634532 ceph-mgr[76134]: [prometheus INFO cherrypy.access.140604152982112] ::ffff:192.168.122.100 - - [01/Mar/2026:10:22:37] "GET /metrics HTTP/1.1" 200 48456 "" "Prometheus/2.51.0"
Mar  1 05:22:37 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:22:37 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:22:37.322Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Mar  1 05:22:37 np0005634532 nova_compute[257049]: 2026-03-01 10:22:37.342 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:22:38 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1208: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Mar  1 05:22:38 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:22:38 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:22:38 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:22:38.504 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:22:38 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:22:38 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:22:38 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:22:38.509 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:22:38 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:22:38.875Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Mar  1 05:22:39 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:22:38 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Mar  1 05:22:39 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:22:38 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Mar  1 05:22:39 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:22:38 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Mar  1 05:22:39 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:22:39 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Mar  1 05:22:39 np0005634532 nova_compute[257049]: 2026-03-01 10:22:39.957 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:22:40 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1209: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Mar  1 05:22:40 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:22:40 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:22:40 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:22:40.506 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:22:40 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:22:40 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:22:40 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:22:40.512 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:22:42 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1210: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Mar  1 05:22:42 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:22:42 np0005634532 nova_compute[257049]: 2026-03-01 10:22:42.344 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:22:42 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:22:42 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000024s ======
Mar  1 05:22:42 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:22:42.508 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Mar  1 05:22:42 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:22:42 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:22:42 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:22:42.515 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:22:44 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:22:43 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Mar  1 05:22:44 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:22:43 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Mar  1 05:22:44 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:22:43 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Mar  1 05:22:44 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:22:44 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Mar  1 05:22:44 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1211: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Mar  1 05:22:44 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:22:44 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:22:44 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:22:44.511 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:22:44 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:22:44 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 05:22:44 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:22:44.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 05:22:44 np0005634532 nova_compute[257049]: 2026-03-01 10:22:44.960 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:22:46 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1212: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Mar  1 05:22:46 np0005634532 ceph-mon[75825]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #72. Immutable memtables: 0.
Mar  1 05:22:46 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-10:22:46.429675) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Mar  1 05:22:46 np0005634532 ceph-mon[75825]: rocksdb: [db/flush_job.cc:856] [default] [JOB 39] Flushing memtable with next log file: 72
Mar  1 05:22:46 np0005634532 ceph-mon[75825]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772360566429712, "job": 39, "event": "flush_started", "num_memtables": 1, "num_entries": 1212, "num_deletes": 509, "total_data_size": 1616728, "memory_usage": 1644704, "flush_reason": "Manual Compaction"}
Mar  1 05:22:46 np0005634532 ceph-mon[75825]: rocksdb: [db/flush_job.cc:885] [default] [JOB 39] Level-0 flush table #73: started
Mar  1 05:22:46 np0005634532 ceph-mon[75825]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772360566457848, "cf_name": "default", "job": 39, "event": "table_file_creation", "file_number": 73, "file_size": 1590576, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 33385, "largest_seqno": 34596, "table_properties": {"data_size": 1585177, "index_size": 2347, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1989, "raw_key_size": 14464, "raw_average_key_size": 18, "raw_value_size": 1572344, "raw_average_value_size": 2018, "num_data_blocks": 103, "num_entries": 779, "num_filter_entries": 779, "num_deletions": 509, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772360483, "oldest_key_time": 1772360483, "file_creation_time": 1772360566, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d85a3bc5-3dc5-432f-9fab-fa926ce32d3d", "db_session_id": "FJWJGIYC2V5ZEQGX709M", "orig_file_number": 73, "seqno_to_time_mapping": "N/A"}}
Mar  1 05:22:46 np0005634532 ceph-mon[75825]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 39] Flush lasted 28227 microseconds, and 2824 cpu microseconds.
Mar  1 05:22:46 np0005634532 ceph-mon[75825]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Mar  1 05:22:46 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-10:22:46.457898) [db/flush_job.cc:967] [default] [JOB 39] Level-0 flush table #73: 1590576 bytes OK
Mar  1 05:22:46 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-10:22:46.457917) [db/memtable_list.cc:519] [default] Level-0 commit table #73 started
Mar  1 05:22:46 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-10:22:46.464986) [db/memtable_list.cc:722] [default] Level-0 commit table #73: memtable #1 done
Mar  1 05:22:46 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-10:22:46.465046) EVENT_LOG_v1 {"time_micros": 1772360566465037, "job": 39, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Mar  1 05:22:46 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-10:22:46.465065) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Mar  1 05:22:46 np0005634532 ceph-mon[75825]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 39] Try to delete WAL files size 1610231, prev total WAL file size 1610231, number of live WAL files 2.
Mar  1 05:22:46 np0005634532 ceph-mon[75825]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000069.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Mar  1 05:22:46 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-10:22:46.465513) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032373631' seq:72057594037927935, type:22 .. '7061786F730033303133' seq:0, type:0; will stop at (end)
Mar  1 05:22:46 np0005634532 ceph-mon[75825]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 40] Compacting 1@0 + 1@6 files to L6, score -1.00
Mar  1 05:22:46 np0005634532 ceph-mon[75825]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 39 Base level 0, inputs: [73(1553KB)], [71(15MB)]
Mar  1 05:22:46 np0005634532 ceph-mon[75825]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772360566465563, "job": 40, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [73], "files_L6": [71], "score": -1, "input_data_size": 17387046, "oldest_snapshot_seqno": -1}
Mar  1 05:22:46 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:22:46 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:22:46 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:22:46.513 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:22:46 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:22:46 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:22:46 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:22:46.520 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:22:46 np0005634532 ceph-mon[75825]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 40] Generated table #74: 6501 keys, 15115238 bytes, temperature: kUnknown
Mar  1 05:22:46 np0005634532 ceph-mon[75825]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772360566561380, "cf_name": "default", "job": 40, "event": "table_file_creation", "file_number": 74, "file_size": 15115238, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 15071144, "index_size": 26768, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16261, "raw_key_size": 171076, "raw_average_key_size": 26, "raw_value_size": 14953165, "raw_average_value_size": 2300, "num_data_blocks": 1058, "num_entries": 6501, "num_filter_entries": 6501, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772358052, "oldest_key_time": 0, "file_creation_time": 1772360566, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d85a3bc5-3dc5-432f-9fab-fa926ce32d3d", "db_session_id": "FJWJGIYC2V5ZEQGX709M", "orig_file_number": 74, "seqno_to_time_mapping": "N/A"}}
Mar  1 05:22:46 np0005634532 ceph-mon[75825]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Mar  1 05:22:46 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-10:22:46.561608) [db/compaction/compaction_job.cc:1663] [default] [JOB 40] Compacted 1@0 + 1@6 files to L6 => 15115238 bytes
Mar  1 05:22:46 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-10:22:46.562707) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 181.3 rd, 157.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.5, 15.1 +0.0 blob) out(14.4 +0.0 blob), read-write-amplify(20.4) write-amplify(9.5) OK, records in: 7534, records dropped: 1033 output_compression: NoCompression
Mar  1 05:22:46 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-10:22:46.562723) EVENT_LOG_v1 {"time_micros": 1772360566562715, "job": 40, "event": "compaction_finished", "compaction_time_micros": 95883, "compaction_time_cpu_micros": 40220, "output_level": 6, "num_output_files": 1, "total_output_size": 15115238, "num_input_records": 7534, "num_output_records": 6501, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Mar  1 05:22:46 np0005634532 ceph-mon[75825]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000073.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Mar  1 05:22:46 np0005634532 ceph-mon[75825]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772360566562922, "job": 40, "event": "table_file_deletion", "file_number": 73}
Mar  1 05:22:46 np0005634532 ceph-mon[75825]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000071.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Mar  1 05:22:46 np0005634532 ceph-mon[75825]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772360566563853, "job": 40, "event": "table_file_deletion", "file_number": 71}
Mar  1 05:22:46 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-10:22:46.465450) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Mar  1 05:22:46 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-10:22:46.563947) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Mar  1 05:22:46 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-10:22:46.563953) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Mar  1 05:22:46 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-10:22:46.563955) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Mar  1 05:22:46 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-10:22:46.563957) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Mar  1 05:22:46 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-10:22:46.563959) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Mar  1 05:22:47 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:22:47 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: ::ffff:192.168.122.100 - - [01/Mar/2026:10:22:47] "GET /metrics HTTP/1.1" 200 48456 "" "Prometheus/2.51.0"
Mar  1 05:22:47 np0005634532 ceph-mgr[76134]: [prometheus INFO cherrypy.access.140604152982112] ::ffff:192.168.122.100 - - [01/Mar/2026:10:22:47] "GET /metrics HTTP/1.1" 200 48456 "" "Prometheus/2.51.0"
Mar  1 05:22:47 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:22:47.323Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Mar  1 05:22:47 np0005634532 nova_compute[257049]: 2026-03-01 10:22:47.346 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:22:47 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Mar  1 05:22:47 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Mar  1 05:22:47 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Mar  1 05:22:47 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Mar  1 05:22:47 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Mar  1 05:22:47 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Mar  1 05:22:47 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Mar  1 05:22:47 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Mar  1 05:22:48 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1213: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Mar  1 05:22:48 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:22:48 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:22:48 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:22:48.514 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:22:48 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:22:48 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:22:48 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:22:48.522 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:22:48 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:22:48.876Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Mar  1 05:22:48 np0005634532 nova_compute[257049]: 2026-03-01 10:22:48.976 257053 DEBUG oslo_service.periodic_task [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Mar  1 05:22:48 np0005634532 nova_compute[257049]: 2026-03-01 10:22:48.976 257053 DEBUG oslo_service.periodic_task [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Mar  1 05:22:49 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:22:48 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Mar  1 05:22:49 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:22:48 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Mar  1 05:22:49 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:22:48 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Mar  1 05:22:49 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:22:49 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Mar  1 05:22:50 np0005634532 nova_compute[257049]: 2026-03-01 10:22:50.016 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:22:50 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1214: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Mar  1 05:22:50 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:22:50 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:22:50 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:22:50.517 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:22:50 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:22:50 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:22:50 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:22:50.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:22:51 np0005634532 nova_compute[257049]: 2026-03-01 10:22:51.977 257053 DEBUG oslo_service.periodic_task [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Mar  1 05:22:52 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1215: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Mar  1 05:22:52 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:22:52 np0005634532 nova_compute[257049]: 2026-03-01 10:22:52.348 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:22:52 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:22:52 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:22:52 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:22:52.519 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:22:52 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:22:52 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000024s ======
Mar  1 05:22:52 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:22:52.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Mar  1 05:22:52 np0005634532 nova_compute[257049]: 2026-03-01 10:22:52.976 257053 DEBUG oslo_service.periodic_task [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Mar  1 05:22:52 np0005634532 nova_compute[257049]: 2026-03-01 10:22:52.977 257053 DEBUG nova.compute.manager [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Mar  1 05:22:52 np0005634532 nova_compute[257049]: 2026-03-01 10:22:52.977 257053 DEBUG oslo_service.periodic_task [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Mar  1 05:22:53 np0005634532 nova_compute[257049]: 2026-03-01 10:22:52.999 257053 DEBUG oslo_concurrency.lockutils [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Mar  1 05:22:53 np0005634532 nova_compute[257049]: 2026-03-01 10:22:52.999 257053 DEBUG oslo_concurrency.lockutils [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Mar  1 05:22:53 np0005634532 nova_compute[257049]: 2026-03-01 10:22:52.999 257053 DEBUG oslo_concurrency.lockutils [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Mar  1 05:22:53 np0005634532 nova_compute[257049]: 2026-03-01 10:22:53.000 257053 DEBUG nova.compute.resource_tracker [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Mar  1 05:22:53 np0005634532 nova_compute[257049]: 2026-03-01 10:22:53.000 257053 DEBUG oslo_concurrency.processutils [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Mar  1 05:22:53 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Mar  1 05:22:53 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1906891712' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Mar  1 05:22:53 np0005634532 nova_compute[257049]: 2026-03-01 10:22:53.431 257053 DEBUG oslo_concurrency.processutils [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.431s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Mar  1 05:22:53 np0005634532 nova_compute[257049]: 2026-03-01 10:22:53.561 257053 WARNING nova.virt.libvirt.driver [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Mar  1 05:22:53 np0005634532 nova_compute[257049]: 2026-03-01 10:22:53.562 257053 DEBUG nova.compute.resource_tracker [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4494MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Mar  1 05:22:53 np0005634532 nova_compute[257049]: 2026-03-01 10:22:53.563 257053 DEBUG oslo_concurrency.lockutils [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Mar  1 05:22:53 np0005634532 nova_compute[257049]: 2026-03-01 10:22:53.563 257053 DEBUG oslo_concurrency.lockutils [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Mar  1 05:22:53 np0005634532 nova_compute[257049]: 2026-03-01 10:22:53.626 257053 DEBUG nova.compute.resource_tracker [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Mar  1 05:22:53 np0005634532 nova_compute[257049]: 2026-03-01 10:22:53.626 257053 DEBUG nova.compute.resource_tracker [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Mar  1 05:22:53 np0005634532 nova_compute[257049]: 2026-03-01 10:22:53.647 257053 DEBUG oslo_concurrency.processutils [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Mar  1 05:22:54 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:22:53 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Mar  1 05:22:54 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:22:53 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Mar  1 05:22:54 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:22:53 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Mar  1 05:22:54 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:22:54 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Mar  1 05:22:54 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1216: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Mar  1 05:22:54 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Mar  1 05:22:54 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3493727062' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Mar  1 05:22:54 np0005634532 nova_compute[257049]: 2026-03-01 10:22:54.113 257053 DEBUG oslo_concurrency.processutils [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Mar  1 05:22:54 np0005634532 nova_compute[257049]: 2026-03-01 10:22:54.119 257053 DEBUG nova.compute.provider_tree [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Inventory has not changed in ProviderTree for provider: 018d246d-1e01-4168-9128-598c5501111b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Mar  1 05:22:54 np0005634532 nova_compute[257049]: 2026-03-01 10:22:54.137 257053 DEBUG nova.scheduler.client.report [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Inventory has not changed for provider 018d246d-1e01-4168-9128-598c5501111b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Mar  1 05:22:54 np0005634532 nova_compute[257049]: 2026-03-01 10:22:54.139 257053 DEBUG nova.compute.resource_tracker [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Mar  1 05:22:54 np0005634532 nova_compute[257049]: 2026-03-01 10:22:54.139 257053 DEBUG oslo_concurrency.lockutils [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.576s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Mar  1 05:22:54 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:22:54 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:22:54 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:22:54.521 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:22:54 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:22:54 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:22:54 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:22:54.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:22:55 np0005634532 nova_compute[257049]: 2026-03-01 10:22:55.020 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:22:56 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1217: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Mar  1 05:22:56 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:22:56 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:22:56 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:22:56.523 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:22:56 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:22:56 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000024s ======
Mar  1 05:22:56 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:22:56.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Mar  1 05:22:57 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: ::ffff:192.168.122.100 - - [01/Mar/2026:10:22:57] "GET /metrics HTTP/1.1" 200 48457 "" "Prometheus/2.51.0"
Mar  1 05:22:57 np0005634532 ceph-mgr[76134]: [prometheus INFO cherrypy.access.140604152982112] ::ffff:192.168.122.100 - - [01/Mar/2026:10:22:57] "GET /metrics HTTP/1.1" 200 48457 "" "Prometheus/2.51.0"
Mar  1 05:22:57 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:22:57 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:22:57.323Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Mar  1 05:22:57 np0005634532 nova_compute[257049]: 2026-03-01 10:22:57.352 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:22:58 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1218: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Mar  1 05:22:58 np0005634532 nova_compute[257049]: 2026-03-01 10:22:58.140 257053 DEBUG oslo_service.periodic_task [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Mar  1 05:22:58 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Mar  1 05:22:58 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2463348922' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Mar  1 05:22:58 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Mar  1 05:22:58 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2463348922' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Mar  1 05:22:58 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:22:58 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:22:58 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:22:58.525 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:22:58 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:22:58 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:22:58 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:22:58.535 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:22:58 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:22:58.877Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Mar  1 05:22:58 np0005634532 nova_compute[257049]: 2026-03-01 10:22:58.972 257053 DEBUG oslo_service.periodic_task [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Mar  1 05:22:58 np0005634532 nova_compute[257049]: 2026-03-01 10:22:58.975 257053 DEBUG oslo_service.periodic_task [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Mar  1 05:22:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:22:58 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Mar  1 05:22:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:22:58 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Mar  1 05:22:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:22:58 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Mar  1 05:22:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:22:59 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Mar  1 05:22:59 np0005634532 podman[291159]: 2026-03-01 10:22:59.427198658 +0000 UTC m=+0.115266261 container health_status eba5e7c5bf1cb0694498c3b5d0f9b901dec36f2482ec40aef98fed1e15d9f18d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:bf1f44b6bfa655654f556ae3adca2582892e72550d916a5b0a8dbacb0e210bbe, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '08b9467e1b7e95537191fb2fa6825d176e65d7d6be048e9a0925db064969bbde-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:bf1f44b6bfa655654f556ae3adca2582892e72550d916a5b0a8dbacb0e210bbe', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.43.0)
Mar  1 05:23:00 np0005634532 nova_compute[257049]: 2026-03-01 10:23:00.022 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:23:00 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1219: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Mar  1 05:23:00 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:23:00 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:23:00 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:23:00.527 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:23:00 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:23:00 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:23:00 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:23:00.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:23:00 np0005634532 nova_compute[257049]: 2026-03-01 10:23:00.976 257053 DEBUG oslo_service.periodic_task [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Mar  1 05:23:00 np0005634532 nova_compute[257049]: 2026-03-01 10:23:00.977 257053 DEBUG nova.compute.manager [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Mar  1 05:23:00 np0005634532 nova_compute[257049]: 2026-03-01 10:23:00.977 257053 DEBUG nova.compute.manager [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Mar  1 05:23:01 np0005634532 nova_compute[257049]: 2026-03-01 10:23:01.005 257053 DEBUG nova.compute.manager [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Mar  1 05:23:02 np0005634532 nova_compute[257049]: 2026-03-01 10:23:02.001 257053 DEBUG oslo_service.periodic_task [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Mar  1 05:23:02 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1220: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Mar  1 05:23:02 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:23:02 np0005634532 nova_compute[257049]: 2026-03-01 10:23:02.354 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:23:02 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:23:02 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:23:02 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:23:02.528 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:23:02 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:23:02 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:23:02 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:23:02.541 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:23:02 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Mar  1 05:23:02 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Mar  1 05:23:03 np0005634532 podman[291215]: 2026-03-01 10:23:03.390103838 +0000 UTC m=+0.068398206 container health_status 1df4dec302d8b40bce93714986cfc892519e8a31436399020d0034ae17ac64d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a5baae9e21dffd42c66f546b0b62f95c6af9f409d05e01e7483de42d5dd2a382, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '08b9467e1b7e95537191fb2fa6825d176e65d7d6be048e9a0925db064969bbde-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a5baae9e21dffd42c66f546b0b62f95c6af9f409d05e01e7483de42d5dd2a382', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb)
Mar  1 05:23:04 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:23:03 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Mar  1 05:23:04 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:23:03 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Mar  1 05:23:04 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:23:03 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Mar  1 05:23:04 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:23:04 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Mar  1 05:23:04 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1221: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Mar  1 05:23:04 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:23:04 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:23:04 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:23:04.530 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:23:04 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:23:04 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000024s ======
Mar  1 05:23:04 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:23:04.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Mar  1 05:23:05 np0005634532 nova_compute[257049]: 2026-03-01 10:23:05.024 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:23:06 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1222: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Mar  1 05:23:06 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:23:06 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:23:06 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:23:06.532 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:23:06 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:23:06 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:23:06 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:23:06.547 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:23:07 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: ::ffff:192.168.122.100 - - [01/Mar/2026:10:23:07] "GET /metrics HTTP/1.1" 200 48458 "" "Prometheus/2.51.0"
Mar  1 05:23:07 np0005634532 ceph-mgr[76134]: [prometheus INFO cherrypy.access.140604152982112] ::ffff:192.168.122.100 - - [01/Mar/2026:10:23:07] "GET /metrics HTTP/1.1" 200 48458 "" "Prometheus/2.51.0"
Mar  1 05:23:07 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:23:07 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:23:07.324Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Mar  1 05:23:07 np0005634532 nova_compute[257049]: 2026-03-01 10:23:07.355 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:23:08 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1223: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Mar  1 05:23:08 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:23:08 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:23:08 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:23:08.534 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:23:08 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:23:08 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:23:08 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:23:08.550 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:23:08 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:23:08.878Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Mar  1 05:23:09 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:23:08 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Mar  1 05:23:09 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:23:08 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Mar  1 05:23:09 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:23:08 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Mar  1 05:23:09 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:23:09 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Mar  1 05:23:10 np0005634532 nova_compute[257049]: 2026-03-01 10:23:10.028 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:23:10 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1224: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Mar  1 05:23:10 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:23:10 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:23:10 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:23:10.536 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:23:10 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:23:10 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:23:10 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:23:10.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:23:12 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1225: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Mar  1 05:23:12 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:23:12 np0005634532 nova_compute[257049]: 2026-03-01 10:23:12.357 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:23:12 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:23:12 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:23:12 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:23:12.538 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:23:12 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:23:12 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:23:12 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:23:12.556 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:23:14 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:23:13 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Mar  1 05:23:14 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:23:14 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Mar  1 05:23:14 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:23:14 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Mar  1 05:23:14 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:23:14 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Mar  1 05:23:14 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1226: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Mar  1 05:23:14 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:23:14 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:23:14 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:23:14.540 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:23:14 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:23:14 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:23:14 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:23:14.558 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:23:15 np0005634532 nova_compute[257049]: 2026-03-01 10:23:15.031 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:23:16 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1227: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Mar  1 05:23:16 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:23:16 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:23:16 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:23:16.542 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:23:16 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:23:16 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 05:23:16 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:23:16.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 05:23:17 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: ::ffff:192.168.122.100 - - [01/Mar/2026:10:23:17] "GET /metrics HTTP/1.1" 200 48458 "" "Prometheus/2.51.0"
Mar  1 05:23:17 np0005634532 ceph-mgr[76134]: [prometheus INFO cherrypy.access.140604152982112] ::ffff:192.168.122.100 - - [01/Mar/2026:10:23:17] "GET /metrics HTTP/1.1" 200 48458 "" "Prometheus/2.51.0"
Mar  1 05:23:17 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:23:17 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:23:17.325Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Mar  1 05:23:17 np0005634532 nova_compute[257049]: 2026-03-01 10:23:17.360 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:23:17 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Mar  1 05:23:17 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Mar  1 05:23:17 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Mar  1 05:23:17 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Mar  1 05:23:17 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Mar  1 05:23:17 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Mar  1 05:23:17 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Mar  1 05:23:17 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Mar  1 05:23:18 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1228: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Mar  1 05:23:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] _maybe_adjust
Mar  1 05:23:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:23:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Mar  1 05:23:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:23:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Mar  1 05:23:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:23:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Mar  1 05:23:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:23:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Mar  1 05:23:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:23:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Mar  1 05:23:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:23:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Mar  1 05:23:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:23:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Mar  1 05:23:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:23:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Mar  1 05:23:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:23:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Mar  1 05:23:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:23:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Mar  1 05:23:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:23:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Mar  1 05:23:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:23:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Mar  1 05:23:18 np0005634532 ceph-mgr[76134]: [balancer INFO root] Optimize plan auto_2026-03-01_10:23:18
Mar  1 05:23:18 np0005634532 ceph-mgr[76134]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Mar  1 05:23:18 np0005634532 ceph-mgr[76134]: [balancer INFO root] do_upmap
Mar  1 05:23:18 np0005634532 ceph-mgr[76134]: [balancer INFO root] pools ['.mgr', 'cephfs.cephfs.meta', 'default.rgw.log', 'backups', 'images', 'volumes', 'vms', '.nfs', 'default.rgw.control', '.rgw.root', 'cephfs.cephfs.data', 'default.rgw.meta']
Mar  1 05:23:18 np0005634532 ceph-mgr[76134]: [balancer INFO root] prepared 0/10 upmap changes
Mar  1 05:23:18 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:23:18 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:23:18 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:23:18.544 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:23:18 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:23:18 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:23:18 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:23:18.564 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:23:18 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:23:18.878Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Mar  1 05:23:19 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:23:18 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Mar  1 05:23:19 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:23:18 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Mar  1 05:23:19 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:23:18 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Mar  1 05:23:19 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:23:19 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Mar  1 05:23:19 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Mar  1 05:23:19 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: vms, start_after=
Mar  1 05:23:19 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Mar  1 05:23:19 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: vms, start_after=
Mar  1 05:23:19 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: volumes, start_after=
Mar  1 05:23:19 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: backups, start_after=
Mar  1 05:23:19 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: images, start_after=
Mar  1 05:23:19 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: volumes, start_after=
Mar  1 05:23:19 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: backups, start_after=
Mar  1 05:23:19 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: images, start_after=
Mar  1 05:23:20 np0005634532 nova_compute[257049]: 2026-03-01 10:23:20.034 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:23:20 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1229: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Mar  1 05:23:20 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:23:20 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000024s ======
Mar  1 05:23:20 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:23:20.545 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Mar  1 05:23:20 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:23:20 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 05:23:20 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:23:20.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 05:23:22 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1230: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Mar  1 05:23:22 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:23:22 np0005634532 nova_compute[257049]: 2026-03-01 10:23:22.363 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:23:22 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:23:22 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:23:22 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:23:22.546 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:23:22 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:23:22 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:23:22 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:23:22.569 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:23:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:23:23.897 167541 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Mar  1 05:23:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:23:23.898 167541 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Mar  1 05:23:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:23:23.898 167541 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Mar  1 05:23:24 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:23:23 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Mar  1 05:23:24 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:23:23 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Mar  1 05:23:24 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:23:23 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Mar  1 05:23:24 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:23:24 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Mar  1 05:23:24 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1231: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Mar  1 05:23:24 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:23:24 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 05:23:24 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:23:24.549 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 05:23:24 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:23:24 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:23:24 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:23:24.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:23:25 np0005634532 nova_compute[257049]: 2026-03-01 10:23:25.038 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:23:26 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1232: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Mar  1 05:23:26 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:23:26 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:23:26 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:23:26.551 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:23:26 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:23:26 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000024s ======
Mar  1 05:23:26 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:23:26.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Mar  1 05:23:27 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Mar  1 05:23:27 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Mar  1 05:23:27 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Mar  1 05:23:27 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Mar  1 05:23:27 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1233: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 559 B/s rd, 0 op/s
Mar  1 05:23:27 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Mar  1 05:23:27 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: ::ffff:192.168.122.100 - - [01/Mar/2026:10:23:27] "GET /metrics HTTP/1.1" 200 48457 "" "Prometheus/2.51.0"
Mar  1 05:23:27 np0005634532 ceph-mgr[76134]: [prometheus INFO cherrypy.access.140604152982112] ::ffff:192.168.122.100 - - [01/Mar/2026:10:23:27] "GET /metrics HTTP/1.1" 200 48457 "" "Prometheus/2.51.0"
Mar  1 05:23:27 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 05:23:27 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Mar  1 05:23:27 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 05:23:27 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Mar  1 05:23:27 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Mar  1 05:23:27 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Mar  1 05:23:27 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Mar  1 05:23:27 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Mar  1 05:23:27 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Mar  1 05:23:27 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:23:27 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:23:27.327Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Mar  1 05:23:27 np0005634532 nova_compute[257049]: 2026-03-01 10:23:27.363 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:23:27 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Mar  1 05:23:27 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 05:23:27 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 05:23:27 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Mar  1 05:23:27 np0005634532 podman[291462]: 2026-03-01 10:23:27.607877206 +0000 UTC m=+0.047737667 container create 650fdc0c24355612c64e2d32382e0d76e058242b9ee4e5e209cf0b9427a79217 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_mestorf, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Mar  1 05:23:27 np0005634532 systemd[1]: Started libpod-conmon-650fdc0c24355612c64e2d32382e0d76e058242b9ee4e5e209cf0b9427a79217.scope.
Mar  1 05:23:27 np0005634532 systemd[1]: Started libcrun container.
Mar  1 05:23:27 np0005634532 podman[291462]: 2026-03-01 10:23:27.590633621 +0000 UTC m=+0.030494092 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Mar  1 05:23:27 np0005634532 podman[291462]: 2026-03-01 10:23:27.686578115 +0000 UTC m=+0.126438626 container init 650fdc0c24355612c64e2d32382e0d76e058242b9ee4e5e209cf0b9427a79217 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_mestorf, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Mar  1 05:23:27 np0005634532 podman[291462]: 2026-03-01 10:23:27.696326875 +0000 UTC m=+0.136187356 container start 650fdc0c24355612c64e2d32382e0d76e058242b9ee4e5e209cf0b9427a79217 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_mestorf, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Mar  1 05:23:27 np0005634532 podman[291462]: 2026-03-01 10:23:27.700053737 +0000 UTC m=+0.139914208 container attach 650fdc0c24355612c64e2d32382e0d76e058242b9ee4e5e209cf0b9427a79217 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_mestorf, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Mar  1 05:23:27 np0005634532 objective_mestorf[291478]: 167 167
Mar  1 05:23:27 np0005634532 systemd[1]: libpod-650fdc0c24355612c64e2d32382e0d76e058242b9ee4e5e209cf0b9427a79217.scope: Deactivated successfully.
Mar  1 05:23:27 np0005634532 podman[291462]: 2026-03-01 10:23:27.702363514 +0000 UTC m=+0.142223985 container died 650fdc0c24355612c64e2d32382e0d76e058242b9ee4e5e209cf0b9427a79217 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_mestorf, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/)
Mar  1 05:23:27 np0005634532 systemd[1]: var-lib-containers-storage-overlay-d4f0fb604d719a0920336659746b7abf7933e65cb9aa340c8cd670eea3c31113-merged.mount: Deactivated successfully.
Mar  1 05:23:27 np0005634532 podman[291462]: 2026-03-01 10:23:27.744972953 +0000 UTC m=+0.184833434 container remove 650fdc0c24355612c64e2d32382e0d76e058242b9ee4e5e209cf0b9427a79217 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_mestorf, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Mar  1 05:23:27 np0005634532 systemd[1]: libpod-conmon-650fdc0c24355612c64e2d32382e0d76e058242b9ee4e5e209cf0b9427a79217.scope: Deactivated successfully.
Mar  1 05:23:27 np0005634532 podman[291501]: 2026-03-01 10:23:27.901612062 +0000 UTC m=+0.044992419 container create 035bb0d9817fae946517fb11edac83240d6a37ebb7676425a1ac734ccd0b121d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_fermi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Mar  1 05:23:27 np0005634532 systemd[1]: Started libpod-conmon-035bb0d9817fae946517fb11edac83240d6a37ebb7676425a1ac734ccd0b121d.scope.
Mar  1 05:23:27 np0005634532 systemd[1]: Started libcrun container.
Mar  1 05:23:27 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c06477dc029e4ac0eb51f10a6c2d26adce2cedfd954cd678eab619a800dadba/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Mar  1 05:23:27 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c06477dc029e4ac0eb51f10a6c2d26adce2cedfd954cd678eab619a800dadba/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Mar  1 05:23:27 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c06477dc029e4ac0eb51f10a6c2d26adce2cedfd954cd678eab619a800dadba/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Mar  1 05:23:27 np0005634532 podman[291501]: 2026-03-01 10:23:27.881793914 +0000 UTC m=+0.025174261 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Mar  1 05:23:27 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c06477dc029e4ac0eb51f10a6c2d26adce2cedfd954cd678eab619a800dadba/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Mar  1 05:23:27 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c06477dc029e4ac0eb51f10a6c2d26adce2cedfd954cd678eab619a800dadba/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Mar  1 05:23:27 np0005634532 podman[291501]: 2026-03-01 10:23:27.993082466 +0000 UTC m=+0.136462883 container init 035bb0d9817fae946517fb11edac83240d6a37ebb7676425a1ac734ccd0b121d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_fermi, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Mar  1 05:23:27 np0005634532 podman[291501]: 2026-03-01 10:23:27.999069913 +0000 UTC m=+0.142450270 container start 035bb0d9817fae946517fb11edac83240d6a37ebb7676425a1ac734ccd0b121d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_fermi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Mar  1 05:23:28 np0005634532 podman[291501]: 2026-03-01 10:23:28.002381595 +0000 UTC m=+0.145762032 container attach 035bb0d9817fae946517fb11edac83240d6a37ebb7676425a1ac734ccd0b121d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_fermi, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid)
Mar  1 05:23:28 np0005634532 keen_fermi[291519]: --> passed data devices: 0 physical, 1 LVM
Mar  1 05:23:28 np0005634532 keen_fermi[291519]: --> All data devices are unavailable
Mar  1 05:23:28 np0005634532 systemd[1]: libpod-035bb0d9817fae946517fb11edac83240d6a37ebb7676425a1ac734ccd0b121d.scope: Deactivated successfully.
Mar  1 05:23:28 np0005634532 podman[291501]: 2026-03-01 10:23:28.342109674 +0000 UTC m=+0.485490011 container died 035bb0d9817fae946517fb11edac83240d6a37ebb7676425a1ac734ccd0b121d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_fermi, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Mar  1 05:23:28 np0005634532 systemd[1]: var-lib-containers-storage-overlay-8c06477dc029e4ac0eb51f10a6c2d26adce2cedfd954cd678eab619a800dadba-merged.mount: Deactivated successfully.
Mar  1 05:23:28 np0005634532 podman[291501]: 2026-03-01 10:23:28.38983261 +0000 UTC m=+0.533212947 container remove 035bb0d9817fae946517fb11edac83240d6a37ebb7676425a1ac734ccd0b121d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=keen_fermi, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Mar  1 05:23:28 np0005634532 systemd[1]: libpod-conmon-035bb0d9817fae946517fb11edac83240d6a37ebb7676425a1ac734ccd0b121d.scope: Deactivated successfully.
Mar  1 05:23:28 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:23:28 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:23:28 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:23:28.553 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:23:28 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:23:28 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:23:28 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:23:28.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:23:28 np0005634532 podman[291640]: 2026-03-01 10:23:28.852954419 +0000 UTC m=+0.034281335 container create e31f5dd5347a9a0b6ef64b5d728b2282c561761dfed26511fc38c085235f2056 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_sutherland, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, ceph=True)
Mar  1 05:23:28 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:23:28.880Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Mar  1 05:23:28 np0005634532 systemd[1]: Started libpod-conmon-e31f5dd5347a9a0b6ef64b5d728b2282c561761dfed26511fc38c085235f2056.scope.
Mar  1 05:23:28 np0005634532 systemd[1]: Started libcrun container.
Mar  1 05:23:28 np0005634532 podman[291640]: 2026-03-01 10:23:28.91470455 +0000 UTC m=+0.096031506 container init e31f5dd5347a9a0b6ef64b5d728b2282c561761dfed26511fc38c085235f2056 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_sutherland, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Mar  1 05:23:28 np0005634532 podman[291640]: 2026-03-01 10:23:28.919431437 +0000 UTC m=+0.100758373 container start e31f5dd5347a9a0b6ef64b5d728b2282c561761dfed26511fc38c085235f2056 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_sutherland, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Mar  1 05:23:28 np0005634532 nifty_sutherland[291657]: 167 167
Mar  1 05:23:28 np0005634532 podman[291640]: 2026-03-01 10:23:28.922508213 +0000 UTC m=+0.103835139 container attach e31f5dd5347a9a0b6ef64b5d728b2282c561761dfed26511fc38c085235f2056 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_sutherland, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Mar  1 05:23:28 np0005634532 systemd[1]: libpod-e31f5dd5347a9a0b6ef64b5d728b2282c561761dfed26511fc38c085235f2056.scope: Deactivated successfully.
Mar  1 05:23:28 np0005634532 podman[291640]: 2026-03-01 10:23:28.923667131 +0000 UTC m=+0.104994047 container died e31f5dd5347a9a0b6ef64b5d728b2282c561761dfed26511fc38c085235f2056 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_sutherland, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Mar  1 05:23:28 np0005634532 podman[291640]: 2026-03-01 10:23:28.838147284 +0000 UTC m=+0.019474220 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Mar  1 05:23:28 np0005634532 systemd[1]: var-lib-containers-storage-overlay-482499b3b3891b355bf9edf5892b4a3039943ec493e9777f5b9552ec3b4dae8f-merged.mount: Deactivated successfully.
Mar  1 05:23:28 np0005634532 podman[291640]: 2026-03-01 10:23:28.958310345 +0000 UTC m=+0.139637291 container remove e31f5dd5347a9a0b6ef64b5d728b2282c561761dfed26511fc38c085235f2056 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=nifty_sutherland, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Mar  1 05:23:28 np0005634532 systemd[1]: libpod-conmon-e31f5dd5347a9a0b6ef64b5d728b2282c561761dfed26511fc38c085235f2056.scope: Deactivated successfully.
Mar  1 05:23:29 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:23:28 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Mar  1 05:23:29 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:23:28 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Mar  1 05:23:29 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:23:28 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Mar  1 05:23:29 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:23:29 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Mar  1 05:23:29 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1234: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 839 B/s rd, 0 op/s
Mar  1 05:23:29 np0005634532 podman[291679]: 2026-03-01 10:23:29.118479941 +0000 UTC m=+0.058306698 container create 781157c4cee15ebb63e6a52a26a9b58597c86510ecab8a567b1dfe32e4ac0695 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_williamson, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Mar  1 05:23:29 np0005634532 systemd[1]: Started libpod-conmon-781157c4cee15ebb63e6a52a26a9b58597c86510ecab8a567b1dfe32e4ac0695.scope.
Mar  1 05:23:29 np0005634532 podman[291679]: 2026-03-01 10:23:29.096037978 +0000 UTC m=+0.035864725 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Mar  1 05:23:29 np0005634532 systemd[1]: Started libcrun container.
Mar  1 05:23:29 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9c2e3d81ca9ddb50e30de7de213e7b05428c6d06ae9079f438965ab3c3d45c6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Mar  1 05:23:29 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9c2e3d81ca9ddb50e30de7de213e7b05428c6d06ae9079f438965ab3c3d45c6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Mar  1 05:23:29 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9c2e3d81ca9ddb50e30de7de213e7b05428c6d06ae9079f438965ab3c3d45c6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Mar  1 05:23:29 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9c2e3d81ca9ddb50e30de7de213e7b05428c6d06ae9079f438965ab3c3d45c6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Mar  1 05:23:29 np0005634532 podman[291679]: 2026-03-01 10:23:29.224607065 +0000 UTC m=+0.164433882 container init 781157c4cee15ebb63e6a52a26a9b58597c86510ecab8a567b1dfe32e4ac0695 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_williamson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325)
Mar  1 05:23:29 np0005634532 podman[291679]: 2026-03-01 10:23:29.23131978 +0000 UTC m=+0.171146507 container start 781157c4cee15ebb63e6a52a26a9b58597c86510ecab8a567b1dfe32e4ac0695 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_williamson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Mar  1 05:23:29 np0005634532 podman[291679]: 2026-03-01 10:23:29.23456187 +0000 UTC m=+0.174388637 container attach 781157c4cee15ebb63e6a52a26a9b58597c86510ecab8a567b1dfe32e4ac0695 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_williamson, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Mar  1 05:23:29 np0005634532 distracted_williamson[291695]: {
Mar  1 05:23:29 np0005634532 distracted_williamson[291695]:    "0": [
Mar  1 05:23:29 np0005634532 distracted_williamson[291695]:        {
Mar  1 05:23:29 np0005634532 distracted_williamson[291695]:            "devices": [
Mar  1 05:23:29 np0005634532 distracted_williamson[291695]:                "/dev/loop3"
Mar  1 05:23:29 np0005634532 distracted_williamson[291695]:            ],
Mar  1 05:23:29 np0005634532 distracted_williamson[291695]:            "lv_name": "ceph_lv0",
Mar  1 05:23:29 np0005634532 distracted_williamson[291695]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Mar  1 05:23:29 np0005634532 distracted_williamson[291695]:            "lv_size": "21470642176",
Mar  1 05:23:29 np0005634532 distracted_williamson[291695]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=0RN1y3-WDwf-vmRn-3Uec-9cdU-WGcX-8Z6LEG,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=437b1e74-f995-5d64-af1d-257ce01d77ab,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=e5da778e-73b7-4ea1-8a91-750fe3f6aa68,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Mar  1 05:23:29 np0005634532 distracted_williamson[291695]:            "lv_uuid": "0RN1y3-WDwf-vmRn-3Uec-9cdU-WGcX-8Z6LEG",
Mar  1 05:23:29 np0005634532 distracted_williamson[291695]:            "name": "ceph_lv0",
Mar  1 05:23:29 np0005634532 distracted_williamson[291695]:            "path": "/dev/ceph_vg0/ceph_lv0",
Mar  1 05:23:29 np0005634532 distracted_williamson[291695]:            "tags": {
Mar  1 05:23:29 np0005634532 distracted_williamson[291695]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Mar  1 05:23:29 np0005634532 distracted_williamson[291695]:                "ceph.block_uuid": "0RN1y3-WDwf-vmRn-3Uec-9cdU-WGcX-8Z6LEG",
Mar  1 05:23:29 np0005634532 distracted_williamson[291695]:                "ceph.cephx_lockbox_secret": "",
Mar  1 05:23:29 np0005634532 distracted_williamson[291695]:                "ceph.cluster_fsid": "437b1e74-f995-5d64-af1d-257ce01d77ab",
Mar  1 05:23:29 np0005634532 distracted_williamson[291695]:                "ceph.cluster_name": "ceph",
Mar  1 05:23:29 np0005634532 distracted_williamson[291695]:                "ceph.crush_device_class": "",
Mar  1 05:23:29 np0005634532 distracted_williamson[291695]:                "ceph.encrypted": "0",
Mar  1 05:23:29 np0005634532 distracted_williamson[291695]:                "ceph.osd_fsid": "e5da778e-73b7-4ea1-8a91-750fe3f6aa68",
Mar  1 05:23:29 np0005634532 distracted_williamson[291695]:                "ceph.osd_id": "0",
Mar  1 05:23:29 np0005634532 distracted_williamson[291695]:                "ceph.osdspec_affinity": "default_drive_group",
Mar  1 05:23:29 np0005634532 distracted_williamson[291695]:                "ceph.type": "block",
Mar  1 05:23:29 np0005634532 distracted_williamson[291695]:                "ceph.vdo": "0",
Mar  1 05:23:29 np0005634532 distracted_williamson[291695]:                "ceph.with_tpm": "0"
Mar  1 05:23:29 np0005634532 distracted_williamson[291695]:            },
Mar  1 05:23:29 np0005634532 distracted_williamson[291695]:            "type": "block",
Mar  1 05:23:29 np0005634532 distracted_williamson[291695]:            "vg_name": "ceph_vg0"
Mar  1 05:23:29 np0005634532 distracted_williamson[291695]:        }
Mar  1 05:23:29 np0005634532 distracted_williamson[291695]:    ]
Mar  1 05:23:29 np0005634532 distracted_williamson[291695]: }
Mar  1 05:23:29 np0005634532 systemd[1]: libpod-781157c4cee15ebb63e6a52a26a9b58597c86510ecab8a567b1dfe32e4ac0695.scope: Deactivated successfully.
Mar  1 05:23:29 np0005634532 podman[291679]: 2026-03-01 10:23:29.501467786 +0000 UTC m=+0.441294563 container died 781157c4cee15ebb63e6a52a26a9b58597c86510ecab8a567b1dfe32e4ac0695 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_williamson, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, OSD_FLAVOR=default)
Mar  1 05:23:29 np0005634532 systemd[1]: var-lib-containers-storage-overlay-d9c2e3d81ca9ddb50e30de7de213e7b05428c6d06ae9079f438965ab3c3d45c6-merged.mount: Deactivated successfully.
Mar  1 05:23:29 np0005634532 podman[291679]: 2026-03-01 10:23:29.543610334 +0000 UTC m=+0.483437061 container remove 781157c4cee15ebb63e6a52a26a9b58597c86510ecab8a567b1dfe32e4ac0695 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=distracted_williamson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Mar  1 05:23:29 np0005634532 systemd[1]: libpod-conmon-781157c4cee15ebb63e6a52a26a9b58597c86510ecab8a567b1dfe32e4ac0695.scope: Deactivated successfully.
Mar  1 05:23:29 np0005634532 podman[291704]: 2026-03-01 10:23:29.643558456 +0000 UTC m=+0.107004097 container health_status eba5e7c5bf1cb0694498c3b5d0f9b901dec36f2482ec40aef98fed1e15d9f18d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:bf1f44b6bfa655654f556ae3adca2582892e72550d916a5b0a8dbacb0e210bbe, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '08b9467e1b7e95537191fb2fa6825d176e65d7d6be048e9a0925db064969bbde-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:bf1f44b6bfa655654f556ae3adca2582892e72550d916a5b0a8dbacb0e210bbe', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223, tcib_managed=true)
Mar  1 05:23:29 np0005634532 podman[291838]: 2026-03-01 10:23:29.998852679 +0000 UTC m=+0.035772272 container create 5da55e4fefc5bcce2ed4ea33d6b249b49aa835fa3c95d955fb0c0b091a2755b5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_margulis, ceph=True, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Mar  1 05:23:30 np0005634532 systemd[1]: Started libpod-conmon-5da55e4fefc5bcce2ed4ea33d6b249b49aa835fa3c95d955fb0c0b091a2755b5.scope.
Mar  1 05:23:30 np0005634532 systemd[1]: Started libcrun container.
Mar  1 05:23:30 np0005634532 nova_compute[257049]: 2026-03-01 10:23:30.040 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:23:30 np0005634532 podman[291838]: 2026-03-01 10:23:30.05042531 +0000 UTC m=+0.087344823 container init 5da55e4fefc5bcce2ed4ea33d6b249b49aa835fa3c95d955fb0c0b091a2755b5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_margulis, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Mar  1 05:23:30 np0005634532 podman[291838]: 2026-03-01 10:23:30.055199687 +0000 UTC m=+0.092119190 container start 5da55e4fefc5bcce2ed4ea33d6b249b49aa835fa3c95d955fb0c0b091a2755b5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_margulis, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0)
Mar  1 05:23:30 np0005634532 amazing_margulis[291854]: 167 167
Mar  1 05:23:30 np0005634532 systemd[1]: libpod-5da55e4fefc5bcce2ed4ea33d6b249b49aa835fa3c95d955fb0c0b091a2755b5.scope: Deactivated successfully.
Mar  1 05:23:30 np0005634532 podman[291838]: 2026-03-01 10:23:30.06018685 +0000 UTC m=+0.097106433 container attach 5da55e4fefc5bcce2ed4ea33d6b249b49aa835fa3c95d955fb0c0b091a2755b5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_margulis, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Mar  1 05:23:30 np0005634532 conmon[291854]: conmon 5da55e4fefc5bcce2ed4 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-5da55e4fefc5bcce2ed4ea33d6b249b49aa835fa3c95d955fb0c0b091a2755b5.scope/container/memory.events
Mar  1 05:23:30 np0005634532 podman[291838]: 2026-03-01 10:23:30.063208085 +0000 UTC m=+0.100127598 container died 5da55e4fefc5bcce2ed4ea33d6b249b49aa835fa3c95d955fb0c0b091a2755b5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_margulis, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Mar  1 05:23:30 np0005634532 podman[291838]: 2026-03-01 10:23:29.981902132 +0000 UTC m=+0.018821665 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Mar  1 05:23:30 np0005634532 systemd[1]: var-lib-containers-storage-overlay-9149516b3cd24e0a7ca3ad3c3c0295b554d1c4ae46856d3358777fc18109cee4-merged.mount: Deactivated successfully.
Mar  1 05:23:30 np0005634532 podman[291838]: 2026-03-01 10:23:30.094795013 +0000 UTC m=+0.131714516 container remove 5da55e4fefc5bcce2ed4ea33d6b249b49aa835fa3c95d955fb0c0b091a2755b5 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_margulis, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2)
Mar  1 05:23:30 np0005634532 systemd[1]: libpod-conmon-5da55e4fefc5bcce2ed4ea33d6b249b49aa835fa3c95d955fb0c0b091a2755b5.scope: Deactivated successfully.
Mar  1 05:23:30 np0005634532 podman[291877]: 2026-03-01 10:23:30.208605347 +0000 UTC m=+0.037028694 container create 7684679579db49daae340663890e0a7107bd089d0aec75288f70fab5b5fa75d3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_mendeleev, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/)
Mar  1 05:23:30 np0005634532 systemd[1]: Started libpod-conmon-7684679579db49daae340663890e0a7107bd089d0aec75288f70fab5b5fa75d3.scope.
Mar  1 05:23:30 np0005634532 systemd[1]: Started libcrun container.
Mar  1 05:23:30 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93f4a51c6248ec5abb1d5ba6dbb3c4c790710f112f09472f9afc7feccb50b0c0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Mar  1 05:23:30 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93f4a51c6248ec5abb1d5ba6dbb3c4c790710f112f09472f9afc7feccb50b0c0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Mar  1 05:23:30 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93f4a51c6248ec5abb1d5ba6dbb3c4c790710f112f09472f9afc7feccb50b0c0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Mar  1 05:23:30 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93f4a51c6248ec5abb1d5ba6dbb3c4c790710f112f09472f9afc7feccb50b0c0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Mar  1 05:23:30 np0005634532 podman[291877]: 2026-03-01 10:23:30.283631035 +0000 UTC m=+0.112054402 container init 7684679579db49daae340663890e0a7107bd089d0aec75288f70fab5b5fa75d3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_mendeleev, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Mar  1 05:23:30 np0005634532 podman[291877]: 2026-03-01 10:23:30.192816508 +0000 UTC m=+0.021239875 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Mar  1 05:23:30 np0005634532 podman[291877]: 2026-03-01 10:23:30.292268278 +0000 UTC m=+0.120691635 container start 7684679579db49daae340663890e0a7107bd089d0aec75288f70fab5b5fa75d3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_mendeleev, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Mar  1 05:23:30 np0005634532 podman[291877]: 2026-03-01 10:23:30.297759283 +0000 UTC m=+0.126182690 container attach 7684679579db49daae340663890e0a7107bd089d0aec75288f70fab5b5fa75d3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_mendeleev, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Mar  1 05:23:30 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:23:30 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:23:30 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:23:30.555 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:23:30 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:23:30 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000024s ======
Mar  1 05:23:30 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:23:30.579 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Mar  1 05:23:30 np0005634532 lvm[291968]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Mar  1 05:23:30 np0005634532 lvm[291968]: VG ceph_vg0 finished
Mar  1 05:23:30 np0005634532 blissful_mendeleev[291894]: {}
Mar  1 05:23:30 np0005634532 systemd[1]: libpod-7684679579db49daae340663890e0a7107bd089d0aec75288f70fab5b5fa75d3.scope: Deactivated successfully.
Mar  1 05:23:30 np0005634532 podman[291877]: 2026-03-01 10:23:30.905276749 +0000 UTC m=+0.733700116 container died 7684679579db49daae340663890e0a7107bd089d0aec75288f70fab5b5fa75d3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_mendeleev, CEPH_REF=squid, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Mar  1 05:23:30 np0005634532 systemd[1]: var-lib-containers-storage-overlay-93f4a51c6248ec5abb1d5ba6dbb3c4c790710f112f09472f9afc7feccb50b0c0-merged.mount: Deactivated successfully.
Mar  1 05:23:30 np0005634532 podman[291877]: 2026-03-01 10:23:30.949427237 +0000 UTC m=+0.777850684 container remove 7684679579db49daae340663890e0a7107bd089d0aec75288f70fab5b5fa75d3 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_mendeleev, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Mar  1 05:23:30 np0005634532 systemd[1]: libpod-conmon-7684679579db49daae340663890e0a7107bd089d0aec75288f70fab5b5fa75d3.scope: Deactivated successfully.
Mar  1 05:23:30 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Mar  1 05:23:31 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 05:23:31 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Mar  1 05:23:31 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1235: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 559 B/s rd, 0 op/s
Mar  1 05:23:31 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 05:23:32 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 05:23:32 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 05:23:32 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:23:32 np0005634532 nova_compute[257049]: 2026-03-01 10:23:32.366 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:23:32 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Mar  1 05:23:32 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Mar  1 05:23:32 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:23:32 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:23:32 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:23:32.557 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:23:32 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:23:32 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:23:32 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:23:32.582 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:23:33 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1236: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 559 B/s rd, 0 op/s
Mar  1 05:23:34 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:23:33 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Mar  1 05:23:34 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:23:33 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Mar  1 05:23:34 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:23:33 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Mar  1 05:23:34 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:23:34 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Mar  1 05:23:34 np0005634532 radosgw[91037]: INFO: RGWReshardLock::lock found lock on reshard.0000000009 to be held by another RGW process; skipping for now
Mar  1 05:23:34 np0005634532 radosgw[91037]: INFO: RGWReshardLock::lock found lock on reshard.0000000011 to be held by another RGW process; skipping for now
Mar  1 05:23:34 np0005634532 radosgw[91037]: INFO: RGWReshardLock::lock found lock on reshard.0000000013 to be held by another RGW process; skipping for now
Mar  1 05:23:34 np0005634532 podman[292011]: 2026-03-01 10:23:34.366867158 +0000 UTC m=+0.060108802 container health_status 1df4dec302d8b40bce93714986cfc892519e8a31436399020d0034ae17ac64d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a5baae9e21dffd42c66f546b0b62f95c6af9f409d05e01e7483de42d5dd2a382, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '08b9467e1b7e95537191fb2fa6825d176e65d7d6be048e9a0925db064969bbde-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a5baae9e21dffd42c66f546b0b62f95c6af9f409d05e01e7483de42d5dd2a382', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20260223)
Mar  1 05:23:34 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:23:34 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:23:34 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:23:34.559 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:23:34 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:23:34 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:23:34 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:23:34.585 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:23:35 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1237: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 559 B/s rd, 0 op/s
Mar  1 05:23:35 np0005634532 nova_compute[257049]: 2026-03-01 10:23:35.042 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:23:36 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:23:36 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:23:36 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:23:36.561 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:23:36 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:23:36 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:23:36 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:23:36.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:23:37 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1238: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 559 B/s rd, 0 op/s
Mar  1 05:23:37 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: ::ffff:192.168.122.100 - - [01/Mar/2026:10:23:37] "GET /metrics HTTP/1.1" 200 48459 "" "Prometheus/2.51.0"
Mar  1 05:23:37 np0005634532 ceph-mgr[76134]: [prometheus INFO cherrypy.access.140604152982112] ::ffff:192.168.122.100 - - [01/Mar/2026:10:23:37] "GET /metrics HTTP/1.1" 200 48459 "" "Prometheus/2.51.0"
Mar  1 05:23:37 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:23:37 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:23:37.328Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Mar  1 05:23:37 np0005634532 nova_compute[257049]: 2026-03-01 10:23:37.367 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:23:38 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:23:38 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000024s ======
Mar  1 05:23:38 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:23:38.563 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Mar  1 05:23:38 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:23:38 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:23:38 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:23:38.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:23:38 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:23:38.881Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Mar  1 05:23:39 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:23:38 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Mar  1 05:23:39 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:23:38 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Mar  1 05:23:39 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:23:39 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Mar  1 05:23:39 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:23:39 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Mar  1 05:23:39 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1239: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 104 KiB/s rd, 0 B/s wr, 172 op/s
Mar  1 05:23:40 np0005634532 nova_compute[257049]: 2026-03-01 10:23:40.045 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:23:40 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:23:40 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:23:40 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:23:40.566 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:23:40 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:23:40 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 05:23:40 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:23:40.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 05:23:41 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1240: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 104 KiB/s rd, 0 B/s wr, 172 op/s
Mar  1 05:23:42 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:23:42 np0005634532 nova_compute[257049]: 2026-03-01 10:23:42.369 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:23:42 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:23:42 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:23:42 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:23:42.568 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:23:42 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:23:42 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000024s ======
Mar  1 05:23:42 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:23:42.596 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Mar  1 05:23:43 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1241: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 104 KiB/s rd, 0 B/s wr, 172 op/s
Mar  1 05:23:44 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:23:43 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Mar  1 05:23:44 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:23:43 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Mar  1 05:23:44 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:23:43 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Mar  1 05:23:44 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:23:44 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Mar  1 05:23:44 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:23:44 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:23:44 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:23:44.570 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:23:44 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:23:44 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:23:44 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:23:44.598 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:23:45 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1242: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 104 KiB/s rd, 0 B/s wr, 172 op/s
Mar  1 05:23:45 np0005634532 nova_compute[257049]: 2026-03-01 10:23:45.047 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:23:46 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:23:46 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:23:46 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:23:46.572 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:23:46 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:23:46 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 05:23:46 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:23:46.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 05:23:47 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1243: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 104 KiB/s rd, 0 B/s wr, 172 op/s
Mar  1 05:23:47 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: ::ffff:192.168.122.100 - - [01/Mar/2026:10:23:47] "GET /metrics HTTP/1.1" 200 48459 "" "Prometheus/2.51.0"
Mar  1 05:23:47 np0005634532 ceph-mgr[76134]: [prometheus INFO cherrypy.access.140604152982112] ::ffff:192.168.122.100 - - [01/Mar/2026:10:23:47] "GET /metrics HTTP/1.1" 200 48459 "" "Prometheus/2.51.0"
Mar  1 05:23:47 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:23:47 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:23:47.329Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Mar  1 05:23:47 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:23:47.329Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Mar  1 05:23:47 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:23:47.329Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Mar  1 05:23:47 np0005634532 nova_compute[257049]: 2026-03-01 10:23:47.418 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:23:47 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Mar  1 05:23:47 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Mar  1 05:23:47 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Mar  1 05:23:47 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Mar  1 05:23:47 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Mar  1 05:23:47 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Mar  1 05:23:47 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Mar  1 05:23:47 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Mar  1 05:23:48 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:23:48 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000024s ======
Mar  1 05:23:48 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:23:48.574 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Mar  1 05:23:48 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:23:48 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000024s ======
Mar  1 05:23:48 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:23:48.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Mar  1 05:23:48 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:23:48.882Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Mar  1 05:23:48 np0005634532 nova_compute[257049]: 2026-03-01 10:23:48.976 257053 DEBUG oslo_service.periodic_task [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Mar  1 05:23:49 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:23:48 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Mar  1 05:23:49 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:23:49 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Mar  1 05:23:49 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:23:49 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Mar  1 05:23:49 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:23:49 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Mar  1 05:23:49 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1244: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 104 KiB/s rd, 0 B/s wr, 172 op/s
Mar  1 05:23:50 np0005634532 nova_compute[257049]: 2026-03-01 10:23:50.050 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:23:50 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:23:50 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:23:50 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:23:50.576 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:23:50 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:23:50 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:23:50 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:23:50.608 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:23:50 np0005634532 nova_compute[257049]: 2026-03-01 10:23:50.976 257053 DEBUG oslo_service.periodic_task [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Mar  1 05:23:51 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1245: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Mar  1 05:23:51 np0005634532 nova_compute[257049]: 2026-03-01 10:23:51.976 257053 DEBUG oslo_service.periodic_task [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Mar  1 05:23:52 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:23:52 np0005634532 ceph-mon[75825]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #75. Immutable memtables: 0.
Mar  1 05:23:52 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-10:23:52.211615) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Mar  1 05:23:52 np0005634532 ceph-mon[75825]: rocksdb: [db/flush_job.cc:856] [default] [JOB 41] Flushing memtable with next log file: 75
Mar  1 05:23:52 np0005634532 ceph-mon[75825]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772360632211655, "job": 41, "event": "flush_started", "num_memtables": 1, "num_entries": 826, "num_deletes": 252, "total_data_size": 1262539, "memory_usage": 1277064, "flush_reason": "Manual Compaction"}
Mar  1 05:23:52 np0005634532 ceph-mon[75825]: rocksdb: [db/flush_job.cc:885] [default] [JOB 41] Level-0 flush table #76: started
Mar  1 05:23:52 np0005634532 ceph-mon[75825]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772360632217842, "cf_name": "default", "job": 41, "event": "table_file_creation", "file_number": 76, "file_size": 803895, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 34598, "largest_seqno": 35422, "table_properties": {"data_size": 800450, "index_size": 1225, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1157, "raw_key_size": 9351, "raw_average_key_size": 20, "raw_value_size": 793050, "raw_average_value_size": 1762, "num_data_blocks": 53, "num_entries": 450, "num_filter_entries": 450, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772360567, "oldest_key_time": 1772360567, "file_creation_time": 1772360632, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d85a3bc5-3dc5-432f-9fab-fa926ce32d3d", "db_session_id": "FJWJGIYC2V5ZEQGX709M", "orig_file_number": 76, "seqno_to_time_mapping": "N/A"}}
Mar  1 05:23:52 np0005634532 ceph-mon[75825]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 41] Flush lasted 6271 microseconds, and 2703 cpu microseconds.
Mar  1 05:23:52 np0005634532 ceph-mon[75825]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Mar  1 05:23:52 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-10:23:52.217884) [db/flush_job.cc:967] [default] [JOB 41] Level-0 flush table #76: 803895 bytes OK
Mar  1 05:23:52 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-10:23:52.217901) [db/memtable_list.cc:519] [default] Level-0 commit table #76 started
Mar  1 05:23:52 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-10:23:52.219407) [db/memtable_list.cc:722] [default] Level-0 commit table #76: memtable #1 done
Mar  1 05:23:52 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-10:23:52.219421) EVENT_LOG_v1 {"time_micros": 1772360632219417, "job": 41, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Mar  1 05:23:52 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-10:23:52.219437) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Mar  1 05:23:52 np0005634532 ceph-mon[75825]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 41] Try to delete WAL files size 1258533, prev total WAL file size 1258533, number of live WAL files 2.
Mar  1 05:23:52 np0005634532 ceph-mon[75825]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000072.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Mar  1 05:23:52 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-10:23:52.219866) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031303030' seq:72057594037927935, type:22 .. '6D6772737461740031323533' seq:0, type:0; will stop at (end)
Mar  1 05:23:52 np0005634532 ceph-mon[75825]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 42] Compacting 1@0 + 1@6 files to L6, score -1.00
Mar  1 05:23:52 np0005634532 ceph-mon[75825]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 41 Base level 0, inputs: [76(785KB)], [74(14MB)]
Mar  1 05:23:52 np0005634532 ceph-mon[75825]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772360632219929, "job": 42, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [76], "files_L6": [74], "score": -1, "input_data_size": 15919133, "oldest_snapshot_seqno": -1}
Mar  1 05:23:52 np0005634532 ceph-mon[75825]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 42] Generated table #77: 6460 keys, 12244305 bytes, temperature: kUnknown
Mar  1 05:23:52 np0005634532 ceph-mon[75825]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772360632282852, "cf_name": "default", "job": 42, "event": "table_file_creation", "file_number": 77, "file_size": 12244305, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12204473, "index_size": 22570, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16197, "raw_key_size": 170409, "raw_average_key_size": 26, "raw_value_size": 12091160, "raw_average_value_size": 1871, "num_data_blocks": 884, "num_entries": 6460, "num_filter_entries": 6460, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772358052, "oldest_key_time": 0, "file_creation_time": 1772360632, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d85a3bc5-3dc5-432f-9fab-fa926ce32d3d", "db_session_id": "FJWJGIYC2V5ZEQGX709M", "orig_file_number": 77, "seqno_to_time_mapping": "N/A"}}
Mar  1 05:23:52 np0005634532 ceph-mon[75825]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Mar  1 05:23:52 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-10:23:52.283195) [db/compaction/compaction_job.cc:1663] [default] [JOB 42] Compacted 1@0 + 1@6 files to L6 => 12244305 bytes
Mar  1 05:23:52 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-10:23:52.284761) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 252.7 rd, 194.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.8, 14.4 +0.0 blob) out(11.7 +0.0 blob), read-write-amplify(35.0) write-amplify(15.2) OK, records in: 6951, records dropped: 491 output_compression: NoCompression
Mar  1 05:23:52 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-10:23:52.284794) EVENT_LOG_v1 {"time_micros": 1772360632284779, "job": 42, "event": "compaction_finished", "compaction_time_micros": 63005, "compaction_time_cpu_micros": 39022, "output_level": 6, "num_output_files": 1, "total_output_size": 12244305, "num_input_records": 6951, "num_output_records": 6460, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Mar  1 05:23:52 np0005634532 ceph-mon[75825]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000076.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Mar  1 05:23:52 np0005634532 ceph-mon[75825]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772360632285122, "job": 42, "event": "table_file_deletion", "file_number": 76}
Mar  1 05:23:52 np0005634532 ceph-mon[75825]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000074.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Mar  1 05:23:52 np0005634532 ceph-mon[75825]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772360632288165, "job": 42, "event": "table_file_deletion", "file_number": 74}
Mar  1 05:23:52 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-10:23:52.219755) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Mar  1 05:23:52 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-10:23:52.288247) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Mar  1 05:23:52 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-10:23:52.288253) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Mar  1 05:23:52 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-10:23:52.288255) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Mar  1 05:23:52 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-10:23:52.288257) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Mar  1 05:23:52 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-10:23:52.288259) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Mar  1 05:23:52 np0005634532 nova_compute[257049]: 2026-03-01 10:23:52.418 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:23:52 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:23:52 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:23:52 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:23:52.578 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:23:52 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:23:52 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:23:52 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:23:52.611 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
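
[annotation] The radosgw "beast" lines follow a fixed access-log layout (client address, user, timestamp, request line, status, body bytes, latency); the anonymous HEAD / probes arriving from 192.168.122.100 and .102 every two seconds look like load-balancer health checks. A throwaway parser, with the field layout read off the samples above rather than taken from RGW source:

    import re

    BEAST_RE = re.compile(
        r'beast: 0x[0-9a-f]+: (?P<addr>\S+) - (?P<user>\S+) '
        r'\[(?P<ts>[^\]]+)\] "(?P<request>[^"]+)" (?P<status>\d+) '
        r'(?P<bytes>\d+) .* latency=(?P<latency>[\d.]+)s'
    )

    line = ('beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous '
            '[01/Mar/2026:10:23:52.578 +0000] "HEAD / HTTP/1.0" 200 0 '
            '- - - latency=0.000000000s')
    m = BEAST_RE.search(line)
    print(m.group('addr'), m.group('request'), m.group('status'), m.group('latency'))
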
Mar  1 05:23:53 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1246: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Mar  1 05:23:53 np0005634532 nova_compute[257049]: 2026-03-01 10:23:53.976 257053 DEBUG oslo_service.periodic_task [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Mar  1 05:23:53 np0005634532 nova_compute[257049]: 2026-03-01 10:23:53.976 257053 DEBUG nova.compute.manager [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Mar  1 05:23:53 np0005634532 nova_compute[257049]: 2026-03-01 10:23:53.976 257053 DEBUG oslo_service.periodic_task [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Mar  1 05:23:54 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:23:53 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Mar  1 05:23:54 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:23:53 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Mar  1 05:23:54 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:23:53 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Mar  1 05:23:54 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:23:54 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Mar  1 05:23:54 np0005634532 nova_compute[257049]: 2026-03-01 10:23:54.012 257053 DEBUG oslo_concurrency.lockutils [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Mar  1 05:23:54 np0005634532 nova_compute[257049]: 2026-03-01 10:23:54.013 257053 DEBUG oslo_concurrency.lockutils [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Mar  1 05:23:54 np0005634532 nova_compute[257049]: 2026-03-01 10:23:54.013 257053 DEBUG oslo_concurrency.lockutils [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Mar  1 05:23:54 np0005634532 nova_compute[257049]: 2026-03-01 10:23:54.013 257053 DEBUG nova.compute.resource_tracker [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Mar  1 05:23:54 np0005634532 nova_compute[257049]: 2026-03-01 10:23:54.013 257053 DEBUG oslo_concurrency.processutils [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Mar  1 05:23:54 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Mar  1 05:23:54 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1815538224' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Mar  1 05:23:54 np0005634532 nova_compute[257049]: 2026-03-01 10:23:54.423 257053 DEBUG oslo_concurrency.processutils [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.410s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Mar  1 05:23:54 np0005634532 nova_compute[257049]: 2026-03-01 10:23:54.574 257053 WARNING nova.virt.libvirt.driver [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Mar  1 05:23:54 np0005634532 nova_compute[257049]: 2026-03-01 10:23:54.575 257053 DEBUG nova.compute.resource_tracker [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4494MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Mar  1 05:23:54 np0005634532 nova_compute[257049]: 2026-03-01 10:23:54.575 257053 DEBUG oslo_concurrency.lockutils [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Mar  1 05:23:54 np0005634532 nova_compute[257049]: 2026-03-01 10:23:54.576 257053 DEBUG oslo_concurrency.lockutils [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Mar  1 05:23:54 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:23:54 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:23:54 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:23:54.580 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:23:54 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:23:54 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:23:54 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:23:54.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:23:54 np0005634532 nova_compute[257049]: 2026-03-01 10:23:54.641 257053 DEBUG nova.compute.resource_tracker [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Mar  1 05:23:54 np0005634532 nova_compute[257049]: 2026-03-01 10:23:54.642 257053 DEBUG nova.compute.resource_tracker [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Mar  1 05:23:54 np0005634532 nova_compute[257049]: 2026-03-01 10:23:54.661 257053 DEBUG oslo_concurrency.processutils [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Mar  1 05:23:55 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1247: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Mar  1 05:23:55 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Mar  1 05:23:55 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3331865809' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Mar  1 05:23:55 np0005634532 nova_compute[257049]: 2026-03-01 10:23:55.053 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:23:55 np0005634532 nova_compute[257049]: 2026-03-01 10:23:55.055 257053 DEBUG oslo_concurrency.processutils [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.394s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Mar  1 05:23:55 np0005634532 nova_compute[257049]: 2026-03-01 10:23:55.060 257053 DEBUG nova.compute.provider_tree [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Inventory has not changed in ProviderTree for provider: 018d246d-1e01-4168-9128-598c5501111b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Mar  1 05:23:55 np0005634532 nova_compute[257049]: 2026-03-01 10:23:55.078 257053 DEBUG nova.scheduler.client.report [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Inventory has not changed for provider 018d246d-1e01-4168-9128-598c5501111b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Mar  1 05:23:55 np0005634532 nova_compute[257049]: 2026-03-01 10:23:55.080 257053 DEBUG nova.compute.resource_tracker [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Mar  1 05:23:55 np0005634532 nova_compute[257049]: 2026-03-01 10:23:55.081 257053 DEBUG oslo_concurrency.lockutils [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.505s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
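
[annotation] The resource-tracker audit above reports raw inventory (8 VCPU, 7679 MB RAM, 59 GB disk) together with the allocation ratios Placement applies to it. A quick sketch of the standard Placement capacity arithmetic, (total - reserved) * allocation_ratio, using exactly the values from the logged inventory data:

    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 59,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        capacity = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
        print(f'{rc}: {capacity:g} allocatable')
    # -> VCPU: 32, MEMORY_MB: 7167, DISK_GB: 52.2
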
Mar  1 05:23:56 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:23:56 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:23:56 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:23:56.581 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:23:56 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:23:56 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000024s ======
Mar  1 05:23:56 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:23:56.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Mar  1 05:23:57 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1248: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Mar  1 05:23:57 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: ::ffff:192.168.122.100 - - [01/Mar/2026:10:23:57] "GET /metrics HTTP/1.1" 200 48457 "" "Prometheus/2.51.0"
Mar  1 05:23:57 np0005634532 ceph-mgr[76134]: [prometheus INFO cherrypy.access.140604152982112] ::ffff:192.168.122.100 - - [01/Mar/2026:10:23:57] "GET /metrics HTTP/1.1" 200 48457 "" "Prometheus/2.51.0"
Mar  1 05:23:57 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:23:57 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:23:57.330Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Mar  1 05:23:57 np0005634532 nova_compute[257049]: 2026-03-01 10:23:57.469 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:23:58 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:23:58 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:23:58 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:23:58.584 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:23:58 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:23:58 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:23:58 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:23:58.618 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:23:58 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:23:58.883Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Mar  1 05:23:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:23:58 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Mar  1 05:23:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:23:58 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Mar  1 05:23:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:23:58 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Mar  1 05:23:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:23:59 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Mar  1 05:23:59 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1249: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Mar  1 05:24:00 np0005634532 podman[292127]: 2026-03-01 10:24:00.04265676 +0000 UTC m=+0.075513572 container health_status eba5e7c5bf1cb0694498c3b5d0f9b901dec36f2482ec40aef98fed1e15d9f18d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:bf1f44b6bfa655654f556ae3adca2582892e72550d916a5b0a8dbacb0e210bbe, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '08b9467e1b7e95537191fb2fa6825d176e65d7d6be048e9a0925db064969bbde-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:bf1f44b6bfa655654f556ae3adca2582892e72550d916a5b0a8dbacb0e210bbe', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image)
Mar  1 05:24:00 np0005634532 nova_compute[257049]: 2026-03-01 10:24:00.056 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:24:00 np0005634532 nova_compute[257049]: 2026-03-01 10:24:00.077 257053 DEBUG oslo_service.periodic_task [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Mar  1 05:24:00 np0005634532 nova_compute[257049]: 2026-03-01 10:24:00.077 257053 DEBUG oslo_service.periodic_task [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Mar  1 05:24:00 np0005634532 nova_compute[257049]: 2026-03-01 10:24:00.077 257053 DEBUG oslo_service.periodic_task [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Mar  1 05:24:00 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:24:00 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:24:00 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:24:00.586 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:24:00 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:24:00 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 05:24:00 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:24:00.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 05:24:00 np0005634532 nova_compute[257049]: 2026-03-01 10:24:00.976 257053 DEBUG oslo_service.periodic_task [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Mar  1 05:24:00 np0005634532 nova_compute[257049]: 2026-03-01 10:24:00.977 257053 DEBUG nova.compute.manager [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Mar  1 05:24:00 np0005634532 nova_compute[257049]: 2026-03-01 10:24:00.977 257053 DEBUG nova.compute.manager [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Mar  1 05:24:00 np0005634532 nova_compute[257049]: 2026-03-01 10:24:00.998 257053 DEBUG nova.compute.manager [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
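
[annotation] The bursts of "Running periodic task ComputeManager._*" entries are oslo.service's periodic-task machinery walking the decorated methods on the compute manager. A minimal standalone sketch of that mechanism (not nova code; the manager class and task body are made up for illustration):

    import time
    from oslo_config import cfg
    from oslo_service import periodic_task

    class DemoManager(periodic_task.PeriodicTasks):
        # Collected automatically by PeriodicTasks; run_immediately=True makes
        # the first run_periodic_tasks() call execute the task instead of
        # waiting out the 60-second spacing.
        @periodic_task.periodic_task(spacing=60, run_immediately=True)
        def _heal_demo_cache(self, context):
            print('periodic task ran')

    mgr = DemoManager(cfg.CONF)
    for _ in range(3):
        mgr.run_periodic_tasks(context=None)
        time.sleep(1)
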
Mar  1 05:24:01 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1250: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Mar  1 05:24:02 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:24:02 np0005634532 nova_compute[257049]: 2026-03-01 10:24:02.471 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:24:02 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Mar  1 05:24:02 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Mar  1 05:24:02 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:24:02 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 05:24:02 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:24:02.588 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 05:24:02 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:24:02 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:24:02 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:24:02.623 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:24:03 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1251: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Mar  1 05:24:04 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:24:03 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Mar  1 05:24:04 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:24:03 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Mar  1 05:24:04 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:24:03 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Mar  1 05:24:04 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:24:04 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Mar  1 05:24:04 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:24:04 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 05:24:04 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:24:04.589 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 05:24:04 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:24:04 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:24:04 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:24:04.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:24:05 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1252: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Mar  1 05:24:05 np0005634532 nova_compute[257049]: 2026-03-01 10:24:05.056 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:24:05 np0005634532 podman[292185]: 2026-03-01 10:24:05.383218317 +0000 UTC m=+0.068557490 container health_status 1df4dec302d8b40bce93714986cfc892519e8a31436399020d0034ae17ac64d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a5baae9e21dffd42c66f546b0b62f95c6af9f409d05e01e7483de42d5dd2a382, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '08b9467e1b7e95537191fb2fa6825d176e65d7d6be048e9a0925db064969bbde-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a5baae9e21dffd42c66f546b0b62f95c6af9f409d05e01e7483de42d5dd2a382', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20260223, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Mar  1 05:24:06 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:24:06 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:24:06 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:24:06.591 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:24:06 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:24:06 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 05:24:06 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:24:06.629 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 05:24:07 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1253: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Mar  1 05:24:07 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: ::ffff:192.168.122.100 - - [01/Mar/2026:10:24:07] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Mar  1 05:24:07 np0005634532 ceph-mgr[76134]: [prometheus INFO cherrypy.access.140604152982112] ::ffff:192.168.122.100 - - [01/Mar/2026:10:24:07] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Mar  1 05:24:07 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:24:07 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:24:07.331Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Mar  1 05:24:07 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:24:07.332Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Mar  1 05:24:07 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:24:07.332Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
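
[annotation] The warn/error triplet above shows Alertmanager's webhook retry loop: each attempt to the dashboard receivers on compute-1/compute-2 dials port 8443, hits an i/o timeout, and the dispatcher gives up once the notification's context deadline expires. A quick reachability probe for those two endpoints (an ad-hoc diagnostic, not part of the deployment tooling):

    import socket

    for host in ('compute-1.ctlplane.example.com',
                 'compute-2.ctlplane.example.com'):
        try:
            with socket.create_connection((host, 8443), timeout=5):
                print(host, 'port 8443 reachable')
        except OSError as exc:
            print(host, 'unreachable:', exc)
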
Mar  1 05:24:07 np0005634532 nova_compute[257049]: 2026-03-01 10:24:07.512 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:24:08 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:24:08 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:24:08 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:24:08.593 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:24:08 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:24:08 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:24:08 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:24:08.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:24:08 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:24:08.885Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Mar  1 05:24:09 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:24:08 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Mar  1 05:24:09 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:24:08 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Mar  1 05:24:09 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:24:08 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Mar  1 05:24:09 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:24:09 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Mar  1 05:24:09 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1254: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Mar  1 05:24:10 np0005634532 nova_compute[257049]: 2026-03-01 10:24:10.058 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:24:10 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:24:10 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:24:10 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:24:10.595 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:24:10 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:24:10 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:24:10 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:24:10.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:24:11 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1255: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Mar  1 05:24:12 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:24:12 np0005634532 nova_compute[257049]: 2026-03-01 10:24:12.565 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:24:12 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:24:12 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:24:12 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:24:12.597 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:24:12 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:24:12 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000024s ======
Mar  1 05:24:12 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:24:12.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Mar  1 05:24:13 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1256: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Mar  1 05:24:14 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:24:13 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Mar  1 05:24:14 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:24:13 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Mar  1 05:24:14 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:24:13 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Mar  1 05:24:14 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:24:14 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Mar  1 05:24:14 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:24:14 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:24:14 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:24:14.599 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:24:14 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:24:14 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:24:14 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:24:14.641 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:24:15 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1257: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Mar  1 05:24:15 np0005634532 nova_compute[257049]: 2026-03-01 10:24:15.062 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:24:16 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:24:16 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:24:16 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:24:16.601 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:24:16 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:24:16 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 05:24:16 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:24:16.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 05:24:17 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1258: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Mar  1 05:24:17 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: ::ffff:192.168.122.100 - - [01/Mar/2026:10:24:17] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Mar  1 05:24:17 np0005634532 ceph-mgr[76134]: [prometheus INFO cherrypy.access.140604152982112] ::ffff:192.168.122.100 - - [01/Mar/2026:10:24:17] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Mar  1 05:24:17 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:24:17 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:24:17.333Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Mar  1 05:24:17 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Mar  1 05:24:17 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Mar  1 05:24:17 np0005634532 nova_compute[257049]: 2026-03-01 10:24:17.566 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:24:17 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Mar  1 05:24:17 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Mar  1 05:24:17 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Mar  1 05:24:17 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Mar  1 05:24:17 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Mar  1 05:24:17 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Mar  1 05:24:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] _maybe_adjust
Mar  1 05:24:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:24:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Mar  1 05:24:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:24:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Mar  1 05:24:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:24:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Mar  1 05:24:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:24:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Mar  1 05:24:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:24:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Mar  1 05:24:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:24:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Mar  1 05:24:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:24:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Mar  1 05:24:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:24:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Mar  1 05:24:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:24:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Mar  1 05:24:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:24:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Mar  1 05:24:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:24:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Mar  1 05:24:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:24:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
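
[annotation] The pg_autoscaler lines above expose its core arithmetic: each pool's share of raw capacity, times its bias, times a cluster-wide PG budget, quantized to a power of two. A back-of-the-envelope sketch; the factor 300 is an assumption (the default mon_target_pg_per_osd of 100 times the 3 OSDs this 60 GiB cluster appears to have), and the real module additionally enforces per-pool minimums and only resizes once the target drifts far from the current value, which is why the near-empty pools above stay at 32 PGs:

    import math

    def pg_target(usage_ratio, bias, target_pg_per_osd=100, num_osds=3):
        raw = usage_ratio * bias * target_pg_per_osd * num_osds
        # quantize to a power of two, never below 1
        quantized = max(1, 2 ** round(math.log2(raw))) if raw > 0 else 1
        return raw, quantized

    # Pool '.mgr' from the log: reproduces "pg target 0.0021557..." -> 1
    print(pg_target(7.185749983720779e-06, 1.0))
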
Mar  1 05:24:18 np0005634532 ceph-mgr[76134]: [balancer INFO root] Optimize plan auto_2026-03-01_10:24:18
Mar  1 05:24:18 np0005634532 ceph-mgr[76134]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Mar  1 05:24:18 np0005634532 ceph-mgr[76134]: [balancer INFO root] do_upmap
Mar  1 05:24:18 np0005634532 ceph-mgr[76134]: [balancer INFO root] pools ['vms', '.mgr', 'default.rgw.meta', 'volumes', 'default.rgw.log', '.rgw.root', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'backups', 'images', 'default.rgw.control', '.nfs']
Mar  1 05:24:18 np0005634532 ceph-mgr[76134]: [balancer INFO root] prepared 0/10 upmap changes
Mar  1 05:24:18 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:24:18 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 05:24:18 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:24:18.603 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 05:24:18 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:24:18 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:24:18 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:24:18.647 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:24:18 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:24:18.886Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Mar  1 05:24:18 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:24:18.887Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Mar  1 05:24:18 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:24:18.887Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Mar  1 05:24:19 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:24:18 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Mar  1 05:24:19 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:24:18 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Mar  1 05:24:19 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:24:18 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Mar  1 05:24:19 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:24:19 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Mar  1 05:24:19 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1259: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Mar  1 05:24:19 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Mar  1 05:24:19 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Mar  1 05:24:19 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: vms, start_after=
Mar  1 05:24:19 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: vms, start_after=
Mar  1 05:24:19 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: volumes, start_after=
Mar  1 05:24:19 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: volumes, start_after=
Mar  1 05:24:19 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: backups, start_after=
Mar  1 05:24:19 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: images, start_after=
Mar  1 05:24:19 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: backups, start_after=
Mar  1 05:24:19 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: images, start_after=
Mar  1 05:24:20 np0005634532 nova_compute[257049]: 2026-03-01 10:24:20.066 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:24:20 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:24:20 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:24:20 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:24:20.604 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:24:20 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:24:20 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:24:20 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:24:20.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:24:21 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1260: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Mar  1 05:24:22 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:24:22 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:24:22 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:24:22 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:24:22.606 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:24:22 np0005634532 nova_compute[257049]: 2026-03-01 10:24:22.615 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:24:22 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:24:22 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:24:22 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:24:22.653 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:24:23 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1261: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Mar  1 05:24:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:24:23.899 167541 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Mar  1 05:24:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:24:23.899 167541 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Mar  1 05:24:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:24:23.899 167541 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Mar  1 05:24:24 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:24:24 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Mar  1 05:24:24 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:24:24 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Mar  1 05:24:24 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:24:24 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Mar  1 05:24:24 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:24:24 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Mar  1 05:24:24 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:24:24 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:24:24 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:24:24.609 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:24:24 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:24:24 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:24:24 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:24:24.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:24:25 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1262: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Mar  1 05:24:25 np0005634532 nova_compute[257049]: 2026-03-01 10:24:25.071 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:24:26 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:24:26 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:24:26 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:24:26.611 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:24:26 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:24:26 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:24:26 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:24:26.659 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:24:27 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1263: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Mar  1 05:24:27 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: ::ffff:192.168.122.100 - - [01/Mar/2026:10:24:27] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Mar  1 05:24:27 np0005634532 ceph-mgr[76134]: [prometheus INFO cherrypy.access.140604152982112] ::ffff:192.168.122.100 - - [01/Mar/2026:10:24:27] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Mar  1 05:24:27 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:24:27 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:24:27.334Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Mar  1 05:24:27 np0005634532 nova_compute[257049]: 2026-03-01 10:24:27.615 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:24:28 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:24:28 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:24:28 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:24:28.613 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:24:28 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:24:28 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:24:28 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:24:28.662 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:24:28 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:24:28.888Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Mar  1 05:24:28 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:24:28.888Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Mar  1 05:24:29 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:24:28 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Mar  1 05:24:29 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:24:28 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Mar  1 05:24:29 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:24:28 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Mar  1 05:24:29 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:24:29 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Mar  1 05:24:29 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1264: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Mar  1 05:24:30 np0005634532 nova_compute[257049]: 2026-03-01 10:24:30.075 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:24:30 np0005634532 podman[292258]: 2026-03-01 10:24:30.378586854 +0000 UTC m=+0.070875937 container health_status eba5e7c5bf1cb0694498c3b5d0f9b901dec36f2482ec40aef98fed1e15d9f18d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:bf1f44b6bfa655654f556ae3adca2582892e72550d916a5b0a8dbacb0e210bbe, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, container_name=ovn_controller, io.buildah.version=1.43.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '08b9467e1b7e95537191fb2fa6825d176e65d7d6be048e9a0925db064969bbde-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:bf1f44b6bfa655654f556ae3adca2582892e72550d916a5b0a8dbacb0e210bbe', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20260223)
Mar  1 05:24:30 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:24:30 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:24:30 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:24:30.615 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:24:30 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:24:30 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 05:24:30 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:24:30.664 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 05:24:31 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1265: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Mar  1 05:24:31 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Mar  1 05:24:31 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 05:24:31 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Mar  1 05:24:31 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 05:24:32 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:24:32 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Mar  1 05:24:32 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Mar  1 05:24:32 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Mar  1 05:24:32 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Mar  1 05:24:32 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1266: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 544 B/s rd, 0 op/s
Mar  1 05:24:32 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Mar  1 05:24:32 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 05:24:32 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Mar  1 05:24:32 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 05:24:32 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Mar  1 05:24:32 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Mar  1 05:24:32 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Mar  1 05:24:32 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Mar  1 05:24:32 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Mar  1 05:24:32 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Mar  1 05:24:32 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Mar  1 05:24:32 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Mar  1 05:24:32 np0005634532 nova_compute[257049]: 2026-03-01 10:24:32.618 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:24:32 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:24:32 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:24:32 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:24:32.617 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:24:32 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:24:32 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000024s ======
Mar  1 05:24:32 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:24:32.666 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Mar  1 05:24:32 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 05:24:32 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 05:24:32 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Mar  1 05:24:32 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 05:24:32 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 05:24:32 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Mar  1 05:24:32 np0005634532 podman[292531]: 2026-03-01 10:24:32.829351209 +0000 UTC m=+0.033639419 container create e12d0780a80209307775d9857a8e48aa080ddf3a9ab0ca8c12c6a33f7b9475f2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_curie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Mar  1 05:24:32 np0005634532 systemd[1]: Started libpod-conmon-e12d0780a80209307775d9857a8e48aa080ddf3a9ab0ca8c12c6a33f7b9475f2.scope.
Mar  1 05:24:33 np0005634532 podman[292531]: 2026-03-01 10:24:32.813462098 +0000 UTC m=+0.017750328 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Mar  1 05:24:33 np0005634532 systemd[1]: Started libcrun container.
Mar  1 05:24:33 np0005634532 podman[292531]: 2026-03-01 10:24:33.24099532 +0000 UTC m=+0.445283540 container init e12d0780a80209307775d9857a8e48aa080ddf3a9ab0ca8c12c6a33f7b9475f2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_curie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2)
Mar  1 05:24:33 np0005634532 podman[292531]: 2026-03-01 10:24:33.247013078 +0000 UTC m=+0.451301288 container start e12d0780a80209307775d9857a8e48aa080ddf3a9ab0ca8c12c6a33f7b9475f2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_curie, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Mar  1 05:24:33 np0005634532 blissful_curie[292548]: 167 167
Mar  1 05:24:33 np0005634532 systemd[1]: libpod-e12d0780a80209307775d9857a8e48aa080ddf3a9ab0ca8c12c6a33f7b9475f2.scope: Deactivated successfully.
Mar  1 05:24:33 np0005634532 podman[292531]: 2026-03-01 10:24:33.252917814 +0000 UTC m=+0.457206054 container attach e12d0780a80209307775d9857a8e48aa080ddf3a9ab0ca8c12c6a33f7b9475f2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_curie, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Mar  1 05:24:33 np0005634532 podman[292531]: 2026-03-01 10:24:33.253338494 +0000 UTC m=+0.457626704 container died e12d0780a80209307775d9857a8e48aa080ddf3a9ab0ca8c12c6a33f7b9475f2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_curie, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Mar  1 05:24:33 np0005634532 systemd[1]: var-lib-containers-storage-overlay-9aba61b471c1f6fd63adf6f3250a57e5345b7ed060df189d96808a88a63dfc04-merged.mount: Deactivated successfully.
Mar  1 05:24:33 np0005634532 podman[292531]: 2026-03-01 10:24:33.285067006 +0000 UTC m=+0.489355216 container remove e12d0780a80209307775d9857a8e48aa080ddf3a9ab0ca8c12c6a33f7b9475f2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=blissful_curie, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Mar  1 05:24:33 np0005634532 systemd[1]: libpod-conmon-e12d0780a80209307775d9857a8e48aa080ddf3a9ab0ca8c12c6a33f7b9475f2.scope: Deactivated successfully.
Mar  1 05:24:33 np0005634532 podman[292572]: 2026-03-01 10:24:33.392733528 +0000 UTC m=+0.033169858 container create 73ff64821eaf9e38b85638726c312150d079ea46342dd15fb7fba90bb7d57c8c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_archimedes, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2)
Mar  1 05:24:33 np0005634532 systemd[1]: Started libpod-conmon-73ff64821eaf9e38b85638726c312150d079ea46342dd15fb7fba90bb7d57c8c.scope.
Mar  1 05:24:33 np0005634532 systemd[1]: Started libcrun container.
Mar  1 05:24:33 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a864e6dfbc8a0ad16a98cd4d5b86b979d623e63722d7c8f2031d6b438cc36dc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Mar  1 05:24:33 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a864e6dfbc8a0ad16a98cd4d5b86b979d623e63722d7c8f2031d6b438cc36dc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Mar  1 05:24:33 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a864e6dfbc8a0ad16a98cd4d5b86b979d623e63722d7c8f2031d6b438cc36dc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Mar  1 05:24:33 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a864e6dfbc8a0ad16a98cd4d5b86b979d623e63722d7c8f2031d6b438cc36dc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Mar  1 05:24:33 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a864e6dfbc8a0ad16a98cd4d5b86b979d623e63722d7c8f2031d6b438cc36dc/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Mar  1 05:24:33 np0005634532 podman[292572]: 2026-03-01 10:24:33.452648724 +0000 UTC m=+0.093085084 container init 73ff64821eaf9e38b85638726c312150d079ea46342dd15fb7fba90bb7d57c8c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_archimedes, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Mar  1 05:24:33 np0005634532 podman[292572]: 2026-03-01 10:24:33.459307688 +0000 UTC m=+0.099744018 container start 73ff64821eaf9e38b85638726c312150d079ea46342dd15fb7fba90bb7d57c8c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_archimedes, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=squid)
Mar  1 05:24:33 np0005634532 podman[292572]: 2026-03-01 10:24:33.461913223 +0000 UTC m=+0.102349573 container attach 73ff64821eaf9e38b85638726c312150d079ea46342dd15fb7fba90bb7d57c8c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_archimedes, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/)
Mar  1 05:24:33 np0005634532 podman[292572]: 2026-03-01 10:24:33.379426561 +0000 UTC m=+0.019862911 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Mar  1 05:24:33 np0005634532 upbeat_archimedes[292589]: --> passed data devices: 0 physical, 1 LVM
Mar  1 05:24:33 np0005634532 upbeat_archimedes[292589]: --> All data devices are unavailable
Mar  1 05:24:33 np0005634532 systemd[1]: libpod-73ff64821eaf9e38b85638726c312150d079ea46342dd15fb7fba90bb7d57c8c.scope: Deactivated successfully.
Mar  1 05:24:33 np0005634532 podman[292572]: 2026-03-01 10:24:33.727304411 +0000 UTC m=+0.367740741 container died 73ff64821eaf9e38b85638726c312150d079ea46342dd15fb7fba90bb7d57c8c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_archimedes, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Mar  1 05:24:33 np0005634532 systemd[1]: var-lib-containers-storage-overlay-2a864e6dfbc8a0ad16a98cd4d5b86b979d623e63722d7c8f2031d6b438cc36dc-merged.mount: Deactivated successfully.
Mar  1 05:24:33 np0005634532 podman[292572]: 2026-03-01 10:24:33.761686838 +0000 UTC m=+0.402123168 container remove 73ff64821eaf9e38b85638726c312150d079ea46342dd15fb7fba90bb7d57c8c (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=upbeat_archimedes, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Mar  1 05:24:33 np0005634532 systemd[1]: libpod-conmon-73ff64821eaf9e38b85638726c312150d079ea46342dd15fb7fba90bb7d57c8c.scope: Deactivated successfully.
Mar  1 05:24:34 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:24:33 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Mar  1 05:24:34 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:24:34 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Mar  1 05:24:34 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:24:34 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Mar  1 05:24:34 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:24:34 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Mar  1 05:24:34 np0005634532 podman[292709]: 2026-03-01 10:24:34.235567102 +0000 UTC m=+0.018402864 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Mar  1 05:24:34 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1267: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 816 B/s rd, 0 op/s
Mar  1 05:24:34 np0005634532 podman[292709]: 2026-03-01 10:24:34.532929558 +0000 UTC m=+0.315765300 container create fb902445d2accaef8a4965a2fb0ff2705f8ff96a3cc346d774adc822360d339b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_grothendieck, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1)
Mar  1 05:24:34 np0005634532 systemd[1]: Started libpod-conmon-fb902445d2accaef8a4965a2fb0ff2705f8ff96a3cc346d774adc822360d339b.scope.
Mar  1 05:24:34 np0005634532 systemd[1]: Started libcrun container.
Mar  1 05:24:34 np0005634532 podman[292709]: 2026-03-01 10:24:34.594518395 +0000 UTC m=+0.377354137 container init fb902445d2accaef8a4965a2fb0ff2705f8ff96a3cc346d774adc822360d339b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_grothendieck, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325)
Mar  1 05:24:34 np0005634532 podman[292709]: 2026-03-01 10:24:34.599406695 +0000 UTC m=+0.382242437 container start fb902445d2accaef8a4965a2fb0ff2705f8ff96a3cc346d774adc822360d339b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_grothendieck, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Mar  1 05:24:34 np0005634532 podman[292709]: 2026-03-01 10:24:34.602079811 +0000 UTC m=+0.384915583 container attach fb902445d2accaef8a4965a2fb0ff2705f8ff96a3cc346d774adc822360d339b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_grothendieck, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Mar  1 05:24:34 np0005634532 determined_grothendieck[292726]: 167 167
Mar  1 05:24:34 np0005634532 systemd[1]: libpod-fb902445d2accaef8a4965a2fb0ff2705f8ff96a3cc346d774adc822360d339b.scope: Deactivated successfully.
Mar  1 05:24:34 np0005634532 conmon[292726]: conmon fb902445d2accaef8a49 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-fb902445d2accaef8a4965a2fb0ff2705f8ff96a3cc346d774adc822360d339b.scope/container/memory.events
Mar  1 05:24:34 np0005634532 podman[292709]: 2026-03-01 10:24:34.605500466 +0000 UTC m=+0.388336208 container died fb902445d2accaef8a4965a2fb0ff2705f8ff96a3cc346d774adc822360d339b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_grothendieck, CEPH_REF=squid, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1)
Mar  1 05:24:34 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:24:34 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:24:34 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:24:34.620 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:24:34 np0005634532 systemd[1]: var-lib-containers-storage-overlay-b08533da413ec0de114c6ae317427602373633f66f35ff2a961750f11bdb6bda-merged.mount: Deactivated successfully.
Mar  1 05:24:34 np0005634532 podman[292709]: 2026-03-01 10:24:34.637241767 +0000 UTC m=+0.420077509 container remove fb902445d2accaef8a4965a2fb0ff2705f8ff96a3cc346d774adc822360d339b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=determined_grothendieck, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/)
Mar  1 05:24:34 np0005634532 systemd[1]: libpod-conmon-fb902445d2accaef8a4965a2fb0ff2705f8ff96a3cc346d774adc822360d339b.scope: Deactivated successfully.
Mar  1 05:24:34 np0005634532 podman[292750]: 2026-03-01 10:24:34.767761503 +0000 UTC m=+0.037607288 container create 46a48d1150fc9c07ca5d5beacdccc6831e219625329b5412b49b7aaa294dff58 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_panini, ceph=True, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Mar  1 05:24:34 np0005634532 systemd[1]: Started libpod-conmon-46a48d1150fc9c07ca5d5beacdccc6831e219625329b5412b49b7aaa294dff58.scope.
Mar  1 05:24:34 np0005634532 systemd[1]: Started libcrun container.
Mar  1 05:24:34 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8b1227b482931b34d6691faa4f32293ba1e262c8b41b317f09bf2057a7726e7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Mar  1 05:24:34 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8b1227b482931b34d6691faa4f32293ba1e262c8b41b317f09bf2057a7726e7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Mar  1 05:24:34 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8b1227b482931b34d6691faa4f32293ba1e262c8b41b317f09bf2057a7726e7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Mar  1 05:24:34 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e8b1227b482931b34d6691faa4f32293ba1e262c8b41b317f09bf2057a7726e7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Mar  1 05:24:34 np0005634532 podman[292750]: 2026-03-01 10:24:34.84232192 +0000 UTC m=+0.112167485 container init 46a48d1150fc9c07ca5d5beacdccc6831e219625329b5412b49b7aaa294dff58 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_panini, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Mar  1 05:24:34 np0005634532 podman[292750]: 2026-03-01 10:24:34.752838715 +0000 UTC m=+0.022684260 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Mar  1 05:24:34 np0005634532 podman[292750]: 2026-03-01 10:24:34.850940182 +0000 UTC m=+0.120785727 container start 46a48d1150fc9c07ca5d5beacdccc6831e219625329b5412b49b7aaa294dff58 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_panini, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default)
Mar  1 05:24:34 np0005634532 podman[292750]: 2026-03-01 10:24:34.855176596 +0000 UTC m=+0.125022141 container attach 46a48d1150fc9c07ca5d5beacdccc6831e219625329b5412b49b7aaa294dff58 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_panini, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Mar  1 05:24:35 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:24:35 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:24:35 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:24:35.689 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:24:35 np0005634532 nova_compute[257049]: 2026-03-01 10:24:35.691 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:24:35 np0005634532 confident_panini[292766]: {
Mar  1 05:24:35 np0005634532 confident_panini[292766]:    "0": [
Mar  1 05:24:35 np0005634532 confident_panini[292766]:        {
Mar  1 05:24:35 np0005634532 confident_panini[292766]:            "devices": [
Mar  1 05:24:35 np0005634532 confident_panini[292766]:                "/dev/loop3"
Mar  1 05:24:35 np0005634532 confident_panini[292766]:            ],
Mar  1 05:24:35 np0005634532 confident_panini[292766]:            "lv_name": "ceph_lv0",
Mar  1 05:24:35 np0005634532 confident_panini[292766]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Mar  1 05:24:35 np0005634532 confident_panini[292766]:            "lv_size": "21470642176",
Mar  1 05:24:35 np0005634532 confident_panini[292766]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=0RN1y3-WDwf-vmRn-3Uec-9cdU-WGcX-8Z6LEG,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=437b1e74-f995-5d64-af1d-257ce01d77ab,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=e5da778e-73b7-4ea1-8a91-750fe3f6aa68,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Mar  1 05:24:35 np0005634532 confident_panini[292766]:            "lv_uuid": "0RN1y3-WDwf-vmRn-3Uec-9cdU-WGcX-8Z6LEG",
Mar  1 05:24:35 np0005634532 confident_panini[292766]:            "name": "ceph_lv0",
Mar  1 05:24:35 np0005634532 confident_panini[292766]:            "path": "/dev/ceph_vg0/ceph_lv0",
Mar  1 05:24:35 np0005634532 confident_panini[292766]:            "tags": {
Mar  1 05:24:35 np0005634532 confident_panini[292766]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Mar  1 05:24:35 np0005634532 confident_panini[292766]:                "ceph.block_uuid": "0RN1y3-WDwf-vmRn-3Uec-9cdU-WGcX-8Z6LEG",
Mar  1 05:24:35 np0005634532 confident_panini[292766]:                "ceph.cephx_lockbox_secret": "",
Mar  1 05:24:35 np0005634532 confident_panini[292766]:                "ceph.cluster_fsid": "437b1e74-f995-5d64-af1d-257ce01d77ab",
Mar  1 05:24:35 np0005634532 confident_panini[292766]:                "ceph.cluster_name": "ceph",
Mar  1 05:24:35 np0005634532 confident_panini[292766]:                "ceph.crush_device_class": "",
Mar  1 05:24:35 np0005634532 confident_panini[292766]:                "ceph.encrypted": "0",
Mar  1 05:24:35 np0005634532 confident_panini[292766]:                "ceph.osd_fsid": "e5da778e-73b7-4ea1-8a91-750fe3f6aa68",
Mar  1 05:24:35 np0005634532 confident_panini[292766]:                "ceph.osd_id": "0",
Mar  1 05:24:35 np0005634532 confident_panini[292766]:                "ceph.osdspec_affinity": "default_drive_group",
Mar  1 05:24:35 np0005634532 confident_panini[292766]:                "ceph.type": "block",
Mar  1 05:24:35 np0005634532 confident_panini[292766]:                "ceph.vdo": "0",
Mar  1 05:24:35 np0005634532 confident_panini[292766]:                "ceph.with_tpm": "0"
Mar  1 05:24:35 np0005634532 confident_panini[292766]:            },
Mar  1 05:24:35 np0005634532 confident_panini[292766]:            "type": "block",
Mar  1 05:24:35 np0005634532 confident_panini[292766]:            "vg_name": "ceph_vg0"
Mar  1 05:24:35 np0005634532 confident_panini[292766]:        }
Mar  1 05:24:35 np0005634532 confident_panini[292766]:    ]
Mar  1 05:24:35 np0005634532 confident_panini[292766]: }
Mar  1 05:24:35 np0005634532 systemd[1]: libpod-46a48d1150fc9c07ca5d5beacdccc6831e219625329b5412b49b7aaa294dff58.scope: Deactivated successfully.
Mar  1 05:24:35 np0005634532 podman[292750]: 2026-03-01 10:24:35.751848626 +0000 UTC m=+1.021694171 container died 46a48d1150fc9c07ca5d5beacdccc6831e219625329b5412b49b7aaa294dff58 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_panini, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Mar  1 05:24:35 np0005634532 systemd[1]: var-lib-containers-storage-overlay-e8b1227b482931b34d6691faa4f32293ba1e262c8b41b317f09bf2057a7726e7-merged.mount: Deactivated successfully.
Mar  1 05:24:35 np0005634532 podman[292750]: 2026-03-01 10:24:35.799202692 +0000 UTC m=+1.069048227 container remove 46a48d1150fc9c07ca5d5beacdccc6831e219625329b5412b49b7aaa294dff58 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=confident_panini, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325)
Mar  1 05:24:35 np0005634532 systemd[1]: libpod-conmon-46a48d1150fc9c07ca5d5beacdccc6831e219625329b5412b49b7aaa294dff58.scope: Deactivated successfully.
Mar  1 05:24:35 np0005634532 podman[292777]: 2026-03-01 10:24:35.870738264 +0000 UTC m=+0.082486213 container health_status 1df4dec302d8b40bce93714986cfc892519e8a31436399020d0034ae17ac64d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a5baae9e21dffd42c66f546b0b62f95c6af9f409d05e01e7483de42d5dd2a382, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.43.0, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '08b9467e1b7e95537191fb2fa6825d176e65d7d6be048e9a0925db064969bbde-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a5baae9e21dffd42c66f546b0b62f95c6af9f409d05e01e7483de42d5dd2a382', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Mar  1 05:24:36 np0005634532 podman[292900]: 2026-03-01 10:24:36.325512098 +0000 UTC m=+0.050776422 container create 0e16f7df0f9796b75551fe01063ce3b1d21af7f207636620f8fae2defcd77c2a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_williamson, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, ceph=True, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Mar  1 05:24:36 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1268: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 544 B/s rd, 0 op/s
Mar  1 05:24:36 np0005634532 systemd[1]: Started libpod-conmon-0e16f7df0f9796b75551fe01063ce3b1d21af7f207636620f8fae2defcd77c2a.scope.
Mar  1 05:24:36 np0005634532 podman[292900]: 2026-03-01 10:24:36.298213956 +0000 UTC m=+0.023478380 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Mar  1 05:24:36 np0005634532 systemd[1]: Started libcrun container.
Mar  1 05:24:36 np0005634532 podman[292900]: 2026-03-01 10:24:36.41245626 +0000 UTC m=+0.137720604 container init 0e16f7df0f9796b75551fe01063ce3b1d21af7f207636620f8fae2defcd77c2a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_williamson, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1)
Mar  1 05:24:36 np0005634532 podman[292900]: 2026-03-01 10:24:36.42220295 +0000 UTC m=+0.147467274 container start 0e16f7df0f9796b75551fe01063ce3b1d21af7f207636620f8fae2defcd77c2a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_williamson, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Mar  1 05:24:36 np0005634532 podman[292900]: 2026-03-01 10:24:36.42584537 +0000 UTC m=+0.151109724 container attach 0e16f7df0f9796b75551fe01063ce3b1d21af7f207636620f8fae2defcd77c2a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_williamson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Mar  1 05:24:36 np0005634532 amazing_williamson[292916]: 167 167
Mar  1 05:24:36 np0005634532 systemd[1]: libpod-0e16f7df0f9796b75551fe01063ce3b1d21af7f207636620f8fae2defcd77c2a.scope: Deactivated successfully.
Mar  1 05:24:36 np0005634532 conmon[292916]: conmon 0e16f7df0f9796b75551 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-0e16f7df0f9796b75551fe01063ce3b1d21af7f207636620f8fae2defcd77c2a.scope/container/memory.events
Mar  1 05:24:36 np0005634532 podman[292921]: 2026-03-01 10:24:36.47010444 +0000 UTC m=+0.027265463 container died 0e16f7df0f9796b75551fe01063ce3b1d21af7f207636620f8fae2defcd77c2a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_williamson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, io.buildah.version=1.40.1)
Mar  1 05:24:36 np0005634532 systemd[1]: var-lib-containers-storage-overlay-7b2e1144f4ed236be580bd29291602f89825d9967db64556e876c2f31efe7157-merged.mount: Deactivated successfully.
Mar  1 05:24:36 np0005634532 podman[292921]: 2026-03-01 10:24:36.508582268 +0000 UTC m=+0.065743281 container remove 0e16f7df0f9796b75551fe01063ce3b1d21af7f207636620f8fae2defcd77c2a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_williamson, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True)
Mar  1 05:24:36 np0005634532 systemd[1]: libpod-conmon-0e16f7df0f9796b75551fe01063ce3b1d21af7f207636620f8fae2defcd77c2a.scope: Deactivated successfully.
Mar  1 05:24:36 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:24:36 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:24:36 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:24:36.622 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:24:36 np0005634532 podman[292943]: 2026-03-01 10:24:36.71405397 +0000 UTC m=+0.067304139 container create 730ae1abb5d5219f1b90611cac9b4ed8f418d63af38a517ecee092b871ba4f9d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_tharp, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Mar  1 05:24:36 np0005634532 systemd[1]: Started libpod-conmon-730ae1abb5d5219f1b90611cac9b4ed8f418d63af38a517ecee092b871ba4f9d.scope.
Mar  1 05:24:36 np0005634532 podman[292943]: 2026-03-01 10:24:36.686622564 +0000 UTC m=+0.039872783 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Mar  1 05:24:36 np0005634532 systemd[1]: Started libcrun container.
Mar  1 05:24:36 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb7c321bfb1cd88e35a43c6736e40514dc6c4f66d15f7494ddc3ce437b015455/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Mar  1 05:24:36 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb7c321bfb1cd88e35a43c6736e40514dc6c4f66d15f7494ddc3ce437b015455/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Mar  1 05:24:36 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb7c321bfb1cd88e35a43c6736e40514dc6c4f66d15f7494ddc3ce437b015455/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Mar  1 05:24:36 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eb7c321bfb1cd88e35a43c6736e40514dc6c4f66d15f7494ddc3ce437b015455/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Mar  1 05:24:36 np0005634532 podman[292943]: 2026-03-01 10:24:36.80416228 +0000 UTC m=+0.157412439 container init 730ae1abb5d5219f1b90611cac9b4ed8f418d63af38a517ecee092b871ba4f9d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_tharp, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Mar  1 05:24:36 np0005634532 podman[292943]: 2026-03-01 10:24:36.811123031 +0000 UTC m=+0.164373190 container start 730ae1abb5d5219f1b90611cac9b4ed8f418d63af38a517ecee092b871ba4f9d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_tharp, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, ceph=True, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Mar  1 05:24:36 np0005634532 podman[292943]: 2026-03-01 10:24:36.815473689 +0000 UTC m=+0.168723828 container attach 730ae1abb5d5219f1b90611cac9b4ed8f418d63af38a517ecee092b871ba4f9d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_tharp, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Mar  1 05:24:37 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: ::ffff:192.168.122.100 - - [01/Mar/2026:10:24:37] "GET /metrics HTTP/1.1" 200 48456 "" "Prometheus/2.51.0"
Mar  1 05:24:37 np0005634532 ceph-mgr[76134]: [prometheus INFO cherrypy.access.140604152982112] ::ffff:192.168.122.100 - - [01/Mar/2026:10:24:37] "GET /metrics HTTP/1.1" 200 48456 "" "Prometheus/2.51.0"
Mar  1 05:24:37 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:24:37 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:24:37.334Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Mar  1 05:24:37 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:24:37.335Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Mar  1 05:24:37 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:24:37.335Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Mar  1 05:24:37 np0005634532 lvm[293033]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Mar  1 05:24:37 np0005634532 lvm[293033]: VG ceph_vg0 finished
Mar  1 05:24:37 np0005634532 zen_tharp[292959]: {}
Mar  1 05:24:37 np0005634532 systemd[1]: libpod-730ae1abb5d5219f1b90611cac9b4ed8f418d63af38a517ecee092b871ba4f9d.scope: Deactivated successfully.
Mar  1 05:24:37 np0005634532 conmon[292959]: conmon 730ae1abb5d5219f1b90 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-730ae1abb5d5219f1b90611cac9b4ed8f418d63af38a517ecee092b871ba4f9d.scope/container/memory.events
Mar  1 05:24:37 np0005634532 podman[292943]: 2026-03-01 10:24:37.485083845 +0000 UTC m=+0.838334004 container died 730ae1abb5d5219f1b90611cac9b4ed8f418d63af38a517ecee092b871ba4f9d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_tharp, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Mar  1 05:24:37 np0005634532 systemd[1]: var-lib-containers-storage-overlay-eb7c321bfb1cd88e35a43c6736e40514dc6c4f66d15f7494ddc3ce437b015455-merged.mount: Deactivated successfully.
Mar  1 05:24:37 np0005634532 podman[292943]: 2026-03-01 10:24:37.531305184 +0000 UTC m=+0.884555363 container remove 730ae1abb5d5219f1b90611cac9b4ed8f418d63af38a517ecee092b871ba4f9d (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=zen_tharp, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325)
Mar  1 05:24:37 np0005634532 systemd[1]: libpod-conmon-730ae1abb5d5219f1b90611cac9b4ed8f418d63af38a517ecee092b871ba4f9d.scope: Deactivated successfully.
Mar  1 05:24:37 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Mar  1 05:24:37 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 05:24:37 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Mar  1 05:24:37 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 05:24:37 np0005634532 nova_compute[257049]: 2026-03-01 10:24:37.621 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:24:37 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:24:37 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:24:37 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:24:37.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:24:37 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 05:24:37 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 05:24:38 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1269: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 544 B/s rd, 0 op/s
Mar  1 05:24:38 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:24:38 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:24:38 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:24:38.624 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:24:38 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:24:38.889Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Mar  1 05:24:38 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:24:38.890Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Mar  1 05:24:38 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:24:38.891Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Mar  1 05:24:39 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:24:38 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Mar  1 05:24:39 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:24:38 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Mar  1 05:24:39 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:24:38 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Mar  1 05:24:39 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:24:39 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Mar  1 05:24:39 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:24:39 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:24:39 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:24:39.694 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:24:40 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1270: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 544 B/s rd, 0 op/s
Mar  1 05:24:40 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:24:40 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:24:40 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:24:40.626 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:24:40 np0005634532 nova_compute[257049]: 2026-03-01 10:24:40.694 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:24:41 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:24:41 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000024s ======
Mar  1 05:24:41 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:24:41.696 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Mar  1 05:24:42 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:24:42 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1271: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 544 B/s rd, 0 op/s
Mar  1 05:24:42 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:24:42 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:24:42 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:24:42.628 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:24:42 np0005634532 nova_compute[257049]: 2026-03-01 10:24:42.660 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:24:43 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:24:43 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 05:24:43 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:24:43.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 05:24:44 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:24:43 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Mar  1 05:24:44 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:24:43 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Mar  1 05:24:44 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:24:43 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Mar  1 05:24:44 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:24:44 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Mar  1 05:24:44 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1272: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Mar  1 05:24:44 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:24:44 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:24:44 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:24:44.630 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:24:45 np0005634532 nova_compute[257049]: 2026-03-01 10:24:45.698 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:24:45 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:24:45 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:24:45 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:24:45.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:24:46 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1273: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Mar  1 05:24:46 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:24:46 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:24:46 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:24:46.632 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:24:47 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: ::ffff:192.168.122.100 - - [01/Mar/2026:10:24:47] "GET /metrics HTTP/1.1" 200 48456 "" "Prometheus/2.51.0"
Mar  1 05:24:47 np0005634532 ceph-mgr[76134]: [prometheus INFO cherrypy.access.140604152982112] ::ffff:192.168.122.100 - - [01/Mar/2026:10:24:47] "GET /metrics HTTP/1.1" 200 48456 "" "Prometheus/2.51.0"
Mar  1 05:24:47 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:24:47 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:24:47.335Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Mar  1 05:24:47 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:24:47.335Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Mar  1 05:24:47 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:24:47.336Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Mar  1 05:24:47 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Mar  1 05:24:47 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Mar  1 05:24:47 np0005634532 nova_compute[257049]: 2026-03-01 10:24:47.664 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:24:47 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:24:47 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:24:47 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:24:47.705 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:24:47 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Mar  1 05:24:47 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Mar  1 05:24:47 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Mar  1 05:24:47 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Mar  1 05:24:47 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Mar  1 05:24:47 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Mar  1 05:24:48 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1274: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Mar  1 05:24:48 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:24:48 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 05:24:48 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:24:48.634 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 05:24:48 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:24:48.892Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Mar  1 05:24:49 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:24:48 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Mar  1 05:24:49 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:24:48 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Mar  1 05:24:49 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:24:48 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Mar  1 05:24:49 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:24:49 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Mar  1 05:24:49 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:24:49 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:24:49 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:24:49.709 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:24:50 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1275: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Mar  1 05:24:50 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:24:50 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:24:50 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:24:50.636 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:24:50 np0005634532 nova_compute[257049]: 2026-03-01 10:24:50.702 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:24:50 np0005634532 nova_compute[257049]: 2026-03-01 10:24:50.976 257053 DEBUG oslo_service.periodic_task [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Mar  1 05:24:51 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:24:51 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:24:51 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:24:51.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:24:51 np0005634532 nova_compute[257049]: 2026-03-01 10:24:51.977 257053 DEBUG oslo_service.periodic_task [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Mar  1 05:24:52 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:24:52 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1276: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Mar  1 05:24:52 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:24:52 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:24:52 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:24:52.638 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:24:52 np0005634532 nova_compute[257049]: 2026-03-01 10:24:52.695 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:24:52 np0005634532 nova_compute[257049]: 2026-03-01 10:24:52.976 257053 DEBUG oslo_service.periodic_task [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Mar  1 05:24:53 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:24:53 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:24:53 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:24:53.715 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:24:54 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:24:53 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Mar  1 05:24:54 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:24:53 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Mar  1 05:24:54 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:24:53 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Mar  1 05:24:54 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:24:54 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Mar  1 05:24:54 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1277: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Mar  1 05:24:54 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:24:54 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:24:54 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:24:54.640 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:24:54 np0005634532 nova_compute[257049]: 2026-03-01 10:24:54.976 257053 DEBUG oslo_service.periodic_task [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Mar  1 05:24:54 np0005634532 nova_compute[257049]: 2026-03-01 10:24:54.977 257053 DEBUG nova.compute.manager [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Mar  1 05:24:55 np0005634532 nova_compute[257049]: 2026-03-01 10:24:55.706 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:24:55 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:24:55 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:24:55 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:24:55.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:24:55 np0005634532 nova_compute[257049]: 2026-03-01 10:24:55.976 257053 DEBUG oslo_service.periodic_task [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Mar  1 05:24:56 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1278: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Mar  1 05:24:56 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:24:56 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:24:56 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:24:56.642 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:24:56 np0005634532 nova_compute[257049]: 2026-03-01 10:24:56.738 257053 DEBUG oslo_concurrency.lockutils [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Mar  1 05:24:56 np0005634532 nova_compute[257049]: 2026-03-01 10:24:56.738 257053 DEBUG oslo_concurrency.lockutils [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Mar  1 05:24:56 np0005634532 nova_compute[257049]: 2026-03-01 10:24:56.739 257053 DEBUG oslo_concurrency.lockutils [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Mar  1 05:24:56 np0005634532 nova_compute[257049]: 2026-03-01 10:24:56.739 257053 DEBUG nova.compute.resource_tracker [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Mar  1 05:24:56 np0005634532 nova_compute[257049]: 2026-03-01 10:24:56.739 257053 DEBUG oslo_concurrency.processutils [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Mar  1 05:24:57 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: ::ffff:192.168.122.100 - - [01/Mar/2026:10:24:57] "GET /metrics HTTP/1.1" 200 48459 "" "Prometheus/2.51.0"
Mar  1 05:24:57 np0005634532 ceph-mgr[76134]: [prometheus INFO cherrypy.access.140604152982112] ::ffff:192.168.122.100 - - [01/Mar/2026:10:24:57] "GET /metrics HTTP/1.1" 200 48459 "" "Prometheus/2.51.0"
Mar  1 05:24:57 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Mar  1 05:24:57 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/237238040' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Mar  1 05:24:57 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:24:57 np0005634532 nova_compute[257049]: 2026-03-01 10:24:57.221 257053 DEBUG oslo_concurrency.processutils [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.482s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Mar  1 05:24:57 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:24:57.337Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Mar  1 05:24:57 np0005634532 nova_compute[257049]: 2026-03-01 10:24:57.360 257053 WARNING nova.virt.libvirt.driver [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Mar  1 05:24:57 np0005634532 nova_compute[257049]: 2026-03-01 10:24:57.361 257053 DEBUG nova.compute.resource_tracker [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4478MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Mar  1 05:24:57 np0005634532 nova_compute[257049]: 2026-03-01 10:24:57.362 257053 DEBUG oslo_concurrency.lockutils [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Mar  1 05:24:57 np0005634532 nova_compute[257049]: 2026-03-01 10:24:57.362 257053 DEBUG oslo_concurrency.lockutils [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Mar  1 05:24:57 np0005634532 nova_compute[257049]: 2026-03-01 10:24:57.440 257053 DEBUG nova.compute.resource_tracker [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Mar  1 05:24:57 np0005634532 nova_compute[257049]: 2026-03-01 10:24:57.441 257053 DEBUG nova.compute.resource_tracker [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Mar  1 05:24:57 np0005634532 nova_compute[257049]: 2026-03-01 10:24:57.459 257053 DEBUG oslo_concurrency.processutils [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Mar  1 05:24:57 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:24:57 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:24:57 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:24:57.744 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:24:57 np0005634532 nova_compute[257049]: 2026-03-01 10:24:57.748 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:24:57 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Mar  1 05:24:57 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1401174068' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Mar  1 05:24:57 np0005634532 nova_compute[257049]: 2026-03-01 10:24:57.949 257053 DEBUG oslo_concurrency.processutils [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.491s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Mar  1 05:24:57 np0005634532 nova_compute[257049]: 2026-03-01 10:24:57.954 257053 DEBUG nova.compute.provider_tree [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Inventory has not changed in ProviderTree for provider: 018d246d-1e01-4168-9128-598c5501111b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Mar  1 05:24:57 np0005634532 nova_compute[257049]: 2026-03-01 10:24:57.972 257053 DEBUG nova.scheduler.client.report [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Inventory has not changed for provider 018d246d-1e01-4168-9128-598c5501111b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Mar  1 05:24:57 np0005634532 nova_compute[257049]: 2026-03-01 10:24:57.974 257053 DEBUG nova.compute.resource_tracker [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Mar  1 05:24:57 np0005634532 nova_compute[257049]: 2026-03-01 10:24:57.975 257053 DEBUG oslo_concurrency.lockutils [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.613s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Mar  1 05:24:58 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Mar  1 05:24:58 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3567257648' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Mar  1 05:24:58 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Mar  1 05:24:58 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3567257648' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Mar  1 05:24:58 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1279: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Mar  1 05:24:58 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:24:58 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000024s ======
Mar  1 05:24:58 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:24:58.644 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Mar  1 05:24:58 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:24:58.892Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Mar  1 05:24:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:24:58 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Mar  1 05:24:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:24:58 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Mar  1 05:24:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:24:58 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Mar  1 05:24:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:24:59 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Mar  1 05:24:59 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:24:59 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:24:59 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:24:59.747 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:25:00 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1280: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Mar  1 05:25:00 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:25:00 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:25:00 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:25:00.646 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:25:00 np0005634532 nova_compute[257049]: 2026-03-01 10:25:00.710 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:25:00 np0005634532 podman[293195]: 2026-03-01 10:25:00.941698976 +0000 UTC m=+0.098185690 container health_status eba5e7c5bf1cb0694498c3b5d0f9b901dec36f2482ec40aef98fed1e15d9f18d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:bf1f44b6bfa655654f556ae3adca2582892e72550d916a5b0a8dbacb0e210bbe, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '08b9467e1b7e95537191fb2fa6825d176e65d7d6be048e9a0925db064969bbde-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:bf1f44b6bfa655654f556ae3adca2582892e72550d916a5b0a8dbacb0e210bbe', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.43.0, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image)
Mar  1 05:25:01 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:25:01 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 05:25:01 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:25:01.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 05:25:01 np0005634532 nova_compute[257049]: 2026-03-01 10:25:01.975 257053 DEBUG oslo_service.periodic_task [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Mar  1 05:25:01 np0005634532 nova_compute[257049]: 2026-03-01 10:25:01.975 257053 DEBUG oslo_service.periodic_task [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Mar  1 05:25:01 np0005634532 nova_compute[257049]: 2026-03-01 10:25:01.976 257053 DEBUG nova.compute.manager [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Mar  1 05:25:01 np0005634532 nova_compute[257049]: 2026-03-01 10:25:01.976 257053 DEBUG nova.compute.manager [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Mar  1 05:25:01 np0005634532 nova_compute[257049]: 2026-03-01 10:25:01.998 257053 DEBUG nova.compute.manager [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Mar  1 05:25:01 np0005634532 nova_compute[257049]: 2026-03-01 10:25:01.998 257053 DEBUG oslo_service.periodic_task [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Mar  1 05:25:01 np0005634532 nova_compute[257049]: 2026-03-01 10:25:01.998 257053 DEBUG oslo_service.periodic_task [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Mar  1 05:25:02 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:25:02 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1281: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Mar  1 05:25:02 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Mar  1 05:25:02 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Mar  1 05:25:02 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:25:02 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:25:02 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:25:02.648 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:25:02 np0005634532 nova_compute[257049]: 2026-03-01 10:25:02.785 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:25:03 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:25:03 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:25:03 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:25:03.752 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:25:04 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:25:04 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Mar  1 05:25:04 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:25:04 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Mar  1 05:25:04 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:25:04 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Mar  1 05:25:04 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:25:04 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Mar  1 05:25:04 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1282: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Mar  1 05:25:04 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:25:04 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:25:04 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:25:04.650 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:25:05 np0005634532 nova_compute[257049]: 2026-03-01 10:25:05.711 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:25:05 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:25:05 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:25:05 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:25:05.756 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:25:05 np0005634532 nova_compute[257049]: 2026-03-01 10:25:05.994 257053 DEBUG oslo_service.periodic_task [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Mar  1 05:25:06 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1283: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Mar  1 05:25:06 np0005634532 podman[293229]: 2026-03-01 10:25:06.36818689 +0000 UTC m=+0.054111754 container health_status 1df4dec302d8b40bce93714986cfc892519e8a31436399020d0034ae17ac64d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a5baae9e21dffd42c66f546b0b62f95c6af9f409d05e01e7483de42d5dd2a382, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '08b9467e1b7e95537191fb2fa6825d176e65d7d6be048e9a0925db064969bbde-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a5baae9e21dffd42c66f546b0b62f95c6af9f409d05e01e7483de42d5dd2a382', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.43.0)
Mar  1 05:25:06 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:25:06 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:25:06 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:25:06.652 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:25:07 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: ::ffff:192.168.122.100 - - [01/Mar/2026:10:25:07] "GET /metrics HTTP/1.1" 200 48457 "" "Prometheus/2.51.0"
Mar  1 05:25:07 np0005634532 ceph-mgr[76134]: [prometheus INFO cherrypy.access.140604152982112] ::ffff:192.168.122.100 - - [01/Mar/2026:10:25:07] "GET /metrics HTTP/1.1" 200 48457 "" "Prometheus/2.51.0"
Mar  1 05:25:07 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:25:07 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:25:07.338Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Mar  1 05:25:07 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:25:07 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:25:07 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:25:07.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:25:07 np0005634532 nova_compute[257049]: 2026-03-01 10:25:07.786 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:25:08 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1284: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Mar  1 05:25:08 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:25:08 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000024s ======
Mar  1 05:25:08 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:25:08.654 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Mar  1 05:25:08 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:25:08.893Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Mar  1 05:25:08 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:25:08.893Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Mar  1 05:25:09 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:25:08 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Mar  1 05:25:09 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:25:08 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Mar  1 05:25:09 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:25:08 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Mar  1 05:25:09 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:25:09 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Mar  1 05:25:09 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:25:09 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:25:09 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:25:09.762 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:25:10 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1285: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Mar  1 05:25:10 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:25:10 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:25:10 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:25:10.656 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:25:10 np0005634532 nova_compute[257049]: 2026-03-01 10:25:10.713 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:25:11 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:25:11 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:25:11 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:25:11.764 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:25:12 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:25:12 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1286: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Mar  1 05:25:12 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:25:12 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:25:12 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:25:12.658 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:25:12 np0005634532 nova_compute[257049]: 2026-03-01 10:25:12.829 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:25:13 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:25:13 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000024s ======
Mar  1 05:25:13 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:25:13.767 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Mar  1 05:25:14 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:25:13 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Mar  1 05:25:14 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:25:13 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Mar  1 05:25:14 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:25:13 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Mar  1 05:25:14 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:25:14 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Mar  1 05:25:14 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1287: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Mar  1 05:25:14 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:25:14 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 05:25:14 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:25:14.660 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 05:25:15 np0005634532 nova_compute[257049]: 2026-03-01 10:25:15.717 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:25:15 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:25:15 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:25:15 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:25:15.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:25:16 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1288: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Mar  1 05:25:16 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:25:16 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:25:16 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:25:16.663 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:25:17 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: ::ffff:192.168.122.100 - - [01/Mar/2026:10:25:17] "GET /metrics HTTP/1.1" 200 48457 "" "Prometheus/2.51.0"
Mar  1 05:25:17 np0005634532 ceph-mgr[76134]: [prometheus INFO cherrypy.access.140604152982112] ::ffff:192.168.122.100 - - [01/Mar/2026:10:25:17] "GET /metrics HTTP/1.1" 200 48457 "" "Prometheus/2.51.0"
Mar  1 05:25:17 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:25:17 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:25:17.339Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Mar  1 05:25:17 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Mar  1 05:25:17 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Mar  1 05:25:17 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:25:17 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:25:17 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:25:17.773 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:25:17 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Mar  1 05:25:17 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Mar  1 05:25:17 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Mar  1 05:25:17 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Mar  1 05:25:17 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Mar  1 05:25:17 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Mar  1 05:25:17 np0005634532 nova_compute[257049]: 2026-03-01 10:25:17.888 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:25:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] _maybe_adjust
Mar  1 05:25:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:25:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Mar  1 05:25:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:25:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Mar  1 05:25:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:25:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Mar  1 05:25:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:25:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Mar  1 05:25:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:25:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Mar  1 05:25:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:25:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Mar  1 05:25:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:25:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Mar  1 05:25:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:25:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Mar  1 05:25:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:25:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Mar  1 05:25:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:25:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Mar  1 05:25:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:25:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Mar  1 05:25:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:25:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Mar  1 05:25:18 np0005634532 ceph-mgr[76134]: [balancer INFO root] Optimize plan auto_2026-03-01_10:25:18
Mar  1 05:25:18 np0005634532 ceph-mgr[76134]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Mar  1 05:25:18 np0005634532 ceph-mgr[76134]: [balancer INFO root] do_upmap
Mar  1 05:25:18 np0005634532 ceph-mgr[76134]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'default.rgw.log', 'default.rgw.meta', '.nfs', 'backups', 'default.rgw.control', 'vms', 'images', '.rgw.root', 'cephfs.cephfs.data', 'volumes', '.mgr']
Mar  1 05:25:18 np0005634532 ceph-mgr[76134]: [balancer INFO root] prepared 0/10 upmap changes
Mar  1 05:25:18 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1289: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Mar  1 05:25:18 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:25:18 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 05:25:18 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:25:18.665 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 05:25:18 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:25:18.894Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Mar  1 05:25:19 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:25:18 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Mar  1 05:25:19 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:25:18 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Mar  1 05:25:19 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:25:18 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Mar  1 05:25:19 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:25:19 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Mar  1 05:25:19 np0005634532 ceph-mgr[76134]: [devicehealth INFO root] Check health
Mar  1 05:25:19 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Mar  1 05:25:19 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Mar  1 05:25:19 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: vms, start_after=
Mar  1 05:25:19 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: vms, start_after=
Mar  1 05:25:19 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: volumes, start_after=
Mar  1 05:25:19 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: volumes, start_after=
Mar  1 05:25:19 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: backups, start_after=
Mar  1 05:25:19 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: backups, start_after=
Mar  1 05:25:19 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: images, start_after=
Mar  1 05:25:19 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: images, start_after=
Mar  1 05:25:19 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:25:19 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:25:19 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:25:19.776 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:25:20 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1290: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Mar  1 05:25:20 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:25:20 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000024s ======
Mar  1 05:25:20 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:25:20.667 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Mar  1 05:25:20 np0005634532 nova_compute[257049]: 2026-03-01 10:25:20.721 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:25:21 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:25:21 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:25:21 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:25:21.779 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:25:22 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:25:22 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1291: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Mar  1 05:25:22 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:25:22 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:25:22 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:25:22.669 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:25:22 np0005634532 nova_compute[257049]: 2026-03-01 10:25:22.925 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:25:23 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:25:23 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 05:25:23 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:25:23.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 05:25:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:25:23.900 167541 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Mar  1 05:25:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:25:23.900 167541 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Mar  1 05:25:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:25:23.900 167541 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Mar  1 05:25:24 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:25:23 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Mar  1 05:25:24 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:25:23 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Mar  1 05:25:24 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:25:23 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Mar  1 05:25:24 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:25:24 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Mar  1 05:25:24 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1292: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Mar  1 05:25:24 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:25:24 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 05:25:24 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:25:24.671 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 05:25:25 np0005634532 nova_compute[257049]: 2026-03-01 10:25:25.724 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:25:25 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:25:25 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 05:25:25 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:25:25.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 05:25:26 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1293: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Mar  1 05:25:26 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:25:26 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:25:26 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:25:26.673 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:25:27 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: ::ffff:192.168.122.100 - - [01/Mar/2026:10:25:27] "GET /metrics HTTP/1.1" 200 48458 "" "Prometheus/2.51.0"
Mar  1 05:25:27 np0005634532 ceph-mgr[76134]: [prometheus INFO cherrypy.access.140604152982112] ::ffff:192.168.122.100 - - [01/Mar/2026:10:25:27] "GET /metrics HTTP/1.1" 200 48458 "" "Prometheus/2.51.0"
Mar  1 05:25:27 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:25:27 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:25:27.339Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Mar  1 05:25:27 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:25:27.339Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Mar  1 05:25:27 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:25:27.340Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Mar  1 05:25:27 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:25:27 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:25:27 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:25:27.788 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:25:27 np0005634532 nova_compute[257049]: 2026-03-01 10:25:27.928 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:25:28 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1294: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Mar  1 05:25:28 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:25:28 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:25:28 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:25:28.675 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:25:28 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:25:28.896Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Mar  1 05:25:29 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:25:28 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Mar  1 05:25:29 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:25:28 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Mar  1 05:25:29 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:25:28 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Mar  1 05:25:29 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:25:29 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Mar  1 05:25:29 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:25:29 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:25:29 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:25:29.792 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:25:30 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1295: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Mar  1 05:25:30 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:25:30 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:25:30 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:25:30.677 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:25:30 np0005634532 nova_compute[257049]: 2026-03-01 10:25:30.728 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:25:31 np0005634532 podman[293300]: 2026-03-01 10:25:31.389612048 +0000 UTC m=+0.075875141 container health_status eba5e7c5bf1cb0694498c3b5d0f9b901dec36f2482ec40aef98fed1e15d9f18d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:bf1f44b6bfa655654f556ae3adca2582892e72550d916a5b0a8dbacb0e210bbe, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '08b9467e1b7e95537191fb2fa6825d176e65d7d6be048e9a0925db064969bbde-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:bf1f44b6bfa655654f556ae3adca2582892e72550d916a5b0a8dbacb0e210bbe', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.43.0)
Mar  1 05:25:31 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:25:31 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:25:31 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:25:31.795 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:25:32 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:25:32 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1296: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Mar  1 05:25:32 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Mar  1 05:25:32 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Mar  1 05:25:32 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:25:32 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:25:32 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:25:32.679 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:25:32 np0005634532 nova_compute[257049]: 2026-03-01 10:25:32.930 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:25:33 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:25:33 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 05:25:33 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:25:33.797 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 05:25:34 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:25:33 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Mar  1 05:25:34 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:25:33 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Mar  1 05:25:34 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:25:33 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Mar  1 05:25:34 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:25:34 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Mar  1 05:25:34 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1297: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Mar  1 05:25:34 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:25:34 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:25:34 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:25:34.681 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:25:35 np0005634532 nova_compute[257049]: 2026-03-01 10:25:35.731 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:25:35 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:25:35 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:25:35 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:25:35.801 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:25:36 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1298: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Mar  1 05:25:36 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:25:36 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:25:36 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:25:36.683 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:25:37 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: ::ffff:192.168.122.100 - - [01/Mar/2026:10:25:37] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Mar  1 05:25:37 np0005634532 ceph-mgr[76134]: [prometheus INFO cherrypy.access.140604152982112] ::ffff:192.168.122.100 - - [01/Mar/2026:10:25:37] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Mar  1 05:25:37 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:25:37 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:25:37.341Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Mar  1 05:25:37 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:25:37.343Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Mar  1 05:25:37 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:25:37.344Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Mar  1 05:25:37 np0005634532 podman[293333]: 2026-03-01 10:25:37.381666621 +0000 UTC m=+0.071164254 container health_status 1df4dec302d8b40bce93714986cfc892519e8a31436399020d0034ae17ac64d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a5baae9e21dffd42c66f546b0b62f95c6af9f409d05e01e7483de42d5dd2a382, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20260223, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '08b9467e1b7e95537191fb2fa6825d176e65d7d6be048e9a0925db064969bbde-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a5baae9e21dffd42c66f546b0b62f95c6af9f409d05e01e7483de42d5dd2a382', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Mar  1 05:25:37 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:25:37 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:25:37 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:25:37.804 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:25:37 np0005634532 nova_compute[257049]: 2026-03-01 10:25:37.970 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:25:38 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2.devices.0}] v 0)
Mar  1 05:25:38 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 05:25:38 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-2}] v 0)
Mar  1 05:25:38 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 05:25:38 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1299: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Mar  1 05:25:38 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1.devices.0}] v 0)
Mar  1 05:25:38 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 05:25:38 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-1}] v 0)
Mar  1 05:25:38 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 05:25:38 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[106394]: logger=cleanup t=2026-03-01T10:25:38.561688762Z level=info msg="Completed cleanup jobs" duration=2.392549ms
Mar  1 05:25:38 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[106394]: logger=plugins.update.checker t=2026-03-01T10:25:38.664800972Z level=info msg="Update check succeeded" duration=49.080219ms
Mar  1 05:25:38 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-grafana-compute-0[106394]: logger=grafana.update.checker t=2026-03-01T10:25:38.674780398Z level=info msg="Update check succeeded" duration=45.754947ms
Mar  1 05:25:38 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:25:38 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:25:38 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:25:38.685 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:25:38 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:25:38.897Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Mar  1 05:25:39 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:25:39 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Mar  1 05:25:39 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:25:39 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Mar  1 05:25:39 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:25:39 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Mar  1 05:25:39 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:25:39 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Mar  1 05:25:39 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Mar  1 05:25:39 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Mar  1 05:25:39 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Mar  1 05:25:39 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Mar  1 05:25:39 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1300: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 847 B/s rd, 0 op/s
Mar  1 05:25:39 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1301: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 692 B/s rd, 0 op/s
Mar  1 05:25:39 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Mar  1 05:25:39 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 05:25:39 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Mar  1 05:25:39 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 05:25:39 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Mar  1 05:25:39 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Mar  1 05:25:39 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Mar  1 05:25:39 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Mar  1 05:25:39 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Mar  1 05:25:39 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Mar  1 05:25:39 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 05:25:39 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 05:25:39 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 05:25:39 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 05:25:39 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Mar  1 05:25:39 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 05:25:39 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 05:25:39 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Mar  1 05:25:39 np0005634532 podman[293526]: 2026-03-01 10:25:39.766234477 +0000 UTC m=+0.058989604 container create 4dafbbeaf602c41402edfe4be07ea0230914e1cbeb1b5a03144c5aa6a843c411 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_noether, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_REF=squid, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Mar  1 05:25:39 np0005634532 systemd[1]: Started libpod-conmon-4dafbbeaf602c41402edfe4be07ea0230914e1cbeb1b5a03144c5aa6a843c411.scope.
Mar  1 05:25:39 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:25:39 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000024s ======
Mar  1 05:25:39 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:25:39.806 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Mar  1 05:25:39 np0005634532 systemd[1]: Started libcrun container.
Mar  1 05:25:39 np0005634532 podman[293526]: 2026-03-01 10:25:39.835735419 +0000 UTC m=+0.128490546 container init 4dafbbeaf602c41402edfe4be07ea0230914e1cbeb1b5a03144c5aa6a843c411 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_noether, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Mar  1 05:25:39 np0005634532 podman[293526]: 2026-03-01 10:25:39.74646744 +0000 UTC m=+0.039222617 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Mar  1 05:25:39 np0005634532 podman[293526]: 2026-03-01 10:25:39.84267038 +0000 UTC m=+0.135425517 container start 4dafbbeaf602c41402edfe4be07ea0230914e1cbeb1b5a03144c5aa6a843c411 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_noether, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Mar  1 05:25:39 np0005634532 podman[293526]: 2026-03-01 10:25:39.846594887 +0000 UTC m=+0.139350024 container attach 4dafbbeaf602c41402edfe4be07ea0230914e1cbeb1b5a03144c5aa6a843c411 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_noether, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, CEPH_REF=squid, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Mar  1 05:25:39 np0005634532 silly_noether[293542]: 167 167
Mar  1 05:25:39 np0005634532 systemd[1]: libpod-4dafbbeaf602c41402edfe4be07ea0230914e1cbeb1b5a03144c5aa6a843c411.scope: Deactivated successfully.
Mar  1 05:25:39 np0005634532 podman[293526]: 2026-03-01 10:25:39.849518959 +0000 UTC m=+0.142274086 container died 4dafbbeaf602c41402edfe4be07ea0230914e1cbeb1b5a03144c5aa6a843c411 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_noether, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Mar  1 05:25:39 np0005634532 systemd[1]: var-lib-containers-storage-overlay-c84c7bed14e3cad3a265a6e6787c0338eedcfae394e3721b4e5d331ba9badf78-merged.mount: Deactivated successfully.
Mar  1 05:25:39 np0005634532 podman[293526]: 2026-03-01 10:25:39.885447143 +0000 UTC m=+0.178202270 container remove 4dafbbeaf602c41402edfe4be07ea0230914e1cbeb1b5a03144c5aa6a843c411 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=silly_noether, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=squid, OSD_FLAVOR=default)
Mar  1 05:25:39 np0005634532 systemd[1]: libpod-conmon-4dafbbeaf602c41402edfe4be07ea0230914e1cbeb1b5a03144c5aa6a843c411.scope: Deactivated successfully.
Mar  1 05:25:40 np0005634532 podman[293567]: 2026-03-01 10:25:40.027496522 +0000 UTC m=+0.044119198 container create 083eb30dd16cf37984c5adcc3756ecccab7bfdbef746d9cf0c6d4e04e5cdc714 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_keller, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Mar  1 05:25:40 np0005634532 systemd[1]: Started libpod-conmon-083eb30dd16cf37984c5adcc3756ecccab7bfdbef746d9cf0c6d4e04e5cdc714.scope.
Mar  1 05:25:40 np0005634532 systemd[1]: Started libcrun container.
Mar  1 05:25:40 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f02dbaa555ce6ec2a7964aeb2eb3bbde8eaa495bd1fba0038b5b67378b293071/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Mar  1 05:25:40 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f02dbaa555ce6ec2a7964aeb2eb3bbde8eaa495bd1fba0038b5b67378b293071/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Mar  1 05:25:40 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f02dbaa555ce6ec2a7964aeb2eb3bbde8eaa495bd1fba0038b5b67378b293071/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Mar  1 05:25:40 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f02dbaa555ce6ec2a7964aeb2eb3bbde8eaa495bd1fba0038b5b67378b293071/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Mar  1 05:25:40 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f02dbaa555ce6ec2a7964aeb2eb3bbde8eaa495bd1fba0038b5b67378b293071/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Mar  1 05:25:40 np0005634532 podman[293567]: 2026-03-01 10:25:40.099536937 +0000 UTC m=+0.116159603 container init 083eb30dd16cf37984c5adcc3756ecccab7bfdbef746d9cf0c6d4e04e5cdc714 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_keller, CEPH_REF=squid, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Mar  1 05:25:40 np0005634532 podman[293567]: 2026-03-01 10:25:40.008010012 +0000 UTC m=+0.024632668 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Mar  1 05:25:40 np0005634532 podman[293567]: 2026-03-01 10:25:40.108198111 +0000 UTC m=+0.124820747 container start 083eb30dd16cf37984c5adcc3756ecccab7bfdbef746d9cf0c6d4e04e5cdc714 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_keller, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Mar  1 05:25:40 np0005634532 podman[293567]: 2026-03-01 10:25:40.111310267 +0000 UTC m=+0.127932913 container attach 083eb30dd16cf37984c5adcc3756ecccab7bfdbef746d9cf0c6d4e04e5cdc714 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_keller, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Mar  1 05:25:40 np0005634532 loving_keller[293583]: --> passed data devices: 0 physical, 1 LVM
Mar  1 05:25:40 np0005634532 loving_keller[293583]: --> All data devices are unavailable
Mar  1 05:25:40 np0005634532 systemd[1]: libpod-083eb30dd16cf37984c5adcc3756ecccab7bfdbef746d9cf0c6d4e04e5cdc714.scope: Deactivated successfully.
Mar  1 05:25:40 np0005634532 podman[293567]: 2026-03-01 10:25:40.39232669 +0000 UTC m=+0.408949366 container died 083eb30dd16cf37984c5adcc3756ecccab7bfdbef746d9cf0c6d4e04e5cdc714 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_keller, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Mar  1 05:25:40 np0005634532 systemd[1]: var-lib-containers-storage-overlay-f02dbaa555ce6ec2a7964aeb2eb3bbde8eaa495bd1fba0038b5b67378b293071-merged.mount: Deactivated successfully.
Mar  1 05:25:40 np0005634532 podman[293567]: 2026-03-01 10:25:40.446194287 +0000 UTC m=+0.462816963 container remove 083eb30dd16cf37984c5adcc3756ecccab7bfdbef746d9cf0c6d4e04e5cdc714 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=loving_keller, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Mar  1 05:25:40 np0005634532 systemd[1]: libpod-conmon-083eb30dd16cf37984c5adcc3756ecccab7bfdbef746d9cf0c6d4e04e5cdc714.scope: Deactivated successfully.
Mar  1 05:25:40 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:25:40 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:25:40 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:25:40.687 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:25:40 np0005634532 nova_compute[257049]: 2026-03-01 10:25:40.735 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:25:41 np0005634532 podman[293707]: 2026-03-01 10:25:41.020684456 +0000 UTC m=+0.048408997 container create 7d2864fc8a39c3f07748fcef972741165765297225d35fdbfdd46916b8245d81 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_kirch, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Mar  1 05:25:41 np0005634532 systemd[1]: Started libpod-conmon-7d2864fc8a39c3f07748fcef972741165765297225d35fdbfdd46916b8245d81.scope.
Mar  1 05:25:41 np0005634532 systemd[1]: Started libcrun container.
Mar  1 05:25:41 np0005634532 podman[293707]: 2026-03-01 10:25:41.005113768 +0000 UTC m=+0.032838339 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Mar  1 05:25:41 np0005634532 podman[293707]: 2026-03-01 10:25:41.0983428 +0000 UTC m=+0.126067361 container init 7d2864fc8a39c3f07748fcef972741165765297225d35fdbfdd46916b8245d81 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_kirch, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Mar  1 05:25:41 np0005634532 podman[293707]: 2026-03-01 10:25:41.107374415 +0000 UTC m=+0.135098966 container start 7d2864fc8a39c3f07748fcef972741165765297225d35fdbfdd46916b8245d81 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_kirch, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Mar  1 05:25:41 np0005634532 podman[293707]: 2026-03-01 10:25:41.110947344 +0000 UTC m=+0.138671975 container attach 7d2864fc8a39c3f07748fcef972741165765297225d35fdbfdd46916b8245d81 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_kirch, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Mar  1 05:25:41 np0005634532 exciting_kirch[293748]: 167 167
Mar  1 05:25:41 np0005634532 systemd[1]: libpod-7d2864fc8a39c3f07748fcef972741165765297225d35fdbfdd46916b8245d81.scope: Deactivated successfully.
Mar  1 05:25:41 np0005634532 podman[293707]: 2026-03-01 10:25:41.115526968 +0000 UTC m=+0.143251519 container died 7d2864fc8a39c3f07748fcef972741165765297225d35fdbfdd46916b8245d81 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_kirch, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Mar  1 05:25:41 np0005634532 systemd[1]: var-lib-containers-storage-overlay-7e84d4f5f6c154b8624e1592ec31c1dd190951acdded908dc798ed26b79b395b-merged.mount: Deactivated successfully.
Mar  1 05:25:41 np0005634532 podman[293707]: 2026-03-01 10:25:41.150742765 +0000 UTC m=+0.178467326 container remove 7d2864fc8a39c3f07748fcef972741165765297225d35fdbfdd46916b8245d81 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=exciting_kirch, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Mar  1 05:25:41 np0005634532 systemd[1]: libpod-conmon-7d2864fc8a39c3f07748fcef972741165765297225d35fdbfdd46916b8245d81.scope: Deactivated successfully.
Mar  1 05:25:41 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1302: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 692 B/s rd, 0 op/s
Mar  1 05:25:41 np0005634532 podman[293773]: 2026-03-01 10:25:41.31481284 +0000 UTC m=+0.052766915 container create 30d86c5c026501b0ee110f935db940bfa9b4d302d18cc435389da512e573b1a4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_johnson, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Mar  1 05:25:41 np0005634532 systemd[1]: Started libpod-conmon-30d86c5c026501b0ee110f935db940bfa9b4d302d18cc435389da512e573b1a4.scope.
Mar  1 05:25:41 np0005634532 systemd[1]: Started libcrun container.
Mar  1 05:25:41 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a7f503ee0c28e4ee85dcaab5504f08cd3b5ff780b7ded9d7dd0c45f9ad0c590/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Mar  1 05:25:41 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a7f503ee0c28e4ee85dcaab5504f08cd3b5ff780b7ded9d7dd0c45f9ad0c590/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Mar  1 05:25:41 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a7f503ee0c28e4ee85dcaab5504f08cd3b5ff780b7ded9d7dd0c45f9ad0c590/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Mar  1 05:25:41 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a7f503ee0c28e4ee85dcaab5504f08cd3b5ff780b7ded9d7dd0c45f9ad0c590/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Mar  1 05:25:41 np0005634532 podman[293773]: 2026-03-01 10:25:41.293814077 +0000 UTC m=+0.031768172 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Mar  1 05:25:41 np0005634532 podman[293773]: 2026-03-01 10:25:41.393965841 +0000 UTC m=+0.131919886 container init 30d86c5c026501b0ee110f935db940bfa9b4d302d18cc435389da512e573b1a4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_johnson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default)
Mar  1 05:25:41 np0005634532 podman[293773]: 2026-03-01 10:25:41.398867033 +0000 UTC m=+0.136821088 container start 30d86c5c026501b0ee110f935db940bfa9b4d302d18cc435389da512e573b1a4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_johnson, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image)
Mar  1 05:25:41 np0005634532 podman[293773]: 2026-03-01 10:25:41.402042042 +0000 UTC m=+0.139996087 container attach 30d86c5c026501b0ee110f935db940bfa9b4d302d18cc435389da512e573b1a4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_johnson, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Mar  1 05:25:41 np0005634532 objective_johnson[293790]: {
Mar  1 05:25:41 np0005634532 objective_johnson[293790]:    "0": [
Mar  1 05:25:41 np0005634532 objective_johnson[293790]:        {
Mar  1 05:25:41 np0005634532 objective_johnson[293790]:            "devices": [
Mar  1 05:25:41 np0005634532 objective_johnson[293790]:                "/dev/loop3"
Mar  1 05:25:41 np0005634532 objective_johnson[293790]:            ],
Mar  1 05:25:41 np0005634532 objective_johnson[293790]:            "lv_name": "ceph_lv0",
Mar  1 05:25:41 np0005634532 objective_johnson[293790]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Mar  1 05:25:41 np0005634532 objective_johnson[293790]:            "lv_size": "21470642176",
Mar  1 05:25:41 np0005634532 objective_johnson[293790]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=0RN1y3-WDwf-vmRn-3Uec-9cdU-WGcX-8Z6LEG,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=437b1e74-f995-5d64-af1d-257ce01d77ab,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=e5da778e-73b7-4ea1-8a91-750fe3f6aa68,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Mar  1 05:25:41 np0005634532 objective_johnson[293790]:            "lv_uuid": "0RN1y3-WDwf-vmRn-3Uec-9cdU-WGcX-8Z6LEG",
Mar  1 05:25:41 np0005634532 objective_johnson[293790]:            "name": "ceph_lv0",
Mar  1 05:25:41 np0005634532 objective_johnson[293790]:            "path": "/dev/ceph_vg0/ceph_lv0",
Mar  1 05:25:41 np0005634532 objective_johnson[293790]:            "tags": {
Mar  1 05:25:41 np0005634532 objective_johnson[293790]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Mar  1 05:25:41 np0005634532 objective_johnson[293790]:                "ceph.block_uuid": "0RN1y3-WDwf-vmRn-3Uec-9cdU-WGcX-8Z6LEG",
Mar  1 05:25:41 np0005634532 objective_johnson[293790]:                "ceph.cephx_lockbox_secret": "",
Mar  1 05:25:41 np0005634532 objective_johnson[293790]:                "ceph.cluster_fsid": "437b1e74-f995-5d64-af1d-257ce01d77ab",
Mar  1 05:25:41 np0005634532 objective_johnson[293790]:                "ceph.cluster_name": "ceph",
Mar  1 05:25:41 np0005634532 objective_johnson[293790]:                "ceph.crush_device_class": "",
Mar  1 05:25:41 np0005634532 objective_johnson[293790]:                "ceph.encrypted": "0",
Mar  1 05:25:41 np0005634532 objective_johnson[293790]:                "ceph.osd_fsid": "e5da778e-73b7-4ea1-8a91-750fe3f6aa68",
Mar  1 05:25:41 np0005634532 objective_johnson[293790]:                "ceph.osd_id": "0",
Mar  1 05:25:41 np0005634532 objective_johnson[293790]:                "ceph.osdspec_affinity": "default_drive_group",
Mar  1 05:25:41 np0005634532 objective_johnson[293790]:                "ceph.type": "block",
Mar  1 05:25:41 np0005634532 objective_johnson[293790]:                "ceph.vdo": "0",
Mar  1 05:25:41 np0005634532 objective_johnson[293790]:                "ceph.with_tpm": "0"
Mar  1 05:25:41 np0005634532 objective_johnson[293790]:            },
Mar  1 05:25:41 np0005634532 objective_johnson[293790]:            "type": "block",
Mar  1 05:25:41 np0005634532 objective_johnson[293790]:            "vg_name": "ceph_vg0"
Mar  1 05:25:41 np0005634532 objective_johnson[293790]:        }
Mar  1 05:25:41 np0005634532 objective_johnson[293790]:    ]
Mar  1 05:25:41 np0005634532 objective_johnson[293790]: }
Mar  1 05:25:41 np0005634532 systemd[1]: libpod-30d86c5c026501b0ee110f935db940bfa9b4d302d18cc435389da512e573b1a4.scope: Deactivated successfully.
Mar  1 05:25:41 np0005634532 conmon[293790]: conmon 30d86c5c026501b0ee11 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-30d86c5c026501b0ee110f935db940bfa9b4d302d18cc435389da512e573b1a4.scope/container/memory.events
Mar  1 05:25:41 np0005634532 podman[293773]: 2026-03-01 10:25:41.680584639 +0000 UTC m=+0.418538684 container died 30d86c5c026501b0ee110f935db940bfa9b4d302d18cc435389da512e573b1a4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_johnson, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Mar  1 05:25:41 np0005634532 systemd[1]: var-lib-containers-storage-overlay-1a7f503ee0c28e4ee85dcaab5504f08cd3b5ff780b7ded9d7dd0c45f9ad0c590-merged.mount: Deactivated successfully.
Mar  1 05:25:41 np0005634532 podman[293773]: 2026-03-01 10:25:41.723321463 +0000 UTC m=+0.461275508 container remove 30d86c5c026501b0ee110f935db940bfa9b4d302d18cc435389da512e573b1a4 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=objective_johnson, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Mar  1 05:25:41 np0005634532 systemd[1]: libpod-conmon-30d86c5c026501b0ee110f935db940bfa9b4d302d18cc435389da512e573b1a4.scope: Deactivated successfully.
Mar  1 05:25:41 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:25:41 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:25:41 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:25:41.810 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:25:42 np0005634532 podman[293905]: 2026-03-01 10:25:42.204791362 +0000 UTC m=+0.044456968 container create 0e67eb0818eed7463a215aeccea24c54b6bbdb7457ce849299eeb7a86a7b02f2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_hawking, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Mar  1 05:25:42 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:25:42 np0005634532 systemd[1]: Started libpod-conmon-0e67eb0818eed7463a215aeccea24c54b6bbdb7457ce849299eeb7a86a7b02f2.scope.
Mar  1 05:25:42 np0005634532 systemd[1]: Started libcrun container.
Mar  1 05:25:42 np0005634532 podman[293905]: 2026-03-01 10:25:42.188292191 +0000 UTC m=+0.027957777 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Mar  1 05:25:42 np0005634532 podman[293905]: 2026-03-01 10:25:42.284636701 +0000 UTC m=+0.124302317 container init 0e67eb0818eed7463a215aeccea24c54b6bbdb7457ce849299eeb7a86a7b02f2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_hawking, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Mar  1 05:25:42 np0005634532 podman[293905]: 2026-03-01 10:25:42.28984435 +0000 UTC m=+0.129509936 container start 0e67eb0818eed7463a215aeccea24c54b6bbdb7457ce849299eeb7a86a7b02f2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_hawking, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Mar  1 05:25:42 np0005634532 podman[293905]: 2026-03-01 10:25:42.293263105 +0000 UTC m=+0.132928731 container attach 0e67eb0818eed7463a215aeccea24c54b6bbdb7457ce849299eeb7a86a7b02f2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_hawking, ceph=True, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.schema-version=1.0)
Mar  1 05:25:42 np0005634532 strange_hawking[293922]: 167 167
Mar  1 05:25:42 np0005634532 systemd[1]: libpod-0e67eb0818eed7463a215aeccea24c54b6bbdb7457ce849299eeb7a86a7b02f2.scope: Deactivated successfully.
Mar  1 05:25:42 np0005634532 podman[293905]: 2026-03-01 10:25:42.295284656 +0000 UTC m=+0.134950262 container died 0e67eb0818eed7463a215aeccea24c54b6bbdb7457ce849299eeb7a86a7b02f2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_hawking, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Mar  1 05:25:42 np0005634532 systemd[1]: var-lib-containers-storage-overlay-b790c6da29207c72bf8a10a938b29c10355b8e41f147e90d84fecdb47cafb4ec-merged.mount: Deactivated successfully.
Mar  1 05:25:42 np0005634532 podman[293905]: 2026-03-01 10:25:42.329820936 +0000 UTC m=+0.169486522 container remove 0e67eb0818eed7463a215aeccea24c54b6bbdb7457ce849299eeb7a86a7b02f2 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=strange_hawking, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Mar  1 05:25:42 np0005634532 systemd[1]: libpod-conmon-0e67eb0818eed7463a215aeccea24c54b6bbdb7457ce849299eeb7a86a7b02f2.scope: Deactivated successfully.
Mar  1 05:25:42 np0005634532 podman[293946]: 2026-03-01 10:25:42.471061263 +0000 UTC m=+0.040095020 container create 9e9b2cf05cf0495b6941669c082ce147c3c000cae7f52dba08980400a4fc733f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_moore, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Mar  1 05:25:42 np0005634532 systemd[1]: Started libpod-conmon-9e9b2cf05cf0495b6941669c082ce147c3c000cae7f52dba08980400a4fc733f.scope.
Mar  1 05:25:42 np0005634532 systemd[1]: Started libcrun container.
Mar  1 05:25:42 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1833613abf790dd2d34e69256efa9824c804e71dc2f0be04f8674162806d3d70/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Mar  1 05:25:42 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1833613abf790dd2d34e69256efa9824c804e71dc2f0be04f8674162806d3d70/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Mar  1 05:25:42 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1833613abf790dd2d34e69256efa9824c804e71dc2f0be04f8674162806d3d70/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Mar  1 05:25:42 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1833613abf790dd2d34e69256efa9824c804e71dc2f0be04f8674162806d3d70/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Mar  1 05:25:42 np0005634532 podman[293946]: 2026-03-01 10:25:42.454533061 +0000 UTC m=+0.023566818 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Mar  1 05:25:42 np0005634532 podman[293946]: 2026-03-01 10:25:42.558243114 +0000 UTC m=+0.127276951 container init 9e9b2cf05cf0495b6941669c082ce147c3c000cae7f52dba08980400a4fc733f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_moore, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Mar  1 05:25:42 np0005634532 podman[293946]: 2026-03-01 10:25:42.566649363 +0000 UTC m=+0.135683160 container start 9e9b2cf05cf0495b6941669c082ce147c3c000cae7f52dba08980400a4fc733f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_moore, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Mar  1 05:25:42 np0005634532 podman[293946]: 2026-03-01 10:25:42.570438108 +0000 UTC m=+0.139471865 container attach 9e9b2cf05cf0495b6941669c082ce147c3c000cae7f52dba08980400a4fc733f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_moore, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Mar  1 05:25:42 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:25:42 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 05:25:42 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:25:42.690 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 05:25:42 np0005634532 nova_compute[257049]: 2026-03-01 10:25:42.971 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:25:43 np0005634532 lvm[294036]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Mar  1 05:25:43 np0005634532 lvm[294036]: VG ceph_vg0 finished
Mar  1 05:25:43 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1303: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 346 B/s rd, 0 op/s
Mar  1 05:25:43 np0005634532 brave_moore[293962]: {}
Mar  1 05:25:43 np0005634532 systemd[1]: libpod-9e9b2cf05cf0495b6941669c082ce147c3c000cae7f52dba08980400a4fc733f.scope: Deactivated successfully.
Mar  1 05:25:43 np0005634532 podman[293946]: 2026-03-01 10:25:43.283262158 +0000 UTC m=+0.852295925 container died 9e9b2cf05cf0495b6941669c082ce147c3c000cae7f52dba08980400a4fc733f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_moore, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, CEPH_REF=squid)
Mar  1 05:25:43 np0005634532 systemd[1]: var-lib-containers-storage-overlay-1833613abf790dd2d34e69256efa9824c804e71dc2f0be04f8674162806d3d70-merged.mount: Deactivated successfully.
Mar  1 05:25:43 np0005634532 podman[293946]: 2026-03-01 10:25:43.323746877 +0000 UTC m=+0.892780644 container remove 9e9b2cf05cf0495b6941669c082ce147c3c000cae7f52dba08980400a4fc733f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=brave_moore, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Mar  1 05:25:43 np0005634532 systemd[1]: libpod-conmon-9e9b2cf05cf0495b6941669c082ce147c3c000cae7f52dba08980400a4fc733f.scope: Deactivated successfully.
Mar  1 05:25:43 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Mar  1 05:25:43 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 05:25:43 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Mar  1 05:25:43 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 05:25:43 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:25:43 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:25:43 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:25:43.813 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:25:44 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:25:43 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Mar  1 05:25:44 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:25:44 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Mar  1 05:25:44 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:25:44 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Mar  1 05:25:44 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:25:44 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Mar  1 05:25:44 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:25:44 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 05:25:44 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:25:44.691 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 05:25:44 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 05:25:44 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 05:25:45 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1304: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 692 B/s rd, 0 op/s
Mar  1 05:25:45 np0005634532 nova_compute[257049]: 2026-03-01 10:25:45.740 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:25:45 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:25:45 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:25:45 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:25:45.817 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:25:46 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:25:46 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:25:46 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:25:46.693 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:25:47 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: ::ffff:192.168.122.100 - - [01/Mar/2026:10:25:47] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Mar  1 05:25:47 np0005634532 ceph-mgr[76134]: [prometheus INFO cherrypy.access.140604152982112] ::ffff:192.168.122.100 - - [01/Mar/2026:10:25:47] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Mar  1 05:25:47 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:25:47 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1305: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 692 B/s rd, 0 op/s
Mar  1 05:25:47 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:25:47.345Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Mar  1 05:25:47 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Mar  1 05:25:47 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Mar  1 05:25:47 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:25:47 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:25:47 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:25:47.820 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:25:47 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Mar  1 05:25:47 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Mar  1 05:25:47 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Mar  1 05:25:47 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Mar  1 05:25:47 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Mar  1 05:25:47 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Mar  1 05:25:47 np0005634532 nova_compute[257049]: 2026-03-01 10:25:47.974 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:25:48 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:25:48 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:25:48 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:25:48.695 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:25:48 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:25:48.898Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Mar  1 05:25:49 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:25:48 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Mar  1 05:25:49 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:25:48 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Mar  1 05:25:49 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:25:48 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Mar  1 05:25:49 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:25:49 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Mar  1 05:25:49 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1306: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 614 B/s rd, 0 op/s
Mar  1 05:25:49 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:25:49 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000024s ======
Mar  1 05:25:49 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:25:49.822 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Mar  1 05:25:50 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:25:50 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:25:50 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:25:50.697 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:25:50 np0005634532 nova_compute[257049]: 2026-03-01 10:25:50.742 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:25:51 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1307: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Mar  1 05:25:51 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:25:51 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:25:51 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:25:51.826 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:25:51 np0005634532 nova_compute[257049]: 2026-03-01 10:25:51.976 257053 DEBUG oslo_service.periodic_task [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Mar  1 05:25:52 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:25:52 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:25:52 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:25:52 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:25:52.699 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:25:52 np0005634532 nova_compute[257049]: 2026-03-01 10:25:52.976 257053 DEBUG oslo_service.periodic_task [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Mar  1 05:25:53 np0005634532 nova_compute[257049]: 2026-03-01 10:25:53.015 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:25:53 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1308: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Mar  1 05:25:53 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:25:53 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:25:53 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:25:53.829 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:25:53 np0005634532 nova_compute[257049]: 2026-03-01 10:25:53.977 257053 DEBUG oslo_service.periodic_task [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Mar  1 05:25:54 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:25:53 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Mar  1 05:25:54 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:25:53 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Mar  1 05:25:54 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:25:53 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Mar  1 05:25:54 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:25:54 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Mar  1 05:25:54 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:25:54 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 05:25:54 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:25:54.701 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 05:25:55 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1309: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Mar  1 05:25:55 np0005634532 nova_compute[257049]: 2026-03-01 10:25:55.773 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:25:55 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:25:55 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:25:55 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:25:55.832 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:25:55 np0005634532 nova_compute[257049]: 2026-03-01 10:25:55.976 257053 DEBUG oslo_service.periodic_task [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Mar  1 05:25:55 np0005634532 nova_compute[257049]: 2026-03-01 10:25:55.977 257053 DEBUG nova.compute.manager [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Mar  1 05:25:56 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:25:56 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 05:25:56 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:25:56.703 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 05:25:57 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: ::ffff:192.168.122.100 - - [01/Mar/2026:10:25:57] "GET /metrics HTTP/1.1" 200 48458 "" "Prometheus/2.51.0"
Mar  1 05:25:57 np0005634532 ceph-mgr[76134]: [prometheus INFO cherrypy.access.140604152982112] ::ffff:192.168.122.100 - - [01/Mar/2026:10:25:57] "GET /metrics HTTP/1.1" 200 48458 "" "Prometheus/2.51.0"
Mar  1 05:25:57 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:25:57 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1310: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Mar  1 05:25:57 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:25:57.346Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Mar  1 05:25:57 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:25:57 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 05:25:57 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:25:57.835 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 05:25:57 np0005634532 nova_compute[257049]: 2026-03-01 10:25:57.976 257053 DEBUG oslo_service.periodic_task [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Mar  1 05:25:57 np0005634532 nova_compute[257049]: 2026-03-01 10:25:57.994 257053 DEBUG oslo_concurrency.lockutils [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Mar  1 05:25:57 np0005634532 nova_compute[257049]: 2026-03-01 10:25:57.994 257053 DEBUG oslo_concurrency.lockutils [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Mar  1 05:25:57 np0005634532 nova_compute[257049]: 2026-03-01 10:25:57.994 257053 DEBUG oslo_concurrency.lockutils [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Mar  1 05:25:57 np0005634532 nova_compute[257049]: 2026-03-01 10:25:57.995 257053 DEBUG nova.compute.resource_tracker [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Mar  1 05:25:57 np0005634532 nova_compute[257049]: 2026-03-01 10:25:57.995 257053 DEBUG oslo_concurrency.processutils [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Mar  1 05:25:58 np0005634532 nova_compute[257049]: 2026-03-01 10:25:58.016 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:25:58 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"df", "format":"json"} v 0)
Mar  1 05:25:58 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1456335382' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Mar  1 05:25:58 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0)
Mar  1 05:25:58 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1456335382' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Mar  1 05:25:58 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Mar  1 05:25:58 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/878493947' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Mar  1 05:25:58 np0005634532 nova_compute[257049]: 2026-03-01 10:25:58.413 257053 DEBUG oslo_concurrency.processutils [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.419s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Mar  1 05:25:58 np0005634532 nova_compute[257049]: 2026-03-01 10:25:58.587 257053 WARNING nova.virt.libvirt.driver [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Mar  1 05:25:58 np0005634532 nova_compute[257049]: 2026-03-01 10:25:58.588 257053 DEBUG nova.compute.resource_tracker [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4465MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Mar  1 05:25:58 np0005634532 nova_compute[257049]: 2026-03-01 10:25:58.588 257053 DEBUG oslo_concurrency.lockutils [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Mar  1 05:25:58 np0005634532 nova_compute[257049]: 2026-03-01 10:25:58.589 257053 DEBUG oslo_concurrency.lockutils [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Mar  1 05:25:58 np0005634532 nova_compute[257049]: 2026-03-01 10:25:58.691 257053 DEBUG nova.compute.resource_tracker [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Mar  1 05:25:58 np0005634532 nova_compute[257049]: 2026-03-01 10:25:58.692 257053 DEBUG nova.compute.resource_tracker [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Mar  1 05:25:58 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:25:58 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:25:58 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:25:58.706 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:25:58 np0005634532 nova_compute[257049]: 2026-03-01 10:25:58.737 257053 DEBUG oslo_concurrency.processutils [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Mar  1 05:25:58 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:25:58.899Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Mar  1 05:25:58 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:25:58.900Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Mar  1 05:25:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:25:58 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Mar  1 05:25:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:25:58 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Mar  1 05:25:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:25:58 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Mar  1 05:25:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:25:59 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Mar  1 05:25:59 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Mar  1 05:25:59 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1090346816' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Mar  1 05:25:59 np0005634532 nova_compute[257049]: 2026-03-01 10:25:59.225 257053 DEBUG oslo_concurrency.processutils [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.487s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Mar  1 05:25:59 np0005634532 nova_compute[257049]: 2026-03-01 10:25:59.230 257053 DEBUG nova.compute.provider_tree [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Inventory has not changed in ProviderTree for provider: 018d246d-1e01-4168-9128-598c5501111b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Mar  1 05:25:59 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1311: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Mar  1 05:25:59 np0005634532 nova_compute[257049]: 2026-03-01 10:25:59.246 257053 DEBUG nova.scheduler.client.report [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Inventory has not changed for provider 018d246d-1e01-4168-9128-598c5501111b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Mar  1 05:25:59 np0005634532 nova_compute[257049]: 2026-03-01 10:25:59.247 257053 DEBUG nova.compute.resource_tracker [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Mar  1 05:25:59 np0005634532 nova_compute[257049]: 2026-03-01 10:25:59.248 257053 DEBUG oslo_concurrency.lockutils [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.659s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Mar  1 05:25:59 np0005634532 nova_compute[257049]: 2026-03-01 10:25:59.248 257053 DEBUG oslo_service.periodic_task [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Mar  1 05:25:59 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:25:59 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:25:59 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:25:59.838 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:26:00 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:26:00 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:26:00 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:26:00.708 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:26:00 np0005634532 nova_compute[257049]: 2026-03-01 10:26:00.776 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:26:01 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1312: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Mar  1 05:26:01 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:26:01 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:26:01 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:26:01.841 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:26:02 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:26:02 np0005634532 nova_compute[257049]: 2026-03-01 10:26:02.253 257053 DEBUG oslo_service.periodic_task [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Mar  1 05:26:02 np0005634532 nova_compute[257049]: 2026-03-01 10:26:02.253 257053 DEBUG oslo_service.periodic_task [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Mar  1 05:26:02 np0005634532 nova_compute[257049]: 2026-03-01 10:26:02.254 257053 DEBUG oslo_service.periodic_task [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Mar  1 05:26:02 np0005634532 podman[294167]: 2026-03-01 10:26:02.402763192 +0000 UTC m=+0.089605973 container health_status eba5e7c5bf1cb0694498c3b5d0f9b901dec36f2482ec40aef98fed1e15d9f18d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:bf1f44b6bfa655654f556ae3adca2582892e72550d916a5b0a8dbacb0e210bbe, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '08b9467e1b7e95537191fb2fa6825d176e65d7d6be048e9a0925db064969bbde-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:bf1f44b6bfa655654f556ae3adca2582892e72550d916a5b0a8dbacb0e210bbe', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_controller, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260223, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Mar  1 05:26:02 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Mar  1 05:26:02 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Mar  1 05:26:02 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:26:02 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:26:02 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:26:02.710 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:26:02 np0005634532 nova_compute[257049]: 2026-03-01 10:26:02.976 257053 DEBUG oslo_service.periodic_task [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Mar  1 05:26:02 np0005634532 nova_compute[257049]: 2026-03-01 10:26:02.977 257053 DEBUG nova.compute.manager [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Mar  1 05:26:02 np0005634532 nova_compute[257049]: 2026-03-01 10:26:02.977 257053 DEBUG nova.compute.manager [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Mar  1 05:26:03 np0005634532 nova_compute[257049]: 2026-03-01 10:26:03.000 257053 DEBUG nova.compute.manager [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Mar  1 05:26:03 np0005634532 nova_compute[257049]: 2026-03-01 10:26:03.000 257053 DEBUG oslo_service.periodic_task [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Mar  1 05:26:03 np0005634532 nova_compute[257049]: 2026-03-01 10:26:03.001 257053 DEBUG nova.compute.manager [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Mar  1 05:26:03 np0005634532 nova_compute[257049]: 2026-03-01 10:26:03.026 257053 DEBUG nova.compute.manager [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Mar  1 05:26:03 np0005634532 nova_compute[257049]: 2026-03-01 10:26:03.067 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:26:03 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1313: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Mar  1 05:26:03 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:26:03 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:26:03 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:26:03.844 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:26:04 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:26:03 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Mar  1 05:26:04 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:26:03 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Mar  1 05:26:04 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:26:03 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Mar  1 05:26:04 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:26:04 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Mar  1 05:26:04 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:26:04 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:26:04 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:26:04.712 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:26:04 np0005634532 nova_compute[257049]: 2026-03-01 10:26:04.977 257053 DEBUG oslo_service.periodic_task [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Mar  1 05:26:04 np0005634532 nova_compute[257049]: 2026-03-01 10:26:04.978 257053 DEBUG nova.compute.manager [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Mar  1 05:26:05 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1314: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Mar  1 05:26:05 np0005634532 nova_compute[257049]: 2026-03-01 10:26:05.779 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:26:05 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:26:05 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000024s ======
Mar  1 05:26:05 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:26:05.847 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000024s
Mar  1 05:26:06 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:26:06 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:26:06 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:26:06.714 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:26:07 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: ::ffff:192.168.122.100 - - [01/Mar/2026:10:26:07] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Mar  1 05:26:07 np0005634532 ceph-mgr[76134]: [prometheus INFO cherrypy.access.140604152982112] ::ffff:192.168.122.100 - - [01/Mar/2026:10:26:07] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Mar  1 05:26:07 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:26:07 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1315: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Mar  1 05:26:07 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:26:07.347Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Mar  1 05:26:07 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:26:07 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:26:07 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:26:07.851 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:26:08 np0005634532 nova_compute[257049]: 2026-03-01 10:26:08.089 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:26:08 np0005634532 podman[294201]: 2026-03-01 10:26:08.387153822 +0000 UTC m=+0.072519627 container health_status 1df4dec302d8b40bce93714986cfc892519e8a31436399020d0034ae17ac64d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a5baae9e21dffd42c66f546b0b62f95c6af9f409d05e01e7483de42d5dd2a382, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '08b9467e1b7e95537191fb2fa6825d176e65d7d6be048e9a0925db064969bbde-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a5baae9e21dffd42c66f546b0b62f95c6af9f409d05e01e7483de42d5dd2a382', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20260223)
Mar  1 05:26:08 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:26:08 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:26:08 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:26:08.716 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:26:08 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:26:08.901Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Mar  1 05:26:08 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:26:08.901Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Mar  1 05:26:08 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:26:08.901Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 3 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Mar  1 05:26:09 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:26:08 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Mar  1 05:26:09 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:26:08 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Mar  1 05:26:09 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:26:08 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Mar  1 05:26:09 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:26:09 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Mar  1 05:26:09 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1316: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Mar  1 05:26:09 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:26:09 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:26:09 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:26:09.855 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:26:10 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:26:10 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:26:10 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:26:10.718 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:26:10 np0005634532 nova_compute[257049]: 2026-03-01 10:26:10.782 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:26:11 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1317: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Mar  1 05:26:11 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:26:11 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:26:11 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:26:11.858 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:26:12 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:26:12 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:26:12 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:26:12 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:26:12.720 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:26:13 np0005634532 nova_compute[257049]: 2026-03-01 10:26:13.091 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:26:13 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1318: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Mar  1 05:26:13 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:26:13 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 05:26:13 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:26:13.860 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 05:26:14 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:26:14 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Mar  1 05:26:14 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:26:14 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Mar  1 05:26:14 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:26:14 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Mar  1 05:26:14 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:26:14 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
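The four ganesha events above repeat as a block roughly every five seconds for the rest of this section: the server (re)enters a 90-second grace period, reloads client info from the backend, finds nothing to reclaim (reclaim complete(0), clid count(0)), and rados_cluster_grace_enforcing returns -45. With zero clients holding reclaimable state, the repeated re-entry suggests grace keeps being restarted cluster-wide via the shared RADOS grace database, though this node's log alone does not show which peer triggers it. A trivial counter for how often the cycle restarts in a captured slice (the marker string is verbatim from this log):

```python
import sys

GRACE_MARK = "NFS Server Now IN GRACE, duration 90"

def count_grace_restarts(lines):
    """Count grace-period (re)entries in an iterable of journal lines."""
    return sum(1 for line in lines if GRACE_MARK in line)

if __name__ == "__main__":
    print(f"grace re-entered {count_grace_restarts(sys.stdin)} times")
```

Pipe a saved journal slice through it, e.g. `journalctl -b | python3 count_grace.py` (the script name is arbitrary).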
Mar  1 05:26:14 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:26:14 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:26:14 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:26:14.722 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:26:15 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1319: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Mar  1 05:26:15 np0005634532 ceph-mon[75825]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #78. Immutable memtables: 0.
Mar  1 05:26:15 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-10:26:15.739927) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Mar  1 05:26:15 np0005634532 ceph-mon[75825]: rocksdb: [db/flush_job.cc:856] [default] [JOB 43] Flushing memtable with next log file: 78
Mar  1 05:26:15 np0005634532 ceph-mon[75825]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772360775739965, "job": 43, "event": "flush_started", "num_memtables": 1, "num_entries": 1536, "num_deletes": 251, "total_data_size": 2915642, "memory_usage": 2963840, "flush_reason": "Manual Compaction"}
Mar  1 05:26:15 np0005634532 ceph-mon[75825]: rocksdb: [db/flush_job.cc:885] [default] [JOB 43] Level-0 flush table #79: started
Mar  1 05:26:15 np0005634532 ceph-mon[75825]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772360775753777, "cf_name": "default", "job": 43, "event": "table_file_creation", "file_number": 79, "file_size": 2831354, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 35424, "largest_seqno": 36958, "table_properties": {"data_size": 2824186, "index_size": 4175, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1925, "raw_key_size": 15129, "raw_average_key_size": 20, "raw_value_size": 2809810, "raw_average_value_size": 3766, "num_data_blocks": 180, "num_entries": 746, "num_filter_entries": 746, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772360632, "oldest_key_time": 1772360632, "file_creation_time": 1772360775, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d85a3bc5-3dc5-432f-9fab-fa926ce32d3d", "db_session_id": "FJWJGIYC2V5ZEQGX709M", "orig_file_number": 79, "seqno_to_time_mapping": "N/A"}}
Mar  1 05:26:15 np0005634532 ceph-mon[75825]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 43] Flush lasted 13911 microseconds, and 4098 cpu microseconds.
Mar  1 05:26:15 np0005634532 ceph-mon[75825]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Mar  1 05:26:15 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-10:26:15.753832) [db/flush_job.cc:967] [default] [JOB 43] Level-0 flush table #79: 2831354 bytes OK
Mar  1 05:26:15 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-10:26:15.753855) [db/memtable_list.cc:519] [default] Level-0 commit table #79 started
Mar  1 05:26:15 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-10:26:15.755065) [db/memtable_list.cc:722] [default] Level-0 commit table #79: memtable #1 done
Mar  1 05:26:15 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-10:26:15.755082) EVENT_LOG_v1 {"time_micros": 1772360775755077, "job": 43, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Mar  1 05:26:15 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-10:26:15.755100) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Mar  1 05:26:15 np0005634532 ceph-mon[75825]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 43] Try to delete WAL files size 2909139, prev total WAL file size 2909139, number of live WAL files 2.
Mar  1 05:26:15 np0005634532 ceph-mon[75825]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000075.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Mar  1 05:26:15 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-10:26:15.755666) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033303132' seq:72057594037927935, type:22 .. '7061786F730033323634' seq:0, type:0; will stop at (end)
Mar  1 05:26:15 np0005634532 ceph-mon[75825]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 44] Compacting 1@0 + 1@6 files to L6, score -1.00
Mar  1 05:26:15 np0005634532 ceph-mon[75825]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 43 Base level 0, inputs: [79(2764KB)], [77(11MB)]
Mar  1 05:26:15 np0005634532 ceph-mon[75825]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772360775755758, "job": 44, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [79], "files_L6": [77], "score": -1, "input_data_size": 15075659, "oldest_snapshot_seqno": -1}
Mar  1 05:26:15 np0005634532 nova_compute[257049]: 2026-03-01 10:26:15.786 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:26:15 np0005634532 ceph-mon[75825]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 44] Generated table #80: 6686 keys, 12799812 bytes, temperature: kUnknown
Mar  1 05:26:15 np0005634532 ceph-mon[75825]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772360775810082, "cf_name": "default", "job": 44, "event": "table_file_creation", "file_number": 80, "file_size": 12799812, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 12758486, "index_size": 23486, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16773, "raw_key_size": 175810, "raw_average_key_size": 26, "raw_value_size": 12641122, "raw_average_value_size": 1890, "num_data_blocks": 918, "num_entries": 6686, "num_filter_entries": 6686, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1772358052, "oldest_key_time": 0, "file_creation_time": 1772360775, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "d85a3bc5-3dc5-432f-9fab-fa926ce32d3d", "db_session_id": "FJWJGIYC2V5ZEQGX709M", "orig_file_number": 80, "seqno_to_time_mapping": "N/A"}}
Mar  1 05:26:15 np0005634532 ceph-mon[75825]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Mar  1 05:26:15 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-10:26:15.810368) [db/compaction/compaction_job.cc:1663] [default] [JOB 44] Compacted 1@0 + 1@6 files to L6 => 12799812 bytes
Mar  1 05:26:15 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-10:26:15.811927) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 277.2 rd, 235.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.7, 11.7 +0.0 blob) out(12.2 +0.0 blob), read-write-amplify(9.8) write-amplify(4.5) OK, records in: 7206, records dropped: 520 output_compression: NoCompression
Mar  1 05:26:15 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-10:26:15.811957) EVENT_LOG_v1 {"time_micros": 1772360775811943, "job": 44, "event": "compaction_finished", "compaction_time_micros": 54392, "compaction_time_cpu_micros": 30448, "output_level": 6, "num_output_files": 1, "total_output_size": 12799812, "num_input_records": 7206, "num_output_records": 6686, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Mar  1 05:26:15 np0005634532 ceph-mon[75825]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000079.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Mar  1 05:26:15 np0005634532 ceph-mon[75825]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772360775812536, "job": 44, "event": "table_file_deletion", "file_number": 79}
Mar  1 05:26:15 np0005634532 ceph-mon[75825]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000077.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Mar  1 05:26:15 np0005634532 ceph-mon[75825]: rocksdb: EVENT_LOG_v1 {"time_micros": 1772360775814468, "job": 44, "event": "table_file_deletion", "file_number": 77}
Mar  1 05:26:15 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-10:26:15.755508) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Mar  1 05:26:15 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-10:26:15.814569) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Mar  1 05:26:15 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-10:26:15.814575) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Mar  1 05:26:15 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-10:26:15.814576) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Mar  1 05:26:15 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-10:26:15.814578) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Mar  1 05:26:15 np0005634532 ceph-mon[75825]: rocksdb: (Original Log Time 2026/03/01-10:26:15.814579) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
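The JOB 43/44 sequence above is a routine monitor-store maintenance cycle: flush the memtable to a level-0 table (#79), manually compact it together with the existing level-6 table (#77) into a new #80, then delete both inputs. The amplification figures rocksdb prints for JOB 44 can be reproduced from the byte counts in its own event lines; normalizing everything by the level-0 input size matches how the numbers come out here (an observation from this log, not a quote of rocksdb's source):

```python
l0_in = 2_831_354          # input table #79 (the freshly flushed L0 file)
total_in = 15_075_659      # "input_data_size" from the compaction_started event
out = 12_799_812           # output table #80

print(f"write-amplify      {out / l0_in:.1f}")               # ~4.5, as logged
print(f"read-write-amplify {(total_in + out) / l0_in:.1f}")  # ~9.8, as logged
```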
Mar  1 05:26:15 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:26:15 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:26:15 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:26:15.864 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:26:16 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:26:16 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:26:16 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:26:16.724 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:26:17 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: ::ffff:192.168.122.100 - - [01/Mar/2026:10:26:17] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Mar  1 05:26:17 np0005634532 ceph-mgr[76134]: [prometheus INFO cherrypy.access.140604152982112] ::ffff:192.168.122.100 - - [01/Mar/2026:10:26:17] "GET /metrics HTTP/1.1" 200 48455 "" "Prometheus/2.51.0"
Mar  1 05:26:17 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
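The _set_new_cache_sizes line the monitor prints every ~5 s is its cache tuner re-deriving allocations from an unchanging budget. Quick unit conversion of the logged byte counts, which works out to 328/332/304 MiB carved from a ~973 MiB cache:

```python
MiB = 1024 ** 2
for name, nbytes in [("cache_size", 1_020_054_731),
                     ("inc_alloc",  343_932_928),
                     ("full_alloc", 348_127_232),
                     ("kv_alloc",   318_767_104)]:
    print(f"{name:10s} {nbytes / MiB:7.0f} MiB")
```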
Mar  1 05:26:17 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1320: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Mar  1 05:26:17 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:26:17.348Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Mar  1 05:26:17 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:26:17.349Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
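Both ceph-dashboard webhook receivers fail the same way throughout this section: dials to compute-1 and compute-2 on port 8443 time out, so every alert notification is dropped after retries. Note the receiver URLs use plain http:// against 8443, which is conventionally a TLS port; whether anything is actually listening there, and with which scheme, is worth checking. A minimal reachability probe for the two endpoints (hostnames copied from the log):

```python
import socket

for host in ("compute-1.ctlplane.example.com",
             "compute-2.ctlplane.example.com"):
    try:
        with socket.create_connection((host, 8443), timeout=3):
            print(f"{host}:8443 TCP connect ok")
    except OSError as exc:
        print(f"{host}:8443 unreachable: {exc}")  # mirrors the i/o timeout
```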
Mar  1 05:26:17 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Mar  1 05:26:17 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Mar  1 05:26:17 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Mar  1 05:26:17 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Mar  1 05:26:17 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:26:17 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:26:17 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:26:17.867 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:26:17 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Mar  1 05:26:17 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Mar  1 05:26:17 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Mar  1 05:26:17 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Mar  1 05:26:18 np0005634532 nova_compute[257049]: 2026-03-01 10:26:18.095 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:26:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] _maybe_adjust
Mar  1 05:26:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:26:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Mar  1 05:26:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:26:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Mar  1 05:26:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:26:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Mar  1 05:26:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:26:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Mar  1 05:26:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:26:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.000665858301588852 of space, bias 1.0, pg target 0.19975749047665559 quantized to 32 (current 32)
Mar  1 05:26:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:26:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 32)
Mar  1 05:26:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:26:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Mar  1 05:26:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:26:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 3.8154424692322717e-07 of space, bias 1.0, pg target 0.00011446327407696816 quantized to 32 (current 32)
Mar  1 05:26:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:26:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Mar  1 05:26:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:26:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Mar  1 05:26:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:26:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Mar  1 05:26:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Mar  1 05:26:18 np0005634532 ceph-mgr[76134]: [pg_autoscaler INFO root] Pool '.nfs' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
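Each pg_autoscaler pair above logs a pool's share of the 64411926528-byte (~60 GiB) capacity and a raw pg target before quantization. For every pool in this pass, the logged target equals capacity_ratio × bias × 300; the factor 300 is inferred from these numbers (plausibly mon_target_pg_per_osd = 100 times three OSDs backing this cluster), not something the log states. Reproducing three of the lines:

```python
# (pool, capacity_ratio, bias, pg target as logged)
pools = [
    (".mgr",               7.185749983720779e-06, 1.0, 0.0021557249951162337),
    ("images",             0.000665858301588852,  1.0, 0.19975749047665559),
    ("cephfs.cephfs.meta", 5.087256625643029e-07, 4.0, 0.0006104707950771635),
]
for name, ratio, bias, logged in pools:
    computed = ratio * bias * 300
    print(f"{name:20s} computed={computed:.12g} logged={logged:.12g}")
```

The raw targets are all tiny, so they quantize to the small power-of-two values shown in the log and no pg changes result.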
Mar  1 05:26:18 np0005634532 ceph-mgr[76134]: [balancer INFO root] Optimize plan auto_2026-03-01_10:26:18
Mar  1 05:26:18 np0005634532 ceph-mgr[76134]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Mar  1 05:26:18 np0005634532 ceph-mgr[76134]: [balancer INFO root] do_upmap
Mar  1 05:26:18 np0005634532 ceph-mgr[76134]: [balancer INFO root] pools ['.rgw.root', '.mgr', '.nfs', 'default.rgw.control', 'vms', 'images', 'cephfs.cephfs.meta', 'default.rgw.log', 'volumes', 'default.rgw.meta', 'cephfs.cephfs.data', 'backups']
Mar  1 05:26:18 np0005634532 ceph-mgr[76134]: [balancer INFO root] prepared 0/10 upmap changes
Mar  1 05:26:18 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:26:18 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 05:26:18 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:26:18.726 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 05:26:18 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:26:18.903Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Mar  1 05:26:19 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:26:18 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Mar  1 05:26:19 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:26:18 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Mar  1 05:26:19 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:26:18 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Mar  1 05:26:19 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:26:19 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Mar  1 05:26:19 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1321: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Mar  1 05:26:19 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Mar  1 05:26:19 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: vms, start_after=
Mar  1 05:26:19 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Mar  1 05:26:19 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: vms, start_after=
Mar  1 05:26:19 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: volumes, start_after=
Mar  1 05:26:19 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: backups, start_after=
Mar  1 05:26:19 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: volumes, start_after=
Mar  1 05:26:19 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: images, start_after=
Mar  1 05:26:19 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: backups, start_after=
Mar  1 05:26:19 np0005634532 ceph-mgr[76134]: [rbd_support INFO root] load_schedules: images, start_after=
Mar  1 05:26:19 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:26:19 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:26:19 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:26:19.869 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:26:20 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:26:20 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:26:20 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:26:20.728 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:26:20 np0005634532 nova_compute[257049]: 2026-03-01 10:26:20.824 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:26:21 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1322: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Mar  1 05:26:21 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:26:21 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:26:21 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:26:21.872 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:26:22 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:26:22 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:26:22 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:26:22 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:26:22.730 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:26:23 np0005634532 nova_compute[257049]: 2026-03-01 10:26:23.127 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:26:23 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1323: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Mar  1 05:26:23 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:26:23 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:26:23 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:26:23.876 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:26:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:26:23.900 167541 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Mar  1 05:26:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:26:23.901 167541 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Mar  1 05:26:23 np0005634532 ovn_metadata_agent[167536]: 2026-03-01 10:26:23.901 167541 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
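The acquire/acquired/released triple above is oslo.concurrency's standard lock trace: neutron's ProcessMonitor guards _check_child_processes with a named in-process lock, and the lockutils wrapper logs wait and hold times (here ~1 ms waited, ~0 ms held). A minimal sketch of the pattern that produces it; lockutils.synchronized is real oslo.concurrency API, while the class below is only a stand-in for neutron's:

```python
from oslo_concurrency import lockutils

class ProcessMonitor:
    @lockutils.synchronized('_check_child_processes')
    def _check_child_processes(self):
        # Runs with the named lock held; the decorator's wrapper emits the
        # "Acquiring" / "acquired ... waited" / "released ... held" DEBUG lines.
        pass

ProcessMonitor()._check_child_processes()
```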
Mar  1 05:26:24 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:26:24 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Mar  1 05:26:24 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:26:24 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Mar  1 05:26:24 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:26:24 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Mar  1 05:26:24 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:26:24 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Mar  1 05:26:24 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:26:24 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:26:24 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:26:24.732 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:26:25 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1324: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Mar  1 05:26:25 np0005634532 nova_compute[257049]: 2026-03-01 10:26:25.866 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:26:25 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:26:25 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 05:26:25 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:26:25.877 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 05:26:26 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:26:26 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:26:26 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:26:26.734 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:26:27 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: ::ffff:192.168.122.100 - - [01/Mar/2026:10:26:27] "GET /metrics HTTP/1.1" 200 48456 "" "Prometheus/2.51.0"
Mar  1 05:26:27 np0005634532 ceph-mgr[76134]: [prometheus INFO cherrypy.access.140604152982112] ::ffff:192.168.122.100 - - [01/Mar/2026:10:26:27] "GET /metrics HTTP/1.1" 200 48456 "" "Prometheus/2.51.0"
Mar  1 05:26:27 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:26:27 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1325: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Mar  1 05:26:27 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:26:27.350Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Mar  1 05:26:27 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:26:27 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:26:27 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:26:27.880 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:26:28 np0005634532 nova_compute[257049]: 2026-03-01 10:26:28.127 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:26:28 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:26:28 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:26:28 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:26:28.737 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:26:28 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:26:28.904Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Mar  1 05:26:29 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:26:28 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Mar  1 05:26:29 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:26:28 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Mar  1 05:26:29 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:26:28 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Mar  1 05:26:29 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:26:29 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Mar  1 05:26:29 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1326: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Mar  1 05:26:29 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:26:29 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:26:29 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:26:29.883 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:26:30 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:26:30 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:26:30 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:26:30.739 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:26:30 np0005634532 nova_compute[257049]: 2026-03-01 10:26:30.869 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:26:31 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1327: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Mar  1 05:26:31 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:26:31 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 05:26:31 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:26:31.885 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 05:26:32 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:26:32 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Mar  1 05:26:32 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Mar  1 05:26:32 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:26:32 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:26:32 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:26:32.741 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:26:33 np0005634532 nova_compute[257049]: 2026-03-01 10:26:33.178 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:26:33 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1328: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Mar  1 05:26:33 np0005634532 podman[294269]: 2026-03-01 10:26:33.387297364 +0000 UTC m=+0.081760908 container health_status eba5e7c5bf1cb0694498c3b5d0f9b901dec36f2482ec40aef98fed1e15d9f18d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:bf1f44b6bfa655654f556ae3adca2582892e72550d916a5b0a8dbacb0e210bbe, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '08b9467e1b7e95537191fb2fa6825d176e65d7d6be048e9a0925db064969bbde-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:bf1f44b6bfa655654f556ae3adca2582892e72550d916a5b0a8dbacb0e210bbe', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
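podman records a `container health_status` event each time a configured healthcheck runs; for ovn_controller the check is the /openstack/healthcheck script mounted in via config_data, and the verdict plus failing streak are embedded as attributes in the event line. Pulling those fields out of a journal line with a regex derived from this log's own format (not any podman API):

```python
import re

HEALTH_RE = re.compile(
    r"container health_status \w+ .*?name=(?P<name>[^,]+), "
    r".*?health_status=(?P<status>[^,]+), "
    r"health_failing_streak=(?P<streak>\d+)"
)

line = ("2026-03-01 10:26:33.387297364 +0000 UTC m=+0.081760908 container "
        "health_status eba5e7c5bf1c (image=quay.io/ovn-controller, "
        "name=ovn_controller, health_status=healthy, "
        "health_failing_streak=0, health_log=)")

m = HEALTH_RE.search(line)
print(m.group("name"), m.group("status"), int(m.group("streak")))
```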
Mar  1 05:26:33 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:26:33 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:26:33 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:26:33.889 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:26:34 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:26:33 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Mar  1 05:26:34 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:26:33 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Mar  1 05:26:34 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:26:33 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Mar  1 05:26:34 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:26:34 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Mar  1 05:26:34 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:26:34 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:26:34 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:26:34.743 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:26:35 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1329: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Mar  1 05:26:35 np0005634532 nova_compute[257049]: 2026-03-01 10:26:35.872 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:26:35 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:26:35 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:26:35 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:26:35.893 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:26:36 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:26:36 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:26:36 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:26:36.745 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:26:37 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: ::ffff:192.168.122.100 - - [01/Mar/2026:10:26:37] "GET /metrics HTTP/1.1" 200 48457 "" "Prometheus/2.51.0"
Mar  1 05:26:37 np0005634532 ceph-mgr[76134]: [prometheus INFO cherrypy.access.140604152982112] ::ffff:192.168.122.100 - - [01/Mar/2026:10:26:37] "GET /metrics HTTP/1.1" 200 48457 "" "Prometheus/2.51.0"
Mar  1 05:26:37 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:26:37 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1330: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Mar  1 05:26:37 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:26:37.350Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Mar  1 05:26:37 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:26:37 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 05:26:37 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:26:37.895 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 05:26:38 np0005634532 nova_compute[257049]: 2026-03-01 10:26:38.180 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:26:38 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:26:38 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:26:38 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:26:38.747 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:26:38 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:26:38.905Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Mar  1 05:26:38 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:26:38.906Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 3 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Mar  1 05:26:39 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:26:38 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Mar  1 05:26:39 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:26:38 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Mar  1 05:26:39 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:26:38 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Mar  1 05:26:39 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:26:39 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Mar  1 05:26:39 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1331: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Mar  1 05:26:39 np0005634532 podman[294301]: 2026-03-01 10:26:39.386527103 +0000 UTC m=+0.068755483 container health_status 1df4dec302d8b40bce93714986cfc892519e8a31436399020d0034ae17ac64d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a5baae9e21dffd42c66f546b0b62f95c6af9f409d05e01e7483de42d5dd2a382, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20260223, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.43.0, org.label-schema.schema-version=1.0, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '08b9467e1b7e95537191fb2fa6825d176e65d7d6be048e9a0925db064969bbde-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a5baae9e21dffd42c66f546b0b62f95c6af9f409d05e01e7483de42d5dd2a382', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Mar  1 05:26:39 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:26:39 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 05:26:39 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:26:39.897 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 05:26:40 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:26:40 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:26:40 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:26:40.749 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:26:40 np0005634532 nova_compute[257049]: 2026-03-01 10:26:40.875 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:26:41 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1332: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Mar  1 05:26:41 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:26:41 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:26:41 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:26:41.900 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:26:42 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:26:42 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:26:42 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:26:42 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:26:42.751 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:26:43 np0005634532 nova_compute[257049]: 2026-03-01 10:26:43.216 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:26:43 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1333: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Mar  1 05:26:43 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:26:43 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:26:43 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:26:43.903 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:26:44 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:26:43 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Mar  1 05:26:44 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:26:43 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Mar  1 05:26:44 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:26:43 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Mar  1 05:26:44 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:26:44 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Mar  1 05:26:44 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Mar  1 05:26:44 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Mar  1 05:26:44 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0)
Mar  1 05:26:44 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Mar  1 05:26:44 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0)
Mar  1 05:26:44 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1334: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 821 B/s rd, 0 op/s
Mar  1 05:26:44 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 05:26:44 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.nfs.cephfs}] v 0)
Mar  1 05:26:44 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 05:26:44 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0)
Mar  1 05:26:44 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Mar  1 05:26:44 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0)
Mar  1 05:26:44 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Mar  1 05:26:44 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Mar  1 05:26:44 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
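The burst of mon_command dispatches above is the cephadm mgr module's periodic refresh: it regenerates a minimal ceph.conf, fetches the client.admin and client.bootstrap-osd keyrings, persists module state under mgr/cephadm/... config-keys, and lists destroyed OSDs. The two audit lines that stop right after entity=... are consistent with the monitor withholding sensitive command payloads (the config-key values) from the audit channel, though that reading is an inference. The read-only queries can be replayed from the CLI as they appear in the audit records; whether your ceph build emits JSON for each with --format json is worth verifying:

```python
import json
import subprocess

def ceph_json(*args):
    """Run a ceph CLI query and decode its JSON output (assumes admin keyring)."""
    out = subprocess.run(["ceph", *args, "--format", "json"],
                         capture_output=True, text=True, check=True).stdout
    return json.loads(out)

blocklist = ceph_json("osd", "blocklist", "ls")   # as dispatched by the mgr above
destroyed = ceph_json("osd", "tree", "destroyed") # "osd tree" with states=["destroyed"]
print(len(blocklist), "blocklist entries")
```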
Mar  1 05:26:44 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:26:44 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:26:44 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:26:44.753 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:26:45 np0005634532 podman[294522]: 2026-03-01 10:26:45.112917988 +0000 UTC m=+0.053377580 container create 0890703bcb11a6cb7c6591067423f424df07c6632fdd98bfa2f8fdcd3b84200a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_chatterjee, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Mar  1 05:26:45 np0005634532 systemd[1]: Started libpod-conmon-0890703bcb11a6cb7c6591067423f424df07c6632fdd98bfa2f8fdcd3b84200a.scope.
Mar  1 05:26:45 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Mar  1 05:26:45 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 05:26:45 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 05:26:45 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Mar  1 05:26:45 np0005634532 systemd[1]: Started libcrun container.
Mar  1 05:26:45 np0005634532 podman[294522]: 2026-03-01 10:26:45.092443539 +0000 UTC m=+0.032903111 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Mar  1 05:26:45 np0005634532 podman[294522]: 2026-03-01 10:26:45.197680069 +0000 UTC m=+0.138139721 container init 0890703bcb11a6cb7c6591067423f424df07c6632fdd98bfa2f8fdcd3b84200a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_chatterjee, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Mar  1 05:26:45 np0005634532 podman[294522]: 2026-03-01 10:26:45.204033717 +0000 UTC m=+0.144493279 container start 0890703bcb11a6cb7c6591067423f424df07c6632fdd98bfa2f8fdcd3b84200a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_chatterjee, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Mar  1 05:26:45 np0005634532 podman[294522]: 2026-03-01 10:26:45.207477113 +0000 UTC m=+0.147936775 container attach 0890703bcb11a6cb7c6591067423f424df07c6632fdd98bfa2f8fdcd3b84200a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_chatterjee, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250325, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Mar  1 05:26:45 np0005634532 festive_chatterjee[294539]: 167 167
Mar  1 05:26:45 np0005634532 systemd[1]: libpod-0890703bcb11a6cb7c6591067423f424df07c6632fdd98bfa2f8fdcd3b84200a.scope: Deactivated successfully.
Mar  1 05:26:45 np0005634532 podman[294522]: 2026-03-01 10:26:45.209392971 +0000 UTC m=+0.149852543 container died 0890703bcb11a6cb7c6591067423f424df07c6632fdd98bfa2f8fdcd3b84200a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_chatterjee, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Mar  1 05:26:45 np0005634532 systemd[1]: var-lib-containers-storage-overlay-3f2ab2589d92201c21a9e8ee5dcd4ca1ea73282af69f45dc731c693144ec96b3-merged.mount: Deactivated successfully.
Mar  1 05:26:45 np0005634532 podman[294522]: 2026-03-01 10:26:45.254506594 +0000 UTC m=+0.194966146 container remove 0890703bcb11a6cb7c6591067423f424df07c6632fdd98bfa2f8fdcd3b84200a (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=festive_chatterjee, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Mar  1 05:26:45 np0005634532 systemd[1]: libpod-conmon-0890703bcb11a6cb7c6591067423f424df07c6632fdd98bfa2f8fdcd3b84200a.scope: Deactivated successfully.
Mar  1 05:26:45 np0005634532 podman[294562]: 2026-03-01 10:26:45.451539311 +0000 UTC m=+0.061624996 container create 549e7ddd8dfb92a3dd593adecb0e8229a9c25422a6a92595f275c5a684dc113b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_morse, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Mar  1 05:26:45 np0005634532 systemd[1]: Started libpod-conmon-549e7ddd8dfb92a3dd593adecb0e8229a9c25422a6a92595f275c5a684dc113b.scope.
Mar  1 05:26:45 np0005634532 podman[294562]: 2026-03-01 10:26:45.426340003 +0000 UTC m=+0.036425758 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Mar  1 05:26:45 np0005634532 systemd[1]: Started libcrun container.
Mar  1 05:26:45 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d84d10422eb2ee8a0256ff1eaf87478ac7e9d1571f90127e0ff19f89a9710a2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Mar  1 05:26:45 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d84d10422eb2ee8a0256ff1eaf87478ac7e9d1571f90127e0ff19f89a9710a2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Mar  1 05:26:45 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d84d10422eb2ee8a0256ff1eaf87478ac7e9d1571f90127e0ff19f89a9710a2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Mar  1 05:26:45 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d84d10422eb2ee8a0256ff1eaf87478ac7e9d1571f90127e0ff19f89a9710a2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Mar  1 05:26:45 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d84d10422eb2ee8a0256ff1eaf87478ac7e9d1571f90127e0ff19f89a9710a2/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Mar  1 05:26:45 np0005634532 podman[294562]: 2026-03-01 10:26:45.556586087 +0000 UTC m=+0.166671792 container init 549e7ddd8dfb92a3dd593adecb0e8229a9c25422a6a92595f275c5a684dc113b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_morse, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, CEPH_REF=squid, OSD_FLAVOR=default)
Mar  1 05:26:45 np0005634532 podman[294562]: 2026-03-01 10:26:45.563867558 +0000 UTC m=+0.173953213 container start 549e7ddd8dfb92a3dd593adecb0e8229a9c25422a6a92595f275c5a684dc113b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_morse, org.label-schema.build-date=20250325, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Mar  1 05:26:45 np0005634532 podman[294562]: 2026-03-01 10:26:45.567313164 +0000 UTC m=+0.177398899 container attach 549e7ddd8dfb92a3dd593adecb0e8229a9c25422a6a92595f275c5a684dc113b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_morse, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250325)
Mar  1 05:26:45 np0005634532 nova_compute[257049]: 2026-03-01 10:26:45.877 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:26:45 np0005634532 affectionate_morse[294579]: --> passed data devices: 0 physical, 1 LVM
Mar  1 05:26:45 np0005634532 affectionate_morse[294579]: --> All data devices are unavailable
Mar  1 05:26:45 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:26:45 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:26:45 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:26:45.906 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:26:45 np0005634532 systemd[1]: libpod-549e7ddd8dfb92a3dd593adecb0e8229a9c25422a6a92595f275c5a684dc113b.scope: Deactivated successfully.
Mar  1 05:26:45 np0005634532 podman[294562]: 2026-03-01 10:26:45.914088669 +0000 UTC m=+0.524174334 container died 549e7ddd8dfb92a3dd593adecb0e8229a9c25422a6a92595f275c5a684dc113b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_morse, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Mar  1 05:26:45 np0005634532 systemd[1]: var-lib-containers-storage-overlay-9d84d10422eb2ee8a0256ff1eaf87478ac7e9d1571f90127e0ff19f89a9710a2-merged.mount: Deactivated successfully.
Mar  1 05:26:45 np0005634532 podman[294562]: 2026-03-01 10:26:45.956853684 +0000 UTC m=+0.566939379 container remove 549e7ddd8dfb92a3dd593adecb0e8229a9c25422a6a92595f275c5a684dc113b (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=affectionate_morse, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=squid, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.40.1)
Mar  1 05:26:45 np0005634532 systemd[1]: libpod-conmon-549e7ddd8dfb92a3dd593adecb0e8229a9c25422a6a92595f275c5a684dc113b.scope: Deactivated successfully.
Mar  1 05:26:46 np0005634532 podman[294706]: 2026-03-01 10:26:46.470671159 +0000 UTC m=+0.035831893 container create 7fb325f203533f6b3a9ca21d34ce91c6064f00f53d5db1caa92d2f9bd3ba802f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_brahmagupta, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True)
Mar  1 05:26:46 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1335: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 547 B/s rd, 0 op/s
Mar  1 05:26:46 np0005634532 systemd[1]: Started libpod-conmon-7fb325f203533f6b3a9ca21d34ce91c6064f00f53d5db1caa92d2f9bd3ba802f.scope.
Mar  1 05:26:46 np0005634532 systemd[1]: Started libcrun container.
Mar  1 05:26:46 np0005634532 podman[294706]: 2026-03-01 10:26:46.454588548 +0000 UTC m=+0.019749312 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Mar  1 05:26:46 np0005634532 podman[294706]: 2026-03-01 10:26:46.561272885 +0000 UTC m=+0.126433629 container init 7fb325f203533f6b3a9ca21d34ce91c6064f00f53d5db1caa92d2f9bd3ba802f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_brahmagupta, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=squid, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325)
Mar  1 05:26:46 np0005634532 podman[294706]: 2026-03-01 10:26:46.571072599 +0000 UTC m=+0.136233363 container start 7fb325f203533f6b3a9ca21d34ce91c6064f00f53d5db1caa92d2f9bd3ba802f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_brahmagupta, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Mar  1 05:26:46 np0005634532 podman[294706]: 2026-03-01 10:26:46.575086899 +0000 UTC m=+0.140247663 container attach 7fb325f203533f6b3a9ca21d34ce91c6064f00f53d5db1caa92d2f9bd3ba802f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_brahmagupta, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Mar  1 05:26:46 np0005634532 awesome_brahmagupta[294722]: 167 167
Mar  1 05:26:46 np0005634532 podman[294706]: 2026-03-01 10:26:46.577979421 +0000 UTC m=+0.143140175 container died 7fb325f203533f6b3a9ca21d34ce91c6064f00f53d5db1caa92d2f9bd3ba802f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_brahmagupta, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, CEPH_REF=squid, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Mar  1 05:26:46 np0005634532 systemd[1]: libpod-7fb325f203533f6b3a9ca21d34ce91c6064f00f53d5db1caa92d2f9bd3ba802f.scope: Deactivated successfully.
Mar  1 05:26:46 np0005634532 systemd[1]: var-lib-containers-storage-overlay-c1c06a7d6699dfedd23527ddc4d2e7c318485172d57c97854b01b448446d3ecc-merged.mount: Deactivated successfully.
Mar  1 05:26:46 np0005634532 podman[294706]: 2026-03-01 10:26:46.622544021 +0000 UTC m=+0.187704775 container remove 7fb325f203533f6b3a9ca21d34ce91c6064f00f53d5db1caa92d2f9bd3ba802f (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=awesome_brahmagupta, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=squid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Mar  1 05:26:46 np0005634532 systemd[1]: libpod-conmon-7fb325f203533f6b3a9ca21d34ce91c6064f00f53d5db1caa92d2f9bd3ba802f.scope: Deactivated successfully.
Mar  1 05:26:46 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:26:46 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 05:26:46 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:26:46.755 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 05:26:46 np0005634532 podman[294745]: 2026-03-01 10:26:46.797793305 +0000 UTC m=+0.046694904 container create 33037738336e756b3c9911ecbb0d169fb0b6c9ae129863be3d2541d689ced7e6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_lamarr, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Mar  1 05:26:46 np0005634532 systemd[1]: Started libpod-conmon-33037738336e756b3c9911ecbb0d169fb0b6c9ae129863be3d2541d689ced7e6.scope.
Mar  1 05:26:46 np0005634532 systemd[1]: Started libcrun container.
Mar  1 05:26:46 np0005634532 podman[294745]: 2026-03-01 10:26:46.774733561 +0000 UTC m=+0.023635170 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Mar  1 05:26:46 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aba6119ccf5f196dd2e3c61120df8ef931eb6427af8841ec241a5327d1b5ffc3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Mar  1 05:26:46 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aba6119ccf5f196dd2e3c61120df8ef931eb6427af8841ec241a5327d1b5ffc3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Mar  1 05:26:46 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aba6119ccf5f196dd2e3c61120df8ef931eb6427af8841ec241a5327d1b5ffc3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Mar  1 05:26:46 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aba6119ccf5f196dd2e3c61120df8ef931eb6427af8841ec241a5327d1b5ffc3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Mar  1 05:26:46 np0005634532 podman[294745]: 2026-03-01 10:26:46.896329859 +0000 UTC m=+0.145231508 container init 33037738336e756b3c9911ecbb0d169fb0b6c9ae129863be3d2541d689ced7e6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_lamarr, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, org.label-schema.schema-version=1.0, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, ceph=True)
Mar  1 05:26:46 np0005634532 podman[294745]: 2026-03-01 10:26:46.902160994 +0000 UTC m=+0.151062573 container start 33037738336e756b3c9911ecbb0d169fb0b6c9ae129863be3d2541d689ced7e6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_lamarr, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=squid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.40.1, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Mar  1 05:26:46 np0005634532 podman[294745]: 2026-03-01 10:26:46.905385124 +0000 UTC m=+0.154286773 container attach 33037738336e756b3c9911ecbb0d169fb0b6c9ae129863be3d2541d689ced7e6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_lamarr, ceph=True, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Mar  1 05:26:47 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: ::ffff:192.168.122.100 - - [01/Mar/2026:10:26:47] "GET /metrics HTTP/1.1" 200 48457 "" "Prometheus/2.51.0"
Mar  1 05:26:47 np0005634532 ceph-mgr[76134]: [prometheus INFO cherrypy.access.140604152982112] ::ffff:192.168.122.100 - - [01/Mar/2026:10:26:47] "GET /metrics HTTP/1.1" 200 48457 "" "Prometheus/2.51.0"
Mar  1 05:26:47 np0005634532 sweet_lamarr[294763]: {
Mar  1 05:26:47 np0005634532 sweet_lamarr[294763]:    "0": [
Mar  1 05:26:47 np0005634532 sweet_lamarr[294763]:        {
Mar  1 05:26:47 np0005634532 sweet_lamarr[294763]:            "devices": [
Mar  1 05:26:47 np0005634532 sweet_lamarr[294763]:                "/dev/loop3"
Mar  1 05:26:47 np0005634532 sweet_lamarr[294763]:            ],
Mar  1 05:26:47 np0005634532 sweet_lamarr[294763]:            "lv_name": "ceph_lv0",
Mar  1 05:26:47 np0005634532 sweet_lamarr[294763]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Mar  1 05:26:47 np0005634532 sweet_lamarr[294763]:            "lv_size": "21470642176",
Mar  1 05:26:47 np0005634532 sweet_lamarr[294763]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=0RN1y3-WDwf-vmRn-3Uec-9cdU-WGcX-8Z6LEG,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=437b1e74-f995-5d64-af1d-257ce01d77ab,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=e5da778e-73b7-4ea1-8a91-750fe3f6aa68,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0,ceph.with_tpm=0",
Mar  1 05:26:47 np0005634532 sweet_lamarr[294763]:            "lv_uuid": "0RN1y3-WDwf-vmRn-3Uec-9cdU-WGcX-8Z6LEG",
Mar  1 05:26:47 np0005634532 sweet_lamarr[294763]:            "name": "ceph_lv0",
Mar  1 05:26:47 np0005634532 sweet_lamarr[294763]:            "path": "/dev/ceph_vg0/ceph_lv0",
Mar  1 05:26:47 np0005634532 sweet_lamarr[294763]:            "tags": {
Mar  1 05:26:47 np0005634532 sweet_lamarr[294763]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Mar  1 05:26:47 np0005634532 sweet_lamarr[294763]:                "ceph.block_uuid": "0RN1y3-WDwf-vmRn-3Uec-9cdU-WGcX-8Z6LEG",
Mar  1 05:26:47 np0005634532 sweet_lamarr[294763]:                "ceph.cephx_lockbox_secret": "",
Mar  1 05:26:47 np0005634532 sweet_lamarr[294763]:                "ceph.cluster_fsid": "437b1e74-f995-5d64-af1d-257ce01d77ab",
Mar  1 05:26:47 np0005634532 sweet_lamarr[294763]:                "ceph.cluster_name": "ceph",
Mar  1 05:26:47 np0005634532 sweet_lamarr[294763]:                "ceph.crush_device_class": "",
Mar  1 05:26:47 np0005634532 sweet_lamarr[294763]:                "ceph.encrypted": "0",
Mar  1 05:26:47 np0005634532 sweet_lamarr[294763]:                "ceph.osd_fsid": "e5da778e-73b7-4ea1-8a91-750fe3f6aa68",
Mar  1 05:26:47 np0005634532 sweet_lamarr[294763]:                "ceph.osd_id": "0",
Mar  1 05:26:47 np0005634532 sweet_lamarr[294763]:                "ceph.osdspec_affinity": "default_drive_group",
Mar  1 05:26:47 np0005634532 sweet_lamarr[294763]:                "ceph.type": "block",
Mar  1 05:26:47 np0005634532 sweet_lamarr[294763]:                "ceph.vdo": "0",
Mar  1 05:26:47 np0005634532 sweet_lamarr[294763]:                "ceph.with_tpm": "0"
Mar  1 05:26:47 np0005634532 sweet_lamarr[294763]:            },
Mar  1 05:26:47 np0005634532 sweet_lamarr[294763]:            "type": "block",
Mar  1 05:26:47 np0005634532 sweet_lamarr[294763]:            "vg_name": "ceph_vg0"
Mar  1 05:26:47 np0005634532 sweet_lamarr[294763]:        }
Mar  1 05:26:47 np0005634532 sweet_lamarr[294763]:    ]
Mar  1 05:26:47 np0005634532 sweet_lamarr[294763]: }
Mar  1 05:26:47 np0005634532 systemd[1]: libpod-33037738336e756b3c9911ecbb0d169fb0b6c9ae129863be3d2541d689ced7e6.scope: Deactivated successfully.
Mar  1 05:26:47 np0005634532 podman[294745]: 2026-03-01 10:26:47.198297218 +0000 UTC m=+0.447198807 container died 33037738336e756b3c9911ecbb0d169fb0b6c9ae129863be3d2541d689ced7e6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_lamarr, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, io.buildah.version=1.40.1, org.label-schema.build-date=20250325, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Mar  1 05:26:47 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:26:47 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:26:47.351Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Mar  1 05:26:47 np0005634532 systemd[1]: var-lib-containers-storage-overlay-aba6119ccf5f196dd2e3c61120df8ef931eb6427af8841ec241a5327d1b5ffc3-merged.mount: Deactivated successfully.
Mar  1 05:26:47 np0005634532 podman[294745]: 2026-03-01 10:26:47.448612292 +0000 UTC m=+0.697513861 container remove 33037738336e756b3c9911ecbb0d169fb0b6c9ae129863be3d2541d689ced7e6 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_lamarr, CEPH_REF=squid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Mar  1 05:26:47 np0005634532 systemd[1]: libpod-conmon-33037738336e756b3c9911ecbb0d169fb0b6c9ae129863be3d2541d689ced7e6.scope: Deactivated successfully.
Mar  1 05:26:47 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Mar  1 05:26:47 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Mar  1 05:26:47 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Mar  1 05:26:47 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Mar  1 05:26:47 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Mar  1 05:26:47 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Mar  1 05:26:47 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] scanning for idle connections..
Mar  1 05:26:47 np0005634532 ceph-mgr[76134]: [volumes INFO mgr_util] cleaning up connections: []
Mar  1 05:26:47 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:26:47 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 05:26:47 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:26:47.908 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 05:26:47 np0005634532 podman[294875]: 2026-03-01 10:26:47.970114187 +0000 UTC m=+0.034963072 container create c3532cc85d43d4a02aa9e574073e4661edf63355f513d9fc94da236354a5f567 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_brown, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.40.1, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=squid, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.name=CentOS Stream 9 Base Image)
Mar  1 05:26:48 np0005634532 systemd[1]: Started libpod-conmon-c3532cc85d43d4a02aa9e574073e4661edf63355f513d9fc94da236354a5f567.scope.
Mar  1 05:26:48 np0005634532 systemd[1]: Started libcrun container.
Mar  1 05:26:48 np0005634532 podman[294875]: 2026-03-01 10:26:48.038567362 +0000 UTC m=+0.103416267 container init c3532cc85d43d4a02aa9e574073e4661edf63355f513d9fc94da236354a5f567 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_brown, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Mar  1 05:26:48 np0005634532 podman[294875]: 2026-03-01 10:26:48.042255683 +0000 UTC m=+0.107104568 container start c3532cc85d43d4a02aa9e574073e4661edf63355f513d9fc94da236354a5f567 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_brown, io.buildah.version=1.40.1, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Mar  1 05:26:48 np0005634532 amazing_brown[294891]: 167 167
Mar  1 05:26:48 np0005634532 podman[294875]: 2026-03-01 10:26:48.045173516 +0000 UTC m=+0.110022401 container attach c3532cc85d43d4a02aa9e574073e4661edf63355f513d9fc94da236354a5f567 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_brown, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=squid)
Mar  1 05:26:48 np0005634532 systemd[1]: libpod-c3532cc85d43d4a02aa9e574073e4661edf63355f513d9fc94da236354a5f567.scope: Deactivated successfully.
Mar  1 05:26:48 np0005634532 podman[294875]: 2026-03-01 10:26:48.04613219 +0000 UTC m=+0.110981075 container died c3532cc85d43d4a02aa9e574073e4661edf63355f513d9fc94da236354a5f567 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_brown, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.build-date=20250325, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.40.1, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, FROM_IMAGE=quay.io/centos/centos:stream9)
Mar  1 05:26:48 np0005634532 podman[294875]: 2026-03-01 10:26:47.956738894 +0000 UTC m=+0.021587799 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Mar  1 05:26:48 np0005634532 systemd[1]: var-lib-containers-storage-overlay-8bd81a4ace96fc6672254de4100549193cfab6e416ff253d6cd435b0114dd7e7-merged.mount: Deactivated successfully.
Mar  1 05:26:48 np0005634532 podman[294875]: 2026-03-01 10:26:48.075229344 +0000 UTC m=+0.140078219 container remove c3532cc85d43d4a02aa9e574073e4661edf63355f513d9fc94da236354a5f567 (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=amazing_brown, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.40.1, CEPH_REF=squid, org.label-schema.vendor=CentOS, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62)
Mar  1 05:26:48 np0005634532 systemd[1]: libpod-conmon-c3532cc85d43d4a02aa9e574073e4661edf63355f513d9fc94da236354a5f567.scope: Deactivated successfully.
Mar  1 05:26:48 np0005634532 nova_compute[257049]: 2026-03-01 10:26:48.217 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:26:48 np0005634532 podman[294915]: 2026-03-01 10:26:48.230438279 +0000 UTC m=+0.055156754 container create e10559f5120c1aec3d5d095760abe499c7645e99dd24e5fa712175143e6208eb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_poitras, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=squid, io.buildah.version=1.40.1, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250325, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Mar  1 05:26:48 np0005634532 systemd[1]: Started libpod-conmon-e10559f5120c1aec3d5d095760abe499c7645e99dd24e5fa712175143e6208eb.scope.
Mar  1 05:26:48 np0005634532 systemd[1]: Started libcrun container.
Mar  1 05:26:48 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20289cfec52abc394482a6d6ec3b989297b3133be51ca921d2b110423fa2feda/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Mar  1 05:26:48 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20289cfec52abc394482a6d6ec3b989297b3133be51ca921d2b110423fa2feda/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Mar  1 05:26:48 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20289cfec52abc394482a6d6ec3b989297b3133be51ca921d2b110423fa2feda/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Mar  1 05:26:48 np0005634532 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20289cfec52abc394482a6d6ec3b989297b3133be51ca921d2b110423fa2feda/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Mar  1 05:26:48 np0005634532 podman[294915]: 2026-03-01 10:26:48.299692094 +0000 UTC m=+0.124410599 container init e10559f5120c1aec3d5d095760abe499c7645e99dd24e5fa712175143e6208eb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_poitras, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid)
Mar  1 05:26:48 np0005634532 podman[294915]: 2026-03-01 10:26:48.206843372 +0000 UTC m=+0.031561947 image pull aade1b12b8e6196a39b8c83a7f707419487931732368729477a8c2bbcbca1d7c quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec
Mar  1 05:26:48 np0005634532 podman[294915]: 2026-03-01 10:26:48.30717721 +0000 UTC m=+0.131895695 container start e10559f5120c1aec3d5d095760abe499c7645e99dd24e5fa712175143e6208eb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_poitras, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Mar  1 05:26:48 np0005634532 podman[294915]: 2026-03-01 10:26:48.310718109 +0000 UTC m=+0.135436684 container attach e10559f5120c1aec3d5d095760abe499c7645e99dd24e5fa712175143e6208eb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_poitras, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.40.1, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=squid, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250325, ceph=True)
Mar  1 05:26:48 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1336: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 547 B/s rd, 0 op/s
Mar  1 05:26:48 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:26:48 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:26:48 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:26:48.757 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:26:48 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:26:48.906Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Mar  1 05:26:48 np0005634532 lvm[295006]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Mar  1 05:26:48 np0005634532 lvm[295006]: VG ceph_vg0 finished
Mar  1 05:26:48 np0005634532 sweet_poitras[294932]: {}
Mar  1 05:26:49 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:26:48 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Mar  1 05:26:49 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:26:49 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Mar  1 05:26:49 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:26:49 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Mar  1 05:26:49 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:26:49 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Mar  1 05:26:49 np0005634532 systemd[1]: libpod-e10559f5120c1aec3d5d095760abe499c7645e99dd24e5fa712175143e6208eb.scope: Deactivated successfully.
Mar  1 05:26:49 np0005634532 podman[294915]: 2026-03-01 10:26:49.02639992 +0000 UTC m=+0.851118405 container died e10559f5120c1aec3d5d095760abe499c7645e99dd24e5fa712175143e6208eb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_poitras, CEPH_REF=squid, io.buildah.version=1.40.1, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250325, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Mar  1 05:26:49 np0005634532 systemd[1]: libpod-e10559f5120c1aec3d5d095760abe499c7645e99dd24e5fa712175143e6208eb.scope: Consumed 1.016s CPU time.
Mar  1 05:26:49 np0005634532 systemd[1]: var-lib-containers-storage-overlay-20289cfec52abc394482a6d6ec3b989297b3133be51ca921d2b110423fa2feda-merged.mount: Deactivated successfully.
Mar  1 05:26:49 np0005634532 podman[294915]: 2026-03-01 10:26:49.074733144 +0000 UTC m=+0.899451639 container remove e10559f5120c1aec3d5d095760abe499c7645e99dd24e5fa712175143e6208eb (image=quay.io/ceph/ceph@sha256:7c69e59beaeea61ca714e71cb84ff6d5e533db7f1fd84143dd9ba6649a5fd2ec, name=sweet_poitras, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=squid, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=c92aebb279828e9c3c1f5d24613efca272649e62, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250325, org.label-schema.vendor=CentOS, io.buildah.version=1.40.1)
Mar  1 05:26:49 np0005634532 systemd[1]: libpod-conmon-e10559f5120c1aec3d5d095760abe499c7645e99dd24e5fa712175143e6208eb.scope: Deactivated successfully.
Mar  1 05:26:49 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0)
Mar  1 05:26:49 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 05:26:49 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0)
Mar  1 05:26:49 np0005634532 ceph-mon[75825]: log_channel(audit) log [INF] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 05:26:49 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 05:26:49 np0005634532 ceph-mon[75825]: from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' 
Mar  1 05:26:49 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:26:49 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:26:49 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:26:49.911 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:26:50 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1337: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 547 B/s rd, 0 op/s
Mar  1 05:26:50 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:26:50 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:26:50 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:26:50.759 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:26:50 np0005634532 nova_compute[257049]: 2026-03-01 10:26:50.881 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Mar  1 05:26:51 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:26:51 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:26:51 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:26:51.914 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:26:52 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:26:52 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1338: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 547 B/s rd, 0 op/s
Mar  1 05:26:52 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:26:52 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:26:52 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:26:52.761 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:26:52 np0005634532 systemd-logind[832]: New session 59 of user zuul.
Mar  1 05:26:52 np0005634532 systemd[1]: Started Session 59 of User zuul.
Mar  1 05:26:53 np0005634532 nova_compute[257049]: 2026-03-01 10:26:53.000 257053 DEBUG oslo_service.periodic_task [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Mar  1 05:26:53 np0005634532 nova_compute[257049]: 2026-03-01 10:26:53.217 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:26:53 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:26:53 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 05:26:53 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:26:53.916 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 05:26:53 np0005634532 nova_compute[257049]: 2026-03-01 10:26:53.976 257053 DEBUG oslo_service.periodic_task [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Mar  1 05:26:53 np0005634532 nova_compute[257049]: 2026-03-01 10:26:53.977 257053 DEBUG oslo_service.periodic_task [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Mar  1 05:26:54 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:26:53 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Mar  1 05:26:54 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:26:53 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Mar  1 05:26:54 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:26:53 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Mar  1 05:26:54 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:26:54 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Mar  1 05:26:54 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1339: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 821 B/s rd, 0 op/s
Mar  1 05:26:54 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:26:54 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:26:54 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:26:54.763 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:26:54 np0005634532 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.17451 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Mar  1 05:26:55 np0005634532 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.26914 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Mar  1 05:26:55 np0005634532 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.26744 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Mar  1 05:26:55 np0005634532 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.17457 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Mar  1 05:26:55 np0005634532 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.26926 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Mar  1 05:26:55 np0005634532 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.26750 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Mar  1 05:26:55 np0005634532 nova_compute[257049]: 2026-03-01 10:26:55.916 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:26:55 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:26:55 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:26:55 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:26:55.919 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:26:55 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "status"} v 0)
Mar  1 05:26:55 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3129364742' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Mar  1 05:26:56 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1340: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Mar  1 05:26:56 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:26:56 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:26:56 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:26:56.766 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:26:57 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: ::ffff:192.168.122.100 - - [01/Mar/2026:10:26:57] "GET /metrics HTTP/1.1" 200 48458 "" "Prometheus/2.51.0"
Mar  1 05:26:57 np0005634532 ceph-mgr[76134]: [prometheus INFO cherrypy.access.140604152982112] ::ffff:192.168.122.100 - - [01/Mar/2026:10:26:57] "GET /metrics HTTP/1.1" 200 48458 "" "Prometheus/2.51.0"
Mar  1 05:26:57 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:26:57 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:26:57.352Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Mar  1 05:26:57 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:26:57 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:26:57 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:26:57.922 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:26:57 np0005634532 nova_compute[257049]: 2026-03-01 10:26:57.976 257053 DEBUG oslo_service.periodic_task [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Mar  1 05:26:57 np0005634532 nova_compute[257049]: 2026-03-01 10:26:57.977 257053 DEBUG nova.compute.manager [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Mar  1 05:26:57 np0005634532 nova_compute[257049]: 2026-03-01 10:26:57.977 257053 DEBUG oslo_service.periodic_task [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Mar  1 05:26:58 np0005634532 nova_compute[257049]: 2026-03-01 10:26:58.014 257053 DEBUG oslo_concurrency.lockutils [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Mar  1 05:26:58 np0005634532 nova_compute[257049]: 2026-03-01 10:26:58.014 257053 DEBUG oslo_concurrency.lockutils [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Mar  1 05:26:58 np0005634532 nova_compute[257049]: 2026-03-01 10:26:58.015 257053 DEBUG oslo_concurrency.lockutils [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Mar  1 05:26:58 np0005634532 nova_compute[257049]: 2026-03-01 10:26:58.015 257053 DEBUG nova.compute.resource_tracker [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Mar  1 05:26:58 np0005634532 nova_compute[257049]: 2026-03-01 10:26:58.015 257053 DEBUG oslo_concurrency.processutils [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Mar  1 05:26:58 np0005634532 nova_compute[257049]: 2026-03-01 10:26:58.219 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:26:58 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Mar  1 05:26:58 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/368431087' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Mar  1 05:26:58 np0005634532 nova_compute[257049]: 2026-03-01 10:26:58.475 257053 DEBUG oslo_concurrency.processutils [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Mar  1 05:26:58 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1341: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Mar  1 05:26:58 np0005634532 nova_compute[257049]: 2026-03-01 10:26:58.632 257053 WARNING nova.virt.libvirt.driver [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Mar  1 05:26:58 np0005634532 nova_compute[257049]: 2026-03-01 10:26:58.633 257053 DEBUG nova.compute.resource_tracker [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4403MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Mar  1 05:26:58 np0005634532 nova_compute[257049]: 2026-03-01 10:26:58.633 257053 DEBUG oslo_concurrency.lockutils [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Mar  1 05:26:58 np0005634532 nova_compute[257049]: 2026-03-01 10:26:58.633 257053 DEBUG oslo_concurrency.lockutils [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Mar  1 05:26:58 np0005634532 ovs-vsctl[295406]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Mar  1 05:26:58 np0005634532 nova_compute[257049]: 2026-03-01 10:26:58.714 257053 DEBUG nova.compute.resource_tracker [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Mar  1 05:26:58 np0005634532 nova_compute[257049]: 2026-03-01 10:26:58.715 257053 DEBUG nova.compute.resource_tracker [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Mar  1 05:26:58 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:26:58 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 05:26:58 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:26:58.768 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 05:26:58 np0005634532 nova_compute[257049]: 2026-03-01 10:26:58.794 257053 DEBUG nova.scheduler.client.report [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Refreshing inventories for resource provider 018d246d-1e01-4168-9128-598c5501111b _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Mar  1 05:26:58 np0005634532 nova_compute[257049]: 2026-03-01 10:26:58.814 257053 DEBUG nova.scheduler.client.report [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Updating ProviderTree inventory for provider 018d246d-1e01-4168-9128-598c5501111b from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Mar  1 05:26:58 np0005634532 nova_compute[257049]: 2026-03-01 10:26:58.815 257053 DEBUG nova.compute.provider_tree [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Updating inventory in ProviderTree for provider 018d246d-1e01-4168-9128-598c5501111b with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Mar  1 05:26:58 np0005634532 nova_compute[257049]: 2026-03-01 10:26:58.842 257053 DEBUG nova.scheduler.client.report [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Refreshing aggregate associations for resource provider 018d246d-1e01-4168-9128-598c5501111b, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Mar  1 05:26:58 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Mar  1 05:26:58 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='client.? 192.168.122.102:0/543283653' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Mar  1 05:26:58 np0005634532 nova_compute[257049]: 2026-03-01 10:26:58.870 257053 DEBUG nova.scheduler.client.report [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Refreshing trait associations for resource provider 018d246d-1e01-4168-9128-598c5501111b, traits: COMPUTE_SECURITY_TPM_1_2,COMPUTE_ACCELERATORS,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_F16C,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_SSE42,HW_CPU_X86_SVM,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_NET_VIF_MODEL_E1000E,HW_CPU_X86_BMI2,HW_CPU_X86_BMI,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_SSSE3,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE2,HW_CPU_X86_CLMUL,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NODE,HW_CPU_X86_SSE,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_FMA3,HW_CPU_X86_AMD_SVM,COMPUTE_RESCUE_BFV,HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_ABM,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_TRUSTED_CERTS,COMPUTE_STORAGE_BUS_SATA,COMPUTE_GRAPHICS_MODEL_NONE,HW_CPU_X86_AESNI,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_MMX,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_AVX2,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_USB,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_SHA,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_SSE41,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_SSE4A,COMPUTE_SECURITY_TPM_2_0,COMPUTE_STORAGE_BUS_IDE,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_PCNET _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Mar  1 05:26:58 np0005634532 nova_compute[257049]: 2026-03-01 10:26:58.889 257053 DEBUG oslo_concurrency.processutils [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Mar  1 05:26:58 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:26:58.908Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Mar  1 05:26:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:26:58 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Mar  1 05:26:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:26:58 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Mar  1 05:26:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:26:58 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Mar  1 05:26:59 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:26:59 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Mar  1 05:26:59 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "format": "json"} v 0)
Mar  1 05:26:59 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2771457945' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Mar  1 05:26:59 np0005634532 nova_compute[257049]: 2026-03-01 10:26:59.343 257053 DEBUG oslo_concurrency.processutils [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Mar  1 05:26:59 np0005634532 nova_compute[257049]: 2026-03-01 10:26:59.349 257053 DEBUG nova.compute.provider_tree [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Inventory has not changed in ProviderTree for provider: 018d246d-1e01-4168-9128-598c5501111b update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Mar  1 05:26:59 np0005634532 nova_compute[257049]: 2026-03-01 10:26:59.369 257053 DEBUG nova.scheduler.client.report [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Inventory has not changed for provider 018d246d-1e01-4168-9128-598c5501111b based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Mar  1 05:26:59 np0005634532 nova_compute[257049]: 2026-03-01 10:26:59.371 257053 DEBUG nova.compute.resource_tracker [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Mar  1 05:26:59 np0005634532 nova_compute[257049]: 2026-03-01 10:26:59.371 257053 DEBUG oslo_concurrency.lockutils [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.738s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Mar  1 05:26:59 np0005634532 virtqemud[256058]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Mar  1 05:26:59 np0005634532 virtqemud[256058]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Mar  1 05:26:59 np0005634532 virtqemud[256058]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Mar  1 05:26:59 np0005634532 ceph-mds[97825]: mds.cephfs.compute-0.qvzeqa asok_command: cache status {prefix=cache status} (starting...)
Mar  1 05:26:59 np0005634532 ceph-mds[97825]: mds.cephfs.compute-0.qvzeqa Can't run that command on an inactive MDS!
Mar  1 05:26:59 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:26:59 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:26:59 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:26:59.925 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:27:00 np0005634532 ceph-mds[97825]: mds.cephfs.compute-0.qvzeqa asok_command: client ls {prefix=client ls} (starting...)
Mar  1 05:27:00 np0005634532 ceph-mds[97825]: mds.cephfs.compute-0.qvzeqa Can't run that command on an inactive MDS!
Mar  1 05:27:00 np0005634532 lvm[295762]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Mar  1 05:27:00 np0005634532 lvm[295762]: VG ceph_vg0 finished
Mar  1 05:27:00 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1342: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Mar  1 05:27:00 np0005634532 ceph-mds[97825]: mds.cephfs.compute-0.qvzeqa asok_command: damage ls {prefix=damage ls} (starting...)
Mar  1 05:27:00 np0005634532 ceph-mds[97825]: mds.cephfs.compute-0.qvzeqa Can't run that command on an inactive MDS!
Mar  1 05:27:00 np0005634532 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.17499 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Mar  1 05:27:00 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "report"} v 0)
Mar  1 05:27:00 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1043248485' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Mar  1 05:27:00 np0005634532 ceph-mds[97825]: mds.cephfs.compute-0.qvzeqa asok_command: dump loads {prefix=dump loads} (starting...)
Mar  1 05:27:00 np0005634532 ceph-mds[97825]: mds.cephfs.compute-0.qvzeqa Can't run that command on an inactive MDS!
Mar  1 05:27:00 np0005634532 ceph-mds[97825]: mds.cephfs.compute-0.qvzeqa asok_command: dump tree {prefix=dump tree,root=/} (starting...)
Mar  1 05:27:00 np0005634532 ceph-mds[97825]: mds.cephfs.compute-0.qvzeqa Can't run that command on an inactive MDS!
Mar  1 05:27:00 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:27:00 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:27:00 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:27:00.770 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:27:00 np0005634532 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.26959 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Mar  1 05:27:00 np0005634532 ceph-mds[97825]: mds.cephfs.compute-0.qvzeqa asok_command: dump_blocked_ops {prefix=dump_blocked_ops} (starting...)
Mar  1 05:27:00 np0005634532 ceph-mds[97825]: mds.cephfs.compute-0.qvzeqa Can't run that command on an inactive MDS!
Mar  1 05:27:00 np0005634532 nova_compute[257049]: 2026-03-01 10:27:00.976 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:27:00 np0005634532 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.17514 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Mar  1 05:27:01 np0005634532 ceph-mds[97825]: mds.cephfs.compute-0.qvzeqa asok_command: dump_historic_ops {prefix=dump_historic_ops} (starting...)
Mar  1 05:27:01 np0005634532 ceph-mds[97825]: mds.cephfs.compute-0.qvzeqa Can't run that command on an inactive MDS!
Mar  1 05:27:01 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0)
Mar  1 05:27:01 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1884048816' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Mar  1 05:27:01 np0005634532 ceph-mds[97825]: mds.cephfs.compute-0.qvzeqa asok_command: dump_historic_ops_by_duration {prefix=dump_historic_ops_by_duration} (starting...)
Mar  1 05:27:01 np0005634532 ceph-mds[97825]: mds.cephfs.compute-0.qvzeqa Can't run that command on an inactive MDS!
Mar  1 05:27:01 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "report"} v 0)
Mar  1 05:27:01 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='client.? ' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Mar  1 05:27:01 np0005634532 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.26971 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Mar  1 05:27:01 np0005634532 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.17529 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Mar  1 05:27:01 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config log"} v 0)
Mar  1 05:27:01 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2700029782' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Mar  1 05:27:01 np0005634532 ceph-mds[97825]: mds.cephfs.compute-0.qvzeqa asok_command: dump_ops_in_flight {prefix=dump_ops_in_flight} (starting...)
Mar  1 05:27:01 np0005634532 ceph-mds[97825]: mds.cephfs.compute-0.qvzeqa Can't run that command on an inactive MDS!
Mar  1 05:27:01 np0005634532 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.26786 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Mar  1 05:27:01 np0005634532 ceph-mds[97825]: mds.cephfs.compute-0.qvzeqa asok_command: get subtrees {prefix=get subtrees} (starting...)
Mar  1 05:27:01 np0005634532 ceph-mds[97825]: mds.cephfs.compute-0.qvzeqa Can't run that command on an inactive MDS!
Mar  1 05:27:01 np0005634532 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.26986 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Mar  1 05:27:01 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "report"} v 0)
Mar  1 05:27:01 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='client.? ' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Mar  1 05:27:01 np0005634532 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.17544 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Mar  1 05:27:01 np0005634532 ceph-mds[97825]: mds.cephfs.compute-0.qvzeqa asok_command: ops {prefix=ops} (starting...)
Mar  1 05:27:01 np0005634532 ceph-mds[97825]: mds.cephfs.compute-0.qvzeqa Can't run that command on an inactive MDS!
Mar  1 05:27:01 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config-key dump"} v 0)
Mar  1 05:27:01 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3449502576' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Mar  1 05:27:01 np0005634532 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.26801 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Mar  1 05:27:01 np0005634532 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.26998 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Mar  1 05:27:01 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:27:01 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:27:01 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:27:01.927 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:27:02 np0005634532 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.17568 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Mar  1 05:27:02 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "log last", "channel": "cephadm"} v 0)
Mar  1 05:27:02 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2564619874' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Mar  1 05:27:02 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:27:02 np0005634532 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.26813 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""]}]: dispatch
Mar  1 05:27:02 np0005634532 ceph-mds[97825]: mds.cephfs.compute-0.qvzeqa asok_command: session ls {prefix=session ls} (starting...)
Mar  1 05:27:02 np0005634532 ceph-mds[97825]: mds.cephfs.compute-0.qvzeqa Can't run that command on an inactive MDS!
Mar  1 05:27:02 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1343: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Mar  1 05:27:02 np0005634532 ceph-mds[97825]: mds.cephfs.compute-0.qvzeqa asok_command: status {prefix=status} (starting...)
Mar  1 05:27:02 np0005634532 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.17577 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Mar  1 05:27:02 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd blocklist ls", "format": "json"} v 0)
Mar  1 05:27:02 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='mgr.14694 192.168.122.100:0/2755254563' entity='mgr.compute-0.ebwufc' cmd=[{"prefix": "osd blocklist ls", "format": "json"}]: dispatch
Mar  1 05:27:02 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr dump"} v 0)
Mar  1 05:27:02 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1763329368' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Mar  1 05:27:02 np0005634532 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.27025 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Mar  1 05:27:02 np0005634532 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.26825 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Mar  1 05:27:02 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:27:02 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:27:02 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:27:02.772 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:27:02 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "features"} v 0)
Mar  1 05:27:02 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2139705419' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Mar  1 05:27:03 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata"} v 0)
Mar  1 05:27:03 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2828572412' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Mar  1 05:27:03 np0005634532 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.27037 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Mar  1 05:27:03 np0005634532 nova_compute[257049]: 2026-03-01 10:27:03.253 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:27:03 np0005634532 nova_compute[257049]: 2026-03-01 10:27:03.368 257053 DEBUG oslo_service.periodic_task [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Mar  1 05:27:03 np0005634532 nova_compute[257049]: 2026-03-01 10:27:03.368 257053 DEBUG oslo_service.periodic_task [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Mar  1 05:27:03 np0005634532 nova_compute[257049]: 2026-03-01 10:27:03.368 257053 DEBUG oslo_service.periodic_task [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Mar  1 05:27:03 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0)
Mar  1 05:27:03 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1300562901' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Mar  1 05:27:03 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module ls"} v 0)
Mar  1 05:27:03 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3214926083' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Mar  1 05:27:03 np0005634532 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.26843 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Mar  1 05:27:03 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "features"} v 0)
Mar  1 05:27:03 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='client.? ' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Mar  1 05:27:03 np0005634532 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.17616 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Mar  1 05:27:03 np0005634532 ceph-mgr[76134]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Mar  1 05:27:03 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: 2026-03-01T10:27:03.767+0000 7fe1142d4640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Mar  1 05:27:03 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services"} v 0)
Mar  1 05:27:03 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1758800144' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Mar  1 05:27:03 np0005634532 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.26855 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Mar  1 05:27:03 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:27:03 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 05:27:03 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:27:03.929 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 05:27:04 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:27:03 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Mar  1 05:27:04 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:27:03 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Mar  1 05:27:04 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:27:03 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Mar  1 05:27:04 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:27:04 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Mar  1 05:27:04 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} v 0)
Mar  1 05:27:04 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2692075978' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Mar  1 05:27:04 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr stat"} v 0)
Mar  1 05:27:04 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2398593295' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Mar  1 05:27:04 np0005634532 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.27082 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Mar  1 05:27:04 np0005634532 ceph-mgr[76134]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Mar  1 05:27:04 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: 2026-03-01T10:27:04.301+0000 7fe1142d4640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Mar  1 05:27:04 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "features"} v 0)
Mar  1 05:27:04 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='client.? ' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Mar  1 05:27:04 np0005634532 podman[296423]: 2026-03-01 10:27:04.464807599 +0000 UTC m=+0.147744568 container health_status eba5e7c5bf1cb0694498c3b5d0f9b901dec36f2482ec40aef98fed1e15d9f18d (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:bf1f44b6bfa655654f556ae3adca2582892e72550d916a5b0a8dbacb0e210bbe, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20260223, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '08b9467e1b7e95537191fb2fa6825d176e65d7d6be048e9a0925db064969bbde-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:bf1f44b6bfa655654f556ae3adca2582892e72550d916a5b0a8dbacb0e210bbe', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.43.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Mar  1 05:27:04 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1344: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Mar  1 05:27:04 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} v 0)
Mar  1 05:27:04 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2801939956' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Mar  1 05:27:04 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:27:04 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:27:04 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:27:04.774 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:27:04 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr versions"} v 0)
Mar  1 05:27:04 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1347263885' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Mar  1 05:27:04 np0005634532 nova_compute[257049]: 2026-03-01 10:27:04.976 257053 DEBUG oslo_service.periodic_task [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Mar  1 05:27:04 np0005634532 nova_compute[257049]: 2026-03-01 10:27:04.976 257053 DEBUG nova.compute.manager [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Mar  1 05:27:04 np0005634532 nova_compute[257049]: 2026-03-01 10:27:04.976 257053 DEBUG nova.compute.manager [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Mar  1 05:27:04 np0005634532 nova_compute[257049]: 2026-03-01 10:27:04.997 257053 DEBUG nova.compute.manager [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Mar  1 05:27:05 np0005634532 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.26897 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Mar  1 05:27:05 np0005634532 ceph-mgr[76134]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Mar  1 05:27:05 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: 2026-03-01T10:27:05.106+0000 7fe1142d4640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Mar  1 05:27:05 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr dump"} v 0)
Mar  1 05:27:05 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1020951383' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Mar  1 05:27:05 np0005634532 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.17661 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Mar  1 05:27:05 np0005634532 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.27118 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Mar  1 05:27:05 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata"} v 0)
Mar  1 05:27:05 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/973811167' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Mar  1 05:27:05 np0005634532 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.17676 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84557824 unmapped: 3481600 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 137 ms_handle_reset con 0x55d023758800 session 0x55d025d54960
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84557824 unmapped: 3481600 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84557824 unmapped: 3481600 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84557824 unmapped: 3481600 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 969781 data_alloc: 218103808 data_used: 135168
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84557824 unmapped: 3481600 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84557824 unmapped: 3481600 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84557824 unmapped: 3481600 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84557824 unmapped: 3481600 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84557824 unmapped: 3481600 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 969781 data_alloc: 218103808 data_used: 135168
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84557824 unmapped: 3481600 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84557824 unmapped: 3481600 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84557824 unmapped: 3481600 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.769716263s of 14.783215523s, submitted: 2
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84566016 unmapped: 3473408 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84574208 unmapped: 3465216 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 969913 data_alloc: 218103808 data_used: 135168
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84574208 unmapped: 3465216 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84574208 unmapped: 3465216 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84574208 unmapped: 3465216 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84574208 unmapped: 3465216 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84574208 unmapped: 3465216 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 969913 data_alloc: 218103808 data_used: 135168
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84574208 unmapped: 3465216 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84574208 unmapped: 3465216 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84574208 unmapped: 3465216 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84574208 unmapped: 3465216 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84574208 unmapped: 3465216 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 969322 data_alloc: 218103808 data_used: 135168
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84574208 unmapped: 3465216 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84574208 unmapped: 3465216 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84574208 unmapped: 3465216 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84574208 unmapped: 3465216 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84574208 unmapped: 3465216 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 17.095451355s of 17.105062485s, submitted: 2
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 969190 data_alloc: 218103808 data_used: 135168
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84582400 unmapped: 3457024 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84582400 unmapped: 3457024 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84582400 unmapped: 3457024 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84582400 unmapped: 3457024 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84582400 unmapped: 3457024 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 969190 data_alloc: 218103808 data_used: 135168
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84582400 unmapped: 3457024 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84582400 unmapped: 3457024 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84582400 unmapped: 3457024 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84582400 unmapped: 3457024 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 137 ms_handle_reset con 0x55d022c0b400 session 0x55d02642b4a0
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 137 ms_handle_reset con 0x55d024642000 session 0x55d0269c2d20
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84582400 unmapped: 3457024 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 969190 data_alloc: 218103808 data_used: 135168
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84582400 unmapped: 3457024 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84582400 unmapped: 3457024 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84590592 unmapped: 3448832 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84590592 unmapped: 3448832 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84590592 unmapped: 3448832 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 969190 data_alloc: 218103808 data_used: 135168
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84598784 unmapped: 3440640 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84598784 unmapped: 3440640 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84598784 unmapped: 3440640 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84598784 unmapped: 3440640 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 19.799724579s of 19.803125381s, submitted: 1
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84606976 unmapped: 3432448 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 969322 data_alloc: 218103808 data_used: 135168
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84606976 unmapped: 3432448 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84606976 unmapped: 3432448 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84606976 unmapped: 3432448 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84606976 unmapped: 3432448 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84606976 unmapped: 3432448 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 970834 data_alloc: 218103808 data_used: 135168
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84606976 unmapped: 3432448 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84606976 unmapped: 3432448 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84606976 unmapped: 3432448 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84606976 unmapped: 3432448 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84606976 unmapped: 3432448 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 970834 data_alloc: 218103808 data_used: 135168
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84606976 unmapped: 3432448 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84606976 unmapped: 3432448 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84606976 unmapped: 3432448 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84606976 unmapped: 3432448 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84615168 unmapped: 3424256 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 970834 data_alloc: 218103808 data_used: 135168
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.368650436s of 15.730206490s, submitted: 2
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84615168 unmapped: 3424256 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84615168 unmapped: 3424256 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84615168 unmapped: 3424256 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84615168 unmapped: 3424256 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84615168 unmapped: 3424256 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 970702 data_alloc: 218103808 data_used: 135168
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 137 ms_handle_reset con 0x55d0241ee400 session 0x55d0269c3680
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 137 ms_handle_reset con 0x55d0232d9000 session 0x55d025aa0780
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84615168 unmapped: 3424256 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84615168 unmapped: 3424256 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84615168 unmapped: 3424256 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84615168 unmapped: 3424256 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84615168 unmapped: 3424256 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 970702 data_alloc: 218103808 data_used: 135168
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84615168 unmapped: 3424256 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84615168 unmapped: 3424256 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84615168 unmapped: 3424256 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84615168 unmapped: 3424256 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84615168 unmapped: 3424256 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 970702 data_alloc: 218103808 data_used: 135168
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84615168 unmapped: 3424256 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.969473839s of 15.973288536s, submitted: 1
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84615168 unmapped: 3424256 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3416064 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3416064 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3416064 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 972346 data_alloc: 218103808 data_used: 135168
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3416064 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3416064 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3416064 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3416064 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3416064 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 972346 data_alloc: 218103808 data_used: 135168
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3416064 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3416064 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3416064 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3416064 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3416064 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 971755 data_alloc: 218103808 data_used: 135168
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3416064 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3416064 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3416064 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3416064 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 17.940380096s of 17.952461243s, submitted: 3
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3416064 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 971623 data_alloc: 218103808 data_used: 135168
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3416064 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3416064 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84623360 unmapped: 3416064 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84631552 unmapped: 3407872 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84631552 unmapped: 3407872 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 971623 data_alloc: 218103808 data_used: 135168
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84631552 unmapped: 3407872 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84631552 unmapped: 3407872 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84631552 unmapped: 3407872 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84631552 unmapped: 3407872 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84631552 unmapped: 3407872 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 971623 data_alloc: 218103808 data_used: 135168
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84631552 unmapped: 3407872 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84631552 unmapped: 3407872 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84631552 unmapped: 3407872 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84631552 unmapped: 3407872 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84631552 unmapped: 3407872 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 971623 data_alloc: 218103808 data_used: 135168
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84631552 unmapped: 3407872 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84639744 unmapped: 3399680 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84639744 unmapped: 3399680 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84639744 unmapped: 3399680 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84639744 unmapped: 3399680 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 971623 data_alloc: 218103808 data_used: 135168
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84639744 unmapped: 3399680 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84639744 unmapped: 3399680 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84639744 unmapped: 3399680 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84639744 unmapped: 3399680 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84647936 unmapped: 3391488 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 971623 data_alloc: 218103808 data_used: 135168
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84647936 unmapped: 3391488 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84647936 unmapped: 3391488 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84647936 unmapped: 3391488 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84647936 unmapped: 3391488 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84647936 unmapped: 3391488 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 971623 data_alloc: 218103808 data_used: 135168
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84647936 unmapped: 3391488 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84647936 unmapped: 3391488 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84647936 unmapped: 3391488 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84647936 unmapped: 3391488 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84647936 unmapped: 3391488 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 971623 data_alloc: 218103808 data_used: 135168
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84647936 unmapped: 3391488 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84647936 unmapped: 3391488 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84647936 unmapped: 3391488 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84647936 unmapped: 3391488 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84647936 unmapped: 3391488 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 971623 data_alloc: 218103808 data_used: 135168
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84647936 unmapped: 3391488 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84647936 unmapped: 3391488 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84647936 unmapped: 3391488 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84647936 unmapped: 3391488 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84647936 unmapped: 3391488 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 971623 data_alloc: 218103808 data_used: 135168
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84647936 unmapped: 3391488 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84647936 unmapped: 3391488 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84647936 unmapped: 3391488 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84647936 unmapped: 3391488 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84656128 unmapped: 3383296 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 971623 data_alloc: 218103808 data_used: 135168
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84656128 unmapped: 3383296 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84656128 unmapped: 3383296 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84656128 unmapped: 3383296 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84664320 unmapped: 3375104 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84664320 unmapped: 3375104 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 971623 data_alloc: 218103808 data_used: 135168
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84664320 unmapped: 3375104 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 137 ms_handle_reset con 0x55d0232d9000 session 0x55d02694fc20
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84664320 unmapped: 3375104 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84664320 unmapped: 3375104 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84664320 unmapped: 3375104 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84664320 unmapped: 3375104 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 971623 data_alloc: 218103808 data_used: 135168
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84664320 unmapped: 3375104 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84664320 unmapped: 3375104 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84664320 unmapped: 3375104 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 137 ms_handle_reset con 0x55d02599d800 session 0x55d02694e5a0
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 137 ms_handle_reset con 0x55d0241eec00 session 0x55d02642a960
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84664320 unmapped: 3375104 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84664320 unmapped: 3375104 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 971623 data_alloc: 218103808 data_used: 135168
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84664320 unmapped: 3375104 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84664320 unmapped: 3375104 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 68.266105652s of 68.270225525s, submitted: 1
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84664320 unmapped: 3375104 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84664320 unmapped: 3375104 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84664320 unmapped: 3375104 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 971755 data_alloc: 218103808 data_used: 135168
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84672512 unmapped: 3366912 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84672512 unmapped: 3366912 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84672512 unmapped: 3366912 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84680704 unmapped: 3358720 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84688896 unmapped: 3350528 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 974911 data_alloc: 218103808 data_used: 135168
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84688896 unmapped: 3350528 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84688896 unmapped: 3350528 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84688896 unmapped: 3350528 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84688896 unmapped: 3350528 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84688896 unmapped: 3350528 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 974911 data_alloc: 218103808 data_used: 135168
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.691431999s of 13.773061752s, submitted: 4
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84688896 unmapped: 3350528 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84688896 unmapped: 3350528 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84688896 unmapped: 3350528 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84688896 unmapped: 3350528 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84688896 unmapped: 3350528 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 974188 data_alloc: 218103808 data_used: 135168
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84688896 unmapped: 3350528 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84688896 unmapped: 3350528 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84688896 unmapped: 3350528 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84688896 unmapped: 3350528 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84697088 unmapped: 3342336 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 973597 data_alloc: 218103808 data_used: 135168
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84705280 unmapped: 3334144 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84705280 unmapped: 3334144 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.385815620s of 11.396687508s, submitted: 3
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84705280 unmapped: 3334144 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84705280 unmapped: 3334144 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 137 ms_handle_reset con 0x55d023758800 session 0x55d024084780
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 137 ms_handle_reset con 0x55d0241ef800 session 0x55d02694ad20
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84705280 unmapped: 3334144 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 973465 data_alloc: 218103808 data_used: 135168
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84705280 unmapped: 3334144 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84705280 unmapped: 3334144 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84705280 unmapped: 3334144 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84705280 unmapped: 3334144 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84705280 unmapped: 3334144 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 973465 data_alloc: 218103808 data_used: 135168
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84705280 unmapped: 3334144 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84705280 unmapped: 3334144 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84705280 unmapped: 3334144 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84705280 unmapped: 3334144 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.788563728s of 12.791935921s, submitted: 1
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84705280 unmapped: 3334144 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 973597 data_alloc: 218103808 data_used: 135168
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84705280 unmapped: 3334144 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84705280 unmapped: 3334144 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84705280 unmapped: 3334144 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84713472 unmapped: 3325952 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84713472 unmapped: 3325952 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 975109 data_alloc: 218103808 data_used: 135168
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84713472 unmapped: 3325952 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84721664 unmapped: 3317760 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84721664 unmapped: 3317760 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84721664 unmapped: 3317760 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84721664 unmapped: 3317760 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 975109 data_alloc: 218103808 data_used: 135168
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84721664 unmapped: 3317760 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84721664 unmapped: 3317760 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84721664 unmapped: 3317760 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 137 ms_handle_reset con 0x55d022cbd400 session 0x55d0252c1e00
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.101604462s of 14.110105515s, submitted: 2
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84729856 unmapped: 3309568 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84729856 unmapped: 3309568 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 974977 data_alloc: 218103808 data_used: 135168
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84729856 unmapped: 3309568 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84729856 unmapped: 3309568 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84729856 unmapped: 3309568 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84729856 unmapped: 3309568 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84729856 unmapped: 3309568 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 974977 data_alloc: 218103808 data_used: 135168
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84729856 unmapped: 3309568 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84729856 unmapped: 3309568 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84729856 unmapped: 3309568 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84729856 unmapped: 3309568 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84746240 unmapped: 3293184 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 974977 data_alloc: 218103808 data_used: 135168
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84746240 unmapped: 3293184 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84746240 unmapped: 3293184 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84746240 unmapped: 3293184 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 137 ms_handle_reset con 0x55d0232d9000 session 0x55d0257d3c20
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84746240 unmapped: 3293184 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84746240 unmapped: 3293184 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 974977 data_alloc: 218103808 data_used: 135168
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84746240 unmapped: 3293184 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84746240 unmapped: 3293184 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84746240 unmapped: 3293184 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84746240 unmapped: 3293184 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84746240 unmapped: 3293184 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 974977 data_alloc: 218103808 data_used: 135168
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84746240 unmapped: 3293184 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84746240 unmapped: 3293184 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84746240 unmapped: 3293184 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84746240 unmapped: 3293184 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 25.731632233s of 25.737331390s, submitted: 1
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84746240 unmapped: 3293184 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 975109 data_alloc: 218103808 data_used: 135168
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84746240 unmapped: 3293184 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84746240 unmapped: 3293184 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84754432 unmapped: 3284992 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84754432 unmapped: 3284992 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84754432 unmapped: 3284992 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 976621 data_alloc: 218103808 data_used: 135168
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84754432 unmapped: 3284992 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84762624 unmapped: 3276800 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84762624 unmapped: 3276800 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84762624 unmapped: 3276800 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84762624 unmapped: 3276800 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 976030 data_alloc: 218103808 data_used: 135168
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84762624 unmapped: 3276800 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84762624 unmapped: 3276800 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84762624 unmapped: 3276800 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84762624 unmapped: 3276800 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.129507065s of 15.140318871s, submitted: 3
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84762624 unmapped: 3276800 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 975898 data_alloc: 218103808 data_used: 135168
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84762624 unmapped: 3276800 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84762624 unmapped: 3276800 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84762624 unmapped: 3276800 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84762624 unmapped: 3276800 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84762624 unmapped: 3276800 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 975898 data_alloc: 218103808 data_used: 135168
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84762624 unmapped: 3276800 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84762624 unmapped: 3276800 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84762624 unmapped: 3276800 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84762624 unmapped: 3276800 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 975898 data_alloc: 218103808 data_used: 135168
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84770816 unmapped: 3268608 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84770816 unmapped: 3268608 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84770816 unmapped: 3268608 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84770816 unmapped: 3268608 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84779008 unmapped: 3260416 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 975898 data_alloc: 218103808 data_used: 135168
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84779008 unmapped: 3260416 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84779008 unmapped: 3260416 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84779008 unmapped: 3260416 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84779008 unmapped: 3260416 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84779008 unmapped: 3260416 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 975898 data_alloc: 218103808 data_used: 135168
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84779008 unmapped: 3260416 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84779008 unmapped: 3260416 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84779008 unmapped: 3260416 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84779008 unmapped: 3260416 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84779008 unmapped: 3260416 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 975898 data_alloc: 218103808 data_used: 135168
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84779008 unmapped: 3260416 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84779008 unmapped: 3260416 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84779008 unmapped: 3260416 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84779008 unmapped: 3260416 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84779008 unmapped: 3260416 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 975898 data_alloc: 218103808 data_used: 135168
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84779008 unmapped: 3260416 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84779008 unmapped: 3260416 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84779008 unmapped: 3260416 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84787200 unmapped: 3252224 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84787200 unmapped: 3252224 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 975898 data_alloc: 218103808 data_used: 135168
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84787200 unmapped: 3252224 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84787200 unmapped: 3252224 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84787200 unmapped: 3252224 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 137 ms_handle_reset con 0x55d0241ee400 session 0x55d0269c3680
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 137 ms_handle_reset con 0x55d022c0b400 session 0x55d026793c20
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84787200 unmapped: 3252224 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84787200 unmapped: 3252224 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 975898 data_alloc: 218103808 data_used: 135168
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84787200 unmapped: 3252224 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84787200 unmapped: 3252224 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84787200 unmapped: 3252224 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84787200 unmapped: 3252224 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84787200 unmapped: 3252224 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 975898 data_alloc: 218103808 data_used: 135168
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84787200 unmapped: 3252224 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84787200 unmapped: 3252224 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84787200 unmapped: 3252224 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84787200 unmapped: 3252224 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 49.404827118s of 49.407947540s, submitted: 1
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84787200 unmapped: 3252224 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 976030 data_alloc: 218103808 data_used: 135168
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84787200 unmapped: 3252224 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84787200 unmapped: 3252224 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84787200 unmapped: 3252224 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84787200 unmapped: 3252224 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84787200 unmapped: 3252224 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 976030 data_alloc: 218103808 data_used: 135168
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84787200 unmapped: 3252224 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84795392 unmapped: 3244032 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84803584 unmapped: 3235840 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84803584 unmapped: 3235840 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84803584 unmapped: 3235840 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 977542 data_alloc: 218103808 data_used: 135168
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84803584 unmapped: 3235840 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84803584 unmapped: 3235840 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.359548569s of 12.369632721s, submitted: 2
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84811776 unmapped: 3227648 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84811776 unmapped: 3227648 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84828160 unmapped: 3211264 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 976951 data_alloc: 218103808 data_used: 135168
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84828160 unmapped: 3211264 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84828160 unmapped: 3211264 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84828160 unmapped: 3211264 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84828160 unmapped: 3211264 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84828160 unmapped: 3211264 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 976819 data_alloc: 218103808 data_used: 135168
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84828160 unmapped: 3211264 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84828160 unmapped: 3211264 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84828160 unmapped: 3211264 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84836352 unmapped: 3203072 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84836352 unmapped: 3203072 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 976819 data_alloc: 218103808 data_used: 135168
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84836352 unmapped: 3203072 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84836352 unmapped: 3203072 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84844544 unmapped: 3194880 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84844544 unmapped: 3194880 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84844544 unmapped: 3194880 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 976819 data_alloc: 218103808 data_used: 135168
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84844544 unmapped: 3194880 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84844544 unmapped: 3194880 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84844544 unmapped: 3194880 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84844544 unmapped: 3194880 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84844544 unmapped: 3194880 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 976819 data_alloc: 218103808 data_used: 135168
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84844544 unmapped: 3194880 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84844544 unmapped: 3194880 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84844544 unmapped: 3194880 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84844544 unmapped: 3194880 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84844544 unmapped: 3194880 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 976819 data_alloc: 218103808 data_used: 135168
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84844544 unmapped: 3194880 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84844544 unmapped: 3194880 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84844544 unmapped: 3194880 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84844544 unmapped: 3194880 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread fragmentation_score=0.000029 took=0.000054s
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84844544 unmapped: 3194880 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 976819 data_alloc: 218103808 data_used: 135168
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84844544 unmapped: 3194880 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84844544 unmapped: 3194880 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84844544 unmapped: 3194880 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84844544 unmapped: 3194880 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84844544 unmapped: 3194880 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 976819 data_alloc: 218103808 data_used: 135168
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84844544 unmapped: 3194880 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84844544 unmapped: 3194880 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84844544 unmapped: 3194880 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84844544 unmapped: 3194880 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84852736 unmapped: 3186688 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 976819 data_alloc: 218103808 data_used: 135168
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84860928 unmapped: 3178496 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84860928 unmapped: 3178496 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84860928 unmapped: 3178496 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84860928 unmapped: 3178496 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84860928 unmapped: 3178496 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 976819 data_alloc: 218103808 data_used: 135168
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84860928 unmapped: 3178496 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84860928 unmapped: 3178496 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84860928 unmapped: 3178496 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84860928 unmapped: 3178496 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84860928 unmapped: 3178496 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 976819 data_alloc: 218103808 data_used: 135168
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84860928 unmapped: 3178496 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84860928 unmapped: 3178496 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84860928 unmapped: 3178496 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84860928 unmapped: 3178496 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84860928 unmapped: 3178496 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 976819 data_alloc: 218103808 data_used: 135168
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84860928 unmapped: 3178496 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84860928 unmapped: 3178496 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84860928 unmapped: 3178496 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84860928 unmapped: 3178496 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84860928 unmapped: 3178496 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 976819 data_alloc: 218103808 data_used: 135168
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84860928 unmapped: 3178496 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84860928 unmapped: 3178496 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84860928 unmapped: 3178496 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84860928 unmapped: 3178496 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84860928 unmapped: 3178496 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 976819 data_alloc: 218103808 data_used: 135168
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84860928 unmapped: 3178496 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84860928 unmapped: 3178496 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84860928 unmapped: 3178496 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84860928 unmapped: 3178496 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84860928 unmapped: 3178496 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 976819 data_alloc: 218103808 data_used: 135168
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84860928 unmapped: 3178496 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84860928 unmapped: 3178496 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84860928 unmapped: 3178496 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84860928 unmapped: 3178496 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84860928 unmapped: 3178496 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 976819 data_alloc: 218103808 data_used: 135168
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84860928 unmapped: 3178496 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84860928 unmapped: 3178496 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84860928 unmapped: 3178496 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84860928 unmapped: 3178496 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84869120 unmapped: 3170304 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 976819 data_alloc: 218103808 data_used: 135168
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84869120 unmapped: 3170304 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84869120 unmapped: 3170304 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84869120 unmapped: 3170304 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84869120 unmapped: 3170304 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84877312 unmapped: 3162112 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 137 ms_handle_reset con 0x55d0268c0c00 session 0x55d0245f5680
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 137 ms_handle_reset con 0x55d025992400 session 0x55d02553dc20
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 976819 data_alloc: 218103808 data_used: 135168
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84877312 unmapped: 3162112 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84877312 unmapped: 3162112 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84877312 unmapped: 3162112 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84877312 unmapped: 3162112 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84877312 unmapped: 3162112 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 976819 data_alloc: 218103808 data_used: 135168
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84877312 unmapped: 3162112 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84877312 unmapped: 3162112 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84877312 unmapped: 3162112 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84877312 unmapped: 3162112 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84877312 unmapped: 3162112 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 976819 data_alloc: 218103808 data_used: 135168
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84877312 unmapped: 3162112 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 99.858451843s of 99.866027832s, submitted: 2
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84885504 unmapped: 3153920 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84885504 unmapped: 3153920 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84893696 unmapped: 3145728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84893696 unmapped: 3145728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 978463 data_alloc: 218103808 data_used: 135168
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84893696 unmapped: 3145728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84893696 unmapped: 3145728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84893696 unmapped: 3145728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84893696 unmapped: 3145728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84893696 unmapped: 3145728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979975 data_alloc: 218103808 data_used: 135168
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84893696 unmapped: 3145728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84893696 unmapped: 3145728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84893696 unmapped: 3145728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.084491730s of 12.096027374s, submitted: 3
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84893696 unmapped: 3145728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84893696 unmapped: 3145728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979384 data_alloc: 218103808 data_used: 135168
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84893696 unmapped: 3145728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84893696 unmapped: 3145728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84893696 unmapped: 3145728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84893696 unmapped: 3145728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84893696 unmapped: 3145728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979252 data_alloc: 218103808 data_used: 135168
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84893696 unmapped: 3145728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84893696 unmapped: 3145728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84893696 unmapped: 3145728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84893696 unmapped: 3145728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84893696 unmapped: 3145728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979252 data_alloc: 218103808 data_used: 135168
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84893696 unmapped: 3145728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84893696 unmapped: 3145728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84893696 unmapped: 3145728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84893696 unmapped: 3145728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84893696 unmapped: 3145728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979252 data_alloc: 218103808 data_used: 135168
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84893696 unmapped: 3145728 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 1200.1 total, 600.0 interval
Cumulative writes: 8728 writes, 34K keys, 8728 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
Cumulative WAL: 8728 writes, 1876 syncs, 4.65 writes per sync, written: 0.02 GB, 0.02 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 704 writes, 1104 keys, 704 commit groups, 1.0 writes per commit group, ingest: 0.36 MB, 0.00 MB/s
Interval WAL: 704 writes, 332 syncs, 2.12 writes per sync, written: 0.00 GB, 0.00 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.004       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 1200.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x55d021e81350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.8e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **

** Compaction Stats [m-0] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-0] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 1200.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x55d021e81350#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.8e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [m-0] **

** Compaction Stats [m-1] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-1] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 1200.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84901888 unmapped: 3137536 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84901888 unmapped: 3137536 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84901888 unmapped: 3137536 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84901888 unmapped: 3137536 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979252 data_alloc: 218103808 data_used: 135168
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84901888 unmapped: 3137536 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84901888 unmapped: 3137536 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84901888 unmapped: 3137536 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84901888 unmapped: 3137536 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84901888 unmapped: 3137536 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979252 data_alloc: 218103808 data_used: 135168
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84901888 unmapped: 3137536 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84901888 unmapped: 3137536 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84901888 unmapped: 3137536 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84901888 unmapped: 3137536 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84901888 unmapped: 3137536 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979252 data_alloc: 218103808 data_used: 135168
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84901888 unmapped: 3137536 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84901888 unmapped: 3137536 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84901888 unmapped: 3137536 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84901888 unmapped: 3137536 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84901888 unmapped: 3137536 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979252 data_alloc: 218103808 data_used: 135168
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84901888 unmapped: 3137536 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84901888 unmapped: 3137536 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84901888 unmapped: 3137536 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84901888 unmapped: 3137536 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84901888 unmapped: 3137536 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979252 data_alloc: 218103808 data_used: 135168
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84901888 unmapped: 3137536 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84901888 unmapped: 3137536 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84901888 unmapped: 3137536 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84901888 unmapped: 3137536 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84901888 unmapped: 3137536 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979252 data_alloc: 218103808 data_used: 135168
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84901888 unmapped: 3137536 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84901888 unmapped: 3137536 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84910080 unmapped: 3129344 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979252 data_alloc: 218103808 data_used: 135168
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84910080 unmapped: 3129344 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84910080 unmapped: 3129344 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979252 data_alloc: 218103808 data_used: 135168
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84910080 unmapped: 3129344 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84910080 unmapped: 3129344 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979252 data_alloc: 218103808 data_used: 135168
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84910080 unmapped: 3129344 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84910080 unmapped: 3129344 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84910080 unmapped: 3129344 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979252 data_alloc: 218103808 data_used: 135168
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84910080 unmapped: 3129344 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84910080 unmapped: 3129344 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979252 data_alloc: 218103808 data_used: 135168
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84910080 unmapped: 3129344 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84910080 unmapped: 3129344 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84926464 unmapped: 3112960 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979252 data_alloc: 218103808 data_used: 135168
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84926464 unmapped: 3112960 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84926464 unmapped: 3112960 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979252 data_alloc: 218103808 data_used: 135168
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84926464 unmapped: 3112960 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84926464 unmapped: 3112960 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84926464 unmapped: 3112960 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979252 data_alloc: 218103808 data_used: 135168
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84926464 unmapped: 3112960 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 91.027595520s of 91.035415649s, submitted: 2
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [1,1,0,1])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 84951040 unmapped: 3088384 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979252 data_alloc: 218103808 data_used: 135168
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 85213184 unmapped: 2826240 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 85303296 unmapped: 2736128 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 85303296 unmapped: 2736128 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 85303296 unmapped: 2736128 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979252 data_alloc: 218103808 data_used: 135168
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 85303296 unmapped: 2736128 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 85303296 unmapped: 2736128 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979252 data_alloc: 218103808 data_used: 135168
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 85303296 unmapped: 2736128 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 85303296 unmapped: 2736128 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 85303296 unmapped: 2736128 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979252 data_alloc: 218103808 data_used: 135168
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 85303296 unmapped: 2736128 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979252 data_alloc: 218103808 data_used: 135168
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 85303296 unmapped: 2736128 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 85303296 unmapped: 2736128 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 85303296 unmapped: 2736128 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979252 data_alloc: 218103808 data_used: 135168
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 85303296 unmapped: 2736128 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 85303296 unmapped: 2736128 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 137 ms_handle_reset con 0x55d02599d800 session 0x55d025277860
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979252 data_alloc: 218103808 data_used: 135168
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 85303296 unmapped: 2736128 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 85311488 unmapped: 2727936 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 137 ms_handle_reset con 0x55d022c0b400 session 0x55d02642b860
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 137 ms_handle_reset con 0x55d025992400 session 0x55d025d523c0
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 85311488 unmapped: 2727936 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 85311488 unmapped: 2727936 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979252 data_alloc: 218103808 data_used: 135168
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 85319680 unmapped: 2719744 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 85319680 unmapped: 2719744 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979252 data_alloc: 218103808 data_used: 135168
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 85319680 unmapped: 2719744 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 41.413551331s of 42.533706665s, submitted: 343
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 85319680 unmapped: 2719744 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 85319680 unmapped: 2719744 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 85336064 unmapped: 2703360 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979516 data_alloc: 218103808 data_used: 135168
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 85336064 unmapped: 2703360 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 981028 data_alloc: 218103808 data_used: 135168
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 85336064 unmapped: 2703360 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.067756653s of 11.079182625s, submitted: 3
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 85336064 unmapped: 2703360 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 85336064 unmapped: 2703360 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979846 data_alloc: 218103808 data_used: 135168
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 85336064 unmapped: 2703360 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 85336064 unmapped: 2703360 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 979714 data_alloc: 218103808 data_used: 135168
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 85336064 unmapped: 2703360 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 137 heartbeat osd_stat(store_statfs(0x4fc66b000/0x0/0x4ffc00000, data 0xf28e4/0x1a1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 85336064 unmapped: 2703360 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 137 handle_osd_map epochs [138,138], i have 137, src has [1,138]
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.885664940s of 10.899056435s, submitted: 4
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 85336064 unmapped: 2703360 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 138 handle_osd_map epochs [138,139], i have 138, src has [1,139]
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 139 heartbeat osd_stat(store_statfs(0x4fc667000/0x0/0x4ffc00000, data 0xf49d0/0x1a4000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 85360640 unmapped: 2678784 heap: 88039424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 139 handle_osd_map epochs [140,140], i have 139, src has [1,140]
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 140 ms_handle_reset con 0x55d025d6a400 session 0x55d02571e1e0
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1020537 data_alloc: 218103808 data_used: 135168
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 85442560 unmapped: 11911168 heap: 97353728 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 85557248 unmapped: 20193280 heap: 105750528 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 140 ms_handle_reset con 0x55d026947400 session 0x55d0244a2000
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 140 handle_osd_map epochs [141,141], i have 140, src has [1,141]
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 85557248 unmapped: 20193280 heap: 105750528 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 141 heartbeat osd_stat(store_statfs(0x4fb57d000/0x0/0x4ffc00000, data 0x11dad20/0x128d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 85565440 unmapped: 20185088 heap: 105750528 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1107019 data_alloc: 218103808 data_used: 139264
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 85565440 unmapped: 20185088 heap: 105750528 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 141 handle_osd_map epochs [141,142], i have 141, src has [1,142]
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 85581824 unmapped: 20168704 heap: 105750528 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fb57b000/0x0/0x4ffc00000, data 0x11dccf2/0x1290000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1109181 data_alloc: 218103808 data_used: 139264
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 85581824 unmapped: 20168704 heap: 105750528 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fb57b000/0x0/0x4ffc00000, data 0x11dccf2/0x1290000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 85581824 unmapped: 20168704 heap: 105750528 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1109181 data_alloc: 218103808 data_used: 139264
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 85581824 unmapped: 20168704 heap: 105750528 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fb57b000/0x0/0x4ffc00000, data 0x11dccf2/0x1290000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 85590016 unmapped: 20160512 heap: 105750528 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1109181 data_alloc: 218103808 data_used: 139264
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 85590016 unmapped: 20160512 heap: 105750528 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fb57b000/0x0/0x4ffc00000, data 0x11dccf2/0x1290000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 85590016 unmapped: 20160512 heap: 105750528 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 85598208 unmapped: 20152320 heap: 105750528 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1109181 data_alloc: 218103808 data_used: 139264
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fb57b000/0x0/0x4ffc00000, data 0x11dccf2/0x1290000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 85598208 unmapped: 20152320 heap: 105750528 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fb57b000/0x0/0x4ffc00000, data 0x11dccf2/0x1290000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 85598208 unmapped: 20152320 heap: 105750528 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fb57b000/0x0/0x4ffc00000, data 0x11dccf2/0x1290000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 85598208 unmapped: 20152320 heap: 105750528 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1109181 data_alloc: 218103808 data_used: 139264
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 85598208 unmapped: 20152320 heap: 105750528 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fb57b000/0x0/0x4ffc00000, data 0x11dccf2/0x1290000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 85598208 unmapped: 20152320 heap: 105750528 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1109181 data_alloc: 218103808 data_used: 139264
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 85598208 unmapped: 20152320 heap: 105750528 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fb57b000/0x0/0x4ffc00000, data 0x11dccf2/0x1290000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1109181 data_alloc: 218103808 data_used: 139264
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 85598208 unmapped: 20152320 heap: 105750528 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 43.782966614s of 43.987041473s, submitted: 44
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 142 ms_handle_reset con 0x55d022c0b400 session 0x55d025d52960
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 142 ms_handle_reset con 0x55d025992400 session 0x55d0251dc3c0
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 90136576 unmapped: 15613952 heap: 105750528 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 142 heartbeat osd_stat(store_statfs(0x4fb57c000/0x0/0x4ffc00000, data 0x11dccf2/0x1290000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 90136576 unmapped: 15613952 heap: 105750528 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 142 handle_osd_map epochs [143,143], i have 142, src has [1,143]
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 90152960 unmapped: 15597568 heap: 105750528 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 143 handle_osd_map epochs [143,144], i have 143, src has [1,144]
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 144 ms_handle_reset con 0x55d02599d800 session 0x55d0239ba000
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 144 ms_handle_reset con 0x55d025d6a400 session 0x55d0245f54a0
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 144 ms_handle_reset con 0x55d026946000 session 0x55d0257d2f00
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1136514 data_alloc: 218103808 data_used: 4792320
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 144 ms_handle_reset con 0x55d022c0b400 session 0x55d025aa0d20
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 144 ms_handle_reset con 0x55d025992400 session 0x55d025ada5a0
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 91324416 unmapped: 14426112 heap: 105750528 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 144 ms_handle_reset con 0x55d02599d800 session 0x55d02642a5a0
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 91324416 unmapped: 14426112 heap: 105750528 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 91324416 unmapped: 14426112 heap: 105750528 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 144 heartbeat osd_stat(store_statfs(0x4fb4ed000/0x0/0x4ffc00000, data 0x1268f1e/0x131e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 144 ms_handle_reset con 0x55d025d6a400 session 0x55d025d52000
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 91324416 unmapped: 14426112 heap: 105750528 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 144 ms_handle_reset con 0x55d02678bc00 session 0x55d0269c30e0
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 144 ms_handle_reset con 0x55d0268c0c00 session 0x55d02595c960
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 144 ms_handle_reset con 0x55d0232d9000 session 0x55d026376f00
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 144 ms_handle_reset con 0x55d022c0b400 session 0x55d0267d10e0
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 91127808 unmapped: 14622720 heap: 105750528 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1138521 data_alloc: 218103808 data_used: 4792320
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 91152384 unmapped: 14598144 heap: 105750528 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 91947008 unmapped: 13803520 heap: 105750528 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 91947008 unmapped: 13803520 heap: 105750528 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 144 handle_osd_map epochs [145,145], i have 144, src has [1,145]
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.072201729s of 11.222368240s, submitted: 37
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4fb4ed000/0x0/0x4ffc00000, data 0x1268f2e/0x131f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 91963392 unmapped: 13787136 heap: 105750528 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 91963392 unmapped: 13787136 heap: 105750528 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4fb4e9000/0x0/0x4ffc00000, data 0x126af00/0x1322000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1146375 data_alloc: 218103808 data_used: 5349376
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 91963392 unmapped: 13787136 heap: 105750528 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 91963392 unmapped: 13787136 heap: 105750528 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 91963392 unmapped: 13787136 heap: 105750528 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 91963392 unmapped: 13787136 heap: 105750528 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4fb4e9000/0x0/0x4ffc00000, data 0x126af00/0x1322000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 91963392 unmapped: 13787136 heap: 105750528 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1146507 data_alloc: 218103808 data_used: 5349376
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 92299264 unmapped: 13451264 heap: 105750528 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4faf6e000/0x0/0x4ffc00000, data 0x17e0f00/0x1898000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 95428608 unmapped: 10321920 heap: 105750528 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4faf66000/0x0/0x4ffc00000, data 0x17e6f00/0x189e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x33ef9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 98402304 unmapped: 7348224 heap: 105750528 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 98402304 unmapped: 7348224 heap: 105750528 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 98402304 unmapped: 7348224 heap: 105750528 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1195887 data_alloc: 218103808 data_used: 5496832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 98402304 unmapped: 7348224 heap: 105750528 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9db2000/0x0/0x4ffc00000, data 0x17f4f00/0x18ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 98402304 unmapped: 7348224 heap: 105750528 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9db2000/0x0/0x4ffc00000, data 0x17f4f00/0x18ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.400364876s of 13.582426071s, submitted: 83
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 98402304 unmapped: 7348224 heap: 105750528 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 98402304 unmapped: 7348224 heap: 105750528 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 98402304 unmapped: 7348224 heap: 105750528 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1197399 data_alloc: 218103808 data_used: 5496832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 98402304 unmapped: 7348224 heap: 105750528 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 98402304 unmapped: 7348224 heap: 105750528 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9db2000/0x0/0x4ffc00000, data 0x17f4f00/0x18ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 98402304 unmapped: 7348224 heap: 105750528 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 97558528 unmapped: 8192000 heap: 105750528 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9dc0000/0x0/0x4ffc00000, data 0x17f4f00/0x18ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 97558528 unmapped: 8192000 heap: 105750528 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9dc0000/0x0/0x4ffc00000, data 0x17f4f00/0x18ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1189384 data_alloc: 218103808 data_used: 5496832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 97558528 unmapped: 8192000 heap: 105750528 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9dc0000/0x0/0x4ffc00000, data 0x17f4f00/0x18ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 97558528 unmapped: 8192000 heap: 105750528 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9dc0000/0x0/0x4ffc00000, data 0x17f4f00/0x18ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 97558528 unmapped: 8192000 heap: 105750528 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 97558528 unmapped: 8192000 heap: 105750528 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9dc0000/0x0/0x4ffc00000, data 0x17f4f00/0x18ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 97558528 unmapped: 8192000 heap: 105750528 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1189252 data_alloc: 218103808 data_used: 5496832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 97558528 unmapped: 8192000 heap: 105750528 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 97558528 unmapped: 8192000 heap: 105750528 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 97558528 unmapped: 8192000 heap: 105750528 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9dc0000/0x0/0x4ffc00000, data 0x17f4f00/0x18ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d0263d7c00 session 0x55d02694bc20
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d0263d7800 session 0x55d02694e5a0
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d022c0b400 session 0x55d0257d2000
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 16.670677185s of 16.765102386s, submitted: 3
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d0232d9000 session 0x55d02396e780
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 97566720 unmapped: 8183808 heap: 105750528 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d0263d7c00 session 0x55d025d554a0
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d0268c0c00 session 0x55d025aa01e0
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 97828864 unmapped: 9543680 heap: 107372544 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d0263d7400 session 0x55d02694b860
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d022c0b400 session 0x55d02694b2c0
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1233037 data_alloc: 218103808 data_used: 5496832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 97837056 unmapped: 9535488 heap: 107372544 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 97837056 unmapped: 9535488 heap: 107372544 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 97837056 unmapped: 9535488 heap: 107372544 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9821000/0x0/0x4ffc00000, data 0x1d92f62/0x1e4b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9821000/0x0/0x4ffc00000, data 0x1d92f62/0x1e4b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 97902592 unmapped: 9469952 heap: 107372544 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 97902592 unmapped: 9469952 heap: 107372544 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1233037 data_alloc: 218103808 data_used: 5496832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 97910784 unmapped: 9461760 heap: 107372544 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 97910784 unmapped: 9461760 heap: 107372544 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d0232d9000 session 0x55d0252c0b40
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 97976320 unmapped: 9396224 heap: 107372544 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f97fd000/0x0/0x4ffc00000, data 0x1db6f62/0x1e6f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 97976320 unmapped: 9396224 heap: 107372544 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f97fd000/0x0/0x4ffc00000, data 0x1db6f62/0x1e6f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 97976320 unmapped: 9396224 heap: 107372544 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f97fd000/0x0/0x4ffc00000, data 0x1db6f62/0x1e6f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1267161 data_alloc: 234881024 data_used: 10264576
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 101949440 unmapped: 5423104 heap: 107372544 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f97fd000/0x0/0x4ffc00000, data 0x1db6f62/0x1e6f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 101949440 unmapped: 5423104 heap: 107372544 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f97fd000/0x0/0x4ffc00000, data 0x1db6f62/0x1e6f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 101982208 unmapped: 5390336 heap: 107372544 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f97fd000/0x0/0x4ffc00000, data 0x1db6f62/0x1e6f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 101982208 unmapped: 5390336 heap: 107372544 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 101982208 unmapped: 5390336 heap: 107372544 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f97fd000/0x0/0x4ffc00000, data 0x1db6f62/0x1e6f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1267161 data_alloc: 234881024 data_used: 10264576
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 101982208 unmapped: 5390336 heap: 107372544 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f97fd000/0x0/0x4ffc00000, data 0x1db6f62/0x1e6f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 102014976 unmapped: 5357568 heap: 107372544 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 102014976 unmapped: 5357568 heap: 107372544 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 102014976 unmapped: 5357568 heap: 107372544 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f97fd000/0x0/0x4ffc00000, data 0x1db6f62/0x1e6f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 102047744 unmapped: 5324800 heap: 107372544 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 21.693861008s of 21.804162979s, submitted: 36
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1306909 data_alloc: 234881024 data_used: 10526720
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 104513536 unmapped: 3915776 heap: 108429312 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 104054784 unmapped: 4374528 heap: 108429312 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f935c000/0x0/0x4ffc00000, data 0x2256f62/0x230f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 104054784 unmapped: 4374528 heap: 108429312 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 104054784 unmapped: 4374528 heap: 108429312 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d023758800 session 0x55d02694ed20
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d024642000 session 0x55d0267961e0
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 104054784 unmapped: 4374528 heap: 108429312 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f935c000/0x0/0x4ffc00000, data 0x2256f62/0x230f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1318345 data_alloc: 234881024 data_used: 11190272
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 104054784 unmapped: 4374528 heap: 108429312 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 104054784 unmapped: 4374528 heap: 108429312 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 104054784 unmapped: 4374528 heap: 108429312 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f935c000/0x0/0x4ffc00000, data 0x2256f62/0x230f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 104054784 unmapped: 4374528 heap: 108429312 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 104054784 unmapped: 4374528 heap: 108429312 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.011808395s of 10.120314598s, submitted: 48
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d0263d7c00 session 0x55d0239b81e0
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d0268c0c00 session 0x55d0245f4780
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1315489 data_alloc: 234881024 data_used: 11194368
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 100491264 unmapped: 7938048 heap: 108429312 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d022c0b400 session 0x55d026230f00
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 100491264 unmapped: 7938048 heap: 108429312 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9dbf000/0x0/0x4ffc00000, data 0x17f4f00/0x18ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 100499456 unmapped: 7929856 heap: 108429312 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9dbf000/0x0/0x4ffc00000, data 0x17f4f00/0x18ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9dbf000/0x0/0x4ffc00000, data 0x17f4f00/0x18ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 100499456 unmapped: 7929856 heap: 108429312 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 100499456 unmapped: 7929856 heap: 108429312 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1195521 data_alloc: 218103808 data_used: 5496832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 100499456 unmapped: 7929856 heap: 108429312 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 100499456 unmapped: 7929856 heap: 108429312 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9dbf000/0x0/0x4ffc00000, data 0x17f4f00/0x18ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d025992400 session 0x55d02642a3c0
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d02599d800 session 0x55d026792f00
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 100483072 unmapped: 7946240 heap: 108429312 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d023758800 session 0x55d02642a960
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9dc0000/0x0/0x4ffc00000, data 0x17f4f00/0x18ac000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 99713024 unmapped: 8716288 heap: 108429312 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4fa3d3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 99713024 unmapped: 8716288 heap: 108429312 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1142380 data_alloc: 218103808 data_used: 4792320
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 99713024 unmapped: 8716288 heap: 108429312 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 99713024 unmapped: 8716288 heap: 108429312 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 99713024 unmapped: 8716288 heap: 108429312 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 99713024 unmapped: 8716288 heap: 108429312 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4fa3d3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.356325150s of 14.717723846s, submitted: 57
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 99713024 unmapped: 8716288 heap: 108429312 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1141789 data_alloc: 218103808 data_used: 4792320
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 99713024 unmapped: 8716288 heap: 108429312 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 99713024 unmapped: 8716288 heap: 108429312 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4fa3d3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 99713024 unmapped: 8716288 heap: 108429312 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 99713024 unmapped: 8716288 heap: 108429312 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d025f07400 session 0x55d02595de00
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d0241ee400 session 0x55d0267d0000
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 99713024 unmapped: 8716288 heap: 108429312 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1141657 data_alloc: 218103808 data_used: 4792320
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 99713024 unmapped: 8716288 heap: 108429312 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4fa3d3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 99713024 unmapped: 8716288 heap: 108429312 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 99713024 unmapped: 8716288 heap: 108429312 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 99713024 unmapped: 8716288 heap: 108429312 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 99713024 unmapped: 8716288 heap: 108429312 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d022c0b400 session 0x55d0245f5860
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d025992400 session 0x55d0257d34a0
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d02599d800 session 0x55d02694fc20
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1141657 data_alloc: 218103808 data_used: 4792320
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 99713024 unmapped: 8716288 heap: 108429312 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.380292892s of 11.387769699s, submitted: 2
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d022c0b400 session 0x55d02694f680
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d0241ee400 session 0x55d0244a3a40
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d026405000 session 0x55d0257d3680
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d025d6a400 session 0x55d026377680
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4fa3d3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 99721216 unmapped: 8708096 heap: 108429312 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d025992400 session 0x55d0266beb40
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d022c0b400 session 0x55d0241bb860
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d0241ee400 session 0x55d0239b85a0
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d025d6a400 session 0x55d0239ba1e0
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d026405000 session 0x55d0241ba780
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d025f07400 session 0x55d02565e1e0
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 99917824 unmapped: 11673600 heap: 111591424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 99917824 unmapped: 11673600 heap: 111591424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9deb000/0x0/0x4ffc00000, data 0x17c9f00/0x1881000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d022c0b400 session 0x55d0252765a0
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 99917824 unmapped: 11673600 heap: 111591424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9deb000/0x0/0x4ffc00000, data 0x17c9f00/0x1881000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1199859 data_alloc: 218103808 data_used: 4792320
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9deb000/0x0/0x4ffc00000, data 0x17c9f00/0x1881000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 99917824 unmapped: 11673600 heap: 111591424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d025d6a400 session 0x55d025d54780
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 99917824 unmapped: 11673600 heap: 111591424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d026405000 session 0x55d0245f43c0
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d0268c0c00 session 0x55d025d54960
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 99360768 unmapped: 12230656 heap: 111591424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 99360768 unmapped: 12230656 heap: 111591424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9dc7000/0x0/0x4ffc00000, data 0x17edf00/0x18a5000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 99082240 unmapped: 12509184 heap: 111591424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1234283 data_alloc: 234881024 data_used: 9641984
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9dc7000/0x0/0x4ffc00000, data 0x17edf00/0x18a5000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 101040128 unmapped: 10551296 heap: 111591424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 101040128 unmapped: 10551296 heap: 111591424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.860521317s of 11.145645142s, submitted: 27
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 101040128 unmapped: 10551296 heap: 111591424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9dc7000/0x0/0x4ffc00000, data 0x17edf00/0x18a5000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 101040128 unmapped: 10551296 heap: 111591424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9dc7000/0x0/0x4ffc00000, data 0x17edf00/0x18a5000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 101040128 unmapped: 10551296 heap: 111591424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1234567 data_alloc: 234881024 data_used: 9650176
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 101040128 unmapped: 10551296 heap: 111591424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 101040128 unmapped: 10551296 heap: 111591424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9dc7000/0x0/0x4ffc00000, data 0x17edf00/0x18a5000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 101040128 unmapped: 10551296 heap: 111591424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 101040128 unmapped: 10551296 heap: 111591424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9dc7000/0x0/0x4ffc00000, data 0x17edf00/0x18a5000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [0,0,2])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 105127936 unmapped: 6463488 heap: 111591424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d0241ee400 session 0x55d026793680
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1287077 data_alloc: 234881024 data_used: 10244096
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f97ad000/0x0/0x4ffc00000, data 0x1e01f00/0x1eb9000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 105357312 unmapped: 6234112 heap: 111591424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 106135552 unmapped: 5455872 heap: 111591424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.006898880s of 10.291774750s, submitted: 88
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9710000/0x0/0x4ffc00000, data 0x1e95f00/0x1f4d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 106168320 unmapped: 5423104 heap: 111591424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 106168320 unmapped: 5423104 heap: 111591424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 105259008 unmapped: 6332416 heap: 111591424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1294045 data_alloc: 234881024 data_used: 10051584
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 105259008 unmapped: 6332416 heap: 111591424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 105259008 unmapped: 6332416 heap: 111591424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f971f000/0x0/0x4ffc00000, data 0x1e95f00/0x1f4d000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 105398272 unmapped: 6193152 heap: 111591424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f96fe000/0x0/0x4ffc00000, data 0x1eb6f00/0x1f6e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 105398272 unmapped: 6193152 heap: 111591424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 105398272 unmapped: 6193152 heap: 111591424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1294969 data_alloc: 234881024 data_used: 10051584
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 105398272 unmapped: 6193152 heap: 111591424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 105398272 unmapped: 6193152 heap: 111591424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f96fe000/0x0/0x4ffc00000, data 0x1eb6f00/0x1f6e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 105398272 unmapped: 6193152 heap: 111591424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.323903084s of 11.456132889s, submitted: 6
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 105504768 unmapped: 6086656 heap: 111591424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 105504768 unmapped: 6086656 heap: 111591424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1296861 data_alloc: 234881024 data_used: 10051584
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 105504768 unmapped: 6086656 heap: 111591424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 105504768 unmapped: 6086656 heap: 111591424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 105504768 unmapped: 6086656 heap: 111591424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f96f8000/0x0/0x4ffc00000, data 0x1ebcf00/0x1f74000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 105504768 unmapped: 6086656 heap: 111591424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 105504768 unmapped: 6086656 heap: 111591424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1297870 data_alloc: 234881024 data_used: 10051584
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 105504768 unmapped: 6086656 heap: 111591424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f96f5000/0x0/0x4ffc00000, data 0x1ebff00/0x1f77000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 105512960 unmapped: 6078464 heap: 111591424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 105512960 unmapped: 6078464 heap: 111591424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 105512960 unmapped: 6078464 heap: 111591424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 105512960 unmapped: 6078464 heap: 111591424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1298174 data_alloc: 234881024 data_used: 10059776
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.626700401s of 11.654762268s, submitted: 7
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d024642000 session 0x55d024243a40
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d0263d7000 session 0x55d0257d34a0
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 105521152 unmapped: 6070272 heap: 111591424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f96f5000/0x0/0x4ffc00000, data 0x1ebff00/0x1f77000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d025d6a400 session 0x55d02553de00
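
ms_handle_reset fires when the messenger tears down an established connection and the OSD discards the attached session object. The con/session values are heap addresses, so the same con address reappearing later with a different session (0x55d024642000, 0x55d0263d7000, ...) just means the allocator reused the slot; at this rate the resets look like ordinary short-lived client sessions rather than a network fault. A hypothetical tally over the log (path and message format assumed from this capture):

    #!/usr/bin/env python3
    import re
    from collections import Counter

    resets = Counter()
    with open("/var/log/messages") as log:  # hypothetical path
        for line in log:
            m = re.search(r"ms_handle_reset con (0x[0-9a-f]+)", line)
            if m:
                resets[m.group(1)] += 1

    for con, count in resets.most_common():
        print(f"{con}: {count} resets")
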
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 102105088 unmapped: 9486336 heap: 111591424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 101621760 unmapped: 9969664 heap: 111591424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 101621760 unmapped: 9969664 heap: 111591424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4fa3d3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 101621760 unmapped: 9969664 heap: 111591424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4fa3d3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1158004 data_alloc: 218103808 data_used: 4792320
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 101621760 unmapped: 9969664 heap: 111591424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4fa3d3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 101621760 unmapped: 9969664 heap: 111591424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4fa3d3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 101621760 unmapped: 9969664 heap: 111591424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 101621760 unmapped: 9969664 heap: 111591424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 101621760 unmapped: 9969664 heap: 111591424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1158004 data_alloc: 218103808 data_used: 4792320
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 101621760 unmapped: 9969664 heap: 111591424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 101621760 unmapped: 9969664 heap: 111591424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4fa3d3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 101580800 unmapped: 10010624 heap: 111591424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 101580800 unmapped: 10010624 heap: 111591424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4fa3d3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 101580800 unmapped: 10010624 heap: 111591424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1158004 data_alloc: 218103808 data_used: 4792320
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 101580800 unmapped: 10010624 heap: 111591424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4fa3d3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d0232d9000 session 0x55d024084780
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 101580800 unmapped: 10010624 heap: 111591424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 101580800 unmapped: 10010624 heap: 111591424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4fa3d3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 101580800 unmapped: 10010624 heap: 111591424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 101580800 unmapped: 10010624 heap: 111591424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1158004 data_alloc: 218103808 data_used: 4792320
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 101580800 unmapped: 10010624 heap: 111591424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 101580800 unmapped: 10010624 heap: 111591424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 101580800 unmapped: 10010624 heap: 111591424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4fa3d3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 101580800 unmapped: 10010624 heap: 111591424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 101580800 unmapped: 10010624 heap: 111591424 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 24.700908661s of 24.738805771s, submitted: 16
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d026405000 session 0x55d0266beb40
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1179146 data_alloc: 218103808 data_used: 4792320
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 101588992 unmapped: 12173312 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4fa141000/0x0/0x4ffc00000, data 0x1474ef0/0x152b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 101588992 unmapped: 12173312 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 101588992 unmapped: 12173312 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4fa141000/0x0/0x4ffc00000, data 0x1474ef0/0x152b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 101588992 unmapped: 12173312 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d0232d9000 session 0x55d026993c20
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 101588992 unmapped: 12173312 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d024642000 session 0x55d026992960
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1179278 data_alloc: 218103808 data_used: 4792320
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4fa141000/0x0/0x4ffc00000, data 0x1474ef0/0x152b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d025d6a400 session 0x55d0251dd0e0
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 101613568 unmapped: 12148736 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d0263d7000 session 0x55d0239d4d20
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4fa11c000/0x0/0x4ffc00000, data 0x1498f13/0x1550000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 101408768 unmapped: 12353536 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 101457920 unmapped: 12304384 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4fa11c000/0x0/0x4ffc00000, data 0x1498f13/0x1550000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 101457920 unmapped: 12304384 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 101457920 unmapped: 12304384 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1203991 data_alloc: 218103808 data_used: 7491584
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 101457920 unmapped: 12304384 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.616955757s of 11.661013603s, submitted: 10
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 101457920 unmapped: 12304384 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4fa11c000/0x0/0x4ffc00000, data 0x1498f13/0x1550000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 101457920 unmapped: 12304384 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 101457920 unmapped: 12304384 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 101457920 unmapped: 12304384 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4fa11c000/0x0/0x4ffc00000, data 0x1498f13/0x1550000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1202761 data_alloc: 218103808 data_used: 7495680
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 101457920 unmapped: 12304384 heap: 113762304 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 107839488 unmapped: 7200768 heap: 115040256 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 107700224 unmapped: 7340032 heap: 115040256 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f96c8000/0x0/0x4ffc00000, data 0x1eecf13/0x1fa4000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 107700224 unmapped: 7340032 heap: 115040256 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 106496000 unmapped: 8544256 heap: 115040256 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f96c8000/0x0/0x4ffc00000, data 0x1eecf13/0x1fa4000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1282211 data_alloc: 218103808 data_used: 7958528
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 106496000 unmapped: 8544256 heap: 115040256 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 106496000 unmapped: 8544256 heap: 115040256 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 106496000 unmapped: 8544256 heap: 115040256 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 106496000 unmapped: 8544256 heap: 115040256 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f96c8000/0x0/0x4ffc00000, data 0x1eecf13/0x1fa4000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 106496000 unmapped: 8544256 heap: 115040256 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1282211 data_alloc: 218103808 data_used: 7958528
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 106496000 unmapped: 8544256 heap: 115040256 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 106496000 unmapped: 8544256 heap: 115040256 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 106504192 unmapped: 8536064 heap: 115040256 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 106504192 unmapped: 8536064 heap: 115040256 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f96c8000/0x0/0x4ffc00000, data 0x1eecf13/0x1fa4000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 106504192 unmapped: 8536064 heap: 115040256 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1282211 data_alloc: 218103808 data_used: 7958528
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 106504192 unmapped: 8536064 heap: 115040256 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 106504192 unmapped: 8536064 heap: 115040256 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 106504192 unmapped: 8536064 heap: 115040256 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f96c8000/0x0/0x4ffc00000, data 0x1eecf13/0x1fa4000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 106504192 unmapped: 8536064 heap: 115040256 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 106504192 unmapped: 8536064 heap: 115040256 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f96c8000/0x0/0x4ffc00000, data 0x1eecf13/0x1fa4000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 23.128217697s of 23.291627884s, submitted: 63
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d025a0a000 session 0x55d0262303c0
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1327211 data_alloc: 218103808 data_used: 7958528
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 106864640 unmapped: 15523840 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9115000/0x0/0x4ffc00000, data 0x249ff13/0x2557000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 106864640 unmapped: 15523840 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 106864640 unmapped: 15523840 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 106864640 unmapped: 15523840 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d026405000 session 0x55d0245f45a0
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 106864640 unmapped: 15523840 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1327211 data_alloc: 218103808 data_used: 7958528
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9115000/0x0/0x4ffc00000, data 0x249ff13/0x2557000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d0232d9000 session 0x55d025d85860
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 106864640 unmapped: 15523840 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d024642000 session 0x55d0266beb40
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 106872832 unmapped: 15515648 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d025d6a400 session 0x55d0266bfe00
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d0263d7000 session 0x55d024084780
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 107028480 unmapped: 15360000 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 107028480 unmapped: 15360000 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f90f0000/0x0/0x4ffc00000, data 0x24c3f23/0x257c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 110460928 unmapped: 11927552 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1365834 data_alloc: 234881024 data_used: 12726272
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f90f0000/0x0/0x4ffc00000, data 0x24c3f23/0x257c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 110493696 unmapped: 11894784 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 110493696 unmapped: 11894784 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 110493696 unmapped: 11894784 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 110493696 unmapped: 11894784 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 110493696 unmapped: 11894784 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.282575607s of 15.351916313s, submitted: 21
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1365966 data_alloc: 234881024 data_used: 12726272
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 110493696 unmapped: 11894784 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f90f0000/0x0/0x4ffc00000, data 0x24c3f23/0x257c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 110526464 unmapped: 11862016 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 110624768 unmapped: 11763712 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f90f0000/0x0/0x4ffc00000, data 0x24c3f23/0x257c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f90f0000/0x0/0x4ffc00000, data 0x24c3f23/0x257c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 111894528 unmapped: 10493952 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 110886912 unmapped: 11501568 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1394822 data_alloc: 234881024 data_used: 13090816
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 110960640 unmapped: 11427840 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 110960640 unmapped: 11427840 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f8e4f000/0x0/0x4ffc00000, data 0x2763f23/0x281c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 110960640 unmapped: 11427840 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f8e4f000/0x0/0x4ffc00000, data 0x2763f23/0x281c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 110960640 unmapped: 11427840 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 110960640 unmapped: 11427840 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1395895 data_alloc: 234881024 data_used: 13094912
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 110960640 unmapped: 11427840 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 110960640 unmapped: 11427840 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f8e4f000/0x0/0x4ffc00000, data 0x2763f23/0x281c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d0232d9000 session 0x55d02595cd20
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.433647156s of 12.555562019s, submitted: 42
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d024642000 session 0x55d0252c1a40
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d023a0ec00 session 0x55d025277860
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d0263d6c00 session 0x55d025d854a0
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 110968832 unmapped: 11419648 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d025a0a400 session 0x55d025adb4a0
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 108773376 unmapped: 13615104 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 108773376 unmapped: 13615104 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1291666 data_alloc: 218103808 data_used: 7958528
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 108773376 unmapped: 13615104 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d0263f0000 session 0x55d0239ba000
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d0268c0c00 session 0x55d025aa1a40
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 108773376 unmapped: 13615104 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d0232d9000 session 0x55d025d54b40
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f96c8000/0x0/0x4ffc00000, data 0x1eecf13/0x1fa4000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 106962944 unmapped: 15425536 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 106962944 unmapped: 15425536 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 106962944 unmapped: 15425536 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4fa3d2000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1175042 data_alloc: 218103808 data_used: 4792320
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 106962944 unmapped: 15425536 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 106962944 unmapped: 15425536 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 106962944 unmapped: 15425536 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.597115517s of 10.756122589s, submitted: 43
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 106962944 unmapped: 15425536 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 106962944 unmapped: 15425536 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1175174 data_alloc: 218103808 data_used: 4792320
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4fa3d2000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 106962944 unmapped: 15425536 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 106430464 unmapped: 15958016 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 106430464 unmapped: 15958016 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4fa3d3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 106430464 unmapped: 15958016 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 106430464 unmapped: 15958016 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1177906 data_alloc: 218103808 data_used: 4792320
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 106430464 unmapped: 15958016 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 106430464 unmapped: 15958016 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 106430464 unmapped: 15958016 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 106430464 unmapped: 15958016 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4fa3d3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 106430464 unmapped: 15958016 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1177315 data_alloc: 218103808 data_used: 4792320
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d024642000 session 0x55d026230960
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d0263d6c00 session 0x55d0239bb4a0
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d0232d9000 session 0x55d0266be1e0
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d024642000 session 0x55d0257d3c20
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.379542351s of 12.390668869s, submitted: 4
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 106430464 unmapped: 15958016 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d0263f0000 session 0x55d025725a40
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d0268c0c00 session 0x55d0244a3680
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d025a0a800 session 0x55d025aa1e00
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d0232d9000 session 0x55d026376d20
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d024642000 session 0x55d026376780
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 106430464 unmapped: 15958016 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9dd2000/0x0/0x4ffc00000, data 0x17e2f00/0x189a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9dd2000/0x0/0x4ffc00000, data 0x17e2f00/0x189a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 106430464 unmapped: 15958016 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 106430464 unmapped: 15958016 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 106430464 unmapped: 15958016 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1220087 data_alloc: 218103808 data_used: 4792320
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 106438656 unmapped: 15949824 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d0263f0000 session 0x55d026377e00
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d0268c0c00 session 0x55d0239ced20
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 106438656 unmapped: 15949824 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9dd2000/0x0/0x4ffc00000, data 0x17e2f00/0x189a000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d025a0ac00 session 0x55d0257d3a40
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d0232d9000 session 0x55d02595c3c0
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 106438656 unmapped: 15949824 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 106438656 unmapped: 15949824 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 108765184 unmapped: 13623296 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1265794 data_alloc: 234881024 data_used: 11087872
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 108765184 unmapped: 13623296 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d023a0ec00 session 0x55d0267972c0
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9dd0000/0x0/0x4ffc00000, data 0x17e2f33/0x189c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 108765184 unmapped: 13623296 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 108765184 unmapped: 13623296 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 108765184 unmapped: 13623296 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 108765184 unmapped: 13623296 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1265794 data_alloc: 234881024 data_used: 11087872
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9dd0000/0x0/0x4ffc00000, data 0x17e2f33/0x189c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 108765184 unmapped: 13623296 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 108765184 unmapped: 13623296 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 108765184 unmapped: 13623296 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 17.460260391s of 17.508676529s, submitted: 11
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 110329856 unmapped: 12058624 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 110329856 unmapped: 12058624 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1293816 data_alloc: 234881024 data_used: 11091968
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 110329856 unmapped: 12058624 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9a38000/0x0/0x4ffc00000, data 0x1b7af33/0x1c34000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9a38000/0x0/0x4ffc00000, data 0x1b7af33/0x1c34000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 110329856 unmapped: 12058624 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 110329856 unmapped: 12058624 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 110329856 unmapped: 12058624 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 110329856 unmapped: 12058624 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1293948 data_alloc: 234881024 data_used: 11091968
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 110338048 unmapped: 12050432 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 110338048 unmapped: 12050432 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9a38000/0x0/0x4ffc00000, data 0x1b7af33/0x1c34000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 110338048 unmapped: 12050432 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 110338048 unmapped: 12050432 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 110338048 unmapped: 12050432 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1295460 data_alloc: 234881024 data_used: 11091968
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 110338048 unmapped: 12050432 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 12.885341644s of 12.964467049s, submitted: 25
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9a38000/0x0/0x4ffc00000, data 0x1b7af33/0x1c34000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 110338048 unmapped: 12050432 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 110338048 unmapped: 12050432 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 110338048 unmapped: 12050432 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9a38000/0x0/0x4ffc00000, data 0x1b7af33/0x1c34000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 110338048 unmapped: 12050432 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1293942 data_alloc: 234881024 data_used: 11091968
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 110338048 unmapped: 12050432 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 110338048 unmapped: 12050432 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9a38000/0x0/0x4ffc00000, data 0x1b7af33/0x1c34000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 110338048 unmapped: 12050432 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9a38000/0x0/0x4ffc00000, data 0x1b7af33/0x1c34000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 110338048 unmapped: 12050432 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 110338048 unmapped: 12050432 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1293810 data_alloc: 234881024 data_used: 11091968
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9a38000/0x0/0x4ffc00000, data 0x1b7af33/0x1c34000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 110338048 unmapped: 12050432 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d026405000 session 0x55d02595d680
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d025d6a400 session 0x55d02396f4a0
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 110338048 unmapped: 12050432 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9a38000/0x0/0x4ffc00000, data 0x1b7af33/0x1c34000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 110338048 unmapped: 12050432 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 110338048 unmapped: 12050432 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 110338048 unmapped: 12050432 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.902921677s of 13.914952278s, submitted: 3
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d025a0b400 session 0x55d025aa1c20
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1327002 data_alloc: 234881024 data_used: 11091968
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 110567424 unmapped: 11821056 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 110567424 unmapped: 11821056 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9698000/0x0/0x4ffc00000, data 0x1f1af33/0x1fd4000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 110567424 unmapped: 11821056 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 110567424 unmapped: 11821056 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 110567424 unmapped: 11821056 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9698000/0x0/0x4ffc00000, data 0x1f1af33/0x1fd4000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1327002 data_alloc: 234881024 data_used: 11091968
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 110567424 unmapped: 11821056 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d022c0b400 session 0x55d0241bb680
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d0241ee400 session 0x55d0241bb860
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 110567424 unmapped: 11821056 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 110690304 unmapped: 11698176 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 112058368 unmapped: 10330112 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9698000/0x0/0x4ffc00000, data 0x1f1af33/0x1fd4000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 112058368 unmapped: 10330112 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1345374 data_alloc: 234881024 data_used: 13758464
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 112058368 unmapped: 10330112 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 112058368 unmapped: 10330112 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9698000/0x0/0x4ffc00000, data 0x1f1af33/0x1fd4000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 112058368 unmapped: 10330112 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.468021393s of 13.508556366s, submitted: 10
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9698000/0x0/0x4ffc00000, data 0x1f1af33/0x1fd4000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 112058368 unmapped: 10330112 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d025a0b000 session 0x55d0267974a0
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d0268c0c00 session 0x55d026796000
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 112058368 unmapped: 10330112 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1346214 data_alloc: 234881024 data_used: 13758464
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 112058368 unmapped: 10330112 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 112058368 unmapped: 10330112 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 115392512 unmapped: 6995968 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 115752960 unmapped: 6635520 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f8d07000/0x0/0x4ffc00000, data 0x28abf33/0x2965000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 115982336 unmapped: 6406144 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1426818 data_alloc: 234881024 data_used: 14209024
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 115982336 unmapped: 6406144 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 115982336 unmapped: 6406144 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f8cf6000/0x0/0x4ffc00000, data 0x28bbf33/0x2975000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [0,0,1])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 115982336 unmapped: 6406144 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 115982336 unmapped: 6406144 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f8cf6000/0x0/0x4ffc00000, data 0x28bbf33/0x2975000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 115982336 unmapped: 6406144 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1428330 data_alloc: 234881024 data_used: 14209024
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f8cf6000/0x0/0x4ffc00000, data 0x28bbf33/0x2975000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f8cf6000/0x0/0x4ffc00000, data 0x28bbf33/0x2975000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 115982336 unmapped: 6406144 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 115982336 unmapped: 6406144 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 115982336 unmapped: 6406144 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.083793640s of 15.273586273s, submitted: 71
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 115990528 unmapped: 6397952 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f8cf7000/0x0/0x4ffc00000, data 0x28bbf33/0x2975000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 115654656 unmapped: 6733824 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1422887 data_alloc: 234881024 data_used: 14209024
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 115654656 unmapped: 6733824 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d0232d9000 session 0x55d0251dde00
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 115654656 unmapped: 6733824 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d0232d9000 session 0x55d02694a960
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 112517120 unmapped: 9871360 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 112517120 unmapped: 9871360 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 112517120 unmapped: 9871360 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9a38000/0x0/0x4ffc00000, data 0x1b7af33/0x1c34000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1305051 data_alloc: 234881024 data_used: 11091968
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9a38000/0x0/0x4ffc00000, data 0x1b7af33/0x1c34000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 112517120 unmapped: 9871360 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9a38000/0x0/0x4ffc00000, data 0x1b7af33/0x1c34000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 112517120 unmapped: 9871360 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 112517120 unmapped: 9871360 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d024642000 session 0x55d025d85a40
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d0263f0000 session 0x55d026377680
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 9.733242989s of 10.052184105s, submitted: 14
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 108257280 unmapped: 14131200 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d022c0b400 session 0x55d0239d5a40
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 108257280 unmapped: 14131200 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1195886 data_alloc: 218103808 data_used: 4792320
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 108257280 unmapped: 14131200 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 108257280 unmapped: 14131200 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4fa3d2000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 108257280 unmapped: 14131200 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 108257280 unmapped: 14131200 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 108257280 unmapped: 14131200 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1195754 data_alloc: 218103808 data_used: 4792320
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 108257280 unmapped: 14131200 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 108257280 unmapped: 14131200 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4fa3d2000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 108257280 unmapped: 14131200 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 108257280 unmapped: 14131200 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 108257280 unmapped: 14131200 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1195754 data_alloc: 218103808 data_used: 4792320
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 108257280 unmapped: 14131200 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 108257280 unmapped: 14131200 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 108257280 unmapped: 14131200 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4fa3d2000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 108257280 unmapped: 14131200 heap: 122388480 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.950378418s of 16.011283875s, submitted: 18
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d0241ee400 session 0x55d02396e000
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d022c0b400 session 0x55d0251dc960
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d0232d9000 session 0x55d025adb680
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 109346816 unmapped: 17334272 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d024642000 session 0x55d0267d0b40
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d0263f0000 session 0x55d02642bc20
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4fa3d2000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x458f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1252353 data_alloc: 218103808 data_used: 4792320
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 109346816 unmapped: 17334272 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 109346816 unmapped: 17334272 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 109346816 unmapped: 17334272 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 109346816 unmapped: 17334272 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 109346816 unmapped: 17334272 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9955000/0x0/0x4ffc00000, data 0x1850ef0/0x1907000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1252353 data_alloc: 218103808 data_used: 4792320
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 109346816 unmapped: 17334272 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 109674496 unmapped: 17006592 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 110026752 unmapped: 16654336 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 110026752 unmapped: 16654336 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 110026752 unmapped: 16654336 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1288205 data_alloc: 234881024 data_used: 10080256
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9955000/0x0/0x4ffc00000, data 0x1850ef0/0x1907000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 110026752 unmapped: 16654336 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 110026752 unmapped: 16654336 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9955000/0x0/0x4ffc00000, data 0x1850ef0/0x1907000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 110026752 unmapped: 16654336 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.521683693s of 13.599659920s, submitted: 28
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d025a0b000 session 0x55d0269c2f00
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d025a0b000 session 0x55d026797c20
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 106708992 unmapped: 19972096 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 106708992 unmapped: 19972096 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1202239 data_alloc: 218103808 data_used: 4792320
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 106708992 unmapped: 19972096 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fc3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 106708992 unmapped: 19972096 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 106708992 unmapped: 19972096 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 106708992 unmapped: 19972096 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 106708992 unmapped: 19972096 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1202239 data_alloc: 218103808 data_used: 4792320
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 106708992 unmapped: 19972096 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 106708992 unmapped: 19972096 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fc3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 106708992 unmapped: 19972096 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fc3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 106708992 unmapped: 19972096 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 106708992 unmapped: 19972096 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1202239 data_alloc: 218103808 data_used: 4792320
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 106708992 unmapped: 19972096 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.693157196s of 13.755032539s, submitted: 23
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 108511232 unmapped: 18169856 heap: 126681088 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d022c0b400 session 0x55d025724d20
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d0232d9000 session 0x55d025d84780
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d024642000 session 0x55d0269c2780
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d0263f0000 session 0x55d026230960
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d0263f0000 session 0x55d02694e780
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 106930176 unmapped: 22904832 heap: 129835008 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f978c000/0x0/0x4ffc00000, data 0x1a19ef0/0x1ad0000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 106930176 unmapped: 22904832 heap: 129835008 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d022c0b400 session 0x55d0251dc780
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 106930176 unmapped: 22904832 heap: 129835008 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1270808 data_alloc: 218103808 data_used: 4792320
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d0232d9000 session 0x55d0245f52c0
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d025a0b800 session 0x55d025adbc20
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d026405000 session 0x55d025d85e00
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 106930176 unmapped: 22904832 heap: 129835008 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d022c0b400 session 0x55d026231a40
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d0232d9000 session 0x55d0239ba1e0
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 106962944 unmapped: 22872064 heap: 129835008 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f978b000/0x0/0x4ffc00000, data 0x1a19f13/0x1ad1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 106962944 unmapped: 22872064 heap: 129835008 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 107487232 unmapped: 22347776 heap: 129835008 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 110624768 unmapped: 19210240 heap: 129835008 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1321441 data_alloc: 234881024 data_used: 12177408
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 110624768 unmapped: 19210240 heap: 129835008 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 110624768 unmapped: 19210240 heap: 129835008 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f978b000/0x0/0x4ffc00000, data 0x1a19f13/0x1ad1000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 110624768 unmapped: 19210240 heap: 129835008 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.020494461s of 11.124375343s, submitted: 30
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d025a0b800 session 0x55d02595cd20
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d0263f0000 session 0x55d02545b2c0
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 108404736 unmapped: 21430272 heap: 129835008 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fc2000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 108404736 unmapped: 21430272 heap: 129835008 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1210137 data_alloc: 218103808 data_used: 4792320
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 108404736 unmapped: 21430272 heap: 129835008 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 108404736 unmapped: 21430272 heap: 129835008 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 108404736 unmapped: 21430272 heap: 129835008 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 108404736 unmapped: 21430272 heap: 129835008 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fc2000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 108404736 unmapped: 21430272 heap: 129835008 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1210137 data_alloc: 218103808 data_used: 4792320
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 108404736 unmapped: 21430272 heap: 129835008 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fc2000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 108404736 unmapped: 21430272 heap: 129835008 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 108404736 unmapped: 21430272 heap: 129835008 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 108404736 unmapped: 21430272 heap: 129835008 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fc2000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 108404736 unmapped: 21430272 heap: 129835008 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1210137 data_alloc: 218103808 data_used: 4792320
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 108404736 unmapped: 21430272 heap: 129835008 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 108404736 unmapped: 21430272 heap: 129835008 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 108404736 unmapped: 21430272 heap: 129835008 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fc2000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 108404736 unmapped: 21430272 heap: 129835008 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 108404736 unmapped: 21430272 heap: 129835008 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1210137 data_alloc: 218103808 data_used: 4792320
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 108404736 unmapped: 21430272 heap: 129835008 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 18.238668442s of 18.348218918s, submitted: 32
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 108404736 unmapped: 21430272 heap: 129835008 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fc2000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 108404736 unmapped: 21430272 heap: 129835008 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fc2000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 108404736 unmapped: 21430272 heap: 129835008 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fc2000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 108404736 unmapped: 21430272 heap: 129835008 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1210005 data_alloc: 218103808 data_used: 4792320
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d025d6a400 session 0x55d026797680
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d023a0ec00 session 0x55d0266bfc20
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 108404736 unmapped: 21430272 heap: 129835008 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 108404736 unmapped: 21430272 heap: 129835008 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 108404736 unmapped: 21430272 heap: 129835008 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 1800.1 total, 600.0 interval
Cumulative writes: 10K writes, 41K keys, 10K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.02 MB/s
Cumulative WAL: 10K writes, 2837 syncs, 3.86 writes per sync, written: 0.03 GB, 0.02 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 2214 writes, 6930 keys, 2214 commit groups, 1.0 writes per commit group, ingest: 6.65 MB, 0.01 MB/s
Interval WAL: 2214 writes, 961 syncs, 2.30 writes per sync, written: 0.01 GB, 0.01 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 108412928 unmapped: 21422080 heap: 129835008 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fc2000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 108412928 unmapped: 21422080 heap: 129835008 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d023a0ec00 session 0x55d0262310e0
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d022c0b400 session 0x55d0257d34a0
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d0232d9000 session 0x55d0267965a0
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1210005 data_alloc: 218103808 data_used: 4792320
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d025a0b800 session 0x55d0241ba3c0
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d025d6a400 session 0x55d0257d2960
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 108412928 unmapped: 21422080 heap: 129835008 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fbd000/0x0/0x4ffc00000, data 0x11e8ef0/0x129f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 108421120 unmapped: 21413888 heap: 129835008 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 108421120 unmapped: 21413888 heap: 129835008 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 108421120 unmapped: 21413888 heap: 129835008 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 108421120 unmapped: 21413888 heap: 129835008 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1212211 data_alloc: 218103808 data_used: 4792320
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.750989914s of 14.762865067s, submitted: 3
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 108421120 unmapped: 21413888 heap: 129835008 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 108421120 unmapped: 21413888 heap: 129835008 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fbd000/0x0/0x4ffc00000, data 0x11e8ef0/0x129f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 108421120 unmapped: 21413888 heap: 129835008 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 108421120 unmapped: 21413888 heap: 129835008 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fbd000/0x0/0x4ffc00000, data 0x11e8ef0/0x129f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 108421120 unmapped: 21413888 heap: 129835008 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1212343 data_alloc: 218103808 data_used: 4792320
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 108421120 unmapped: 21413888 heap: 129835008 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fbd000/0x0/0x4ffc00000, data 0x11e8ef0/0x129f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 108421120 unmapped: 21413888 heap: 129835008 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fbd000/0x0/0x4ffc00000, data 0x11e8ef0/0x129f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 108421120 unmapped: 21413888 heap: 129835008 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fbd000/0x0/0x4ffc00000, data 0x11e8ef0/0x129f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 108421120 unmapped: 21413888 heap: 129835008 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 108421120 unmapped: 21413888 heap: 129835008 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1213519 data_alloc: 218103808 data_used: 4792320
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 108421120 unmapped: 21413888 heap: 129835008 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fbd000/0x0/0x4ffc00000, data 0x11e8ef0/0x129f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 108429312 unmapped: 21405696 heap: 129835008 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fbd000/0x0/0x4ffc00000, data 0x11e8ef0/0x129f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 108429312 unmapped: 21405696 heap: 129835008 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fbd000/0x0/0x4ffc00000, data 0x11e8ef0/0x129f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 108429312 unmapped: 21405696 heap: 129835008 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 108429312 unmapped: 21405696 heap: 129835008 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1214127 data_alloc: 218103808 data_used: 4816896
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.393978119s of 14.400218010s, submitted: 2
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fbd000/0x0/0x4ffc00000, data 0x11e8ef0/0x129f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 110632960 unmapped: 22339584 heap: 132972544 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f96c6000/0x0/0x4ffc00000, data 0x1adfef0/0x1b96000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 110698496 unmapped: 22274048 heap: 132972544 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9653000/0x0/0x4ffc00000, data 0x1b44ef0/0x1bfb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 111304704 unmapped: 21667840 heap: 132972544 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 111304704 unmapped: 21667840 heap: 132972544 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 111304704 unmapped: 21667840 heap: 132972544 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1292579 data_alloc: 218103808 data_used: 5001216
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 111304704 unmapped: 21667840 heap: 132972544 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 111304704 unmapped: 21667840 heap: 132972544 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9653000/0x0/0x4ffc00000, data 0x1b44ef0/0x1bfb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 111304704 unmapped: 21667840 heap: 132972544 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9653000/0x0/0x4ffc00000, data 0x1b44ef0/0x1bfb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f965e000/0x0/0x4ffc00000, data 0x1b47ef0/0x1bfe000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 110428160 unmapped: 22544384 heap: 132972544 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f965e000/0x0/0x4ffc00000, data 0x1b47ef0/0x1bfe000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 110428160 unmapped: 22544384 heap: 132972544 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1286731 data_alloc: 218103808 data_used: 5001216
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 110428160 unmapped: 22544384 heap: 132972544 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 110428160 unmapped: 22544384 heap: 132972544 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 110428160 unmapped: 22544384 heap: 132972544 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f965e000/0x0/0x4ffc00000, data 0x1b47ef0/0x1bfe000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 13.739144325s of 13.909265518s, submitted: 66
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 110428160 unmapped: 22544384 heap: 132972544 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f965e000/0x0/0x4ffc00000, data 0x1b47ef0/0x1bfe000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 110428160 unmapped: 22544384 heap: 132972544 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1287019 data_alloc: 218103808 data_used: 5001216
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d0232d9000 session 0x55d02565f2c0
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f965c000/0x0/0x4ffc00000, data 0x1b49ef0/0x1c00000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 110428160 unmapped: 22544384 heap: 132972544 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d025a0b800 session 0x55d025d53e00
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 110444544 unmapped: 22528000 heap: 132972544 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 110444544 unmapped: 22528000 heap: 132972544 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fc3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 110444544 unmapped: 22528000 heap: 132972544 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 110444544 unmapped: 22528000 heap: 132972544 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1217337 data_alloc: 218103808 data_used: 4792320
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 110444544 unmapped: 22528000 heap: 132972544 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 110444544 unmapped: 22528000 heap: 132972544 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fc3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 110444544 unmapped: 22528000 heap: 132972544 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fc3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 110444544 unmapped: 22528000 heap: 132972544 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 110444544 unmapped: 22528000 heap: 132972544 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1217337 data_alloc: 218103808 data_used: 4792320
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 110444544 unmapped: 22528000 heap: 132972544 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 110444544 unmapped: 22528000 heap: 132972544 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fc3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 110444544 unmapped: 22528000 heap: 132972544 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 110444544 unmapped: 22528000 heap: 132972544 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 110444544 unmapped: 22528000 heap: 132972544 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1217337 data_alloc: 218103808 data_used: 4792320
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 110444544 unmapped: 22528000 heap: 132972544 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 110444544 unmapped: 22528000 heap: 132972544 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fc3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 110444544 unmapped: 22528000 heap: 132972544 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 110444544 unmapped: 22528000 heap: 132972544 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fc3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fc3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 110444544 unmapped: 22528000 heap: 132972544 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1217337 data_alloc: 218103808 data_used: 4792320
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 110444544 unmapped: 22528000 heap: 132972544 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 110444544 unmapped: 22528000 heap: 132972544 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fc3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 110444544 unmapped: 22528000 heap: 132972544 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 110444544 unmapped: 22528000 heap: 132972544 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fc3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 110444544 unmapped: 22528000 heap: 132972544 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1217337 data_alloc: 218103808 data_used: 4792320
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 110444544 unmapped: 22528000 heap: 132972544 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d0263f0000 session 0x55d025d523c0
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d026405000 session 0x55d0251dc780
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d024642000 session 0x55d0251dc5a0
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d024642000 session 0x55d0239d5a40
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 27.979179382s of 28.011842728s, submitted: 9
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 119152640 unmapped: 17498112 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d0232d9000 session 0x55d0262303c0
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d025a0b800 session 0x55d0251dd2c0
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d0263f0000 session 0x55d0244a21e0
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d026405000 session 0x55d0241832c0
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d026405000 session 0x55d0239b85a0
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f96d0000/0x0/0x4ffc00000, data 0x1ad4f00/0x1b8c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 110723072 unmapped: 25927680 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d024232400 session 0x55d0252770e0
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d024232000 session 0x55d024084960
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 110723072 unmapped: 25927680 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d024232000 session 0x55d0245f4000
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 110739456 unmapped: 25911296 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d0241eec00 session 0x55d026796780
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1287020 data_alloc: 218103808 data_used: 4796416
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d025a0b800 session 0x55d0245f4d20
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d0263f0000 session 0x55d0245f5860
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d025a0b000 session 0x55d0245f43c0
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 110919680 unmapped: 25731072 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 110919680 unmapped: 25731072 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 115998720 unmapped: 20652032 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f96aa000/0x0/0x4ffc00000, data 0x1af8f32/0x1bb2000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 115998720 unmapped: 20652032 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f96aa000/0x0/0x4ffc00000, data 0x1af8f32/0x1bb2000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 116006912 unmapped: 20643840 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1356411 data_alloc: 234881024 data_used: 14090240
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 116006912 unmapped: 20643840 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 116056064 unmapped: 20594688 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.106419563s of 10.318478584s, submitted: 55
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 114933760 unmapped: 21716992 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 115105792 unmapped: 21544960 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 115105792 unmapped: 21544960 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1356195 data_alloc: 234881024 data_used: 14094336
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f96aa000/0x0/0x4ffc00000, data 0x1af8f32/0x1bb2000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 115105792 unmapped: 21544960 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d0263f0000 session 0x55d025aa01e0
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d026405000 session 0x55d025aa10e0
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 120381440 unmapped: 16269312 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d025a0bc00 session 0x55d0245f50e0
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d024643800 session 0x55d025246d20
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d0263d6000 session 0x55d0252c0f00
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 119209984 unmapped: 17440768 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 119316480 unmapped: 17334272 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f8dce000/0x0/0x4ffc00000, data 0x23d4f32/0x248e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 119324672 unmapped: 17326080 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1430823 data_alloc: 234881024 data_used: 14409728
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 119324672 unmapped: 17326080 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 119324672 unmapped: 17326080 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f8dce000/0x0/0x4ffc00000, data 0x23d4f32/0x248e000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 119324672 unmapped: 17326080 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 10.081660271s of 11.304382324s, submitted: 383
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 119332864 unmapped: 17317888 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 119332864 unmapped: 17317888 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d024643800 session 0x55d026230f00
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1434560 data_alloc: 234881024 data_used: 14409728
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f8da9000/0x0/0x4ffc00000, data 0x23f8f55/0x24b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 118456320 unmapped: 18194432 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f8da9000/0x0/0x4ffc00000, data 0x23f8f55/0x24b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 118456320 unmapped: 18194432 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 118464512 unmapped: 18186240 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 118464512 unmapped: 18186240 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 118464512 unmapped: 18186240 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1434712 data_alloc: 234881024 data_used: 14413824
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f8da9000/0x0/0x4ffc00000, data 0x23f8f55/0x24b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 118464512 unmapped: 18186240 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 118464512 unmapped: 18186240 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 118472704 unmapped: 18178048 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 118472704 unmapped: 18178048 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 118472704 unmapped: 18178048 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1434712 data_alloc: 234881024 data_used: 14413824
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f8da9000/0x0/0x4ffc00000, data 0x23f8f55/0x24b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 118480896 unmapped: 18169856 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 118480896 unmapped: 18169856 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f8da9000/0x0/0x4ffc00000, data 0x23f8f55/0x24b3000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 14.167691231s of 14.192552567s, submitted: 6
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 118562816 unmapped: 18087936 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 118562816 unmapped: 18087936 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 118562816 unmapped: 18087936 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1445462 data_alloc: 234881024 data_used: 14524416
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 118562816 unmapped: 18087936 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f8d9d000/0x0/0x4ffc00000, data 0x2404f55/0x24bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 118562816 unmapped: 18087936 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 118562816 unmapped: 18087936 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 118571008 unmapped: 18079744 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f8d9d000/0x0/0x4ffc00000, data 0x2404f55/0x24bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d023758c00 session 0x55d02595d0e0
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 118571008 unmapped: 18079744 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1445462 data_alloc: 234881024 data_used: 14524416
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 118571008 unmapped: 18079744 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 118571008 unmapped: 18079744 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f8d9d000/0x0/0x4ffc00000, data 0x2404f55/0x24bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 118571008 unmapped: 18079744 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 11.049817085s of 11.065047264s, submitted: 6
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f8d9d000/0x0/0x4ffc00000, data 0x2404f55/0x24bf000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 118611968 unmapped: 18038784 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 118620160 unmapped: 18030592 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1445766 data_alloc: 234881024 data_used: 14524416
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f8d9b000/0x0/0x4ffc00000, data 0x2405f55/0x24c0000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 118620160 unmapped: 18030592 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d0263f0000 session 0x55d02571e960
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d025a0bc00 session 0x55d0252765a0
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 118636544 unmapped: 18014208 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d0233a3400 session 0x55d02694ba40
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 118661120 unmapped: 17989632 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f8dcc000/0x0/0x4ffc00000, data 0x23d5f32/0x248f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 118669312 unmapped: 17981440 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 118669312 unmapped: 17981440 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1433816 data_alloc: 234881024 data_used: 14409728
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d024232000 session 0x55d02396f680
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d025a0b800 session 0x55d0239b0d20
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 118669312 unmapped: 17981440 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f8dcd000/0x0/0x4ffc00000, data 0x23d5f32/0x248f000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d0233a3400 session 0x55d0267930e0
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 111632384 unmapped: 25018368 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fc2000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 111632384 unmapped: 25018368 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fc2000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 111632384 unmapped: 25018368 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 111632384 unmapped: 25018368 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1234913 data_alloc: 218103808 data_used: 4792320
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 111632384 unmapped: 25018368 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 111632384 unmapped: 25018368 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fc2000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 111632384 unmapped: 25018368 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 111632384 unmapped: 25018368 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 111632384 unmapped: 25018368 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1234913 data_alloc: 218103808 data_used: 4792320
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fc2000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 111632384 unmapped: 25018368 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 111632384 unmapped: 25018368 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 111632384 unmapped: 25018368 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 111632384 unmapped: 25018368 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fc2000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 111632384 unmapped: 25018368 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1234913 data_alloc: 218103808 data_used: 4792320
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 111632384 unmapped: 25018368 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 111632384 unmapped: 25018368 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fc2000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 111632384 unmapped: 25018368 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 111632384 unmapped: 25018368 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 111632384 unmapped: 25018368 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1234913 data_alloc: 218103808 data_used: 4792320
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 111632384 unmapped: 25018368 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 111632384 unmapped: 25018368 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 111632384 unmapped: 25018368 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fc2000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 111632384 unmapped: 25018368 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fc2000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d024232000 session 0x55d0267921e0
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d024643800 session 0x55d0267925a0
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d025a0bc00 session 0x55d026792780
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d0233a3400 session 0x55d026793c20
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 31.304897308s of 31.492788315s, submitted: 55
Mar  1 05:27:05 np0005634532 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.27136 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 114745344 unmapped: 21905408 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d024232000 session 0x55d0239ced20
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1261627 data_alloc: 218103808 data_used: 4792320
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9c41000/0x0/0x4ffc00000, data 0x1564ef0/0x161b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 111599616 unmapped: 25051136 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 111599616 unmapped: 25051136 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9c41000/0x0/0x4ffc00000, data 0x1564ef0/0x161b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 111599616 unmapped: 25051136 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 111599616 unmapped: 25051136 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9c41000/0x0/0x4ffc00000, data 0x1564ef0/0x161b000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 111599616 unmapped: 25051136 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1261627 data_alloc: 218103808 data_used: 4792320
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 111607808 unmapped: 25042944 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 111607808 unmapped: 25042944 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d024643800 session 0x55d02595c3c0
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d025a0b800 session 0x55d02595c000
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 111607808 unmapped: 25042944 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d025a0bc00 session 0x55d0251ddc20
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d0233a3400 session 0x55d0239d5a40
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 111607808 unmapped: 25042944 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9c40000/0x0/0x4ffc00000, data 0x1564f00/0x161c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 111607808 unmapped: 25042944 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1263441 data_alloc: 218103808 data_used: 4792320
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 111673344 unmapped: 24977408 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9c40000/0x0/0x4ffc00000, data 0x1564f00/0x161c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 111673344 unmapped: 24977408 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 111673344 unmapped: 24977408 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9c40000/0x0/0x4ffc00000, data 0x1564f00/0x161c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 111673344 unmapped: 24977408 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 111673344 unmapped: 24977408 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1288369 data_alloc: 218103808 data_used: 8470528
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 111673344 unmapped: 24977408 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9c40000/0x0/0x4ffc00000, data 0x1564f00/0x161c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 111673344 unmapped: 24977408 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 111673344 unmapped: 24977408 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 111673344 unmapped: 24977408 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 111673344 unmapped: 24977408 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9c40000/0x0/0x4ffc00000, data 0x1564f00/0x161c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1288369 data_alloc: 218103808 data_used: 8470528
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 20.375757217s of 20.401163101s, submitted: 4
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9c40000/0x0/0x4ffc00000, data 0x1564f00/0x161c000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 116752384 unmapped: 19898368 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 116858880 unmapped: 19791872 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 116858880 unmapped: 19791872 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9764000/0x0/0x4ffc00000, data 0x1a40f00/0x1af8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 116858880 unmapped: 19791872 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9764000/0x0/0x4ffc00000, data 0x1a40f00/0x1af8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 116858880 unmapped: 19791872 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1329917 data_alloc: 218103808 data_used: 8667136
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 116858880 unmapped: 19791872 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 116858880 unmapped: 19791872 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9764000/0x0/0x4ffc00000, data 0x1a40f00/0x1af8000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 116867072 unmapped: 19783680 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 116867072 unmapped: 19783680 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 116867072 unmapped: 19783680 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1328957 data_alloc: 218103808 data_used: 8667136
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9761000/0x0/0x4ffc00000, data 0x1a43f00/0x1afb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 116867072 unmapped: 19783680 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 116867072 unmapped: 19783680 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 116867072 unmapped: 19783680 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 116867072 unmapped: 19783680 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 116867072 unmapped: 19783680 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d024643800 session 0x55d0241bab40
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d024232000 session 0x55d026793680
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1328957 data_alloc: 218103808 data_used: 8667136
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 15.079201698s of 15.230097771s, submitted: 55
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: mgrc ms_handle_reset ms_handle_reset con 0x55d02599dc00
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/2106645066
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/2106645066,v1:192.168.122.100:6801/2106645066]
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: mgrc handle_mgr_configure stats_period=5
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 ms_handle_reset con 0x55d025a0b800 session 0x55d0241bb680
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113811456 unmapped: 22839296 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9761000/0x0/0x4ffc00000, data 0x1a43f00/0x1afb000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113811456 unmapped: 22839296 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113811456 unmapped: 22839296 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113811456 unmapped: 22839296 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fc3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113811456 unmapped: 22839296 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1240718 data_alloc: 218103808 data_used: 4792320
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113811456 unmapped: 22839296 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113811456 unmapped: 22839296 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113811456 unmapped: 22839296 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113811456 unmapped: 22839296 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113811456 unmapped: 22839296 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1240718 data_alloc: 218103808 data_used: 4792320
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fc3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113811456 unmapped: 22839296 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113811456 unmapped: 22839296 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113811456 unmapped: 22839296 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113811456 unmapped: 22839296 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113811456 unmapped: 22839296 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1240718 data_alloc: 218103808 data_used: 4792320
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fc3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113811456 unmapped: 22839296 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113811456 unmapped: 22839296 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113811456 unmapped: 22839296 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113811456 unmapped: 22839296 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113811456 unmapped: 22839296 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1240718 data_alloc: 218103808 data_used: 4792320
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113811456 unmapped: 22839296 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fc3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113811456 unmapped: 22839296 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113811456 unmapped: 22839296 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fc3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113811456 unmapped: 22839296 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113819648 unmapped: 22831104 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1240718 data_alloc: 218103808 data_used: 4792320
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113819648 unmapped: 22831104 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113819648 unmapped: 22831104 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113819648 unmapped: 22831104 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fc3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113819648 unmapped: 22831104 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fc3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113819648 unmapped: 22831104 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1240718 data_alloc: 218103808 data_used: 4792320
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113819648 unmapped: 22831104 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113819648 unmapped: 22831104 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113819648 unmapped: 22831104 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113819648 unmapped: 22831104 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113819648 unmapped: 22831104 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fc3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1240718 data_alloc: 218103808 data_used: 4792320
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113819648 unmapped: 22831104 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fc3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113819648 unmapped: 22831104 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113819648 unmapped: 22831104 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fc3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113819648 unmapped: 22831104 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113819648 unmapped: 22831104 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fc3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1240718 data_alloc: 218103808 data_used: 4792320
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113827840 unmapped: 22822912 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113827840 unmapped: 22822912 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113827840 unmapped: 22822912 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113827840 unmapped: 22822912 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fc3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113827840 unmapped: 22822912 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1240718 data_alloc: 218103808 data_used: 4792320
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fc3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113827840 unmapped: 22822912 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113827840 unmapped: 22822912 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113827840 unmapped: 22822912 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113827840 unmapped: 22822912 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113827840 unmapped: 22822912 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1240718 data_alloc: 218103808 data_used: 4792320
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fc3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113827840 unmapped: 22822912 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113827840 unmapped: 22822912 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113836032 unmapped: 22814720 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fc3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113836032 unmapped: 22814720 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113836032 unmapped: 22814720 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1240718 data_alloc: 218103808 data_used: 4792320
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113836032 unmapped: 22814720 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113836032 unmapped: 22814720 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113836032 unmapped: 22814720 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fc3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113836032 unmapped: 22814720 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113836032 unmapped: 22814720 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1240718 data_alloc: 218103808 data_used: 4792320
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fc3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113836032 unmapped: 22814720 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113836032 unmapped: 22814720 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113836032 unmapped: 22814720 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113836032 unmapped: 22814720 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fc3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113844224 unmapped: 22806528 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1240718 data_alloc: 218103808 data_used: 4792320
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113844224 unmapped: 22806528 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113844224 unmapped: 22806528 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113844224 unmapped: 22806528 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113844224 unmapped: 22806528 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113844224 unmapped: 22806528 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1240718 data_alloc: 218103808 data_used: 4792320
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fc3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113844224 unmapped: 22806528 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113844224 unmapped: 22806528 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113844224 unmapped: 22806528 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113844224 unmapped: 22806528 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fc3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113844224 unmapped: 22806528 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1240718 data_alloc: 218103808 data_used: 4792320
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113844224 unmapped: 22806528 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113844224 unmapped: 22806528 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fc3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113844224 unmapped: 22806528 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113852416 unmapped: 22798336 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113852416 unmapped: 22798336 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1240718 data_alloc: 218103808 data_used: 4792320
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113852416 unmapped: 22798336 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fc3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113852416 unmapped: 22798336 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113958912 unmapped: 22691840 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: do_command 'config diff' '{prefix=config diff}'
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: do_command 'config show' '{prefix=config show}'
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: do_command 'counter dump' '{prefix=counter dump}'
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: do_command 'counter schema' '{prefix=counter schema}'
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 114171904 unmapped: 22478848 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 114286592 unmapped: 22364160 heap: 136650752 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1240718 data_alloc: 218103808 data_used: 4792320
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: do_command 'log dump' '{prefix=log dump}'
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: do_command 'log dump' '{prefix=log dump}' result is 0 bytes
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 114286592 unmapped: 33406976 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fc3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: do_command 'perf dump' '{prefix=perf dump}'
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: do_command 'perf dump' '{prefix=perf dump}' result is 0 bytes
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: do_command 'perf histogram dump' '{prefix=perf histogram dump}'
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: do_command 'perf histogram dump' '{prefix=perf histogram dump}' result is 0 bytes
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: do_command 'perf schema' '{prefix=perf schema}'
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: do_command 'perf schema' '{prefix=perf schema}' result is 0 bytes
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 114171904 unmapped: 33521664 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fc3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 114171904 unmapped: 33521664 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 114180096 unmapped: 33513472 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fc3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 114180096 unmapped: 33513472 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1240718 data_alloc: 218103808 data_used: 4792320
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 114180096 unmapped: 33513472 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 114180096 unmapped: 33513472 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 114180096 unmapped: 33513472 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 114180096 unmapped: 33513472 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 114180096 unmapped: 33513472 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1240718 data_alloc: 218103808 data_used: 4792320
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fc3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 114180096 unmapped: 33513472 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 114180096 unmapped: 33513472 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 114180096 unmapped: 33513472 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 114180096 unmapped: 33513472 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 114180096 unmapped: 33513472 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1240718 data_alloc: 218103808 data_used: 4792320
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fc3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 114180096 unmapped: 33513472 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 114180096 unmapped: 33513472 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fc3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 114188288 unmapped: 33505280 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 114188288 unmapped: 33505280 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 114188288 unmapped: 33505280 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1240718 data_alloc: 218103808 data_used: 4792320
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fc3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 114188288 unmapped: 33505280 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 114188288 unmapped: 33505280 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 114188288 unmapped: 33505280 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fc3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 114188288 unmapped: 33505280 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 114188288 unmapped: 33505280 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1240718 data_alloc: 218103808 data_used: 4792320
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fc3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fc3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 114188288 unmapped: 33505280 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 114188288 unmapped: 33505280 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 114188288 unmapped: 33505280 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fc3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 114188288 unmapped: 33505280 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fc3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 114188288 unmapped: 33505280 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1240718 data_alloc: 218103808 data_used: 4792320
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 114188288 unmapped: 33505280 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 114188288 unmapped: 33505280 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 114188288 unmapped: 33505280 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 114188288 unmapped: 33505280 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 114188288 unmapped: 33505280 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1240718 data_alloc: 218103808 data_used: 4792320
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fc3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 114188288 unmapped: 33505280 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 114188288 unmapped: 33505280 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 114188288 unmapped: 33505280 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 114188288 unmapped: 33505280 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fc3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 114188288 unmapped: 33505280 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1240718 data_alloc: 218103808 data_used: 4792320
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fc3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 114188288 unmapped: 33505280 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fc3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 114196480 unmapped: 33497088 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1240718 data_alloc: 218103808 data_used: 4792320
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 114196480 unmapped: 33497088 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fc3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 114196480 unmapped: 33497088 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fc3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 114196480 unmapped: 33497088 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1240718 data_alloc: 218103808 data_used: 4792320
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 114196480 unmapped: 33497088 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fc3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 114196480 unmapped: 33497088 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1240718 data_alloc: 218103808 data_used: 4792320
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fc3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 114196480 unmapped: 33497088 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fc3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 114196480 unmapped: 33497088 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 114204672 unmapped: 33488896 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1240718 data_alloc: 218103808 data_used: 4792320
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fc3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 114204672 unmapped: 33488896 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fc3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 114204672 unmapped: 33488896 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fc3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 114204672 unmapped: 33488896 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1240718 data_alloc: 218103808 data_used: 4792320
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 114204672 unmapped: 33488896 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fc3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 114204672 unmapped: 33488896 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1240718 data_alloc: 218103808 data_used: 4792320
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fc3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 114204672 unmapped: 33488896 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fc3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 114204672 unmapped: 33488896 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 114212864 unmapped: 33480704 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1240718 data_alloc: 218103808 data_used: 4792320
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 114212864 unmapped: 33480704 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fc3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 114212864 unmapped: 33480704 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fc3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 114212864 unmapped: 33480704 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1240718 data_alloc: 218103808 data_used: 4792320
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 114212864 unmapped: 33480704 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fc3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 114212864 unmapped: 33480704 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1240718 data_alloc: 218103808 data_used: 4792320
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fc3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 114212864 unmapped: 33480704 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1240718 data_alloc: 218103808 data_used: 4792320
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 114221056 unmapped: 33472512 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fc3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 114221056 unmapped: 33472512 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fc3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 114221056 unmapped: 33472512 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1240718 data_alloc: 218103808 data_used: 4792320
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 114221056 unmapped: 33472512 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fc3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 114221056 unmapped: 33472512 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1240718 data_alloc: 218103808 data_used: 4792320
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 114221056 unmapped: 33472512 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fc3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 114221056 unmapped: 33472512 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fc3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 114229248 unmapped: 33464320 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1240718 data_alloc: 218103808 data_used: 4792320
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fc3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 114229248 unmapped: 33464320 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fc3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 114229248 unmapped: 33464320 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1240718 data_alloc: 218103808 data_used: 4792320
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 114229248 unmapped: 33464320 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fc3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 114229248 unmapped: 33464320 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 114237440 unmapped: 33456128 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1240718 data_alloc: 218103808 data_used: 4792320
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 114237440 unmapped: 33456128 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fc3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 114237440 unmapped: 33456128 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1240718 data_alloc: 218103808 data_used: 4792320
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fc3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 114237440 unmapped: 33456128 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fc3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 114237440 unmapped: 33456128 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 114245632 unmapped: 33447936 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fc3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 114245632 unmapped: 33447936 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1240718 data_alloc: 218103808 data_used: 4792320
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 114245632 unmapped: 33447936 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1240718 data_alloc: 218103808 data_used: 4792320
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fc3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 114245632 unmapped: 33447936 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1240718 data_alloc: 218103808 data_used: 4792320
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fc3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 114245632 unmapped: 33447936 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 114253824 unmapped: 33439744 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fc3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 114253824 unmapped: 33439744 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1240718 data_alloc: 218103808 data_used: 4792320
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 114253824 unmapped: 33439744 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fc3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 114253824 unmapped: 33439744 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 114253824 unmapped: 33439744 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1240718 data_alloc: 218103808 data_used: 4792320
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 114253824 unmapped: 33439744 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fc3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 114253824 unmapped: 33439744 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 114253824 unmapped: 33439744 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1240718 data_alloc: 218103808 data_used: 4792320
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 114253824 unmapped: 33439744 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fc3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113745920 unmapped: 33947648 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fc3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113745920 unmapped: 33947648 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fc3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113745920 unmapped: 33947648 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1240718 data_alloc: 218103808 data_used: 4792320
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113745920 unmapped: 33947648 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113754112 unmapped: 33939456 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113754112 unmapped: 33939456 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1240718 data_alloc: 218103808 data_used: 4792320
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fc3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113754112 unmapped: 33939456 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fc3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113754112 unmapped: 33939456 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113754112 unmapped: 33939456 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1240718 data_alloc: 218103808 data_used: 4792320
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113754112 unmapped: 33939456 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fc3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113754112 unmapped: 33939456 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113754112 unmapped: 33939456 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1240718 data_alloc: 218103808 data_used: 4792320
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113762304 unmapped: 33931264 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fc3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113762304 unmapped: 33931264 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113762304 unmapped: 33931264 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1240718 data_alloc: 218103808 data_used: 4792320
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113762304 unmapped: 33931264 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fc3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113762304 unmapped: 33931264 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fc3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113770496 unmapped: 33923072 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fc3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113770496 unmapped: 33923072 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1240718 data_alloc: 218103808 data_used: 4792320
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fc3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113770496 unmapped: 33923072 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fc3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113770496 unmapped: 33923072 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113770496 unmapped: 33923072 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1240718 data_alloc: 218103808 data_used: 4792320
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fc3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113770496 unmapped: 33923072 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113770496 unmapped: 33923072 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1240718 data_alloc: 218103808 data_used: 4792320
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113770496 unmapped: 33923072 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fc3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113770496 unmapped: 33923072 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113778688 unmapped: 33914880 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113778688 unmapped: 33914880 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1240718 data_alloc: 218103808 data_used: 4792320
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113778688 unmapped: 33914880 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fc3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113778688 unmapped: 33914880 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fc3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113778688 unmapped: 33914880 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113778688 unmapped: 33914880 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1240718 data_alloc: 218103808 data_used: 4792320
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fc3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113778688 unmapped: 33914880 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fc3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113778688 unmapped: 33914880 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113778688 unmapped: 33914880 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1240718 data_alloc: 218103808 data_used: 4792320
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113778688 unmapped: 33914880 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113786880 unmapped: 33906688 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fc3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113786880 unmapped: 33906688 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113786880 unmapped: 33906688 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113786880 unmapped: 33906688 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1240718 data_alloc: 218103808 data_used: 4792320
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113786880 unmapped: 33906688 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113786880 unmapped: 33906688 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fc3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113786880 unmapped: 33906688 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113786880 unmapped: 33906688 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113786880 unmapped: 33906688 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1240718 data_alloc: 218103808 data_used: 4792320
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113786880 unmapped: 33906688 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fc3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113786880 unmapped: 33906688 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113786880 unmapped: 33906688 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113795072 unmapped: 33898496 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fc3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113795072 unmapped: 33898496 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1240718 data_alloc: 218103808 data_used: 4792320
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113795072 unmapped: 33898496 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113795072 unmapped: 33898496 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1240718 data_alloc: 218103808 data_used: 4792320
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fc3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113795072 unmapped: 33898496 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113795072 unmapped: 33898496 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fc3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113795072 unmapped: 33898496 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113795072 unmapped: 33898496 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113795072 unmapped: 33898496 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1240718 data_alloc: 218103808 data_used: 4792320
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113803264 unmapped: 33890304 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fc3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113803264 unmapped: 33890304 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113803264 unmapped: 33890304 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1240718 data_alloc: 218103808 data_used: 4792320
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113803264 unmapped: 33890304 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113803264 unmapped: 33890304 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fc3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113803264 unmapped: 33890304 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113803264 unmapped: 33890304 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1240718 data_alloc: 218103808 data_used: 4792320
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113803264 unmapped: 33890304 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fc3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fc3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113811456 unmapped: 33882112 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1240718 data_alloc: 218103808 data_used: 4792320
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113811456 unmapped: 33882112 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fc3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113811456 unmapped: 33882112 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1240718 data_alloc: 218103808 data_used: 4792320
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113811456 unmapped: 33882112 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fc3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1240718 data_alloc: 218103808 data_used: 4792320
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113811456 unmapped: 33882112 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fc3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113811456 unmapped: 33882112 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1240718 data_alloc: 218103808 data_used: 4792320
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113811456 unmapped: 33882112 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fc3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113811456 unmapped: 33882112 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113811456 unmapped: 33882112 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fc3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fc3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113811456 unmapped: 33882112 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fc3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113819648 unmapped: 33873920 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1240718 data_alloc: 218103808 data_used: 4792320
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fc3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113819648 unmapped: 33873920 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fc3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113819648 unmapped: 33873920 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fc3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1240718 data_alloc: 218103808 data_used: 4792320
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113819648 unmapped: 33873920 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fc3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113827840 unmapped: 33865728 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1240718 data_alloc: 218103808 data_used: 4792320
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113827840 unmapped: 33865728 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fc3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1240718 data_alloc: 218103808 data_used: 4792320
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113827840 unmapped: 33865728 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fc3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113827840 unmapped: 33865728 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fc3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113827840 unmapped: 33865728 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1240718 data_alloc: 218103808 data_used: 4792320
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113827840 unmapped: 33865728 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fc3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113827840 unmapped: 33865728 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fc3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113827840 unmapped: 33865728 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fc3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113827840 unmapped: 33865728 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113827840 unmapped: 33865728 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1240718 data_alloc: 218103808 data_used: 4792320
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113827840 unmapped: 33865728 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113836032 unmapped: 33857536 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fc3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113836032 unmapped: 33857536 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1240718 data_alloc: 218103808 data_used: 4792320
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113836032 unmapped: 33857536 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fc3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113836032 unmapped: 33857536 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fc3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113836032 unmapped: 33857536 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fc3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113836032 unmapped: 33857536 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113836032 unmapped: 33857536 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1240718 data_alloc: 218103808 data_used: 4792320
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113836032 unmapped: 33857536 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fc3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113836032 unmapped: 33857536 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1240718 data_alloc: 218103808 data_used: 4792320
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113836032 unmapped: 33857536 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fc3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113836032 unmapped: 33857536 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113844224 unmapped: 33849344 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1240718 data_alloc: 218103808 data_used: 4792320
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113844224 unmapped: 33849344 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113844224 unmapped: 33849344 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fc3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fc3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113844224 unmapped: 33849344 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fc3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113844224 unmapped: 33849344 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113844224 unmapped: 33849344 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1240718 data_alloc: 218103808 data_used: 4792320
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fc3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113844224 unmapped: 33849344 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fc3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113844224 unmapped: 33849344 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1240718 data_alloc: 218103808 data_used: 4792320
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fc3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113844224 unmapped: 33849344 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113852416 unmapped: 33841152 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fc3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113852416 unmapped: 33841152 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1240718 data_alloc: 218103808 data_used: 4792320
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113852416 unmapped: 33841152 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113852416 unmapped: 33841152 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fc3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113852416 unmapped: 33841152 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1240718 data_alloc: 218103808 data_used: 4792320
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113852416 unmapped: 33841152 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113852416 unmapped: 33841152 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fc3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113852416 unmapped: 33841152 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fc3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1240718 data_alloc: 218103808 data_used: 4792320
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fc3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 2400.1 total, 600.0 interval
Cumulative writes: 12K writes, 44K keys, 12K commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.01 MB/s
Cumulative WAL: 12K writes, 3383 syncs, 3.59 writes per sync, written: 0.03 GB, 0.01 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 1197 writes, 3187 keys, 1197 commit groups, 1.0 writes per commit group, ingest: 2.79 MB, 0.00 MB/s
Interval WAL: 1197 writes, 546 syncs, 2.19 writes per sync, written: 0.00 GB, 0.00 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent
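The derived figures in the dump can be cross-checked from its own raw counters: "writes per sync" is WAL writes divided by syncs, and the interval ingest rate is megabytes over the 600-second window. A quick check (the exact cumulative write count is abbreviated to "12K" by RocksDB, so 12,145 below is back-solved from the printed ratio, not taken from the log):

# Cumulative WAL: "12K writes, 3383 syncs, 3.59 writes per sync"
print(round(12_145 / 3_383, 2))   # -> 3.59

# Interval WAL: "1197 writes, 546 syncs, 2.19 writes per sync"
print(round(1_197 / 546, 2))      # -> 2.19

# Interval ingest: 2.79 MB over 600 s rounds down to the printed 0.00 MB/s
print(f"{2.79 / 600:.4f} MB/s")   # -> 0.0047 MB/s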
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113852416 unmapped: 33841152 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113852416 unmapped: 33841152 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113860608 unmapped: 33832960 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1240718 data_alloc: 218103808 data_used: 4792320
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fc3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113860608 unmapped: 33832960 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fc3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113860608 unmapped: 33832960 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1240718 data_alloc: 218103808 data_used: 4792320
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113860608 unmapped: 33832960 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fc3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113860608 unmapped: 33832960 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113860608 unmapped: 33832960 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fc3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113860608 unmapped: 33832960 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113860608 unmapped: 33832960 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1240718 data_alloc: 218103808 data_used: 4792320
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113860608 unmapped: 33832960 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fc3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fc3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113860608 unmapped: 33832960 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1240718 data_alloc: 218103808 data_used: 4792320
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113860608 unmapped: 33832960 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fc3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113860608 unmapped: 33832960 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113860608 unmapped: 33832960 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113868800 unmapped: 33824768 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113868800 unmapped: 33824768 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1240718 data_alloc: 218103808 data_used: 4792320
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fc3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113868800 unmapped: 33824768 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1240718 data_alloc: 218103808 data_used: 4792320
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113868800 unmapped: 33824768 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fc3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113868800 unmapped: 33824768 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113868800 unmapped: 33824768 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113868800 unmapped: 33824768 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fc3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113868800 unmapped: 33824768 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1240718 data_alloc: 218103808 data_used: 4792320
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113868800 unmapped: 33824768 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113868800 unmapped: 33824768 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113868800 unmapped: 33824768 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113876992 unmapped: 33816576 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fc3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113876992 unmapped: 33816576 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1240718 data_alloc: 218103808 data_used: 4792320
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113876992 unmapped: 33816576 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113876992 unmapped: 33816576 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113876992 unmapped: 33816576 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113876992 unmapped: 33816576 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fc3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113876992 unmapped: 33816576 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1240718 data_alloc: 218103808 data_used: 4792320
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113876992 unmapped: 33816576 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113876992 unmapped: 33816576 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113876992 unmapped: 33816576 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fc3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113876992 unmapped: 33816576 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113876992 unmapped: 33816576 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1240718 data_alloc: 218103808 data_used: 4792320
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113885184 unmapped: 33808384 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113885184 unmapped: 33808384 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113885184 unmapped: 33808384 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113885184 unmapped: 33808384 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fc3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113885184 unmapped: 33808384 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1240718 data_alloc: 218103808 data_used: 4792320
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113885184 unmapped: 33808384 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113885184 unmapped: 33808384 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fc3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113885184 unmapped: 33808384 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fc3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113885184 unmapped: 33808384 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fc3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113885184 unmapped: 33808384 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1240718 data_alloc: 218103808 data_used: 4792320
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113885184 unmapped: 33808384 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113885184 unmapped: 33808384 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113885184 unmapped: 33808384 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113885184 unmapped: 33808384 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113885184 unmapped: 33808384 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fc3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1240718 data_alloc: 218103808 data_used: 4792320
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113885184 unmapped: 33808384 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113893376 unmapped: 33800192 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fc3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113893376 unmapped: 33800192 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113893376 unmapped: 33800192 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fc3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113893376 unmapped: 33800192 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1240718 data_alloc: 218103808 data_used: 4792320
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113893376 unmapped: 33800192 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113893376 unmapped: 33800192 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113893376 unmapped: 33800192 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113893376 unmapped: 33800192 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore(/var/lib/ceph/osd/ceph-0) _kv_sync_thread utilization: idle 495.740875244s of 495.770294189s, submitted: 8
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113917952 unmapped: 33775616 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1240718 data_alloc: 218103808 data_used: 4792320
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fc3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 113410048 unmapped: 34283520 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 111616000 unmapped: 36077568 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 111697920 unmapped: 35995648 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 111755264 unmapped: 35938304 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 111755264 unmapped: 35938304 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1240718 data_alloc: 218103808 data_used: 4792320
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fc3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 111755264 unmapped: 35938304 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 111755264 unmapped: 35938304 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 111755264 unmapped: 35938304 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fc3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 111755264 unmapped: 35938304 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 111755264 unmapped: 35938304 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1240718 data_alloc: 218103808 data_used: 4792320
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fc3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 111755264 unmapped: 35938304 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 111755264 unmapped: 35938304 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 111755264 unmapped: 35938304 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fc3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 111755264 unmapped: 35938304 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 111755264 unmapped: 35938304 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1240718 data_alloc: 218103808 data_used: 4792320
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 111755264 unmapped: 35938304 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 111755264 unmapped: 35938304 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fc3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 111755264 unmapped: 35938304 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 111755264 unmapped: 35938304 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 111755264 unmapped: 35938304 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1240718 data_alloc: 218103808 data_used: 4792320
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 111755264 unmapped: 35938304 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fc3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 111755264 unmapped: 35938304 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 111755264 unmapped: 35938304 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fc3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 111755264 unmapped: 35938304 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 111755264 unmapped: 35938304 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1240718 data_alloc: 218103808 data_used: 4792320
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 111755264 unmapped: 35938304 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 111755264 unmapped: 35938304 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fc3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 111755264 unmapped: 35938304 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 111755264 unmapped: 35938304 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 111755264 unmapped: 35938304 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1240718 data_alloc: 218103808 data_used: 4792320
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 111755264 unmapped: 35938304 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 111755264 unmapped: 35938304 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 111755264 unmapped: 35938304 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fc3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 111755264 unmapped: 35938304 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 111755264 unmapped: 35938304 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1240718 data_alloc: 218103808 data_used: 4792320
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 111755264 unmapped: 35938304 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 111755264 unmapped: 35938304 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fc3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 111755264 unmapped: 35938304 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 111755264 unmapped: 35938304 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 111755264 unmapped: 35938304 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1240718 data_alloc: 218103808 data_used: 4792320
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fc3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 111755264 unmapped: 35938304 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 111755264 unmapped: 35938304 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 111755264 unmapped: 35938304 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 111755264 unmapped: 35938304 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 111755264 unmapped: 35938304 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1240718 data_alloc: 218103808 data_used: 4792320
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fc3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 111755264 unmapped: 35938304 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 111755264 unmapped: 35938304 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 111755264 unmapped: 35938304 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 111755264 unmapped: 35938304 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 111755264 unmapped: 35938304 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1240718 data_alloc: 218103808 data_used: 4792320
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fc3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 111755264 unmapped: 35938304 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 111755264 unmapped: 35938304 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 111755264 unmapped: 35938304 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 111755264 unmapped: 35938304 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 111755264 unmapped: 35938304 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1240718 data_alloc: 218103808 data_used: 4792320
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fc3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 111755264 unmapped: 35938304 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 111755264 unmapped: 35938304 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 111755264 unmapped: 35938304 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 111755264 unmapped: 35938304 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fc3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 111755264 unmapped: 35938304 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1240718 data_alloc: 218103808 data_used: 4792320
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 111755264 unmapped: 35938304 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 111755264 unmapped: 35938304 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 111755264 unmapped: 35938304 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 111755264 unmapped: 35938304 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 111755264 unmapped: 35938304 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1240718 data_alloc: 218103808 data_used: 4792320
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fc3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 111755264 unmapped: 35938304 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fc3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 111755264 unmapped: 35938304 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 111755264 unmapped: 35938304 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 111755264 unmapped: 35938304 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fc3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 111755264 unmapped: 35938304 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1240718 data_alloc: 218103808 data_used: 4792320
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 111755264 unmapped: 35938304 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 111763456 unmapped: 35930112 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 111763456 unmapped: 35930112 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 111763456 unmapped: 35930112 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 111763456 unmapped: 35930112 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1240718 data_alloc: 218103808 data_used: 4792320
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fc3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 111763456 unmapped: 35930112 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 111763456 unmapped: 35930112 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 111763456 unmapped: 35930112 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 111763456 unmapped: 35930112 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fc3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 111763456 unmapped: 35930112 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1240718 data_alloc: 218103808 data_used: 4792320
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 111763456 unmapped: 35930112 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 111763456 unmapped: 35930112 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fc3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 111763456 unmapped: 35930112 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 111763456 unmapped: 35930112 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 111763456 unmapped: 35930112 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1240718 data_alloc: 218103808 data_used: 4792320
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 111763456 unmapped: 35930112 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 111763456 unmapped: 35930112 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fc3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 111763456 unmapped: 35930112 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 111763456 unmapped: 35930112 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 111763456 unmapped: 35930112 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1240718 data_alloc: 218103808 data_used: 4792320
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fc3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 111763456 unmapped: 35930112 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 111763456 unmapped: 35930112 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 111763456 unmapped: 35930112 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fc3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 111763456 unmapped: 35930112 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 111763456 unmapped: 35930112 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1240718 data_alloc: 218103808 data_used: 4792320
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 111763456 unmapped: 35930112 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fc3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 111763456 unmapped: 35930112 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 111763456 unmapped: 35930112 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 111763456 unmapped: 35930112 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 111763456 unmapped: 35930112 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1240718 data_alloc: 218103808 data_used: 4792320
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 111763456 unmapped: 35930112 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fc3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 111763456 unmapped: 35930112 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 111763456 unmapped: 35930112 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 111763456 unmapped: 35930112 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 111763456 unmapped: 35930112 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1240718 data_alloc: 218103808 data_used: 4792320
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fc3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 111763456 unmapped: 35930112 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 111763456 unmapped: 35930112 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 111763456 unmapped: 35930112 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fc3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 111763456 unmapped: 35930112 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 111763456 unmapped: 35930112 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1240718 data_alloc: 218103808 data_used: 4792320
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 111763456 unmapped: 35930112 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fc3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fc3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 111763456 unmapped: 35930112 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 111763456 unmapped: 35930112 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fc3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 111763456 unmapped: 35930112 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 111763456 unmapped: 35930112 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1240718 data_alloc: 218103808 data_used: 4792320
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fc3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 111763456 unmapped: 35930112 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 111763456 unmapped: 35930112 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 111763456 unmapped: 35930112 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 111763456 unmapped: 35930112 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fc3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 111763456 unmapped: 35930112 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1240718 data_alloc: 218103808 data_used: 4792320
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 111763456 unmapped: 35930112 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fc3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 111763456 unmapped: 35930112 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1240718 data_alloc: 218103808 data_used: 4792320
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 111763456 unmapped: 35930112 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fc3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 111763456 unmapped: 35930112 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fc3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 111763456 unmapped: 35930112 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 111771648 unmapped: 35921920 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1240718 data_alloc: 218103808 data_used: 4792320
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 111771648 unmapped: 35921920 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fc3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 111771648 unmapped: 35921920 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fc3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 111771648 unmapped: 35921920 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1240718 data_alloc: 218103808 data_used: 4792320
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 111771648 unmapped: 35921920 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fc3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 111771648 unmapped: 35921920 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1240718 data_alloc: 218103808 data_used: 4792320
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 111771648 unmapped: 35921920 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fc3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 111771648 unmapped: 35921920 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 111779840 unmapped: 35913728 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1240718 data_alloc: 218103808 data_used: 4792320
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 111779840 unmapped: 35913728 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fc3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 111779840 unmapped: 35913728 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fc3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 111779840 unmapped: 35913728 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1240718 data_alloc: 218103808 data_used: 4792320
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fc3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 111779840 unmapped: 35913728 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1240718 data_alloc: 218103808 data_used: 4792320
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 111779840 unmapped: 35913728 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fc3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 111779840 unmapped: 35913728 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fc3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 111779840 unmapped: 35913728 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fc3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 111779840 unmapped: 35913728 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1240718 data_alloc: 218103808 data_used: 4792320
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 111779840 unmapped: 35913728 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fc3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 111779840 unmapped: 35913728 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1240718 data_alloc: 218103808 data_used: 4792320
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 111779840 unmapped: 35913728 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 111788032 unmapped: 35905536 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fc3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 111788032 unmapped: 35905536 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1240718 data_alloc: 218103808 data_used: 4792320
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 111788032 unmapped: 35905536 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fc3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 111788032 unmapped: 35905536 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1240718 data_alloc: 218103808 data_used: 4792320
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 111788032 unmapped: 35905536 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fc3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 111788032 unmapped: 35905536 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fc3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 111788032 unmapped: 35905536 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fc3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 111788032 unmapped: 35905536 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1240718 data_alloc: 218103808 data_used: 4792320
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 111788032 unmapped: 35905536 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 111796224 unmapped: 35897344 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fc3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 111796224 unmapped: 35897344 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1240718 data_alloc: 218103808 data_used: 4792320
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 111796224 unmapped: 35897344 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fc3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 111796224 unmapped: 35897344 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1240718 data_alloc: 218103808 data_used: 4792320
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 111796224 unmapped: 35897344 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fc3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 111796224 unmapped: 35897344 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 111804416 unmapped: 35889152 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fc3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1240718 data_alloc: 218103808 data_used: 4792320
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 111804416 unmapped: 35889152 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fc3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 111804416 unmapped: 35889152 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1240718 data_alloc: 218103808 data_used: 4792320
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 111804416 unmapped: 35889152 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fc3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 111804416 unmapped: 35889152 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: bluestore.MempoolThread _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 1240718 data_alloc: 218103808 data_used: 4792320
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 111804416 unmapped: 35889152 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fc3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 111804416 unmapped: 35889152 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: do_command 'config diff' '{prefix=config diff}'
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: do_command 'config show' '{prefix=config show}'
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: do_command 'counter dump' '{prefix=counter dump}'
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 111943680 unmapped: 35749888 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: do_command 'counter schema' '{prefix=counter schema}'
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: osd.0 145 heartbeat osd_stat(store_statfs(0x4f9fc3000/0x0/0x4ffc00000, data 0x11e2ef0/0x1299000, compress 0x0/0x0/0x0, omap 0x63b, meta 0x499f9c5), peers [1,2] op hist [])
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 112074752 unmapped: 35618816 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: prioritycache tune_memory target: 4294967296 mapped: 112156672 unmapped: 35536896 heap: 147693568 old mem: 2845415832 new mem: 2845415832
Mar  1 05:27:05 np0005634532 ceph-osd[84309]: do_command 'log dump' '{prefix=log dump}'
Mar  1 05:27:05 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:27:05 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:27:05 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:27:05.932 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:27:05 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module ls"} v 0)
Mar  1 05:27:05 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4111213359' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Mar  1 05:27:06 np0005634532 nova_compute[257049]: 2026-03-01 10:27:06.032 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:27:06 np0005634532 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.17691 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Mar  1 05:27:06 np0005634532 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.27148 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Mar  1 05:27:06 np0005634532 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.26930 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Mar  1 05:27:06 np0005634532 rsyslogd[1019]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Mar  1 05:27:06 np0005634532 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.17706 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Mar  1 05:27:06 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services"} v 0)
Mar  1 05:27:06 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3559391222' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Mar  1 05:27:06 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1345: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Mar  1 05:27:06 np0005634532 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.17712 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Mar  1 05:27:06 np0005634532 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.27163 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Mar  1 05:27:06 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:27:06 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:27:06 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:27:06.776 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:27:06 np0005634532 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.17718 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Mar  1 05:27:06 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr versions"} v 0)
Mar  1 05:27:06 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1356818081' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Mar  1 05:27:06 np0005634532 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.27184 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Mar  1 05:27:06 np0005634532 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.26954 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Mar  1 05:27:07 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-mgr-compute-0-ebwufc[76130]: ::ffff:192.168.122.100 - - [01/Mar/2026:10:27:07] "GET /metrics HTTP/1.1" 200 48458 "" "Prometheus/2.51.0"
Mar  1 05:27:07 np0005634532 ceph-mgr[76134]: [prometheus INFO cherrypy.access.140604152982112] ::ffff:192.168.122.100 - - [01/Mar/2026:10:27:07] "GET /metrics HTTP/1.1" 200 48458 "" "Prometheus/2.51.0"
Mar  1 05:27:07 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:27:07 np0005634532 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.17733 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Mar  1 05:27:07 np0005634532 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.27196 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Mar  1 05:27:07 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mon stat"} v 0)
Mar  1 05:27:07 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/810552045' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Mar  1 05:27:07 np0005634532 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.26966 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Mar  1 05:27:07 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:27:07.353Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded; ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": context deadline exceeded"
Mar  1 05:27:07 np0005634532 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.17751 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Mar  1 05:27:07 np0005634532 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.27211 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Mar  1 05:27:07 np0005634532 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.26972 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Mar  1 05:27:07 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:27:07 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:27:07 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:27:07.934 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:27:08 np0005634532 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.27220 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Mar  1 05:27:08 np0005634532 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.17766 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Mar  1 05:27:08 np0005634532 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.26984 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Mar  1 05:27:08 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "node ls"} v 0)
Mar  1 05:27:08 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2802151638' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Mar  1 05:27:08 np0005634532 nova_compute[257049]: 2026-03-01 10:27:08.254 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:27:08 np0005634532 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.17781 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Mar  1 05:27:08 np0005634532 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.27235 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Mar  1 05:27:08 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1346: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Mar  1 05:27:08 np0005634532 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.26999 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Mar  1 05:27:08 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush class ls"} v 0)
Mar  1 05:27:08 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4043413756' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Mar  1 05:27:08 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:27:08 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:27:08 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:27:08.778 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:27:08 np0005634532 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.17799 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Mar  1 05:27:08 np0005634532 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.27253 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Mar  1 05:27:08 np0005634532 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.27011 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Mar  1 05:27:08 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:27:08.909Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[1] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout"
Mar  1 05:27:08 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:27:08.909Z caller=notify.go:732 level=warn component=dispatcher receiver=ceph-dashboard integration=webhook[2] msg="Notify attempt failed, will retry later" attempts=1 err="Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Mar  1 05:27:08 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-alertmanager-compute-0[106082]: ts=2026-03-01T10:27:08.909Z caller=dispatch.go:352 level=error component=dispatcher msg="Notify for alerts failed" num_alerts=1 err="ceph-dashboard/webhook[1]: notify retry canceled after 2 attempts: Post \"http://compute-1.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.101:8443: i/o timeout; ceph-dashboard/webhook[2]: notify retry canceled after 2 attempts: Post \"http://compute-2.ctlplane.example.com:8443/api/prometheus_receiver\": dial tcp 192.168.122.102:8443: i/o timeout"
Mar  1 05:27:09 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:27:08 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Mar  1 05:27:09 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:27:08 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Mar  1 05:27:09 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:27:09 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Mar  1 05:27:09 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:27:09 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Mar  1 05:27:09 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush dump"} v 0)
Mar  1 05:27:09 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1668100349' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Mar  1 05:27:09 np0005634532 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.17817 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Mar  1 05:27:09 np0005634532 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.27268 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Mar  1 05:27:09 np0005634532 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.27026 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Mar  1 05:27:09 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush rule ls"} v 0)
Mar  1 05:27:09 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1140649110' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Mar  1 05:27:09 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} v 0)
Mar  1 05:27:09 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3687502375' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Mar  1 05:27:09 np0005634532 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.27286 -' entity='client.admin' cmd=[{"prefix": "balancer status detail", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Mar  1 05:27:09 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush show-tunables"} v 0)
Mar  1 05:27:09 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2262933070' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Mar  1 05:27:09 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:27:09 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 05:27:09 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:27:09.936 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 05:27:10 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr dump", "format": "json-pretty"} v 0)
Mar  1 05:27:10 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/301439868' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Mar  1 05:27:10 np0005634532 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.27041 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Mar  1 05:27:10 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd crush tree", "show_shadow": true} v 0)
Mar  1 05:27:10 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/63042003' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Mar  1 05:27:10 np0005634532 podman[297342]: 2026-03-01 10:27:10.34712892 +0000 UTC m=+0.044486519 container health_status 1df4dec302d8b40bce93714986cfc892519e8a31436399020d0034ae17ac64d5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a5baae9e21dffd42c66f546b0b62f95c6af9f409d05e01e7483de42d5dd2a382, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20260223, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=8419493e1fd846703d277695e03fc5eb, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, io.buildah.version=1.43.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '08b9467e1b7e95537191fb2fa6825d176e65d7d6be048e9a0925db064969bbde-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-a8f23d959ad62db400da3a2134febeb73a0fa0993af932b5f69b5b5a4feae954-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:a5baae9e21dffd42c66f546b0b62f95c6af9f409d05e01e7483de42d5dd2a382', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Mar  1 05:27:10 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1347: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Mar  1 05:27:10 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr metadata", "format": "json-pretty"} v 0)
Mar  1 05:27:10 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/834627843' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Mar  1 05:27:10 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd erasure-code-profile ls"} v 0)
Mar  1 05:27:10 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3676689302' entity='client.admin' cmd=[{"prefix": "osd erasure-code-profile ls"}]: dispatch
Mar  1 05:27:10 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:27:10 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:27:10 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:27:10.780 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:27:10 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr module ls", "format": "json-pretty"} v 0)
Mar  1 05:27:10 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/702032692' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Mar  1 05:27:10 np0005634532 nova_compute[257049]: 2026-03-01 10:27:10.993 257053 DEBUG oslo_service.periodic_task [None req-34802ce0-7a12-4622-b1c5-fc2ff780bc32 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Mar  1 05:27:11 np0005634532 nova_compute[257049]: 2026-03-01 10:27:11.042 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:27:11 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd metadata"} v 0)
Mar  1 05:27:11 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3266067162' entity='client.admin' cmd=[{"prefix": "osd metadata"}]: dispatch
Mar  1 05:27:11 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr services", "format": "json-pretty"} v 0)
Mar  1 05:27:11 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4018088772' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Mar  1 05:27:11 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd utilization"} v 0)
Mar  1 05:27:11 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/37408312' entity='client.admin' cmd=[{"prefix": "osd utilization"}]: dispatch
Mar  1 05:27:11 np0005634532 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.17916 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Mar  1 05:27:11 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr stat", "format": "json-pretty"} v 0)
Mar  1 05:27:11 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/873827295' entity='client.admin' cmd=[{"prefix": "mgr stat", "format": "json-pretty"}]: dispatch
Mar  1 05:27:11 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:27:11 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.001000025s ======
Mar  1 05:27:11 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:27:11.939 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.001000025s
Mar  1 05:27:11 np0005634532 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.27367 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Mar  1 05:27:12 np0005634532 systemd[1]: Starting Hostname Service...
Mar  1 05:27:12 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader).osd e145 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Mar  1 05:27:12 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "mgr versions", "format": "json-pretty"} v 0)
Mar  1 05:27:12 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1202902717' entity='client.admin' cmd=[{"prefix": "mgr versions", "format": "json-pretty"}]: dispatch
Mar  1 05:27:12 np0005634532 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.17937 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Mar  1 05:27:12 np0005634532 systemd[1]: Started Hostname Service.
Mar  1 05:27:12 np0005634532 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.27382 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Mar  1 05:27:12 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1348: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
Mar  1 05:27:12 np0005634532 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.17946 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Mar  1 05:27:12 np0005634532 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.27388 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Mar  1 05:27:12 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:27:12 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:27:12 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:27:12.782 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:27:13 np0005634532 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.17955 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Mar  1 05:27:13 np0005634532 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.27400 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Mar  1 05:27:13 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "quorum_status"} v 0)
Mar  1 05:27:13 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2499197986' entity='client.admin' cmd=[{"prefix": "quorum_status"}]: dispatch
Mar  1 05:27:13 np0005634532 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.27161 -' entity='client.admin' cmd=[{"prefix": "telemetry channel ls", "target": ["mon-mgr", ""]}]: dispatch
Mar  1 05:27:13 np0005634532 nova_compute[257049]: 2026-03-01 10:27:13.313 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:27:13 np0005634532 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.17967 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Mar  1 05:27:13 np0005634532 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.27412 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Mar  1 05:27:13 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "versions"} v 0)
Mar  1 05:27:13 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1521429856' entity='client.admin' cmd=[{"prefix": "versions"}]: dispatch
Mar  1 05:27:13 np0005634532 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.27170 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Mar  1 05:27:13 np0005634532 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.27179 -' entity='client.admin' cmd=[{"prefix": "telemetry collection ls", "target": ["mon-mgr", ""]}]: dispatch
Mar  1 05:27:13 np0005634532 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.17982 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Mar  1 05:27:13 np0005634532 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.27424 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Mar  1 05:27:13 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:27:13 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:27:13 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:27:13.943 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:27:14 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:27:13 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :NFS Server Now IN GRACE, duration 90
Mar  1 05:27:14 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:27:13 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_start_grace :STATE :EVENT :grace reload client info completed from backend
Mar  1 05:27:14 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:27:13 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] nfs_try_lift_grace :STATE :EVENT :check grace:reclaim complete(0) clid count(0)
Mar  1 05:27:14 np0005634532 ceph-437b1e74-f995-5d64-af1d-257ce01d77ab-nfs-cephfs-2-0-compute-0-ljexyw[272836]: 01/03/2026 10:27:14 : epoch 69a4110b : compute-0 : ganesha.nfsd-2[main] rados_cluster_grace_enforcing :CLIENT ID :EVENT :rados_cluster_grace_enforcing: ret=-45
Mar  1 05:27:14 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "health", "detail": "detail", "format": "json-pretty"} v 0)
Mar  1 05:27:14 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/55003513' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail", "format": "json-pretty"}]: dispatch
Mar  1 05:27:14 np0005634532 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.27185 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Mar  1 05:27:14 np0005634532 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.27445 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Mar  1 05:27:14 np0005634532 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.17997 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Mar  1 05:27:14 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "osd tree", "format": "json-pretty"} v 0)
Mar  1 05:27:14 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1329121319' entity='client.admin' cmd=[{"prefix": "osd tree", "format": "json-pretty"}]: dispatch
Mar  1 05:27:14 np0005634532 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.27206 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Mar  1 05:27:14 np0005634532 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.27457 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Mar  1 05:27:14 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1349: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 767 B/s rd, 0 op/s
Mar  1 05:27:14 np0005634532 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.18021 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Mar  1 05:27:14 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Mar  1 05:27:14 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Mar  1 05:27:14 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Mar  1 05:27:14 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Mar  1 05:27:14 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Mar  1 05:27:14 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Mar  1 05:27:14 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Mar  1 05:27:14 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Mar  1 05:27:14 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:27:14 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:27:14 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.100 - anonymous [01/Mar/2026:10:27:14.784 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:27:14 np0005634532 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.27212 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Mar  1 05:27:14 np0005634532 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.27469 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Mar  1 05:27:14 np0005634532 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.18045 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Mar  1 05:27:15 np0005634532 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.27230 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Mar  1 05:27:15 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "config dump"} v 0)
Mar  1 05:27:15 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1801814641' entity='client.admin' cmd=[{"prefix": "config dump"}]: dispatch
Mar  1 05:27:15 np0005634532 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.27272 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Mar  1 05:27:15 np0005634532 radosgw[91037]: ====== starting new request req=0x7f87ee2455d0 =====
Mar  1 05:27:15 np0005634532 radosgw[91037]: ====== req done req=0x7f87ee2455d0 op status=0 http_status=200 latency=0.000000000s ======
Mar  1 05:27:15 np0005634532 radosgw[91037]: beast: 0x7f87ee2455d0: 192.168.122.102 - anonymous [01/Mar/2026:10:27:15.945 +0000] "HEAD / HTTP/1.0" 200 0 - - - latency=0.000000000s
Mar  1 05:27:15 np0005634532 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.18114 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Mar  1 05:27:16 np0005634532 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.27556 -' entity='client.admin' cmd=[{"prefix": "device ls", "target": ["mon-mgr", ""]}]: dispatch
Mar  1 05:27:16 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='mon_status' args=[]: dispatch
Mar  1 05:27:16 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=mon_status args=[]: finished
Mar  1 05:27:16 np0005634532 nova_compute[257049]: 2026-03-01 10:27:16.077 257053 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 24 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Mar  1 05:27:16 np0005634532 ceph-mgr[76134]: log_channel(audit) log [DBG] : from='client.27284 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Mar  1 05:27:16 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd='sessions' args=[]: dispatch
Mar  1 05:27:16 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='admin socket' entity='admin socket' cmd=sessions args=[]: finished
Mar  1 05:27:16 np0005634532 ceph-mon[75825]: mon.compute-0@0(leader) e3 handle_command mon_command({"prefix": "df", "detail": "detail"} v 0)
Mar  1 05:27:16 np0005634532 ceph-mon[75825]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3342838585' entity='client.admin' cmd=[{"prefix": "df", "detail": "detail"}]: dispatch
Mar  1 05:27:16 np0005634532 ceph-mgr[76134]: log_channel(cluster) log [DBG] : pgmap v1350: 353 pgs: 353 active+clean; 41 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 511 B/s rd, 0 op/s
